• 1 Post
  • 507 Comments
Joined 2 years ago
Cake day: June 23rd, 2023





  • That’s it.

    Sounds like a Chinese geek tried to make something useful and did a lot of dirty hacks to get it going.

    And he couldn’t properly explain it because his social skills and English weren’t great.

    The blobs weren’t super suspicious, just some GPL’d tools, basically busybox kind of stuff.

    The real problem is that what he made was so fucking insanely useful and needed by everyone that the standards it got held to skyrocketed.

    It’s like making a cure for cancer and having everyone scream at you because one of the side effects is temporary impotence.


  • Google is pushing AV1 because of patents, but H.266 is just plain better tech, even if it’s harder to encode.

    The same shit happened with H.265 and VP9, and before that with Vorbis/Opus and AAC.

    They’ll come back to it because it’s a standard and has higher quality.

    Maybe this is the one time AV1 somehow wins out on patents, but I’ve been encoding AV1 and I’m really not impressed; it’s basically just dressed-up HEVC, maybe a 10% improvement at most.

    I’ve seen VVC, and it’s really flexible: it shifts gears on a dime between high motion and deep detail, which is basically what your brain notices most. AV1 actually seems worse than HEVC at that to me; it’s sluggish at those shifts, even if it’s better overall.




  • Yeah, I think it’s because that’s where the model originated, and that’s basically what it’s supposed to be. But having almost everyone synchronized on time gives us a new trick: we can generate ‘keys’ and have them expire, so even if you manage to grab one by force, it’s only valid for a short window. Instead of one-time pads they usually call them one-time passwords (sketched at the end of this comment).

    You’d need extended access to a generator over time to actually exploit it, which gives the user a chance to report it for invalidation.

    Not perfect, but it does its job fine, especially compared to passwords or SMS (where you’re at the mercy of the minimum-wage kid down at the mall’s Verizon kiosk).
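    A minimal sketch of how those expiring codes get derived, per RFC 6238 (TOTP): HMAC the current 30-second time step with a shared secret, then truncate to six digits. This assumes OpenSSL for the HMAC and a raw shared secret; real authenticator apps base32-decode the secret first.

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    /* Six-digit TOTP code for the given secret at the given Unix time. */
    static uint32_t totp(const uint8_t *secret, size_t secret_len, time_t now) {
        uint64_t step = (uint64_t)now / 30;           /* 30-second window */
        uint8_t msg[8];
        for (int i = 7; i >= 0; i--) {                /* big-endian counter */
            msg[i] = step & 0xff;
            step >>= 8;
        }
        uint8_t digest[EVP_MAX_MD_SIZE];
        unsigned int digest_len = 0;
        HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof msg,
             digest, &digest_len);
        uint32_t off = digest[digest_len - 1] & 0x0f; /* dynamic truncation */
        uint32_t bin = ((digest[off]     & 0x7f) << 24) |
                       ((digest[off + 1] & 0xff) << 16) |
                       ((digest[off + 2] & 0xff) <<  8) |
                        (digest[off + 3] & 0xff);
        return bin % 1000000;                         /* keep 6 digits */
    }

    int main(void) {
        const uint8_t secret[] = "12345678901234567890"; /* RFC 6238 test key */
        printf("%06u\n", totp(secret, sizeof secret - 1, time(NULL)));
        return 0;
    }
    ```

    Build with something like `gcc totp.c -lcrypto`. The code only changes every 30 seconds, and verifiers usually accept a window of ±1 step for clock skew, which is exactly why a stolen code goes stale so fast.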




  • Firstly, we’ll get there in time.

    Secondly, having baseload vastly reduces the number of batteries needed, which helps overall, and nuclear is one of the best baseload sources there is.

    By any logic we should be working on fusion research, because it’s the actual long-term solution. But the enemy isn’t nuclear or renewables, it’s fossil fuels; they must be killed as brutally as possible, not just for their ecological impact but also for their political impact, which may be the most toxic of all.

    Imagine the politics of this country if Texas weren’t “Saudi oil money” rich and didn’t constantly try to screw over our politics. They’re the reason we don’t have nuclear already; they’d much rather keep everyone on the dinosaur habit than let us move forward an inch.






  • No, it doesn’t make sense to do it.

    I worked on platform enablement for ARMv8, bringing the whole ecosystem over to 64-bit ARM. It was an Everest: so much code was expecting x86, with lots of hidden asm and other assumptions like the memory model.

    But once that was done, we did it again for RISC-V in no time. All the hard work was already done; it was basically setting defines, maybe adding a TSC/rdcycle (now rdtime) shim, like the sketch at the end of this comment.

    Architectures don’t really matter anymore, and the overhead of supporting one is pretty minor. RISC-V will probably win because it’s basically free, and single-thread performance isn’t as critical on client devices; a lot of the work goes to the GPU, and servers do the other heavy lifting. Qualcomm scared everybody too, and China is going its own way, which means even more RISC-V.

    Basically, nothing matters except cost now. We’ll figure out how to run things on a potato; we’ve gotten good at it.
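    To give a feel for how small the per-architecture surface got, here’s a hedged sketch of the kind of compile-time shim that covers the cycle/time counter differences mentioned above. The names are illustrative, not from any particular tree.

    ```c
    #include <stdint.h>

    /* Read a monotonic cycle/time counter on whatever arch we're built for. */
    static inline uint64_t read_counter(void) {
    #if defined(__x86_64__) || defined(__i386__)
        uint32_t lo, hi;
        __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));   /* x86 TSC */
        return ((uint64_t)hi << 32) | lo;
    #elif defined(__aarch64__)
        uint64_t v;
        __asm__ volatile("mrs %0, cntvct_el0" : "=r"(v)); /* ARMv8 virtual counter */
        return v;
    #elif defined(__riscv) && (__riscv_xlen == 64)
        uint64_t v;
        __asm__ volatile("rdtime %0" : "=r"(v)); /* time CSR; user rdcycle got restricted */
        return v;
    #else
    #error "port me: add a counter read for this architecture"
    #endif
    }
    ```

    That’s the whole “porting effort” for this corner of a codebase: one more `#elif`.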



  • It actually can. The thing we learned is that the unpleasant bits of x86 scale well: uop decode used to eat something like 30% of the die, but it’s now just 1–2% because we blow so much more area on registers and cache.

    We can also play games like soft-deprecating instructions and features, so they still exist but run stupid slow in microcode (see the sketch at the end).

    We used to think only RISC could run fast at low power, but current CISC-decoded-to-RISC designs work fine; Intel just got stupid lazy.

    Apple just took all the tradeoffs Intel was too cheap to spend silicon on and turned them up to 11. We could have had that earlier, but the ARM licensees were mostly buying IP and didn’t invest in physically optimized designs. Now that TSMC is the main game in town (the GlobalFoundries fallback was nice for price), there’s a lot more room to lean on their cell libraries.

    Intel got so insanely arrogant, just like Boeing and all the other catastrophic American failures right now. We just need to correct for that and we can be decent again.
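    For the soft-deprecation point, a rough illustration: the legacy x86 `loop` instruction still exists, but Intel implements it far slower than the equivalent `dec`/`jnz` pair. This is a crude, unserialized rdtsc timing, x86-64 only, and the gap varies by vendor (AMD historically kept `loop` fast), so treat the numbers as illustrative.

    ```c
    #include <stdio.h>
    #include <stdint.h>

    static inline uint64_t rdtsc(void) {
        uint32_t lo, hi;
        __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void) {
        const uint64_t n = 100000000;   /* iterations */
        uint64_t t0, t1, t2;

        t0 = rdtsc();
        __asm__ volatile(
            "mov %0, %%rcx\n\t"
            "1: loop 1b"                /* legacy: kept alive but slow on Intel */
            :: "r"(n) : "rcx");
        t1 = rdtsc();
        __asm__ volatile(
            "mov %0, %%rcx\n\t"
            "2: dec %%rcx\n\t"
            "jnz 2b"                    /* the modern replacement */
            :: "r"(n) : "rcx", "cc");
        t2 = rdtsc();

        printf("loop:    %.2f cycles/iter\n", (double)(t1 - t0) / n);
        printf("dec/jnz: %.2f cycles/iter\n", (double)(t2 - t1) / n);
        return 0;
    }
    ```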