• 1 Post
  • 174 Comments
Joined 3 years ago
Cake day: June 20th, 2023


  • Did you even read the article? Even under the VERY GENEROUS interpretation of contract law that contracts can’t be predatory (which is not a particularly popular philosophical stance outside of cyberpunk fiction), AWS MENA fell short of even their typical termination procedures because they accidentally nuked it while doing a dry-run.

    I don’t know where you work but if we did that to a paying customer, even IF there was a technicality through which we could deny responsibility, we would be trying to make it right.


  • The author put it well:

    What if you have petabytes of data? How do you backup a backup? What happens when that backup contains HIPAA-protected information or client data? The whole promise of cloud computing collapses into complexity.

    Multi-region cloud computing is already difficult and expensive enough; multi-cloud is not only technically complex but financially and legally fraught with uncertainty. At that point you’re giving up so much of the promise of cloud computing that you might as well rent rack space somewhere, install bare-metal infra, and pay someone to drive there and manually back up to tape every 3 months.

    This level of technical purity is economically infeasible for virtually everyone; that’s the whole point of paying a vendor to deal with it for us. And you know who doesn’t need to put up with the insane overhead of multi-cloud setups? That’s right: Amazon, Microsoft, and Google, who will be getting paid for hosting everyone else’s multi-cloud setups while they get to run their huge infra on their own clouds without fear. The last thing GAFAM competitors - especially OSS projects - need is even fewer economies of scale.

    Stop with the victim-blaming, this blunder is squarely on AWS.


  • It’s one of a plethora of scripting languages from the '90s which were designed to be the antithesis of “fail fast” and kept going no matter what.

    I guess what with C/C++ being the Mainstream Option at the time, not having to deal with a strict compiler must have felt like freedom. As someone who has had to maintain, clean up, and migrate ancient PHP code, I call it folly. That mindset of “let the programmer just do whatever and keep trucking” breeds awful programming practices and renders static analysis varying degrees of useless, which makes large-scale refactoring hard to automate, which is just amazing when your major versions aren’t even remotely FUCKING BACKWARDS COMPATIBLE.

    PHP’s original design is just fundamentally atrocious. It became popular in large part because unmaintainable code is usually someone else’s problem.

    A language that I would definitely use for server-side rendering and that was already good from its first stable release is Go. It was thoughtfully designed and lends itself really well to static analysis, while still being easy to write and decently performant.


  • It can do both; lossiness is toggleable.

    If you’ve seen a picture on Lemmy, you’ve almost certainly seen a WebP. A fair bit of software – most egregiously from Microsoft – still refuses to decode them, but every major browser has supported WebP for years, and its superior data efficiency compared to JPEG/PNG means it is already very widely used on the web. Bandwidth is not that cheap.


  • Nowadays “buggy” is not how I’d describe it, though there were certainly teething issues at the beginning. By now other DEs have learned to deal with it.

    However it’s still true that the GTK4 design is ill-fitting and very opinionated. Exemplary of this are the applications that hardcode the GTK file picker (like Firefox and Chrome) even though it’s inferior in every way to the Qt file picker, and it forces the infuriating GTK “design” choice of doing a fuzzy search when you type in the file list instead of jumping to the matching file. Very annoying when dealing with organized directories, especially since no other file browser on my system works that way!


  • I think it can work either very well or terribly.

    It would have been terrible in TW3. There are too many damn quests to keep track of; when you get to Novigrad you spend the first couple hours being bombarded by quest hooks, some of which are not supposed to be resolved until Geralt gains 10 more levels (for instance Hattori’s quest line). Having to turn down a quest hook or fail a quest because of time constraints would be punishing through no fault of the player, and therefore bad game design. Book Geralt would ignore all the side-quests and focus on finding Ciri, but that’d make for a very different game. Also 75 % of the quest hooks where you’re supposed to meet someone “at the docks tonight” are just narrative shortcuts. In real life you’d say “sorry, I already have a nightwraith contract, can you do tomorrow night instead?”.

    If the reasons why you have to turn down a quest are well integrated into the narrative, and the player can only fail a quest through actual time mismanagement, then it makes sense. IMO this seems most doable in a game with a reduced scope, up to 20 hours of content, where every quest is distinct and meaningful and can be kept in mind. Which I’m very down for, because I don’t have much time for 100+ hour main story games anymore.


  • Honestly the Metro design language didn’t look particularly attractive for touch screens either. I knew someone with a Nokia Windows Phone, and the interface seemed… clunky. Quirky, but not in the right ways.

    It has to cater to mice and fingers, and so ends up with the lowest common denominator. Can’t have information density because of the butter fingers, can’t have neat swiping gestures because of the mice and especially trackpads. So, big squares and huge buttons, repeated ad nauseam. Like a DUPLO set.

    Surely the UI/UX designers at Microsoft knew this, but I guess Ballmer had his way. Meanwhile Valve didn’t have to contend with cranky executives, so they just slapped Big Picture on top of KDE and let users decide when to switch between console mode and desktop mode.


  • I didn’t play TW3 right on launch but CP77 was… fine, on PC. Played it day one, nothing game-breaking.

    However four years later the open world still disappoints compared to the masterclass that was TW3. The world feels smaller, the driving sucks ass, and NC doesn’t feel nearly as lively or polished as Novigrad (though it is gorgeous and I did have a great time).

    Even two years later, CP2077 was a technical regression from TW3. Bugs aside, can CDPR really pull it together, improve upon TW3, and avoid repeating the mistakes of CP2077, all while having to learn an entirely new engine? I wouldn’t bet too much on it.



  • Well, “going private” doesn’t mean anything by itself. It can mean PE. It can mean “traditional” personal/family ownership (e.g. Musk with Twitter). It can also mean moving to a co-op model (theoretically, I don’t think anything stops a bankrupt publicly-traded company from being bought by its workers). “Private” doesn’t sit anywhere on the political spectrum; even Marxists can generally agree that co-operatives are in principle better than publicly-traded companies.

    Unfortunately PE firms are usually the ones who win the bid when a company “goes private” because the PE business model is driven by speculation and leveraged buyouts, and (at least in the US) supported by advantageous tax rates. Even from a purely capitalist perspective it’s an objective failure that harms the macro-economy. It’s not even capitalism anymore; it’s oligarchic.



  • I wasn’t very old then but the main thing was RAM. Fuckers in Microsoft sales/marketing made 1 GB the minimum requirement for OEMs to install Vista.

    So guess what? Every OEM installed Vista with 1 GB of RAM and a 5400 RPM hard drive (the “standard” config for XP, which is what most of those SKUs were meant to target). That hard drive would inevitably spend its short life thrashing, because if you opened IE it would immediately start swapping. Even worse with OEM bloat, but even a clean Vista install would swap real bad under light web browsing.

    It was utterly unusable. Like, everything would be unbearably slow and all you could do was (slowly) open task manager and say “yep, literally nothing running, all nonessential programs killed, only got two tabs open, still swapping like it’s the sex party of the century”.

    “Fixing” those hellspawns by adding a spare DDR2 stick is a big part of how I learned to fix computer hardware. All ya had to do was chuck 30 € of RAM in there and suddenly Vista went from actually unusable to buttery smooth.

    By the time the OEMs wised up to Microsoft’s bullshit, Seven was around the corner so everyone thought Seven “fixed” the performance issues. It didn’t, it’s just that 2 GB of RAM had become the bare minimum standard by then.

    EDIT: Just installed a Vista VM because I ain’t got nothing better to do at 2 am apparently. Not connected to the internet, didn’t install a thing, got all of 12 processes listed by task manager, and it already uses 500 MB of RAM. Aero didn’t even enable as I didn’t configure graphics acceleration.


  • Bro I wouldn’t trust most companies not to store their only copy of super_duper_important_financial_data_2024.xlsx on an old AliExpress thumb drive attached to the CFO’s laptop in a coffee shop while he’s taking a shit.

    If your company has an actual disaster recovery plan (DRP) for when your datacenter catches fire or your cloud provider disappears, you are already doing better than 98 % of your competitors, and these aren’t far-fetched disaster scenarios. Maintaining an entire separate pen-and-paper shadow process and training people for it? That’s orders of magnitude more expensive than the simplest of DRPs, which most companies already don’t have.

    Friendly wave to all the companies currently paying millions a year extra to Broadcom/VMware because their tools and processes are too rigid to work with literally any other hypervisor, when realistically all their needs could be covered by the free tier of Proxmox and/or OpenStack.




  • Wait until you learn about debhelper.

    If you use a debian-based system, unless you have actively looked at the DH source, the one thing that built virtually every package on your system, you do not get to say anything about “bloat” or “KISS”.

    DH is a monstrous pile of perl scripts, only partially documented, with a core design that revolves around a spaghetti of complex defaults, unique syntax, and enough surprising side effects and crazy heuristics to spook even the most grizzled greybeards. The number of times I’ve had to look at the DH perl source to understand a (badly/un)documented behavior while packaging something is not insignificant.

    But when we replaced a bazillion bash scripts with an (admittedly opinionated, but also stable and well-documented) daemon, suddenly the greybeards acted like Debian was going to collapse under the weight of its own complexity.
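    For context, the interface most Debian maintainers actually see is the canonical minimal debian/rules file; every default, heuristic, and side effect described above hides behind that single dh call:

```make
#!/usr/bin/make -f
# Minimal debhelper-style debian/rules: the dh sequencer runs dozens
# of dh_* helper scripts in order, each with its own defaults.
%:
	dh $@
```

    That terseness is the selling point, but it also means that when a default does the wrong thing, the only authoritative documentation is often the Perl source itself.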


  • Congrats. So you think that since you can do it (as a clearly very tech-literate person) the government shouldn’t do anything? Do you think it’s because they all researched the issues with these companies and decided to actively support them, or is it because their apathy should be considered an encouragement to continue?

    You are so haughty you’ve circled back around to being libertarian. This is a genuinely terrible but unfortunately common take, entirely indistinguishable from the kind of shit you’d hear coming from a FAANG lobby group.