  • When I switched to Ubuntu, they simply had more up-to-date packages, and with two releases a year (sort of), the system kept pace with other software, which is a good thing for a machine I actually use. From then on, I just stayed on it, because I don’t reinstall my OS until something’s broken. I’ve been carrying the same install forward for a decade now.

    If I had to install a new desktop system, I’d probably go with Mint, for the same reason: more frequent software updates.

    Note that this is all for desktop (and some specialized systems). Servers are all running Debian, because stability is preferable and frequent software change is not what I want in those environments.


  • If made correctly (which is hilariously easy), it gives a clean install and uninstall process, handles potential conflicts over files shared with other packages/commands, supports dependencies out of the box, and with minimal work can be made easy for the user to update (even automatically, depending on the user’s choices) by setting up a repository (again, very easy for a dev to do). With the added value of authenticity checks before updating. (A minimal sketch of such a package is at the end of this comment.)

    All this in a standardized way that requires no tinkering, no compatibility hacks, etc., because all these checks are built in.

    Note that some of this probably applies to other system package management solutions; it’s not exclusive to .deb.
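
    As a minimal sketch of how little is actually needed (the package name, version, dependency, and maintainer below are made up): a working .deb can be not much more than a DEBIAN/control file sitting next to the files you want installed, e.g. my-tool_1.0.0/DEBIAN/control and my-tool_1.0.0/usr/bin/my-tool:

        Package: my-tool
        Version: 1.0.0
        Architecture: amd64
        Maintainer: Example Dev <dev@example.com>
        Depends: libc6 (>= 2.34)
        Description: hypothetical example tool
         Longer description goes here, indented by one space.

    Building it is a single dpkg-deb --build my-tool_1.0.0, and installing the result through apt is what gives you the dependency resolution, clean removal, and upgrade path mentioned above.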


  • Ubuntu’s online support (I mean, the size of the community) can be useful. And aside from snap and the “Ubuntu Advantage” thing, it’s mostly a more up-to-date vanilla Debian, which is extremely convenient because, well, Debian.

    It’s obviously good for people used to Debian, but it’s also great for others, because of the regular updates. But in fairness to your point, I’ve been thinking about moving to Mint, since it’s basically a de-snapped Ubuntu.




  • Until it doesn’t work. There’s a lot of subtlety, and at some point you’ll have to match what the OS provides. Even containers are not “run absolutely anywhere” but “run mostly anywhere”.

    That doesn’t change the point, of course; software that depends on the actual kernel/low-level libraries to provide something will be hard to get working in unexpected situations anyway, but the “silver bullet” argument irks me.


  • No, they hate flatpak, one of many options for distributing software, and not the only one even if you consider the “must run on many distros” restriction (which isn’t 100% true anyway, kinda like Java’s “write once, run anywhere”). There are other options, some more involved, some simpler, to do so.

    They didn’t say they hate devs; that’s on you, grabbing a feeble occasion to tell someone who voiced their opinion to “fuck off”.





  • You’re right, they aren’t Google. Not for lack of trying, though.

    You see posts throwing some shade at Mozilla, and your immediate reaction is “it feels almost coordinated”. Well, that may be. But it would be hard to distinguish a “coordinated attack” from a “that’s just what they’ve been doing, and there’s reporting on it” article, no? Especially when most of it can be fact-checked.

    In this particular case, those abandoned projects got picked up by others… sometimes. And sometimes not. But they were abandoned. There’s no denying that.

    If you want some more hot water for Mozilla, since you’re talking about privacy and security, you’d be interested in their recent shift on exactly those points. Sure, the PR is all about protecting privacy and users, but looking at the actual actions, the message is a bit more diluted. And there’s always a fair number of people ready to do the opposite of what you claim; namely, discarding all criticism because “Mozilla”, when the same criticism would be totally fair game aimed at other big companies.

    Being keen on maintaining user privacy, system security, and trust is not the same as picking a “champion” and sticking with it until the end. Mozilla has been doing shady things for half a decade now, and they should not get a free pass because they’re still the lesser evil for now.




  • It is perfectly possible to run anti-cheat that is roughly as good (or as bad, as it often turns out) without full admin privileges or kernel-level drivers. Coupled with server-side validation, which seems to be a dying breed, you’d weed out a ton of cheaters while only missing the most motivated of them. (A toy sketch of the server-side idea is at the end of this comment.)

    As someone who lurks around different communities (to some extent; Steam forums, reddit, lemmy, mastodon, and a few game-centered discord servers), the issue is not so much with anti-cheat for online play. It’s the nature of these pieces of software that is the issue. It would be different if the anti-cheat were also forced onto solo gameplay, but that is not the case here.

    (bonus points for systems that allow playing on non-protected servers, but that’s asking a bit too much from some publishers I suppose)
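
    To make the server-side validation point concrete, here’s a toy, heavily simplified sketch (the names and constants are made up, and real games track far more than position): the server never trusts the client’s reported state, it only accepts updates that were physically possible since the last one.

        # hypothetical server-side sanity check (Python)
        from dataclasses import dataclass
        import math

        MAX_SPEED = 7.5  # made-up game constant, units per second

        @dataclass
        class PlayerState:
            x: float
            y: float
            last_tick: float  # server timestamp of the last accepted update

        def validate_move(state: PlayerState, new_x: float, new_y: float, now: float) -> bool:
            """Accept the move only if it was reachable at MAX_SPEED; otherwise reject it."""
            elapsed = max(now - state.last_tick, 1e-3)
            distance = math.hypot(new_x - state.x, new_y - state.y)
            if distance / elapsed > MAX_SPEED:
                return False  # teleport/speed hack: keep the old, authoritative position
            state.x, state.y, state.last_tick = new_x, new_y, now
            return True

    None of that has to run on the player’s machine, let alone in their kernel.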


  • Aside from it being code you don’t want on your machine

    Code you don’t want on your machine, that sometimes has more permissions than you yourself have on your own files, that is completely opaque, and that has the legitimacy to keep up constant outgoing network traffic you can’t audit.

    Yes, aside from that, no reason at all. No problem with a huge risk to your privacy in exchange for moderate results that don’t particularly benefit you in the long run.

    (and all that is assuming that they’re not nefarious to begin with, which is almost impossible to prove)


  • systemd, as a service manager, is decent. Not necessarily a huge improvement for most use cases. (A minimal unit file sketch at the end of this comment shows that part.)

    systemd, the feature creep that decides to pull every single possible use case into itself to manage everything in one place, with quirks because making a “generic, do everything” piece of software is not a good idea, is not that great.

    systemd, the group of tools that decided to manage everything by rewriting everything from scratch, suffering from the same issues that were fixed decades ago just because “we can do better”, while changing all the well-known interfaces and causing a schism that leaves other software developers either doubling their workload or dropping support for half the landscape, is really stupid.

    If half the energy spent on the “systemd” ecosystem had been spent on existing projects and solutions that already addressed these same issues, it’s likely we’d be in a far better place. Alas, it’s a new ecosystem, so we spend a lot of energy getting back to the point we were at before. And it’s likely that when we get close to that, something new will show up and start the cycle again.
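
    To be fair to the service manager part, the declarative side is the bit that works well; a minimal unit file (the service name and paths here are made up) is just:

        [Unit]
        Description=Hypothetical example daemon
        After=network.target

        [Service]
        ExecStart=/usr/local/bin/example-daemon
        Restart=on-failure
        User=example

        [Install]
        WantedBy=multi-user.target

    Dropped into /etc/systemd/system/example.service and enabled with systemctl enable --now example.service, you get restarts, logging, and ordering handled for you. It’s everything beyond that scope that’s the problem.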


    • issues with model training sources
    • businesses sending their whole codebase to a third party (copilot etc.) instead of using local models
    • the time gain is not that substantial in most cases, as the actual “writing code” part is not the part that takes the most time; thinking about it and checking it is
    • “chatting” in natural language to describe something that has a precise spec is less efficient than just writing code for most tasks, as long as you’re half-competent. We’ve known that since customer/developer meetings have existed.
    • the dev has to actually be competent enough to review the changes/output. In a way, “peer reviewing” becomes mandatory; it’s long, can be tedious, and generated code really needs to be double-checked at every corner (talking from experience here; even a generated one-liner can have issues)
    • some businesses think LLM outputs are “good enough” and fire or move away the people who can actually do said review, leading to more issues down the line
    • actual debugging of non-trivial problems ends up sending me in a lot of directions; getting a useful output is unreliable at best
    • making new things will sometimes confuse LLMs, making them a waste of time at best, and sometimes producing even worse code
    • using code chatbots to help with common, menial tasks is irrelevant, as those tasks have already been done and sort of “optimized out” into libraries and reusable code. At best you could pull some of that into your own codebase, making it worse to maintain in the long term

    Those are the downsides I can think of off the top of my head, from having used AI coding assistants (mostly local solutions, for privacy reasons). There are upsides too:

    • sometimes, it does produce useful output where I only have to edit a few parts to make it work
    • local autocomplete is sometimes almost as useful as the regular contextual autocomplete
    • the chatbot turning short code into longer “natural language” explanations can sometimes act as a rubber duck to help with debugging

    Note the “sometimes”. I don’t have actual numbers, because tracking that would be, like, hell, but the times it does something actually impressive are rare enough that I still bother my coworker with it when it happens. For most of the downsides, it’s not even a matter of the tool becoming better; it’s the usefulness to begin with that’s uncertain. It does, however, come at a large cost (money, privacy in some cases, time, and apparently ecological too) that is not at all outweighed by the rare “gains”.