• 0 Posts
  • 330 Comments
Joined 3 years ago
Cake day: January 17th, 2022



  • I relatively recently (a year or so ago?) switched from Ubuntu to Debian.

    I felt like Ubuntu was bloating up and that, sadly, those decisions were driven by enshittification. So I went “back to basics” and I don’t regret it at all.

    I had the (wrong) preconception that Debian was “behind” or “slow” for “new” stuff, but the truth is, despite being “stable”, most of what I care about is already there, even for things like gaming in VR. For the rest, if I need something “edgy”, I can get the software through means other than the package manager.

    So… what made me change was a desire for more minimalism and the ability to test things safely (files backed up).


  • Maybe I misunderstood: the vulnerability was unknown to them, but the class of vulnerability, let’s say “bugs like that”, is well known and documented by the security community, isn’t it?

    My point being: if it’s previously unknown and reproducible (not just “luck”), that’s major; if it’s well known in other projects, even though unknown to this specific user, then it’s unsurprising.

    Edit: I’m not a security researcher, but I believe there are already a lot of tools doing static and dynamic analysis. IMHO it’d be helpful to know how those already perform versus the LLMs used here, namely across which dimensions (reliability, speed, coverage e.g. exotic programming languages, accuracy of reporting e.g. hallucinations, computational complexity and thus energy costs, openness, etc.) each solution is better or worse than the other. I’m always wary of “ex nihilo” demonstrations. Apologies if there is a benchmark against existing tools and I missed it.






  • I wouldn’t say blindly; rather, my heuristic is: the more long-term and popular a project is, the less I’ll bother.

    If I do get a random script from a random repository, though, rather than from, say, Debian’s official package manager with main or contrib sources, then I will check.

    If it’s another repository, say Firefox from Mozilla or Blender, then I won’t check the code, but I’ll make sure it genuinely comes from there: ideally not a mirror, or at least a mirror whose download has a checksum that gets validated (rough sketch below).

    So… investment in verifying trust is roughly proportional to how little I expect others to check.
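
    Roughly what I mean by “validated”, as a Python sketch using only the standard library; the file name and expected digest below are placeholders, not a real release:

    ```python
    import hashlib

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a file, reading it in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholders: the expected value should come from the upstream project
    # itself (its HTTPS site or signed checksum file), not from the mirror
    # that served the download.
    expected = "<published sha256 hex digest>"   # placeholder
    downloaded = "some-release.tar.xz"           # file fetched from a mirror, placeholder

    if sha256_of(downloaded) == expected:
        print("checksum matches the published value")
    else:
        print("checksum mismatch, do not install")
    ```

    Of course `sha256sum -c` does the same from the shell; the point is just that the reference hash has to come from somewhere I already trust, not from the same mirror as the file.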






  • I wouldn’t build anything significant on the RPi Zero and instead would try to build elsewhere, namely on a more powerful machine with the same architecture, or cross-build as others suggested.

    That being said, what’s interesting IMHO with container image building is that you can rely on layers. So… my advice would be to find an existing image that supports your architecture and layer on top of it. This way you only build on the RPi what is truly not available elsewhere.
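
    To make the layering idea concrete, a rough sketch with the Docker SDK for Python (pip package `docker`); the base image tag, the installed package, and the armv6 assumption (original Zero) are all illustrative, pick whatever is actually published for your board and stack:

    ```python
    import io
    import docker  # pip install docker

    # Everything heavy lives in the base image, which someone else already
    # built for this architecture; only the thin RUN layer is built locally.
    dockerfile = """
    FROM arm32v6/alpine:3
    RUN apk add --no-cache python3
    CMD ["python3"]
    """

    client = docker.from_env()
    image, logs = client.images.build(
        fileobj=io.BytesIO(dockerfile.encode()),  # Dockerfile only, no build context
        tag="zero-app:latest",
        rm=True,  # remove intermediate containers after the build
    )
    for entry in logs:
        if "stream" in entry:
            print(entry["stream"], end="")
    ```

    The same idea works with a plain Dockerfile and `docker build`, of course: the FROM line pulls layers someone else already built for your architecture, and only what sits on top of them gets built on the Pi.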


  • Didn’t watch the video… but the premise “The biggest barrier for the new Linux user isn’t the installer” is exactly why Microsoft is, sadly, dominating the end-user (not server) market.

    What Microsoft managed to do with OEMs is NOT to have an installer at all! People buy (or get, via their work) a computer and… use it. There is no installation step for the vast majority of people.

    I’m not saying that’s good, only that, strategy-wise, if the single metric is adoption rate, no installer is a winning strategy.



  • So… I did that in May 2023 for a holiday trip.

    I left with my RPi4 and a few gadgets but no Internet.

    There I built https://git.benetou.fr/utopiah/offline-octopus/ and my main takeaway is

    • you can build what is missing

    and, more importantly, the meta takeaway is

    • you need to iterate preparations

    because, just like first aid, you need to be actually ready when needed and knowledge changes over time. You need to actually try though: test your setup and yourself genuinely, otherwise it is intellectual masturbation.
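
    As an illustration of “test your setup”, the kind of pre-departure check I mean, as a Python sketch; the host, ports and service names are made up, not the actual offline-octopus setup:

    ```python
    import socket

    # Hypothetical checklist: services I expect the offline box to serve on
    # the local network (names, host and ports are examples only).
    CHECKS = [
        ("git",  "192.168.1.50", 3000),
        ("wiki", "192.168.1.50", 8080),
        ("dns",  "192.168.1.50", 53),
    ]

    def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Run this from another device on the LAN before leaving: every MISSING
    # line is something to fix while you still have Internet access.
    for name, host, port in CHECKS:
        status = "ok" if reachable(host, port) else "MISSING"
        print(f"{name:5s} {host}:{port:<5d} {status}")
    ```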

    Have fun!