• 2 Posts
  • 463 Comments
Joined 1 year ago
Cake day: June 9th, 2024


  • I don’t disagree, but if it’s a case where the janky file problem ONLY appears in Jellyfin but not Plex, then, well, jank or not, that’s still Jellyfin doing something weird.

    No reason why Jellyfin would decide the French audio track should be played every 3rd episode, or that it should just pick a random subtitle track when Plex isn’t doing it on exactly the same files.


  • If you share access to your media with anyone you’d consider even remotely non-technical, do not drop Jellyfin in their laps.

    The clients aren’t nearly as good as Plex’s, they’re not as universally supported as Plex, and the whole thing just has needs-another-year-or-two-of-polish vibes.

    And before the pitchfork crowd shows up: I’m using Jellyfin exclusively, but I also don’t have people using it who can’t figure out why half the episodes in a TV season pick a different language, or why the subtitles are sometimes English and sometimes German, or why some videos occasionally don’t have proper audio (L and R are swapped), and how to take care of all of those things.

    I’d also agree with your thought that Docker is the right way to go: you don’t need Docker Swarm, or Kubernetes, or whatever other nonsense for your personal Plex install, unless you want to learn those technologies.

    Install a base Debian via netinstall, install Docker, install Plex, done.
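
    A rough sketch of that same stack driven from the Docker SDK for Python, in case a script is easier to keep around than the docker run flags. The image is the official plexinc/pms-docker; the paths, timezone, and claim token are placeholders you’d swap for your own:

    ```python
    # Rough sketch of the "install docker, install plex" step via the Docker SDK
    # for Python (pip install docker). Paths, timezone, and the claim token are
    # placeholders; this is equivalent to a plain `docker run`.
    import docker

    client = docker.from_env()

    client.containers.run(
        "plexinc/pms-docker",                  # official Plex image
        name="plex",
        detach=True,
        restart_policy={"Name": "unless-stopped"},
        network_mode="host",                   # simplest way to keep discovery/DLNA working
        environment={
            "TZ": "Etc/UTC",
            "PLEX_CLAIM": "claim-XXXXXXXX",    # placeholder claim token
        },
        volumes={
            "/srv/plex/config": {"bind": "/config", "mode": "rw"},
            "/srv/media":       {"bind": "/data",   "mode": "ro"},
        },
    )
    ```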



  • Because they’re ancient, deprecated, and technically obsolete.

    For example: usenet groups are essentially unmoderated, which gives spammers, trolls, and bad actors free rein to do what it is they do. This was not a design consideration when usenet was being developed, because the assumption was that all the users would have a name, an email, and a traceable identity, so if you acted like a stupid shit, everyone already knew exactly who you were and where you worked/went to school, and could apply actual real-world social pressure to get you to stop being a stupid fuck.

    This, of course, does not work anymore, and it’s basically been the primary driver of why usenet has just plain died as a discussion forum: you just can’t have an unmoderated anything without it turning into the worst of 4chan, Twitter, and insert-nazi-site-of-choice-here, combined with a nonstop flood of spam and scams.

    So it died, everyone moved on, and I don’t think there’s really anyone who thinks the global usenet backbone is salvageable as a communications method.

    HOWEVER, you can of course run your own NNTP server, limit access via local accounts, and simply not take the big global feed. It’s useful as a protocol, but then, at that point, why use NNTP over forum software, or Lemmy (even if it’s not federating), or whatever?
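
    For the protocol side, this is roughly what talking to that kind of private server looks like from Python’s standard-library nntplib (present through Python 3.12, removed in 3.13). The host, account, and group names here are made up:

    ```python
    # Minimal NNTP client sketch against a hypothetical private, non-federated
    # server with local accounts only. Uses the stdlib nntplib module (available
    # through Python 3.12; removed in 3.13). Host, credentials, and group names
    # are placeholders.
    import nntplib

    with nntplib.NNTP("news.example.lan") as srv:
        srv.login("alice", "hunter2")               # local account, no public feed

        _, groups = srv.list()                      # whatever groups exist locally
        for g in groups:
            print(g.group)

        _, count, first, last, name = srv.group("local.general")
        print(f"{name}: {count} articles ({first}-{last})")

        _, overviews = srv.over((last - 9, last))   # headers of the last 10 posts
        for number, over in overviews:
            print(number, over["subject"])
    ```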


  • It’s probably fairer to say, ‘It’s hard for me to get into’.

    Rodents and animals like pigs and cows and horses and deer and goats and such are primary seed spreaders, and if you’ve ever dealt with a rat or a pig or goat, you know there’s absolutely nothing they can’t eat: plants, fruits, wood, metal…

    We’re bad at it, but shockingly humans aren’t the best at everything ;)

    (Also: be careful, because the pineapple is just as interested in eating you as you are in eating it.)






  • So, this is a ~15 year old laptop?

    The first two things that immediately come to mind when you’re kernel panicking are bad RAM and bad CPU temperatures.

    Thermal paste doesn’t last forever, and it’s worth checking whether your CPU or GPU is overheating, and repasting if so.

    And, as always, a memtest is a quick and easy step to rule that out - I’d say half the “weird crashes” I’ve ever seen end up being bad RAM, and, well, at least it’s cheap and easy to replace?
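
    For the temperature half, a Linux install will tell you without any extra tools, since the kernel exposes its thermal zones in sysfs. A quick sketch (zone names vary per machine):

    ```python
    # Quick-and-dirty temperature check on Linux before deciding whether a
    # repaste is worth it: read the kernel's thermal zones from sysfs.
    # Values are in millidegrees C; zone names vary from machine to machine.
    from pathlib import Path

    for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
        label = (zone / "type").read_text().strip()
        temp_c = int((zone / "temp").read_text()) / 1000
        print(f"{label:<20} {temp_c:5.1f} °C")
    ```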







  • competition in the x86 OS space back then

    Oh yeah: there were a stuuuupid amount of OSes.

    On the DOS side you had MS, IBM, and Digital Research.

    You also had a bunch of commercial UNIXes: NeXTSTEP, Solaris, Xenix/SCO, etc., along with Linux and a variety of BSDs. There were also a ton of single-vendor Sys4/5 implementations that existed so the vendor could sell their hardware (which was x86, not something more exotic); they’ve vanished to time because that business model only worked for a couple of years, if that.

    There were of course two different Windows lines (NT and 9x), OS/2 (which could also run some Windows apps), and a whole host of oddballs like QNX, BeOS, Plan 9, or even CP/M-86.

    It was a lot less of a stodgy Linux-or-Windows monoculture, and I miss it.


  • Seconding that that’s not how things were.

    The lovely thing with legacy architectures (6502, 68k, x86, z80, etc.) that were in use during that time is that they were very very simple: all you needed to do was put executable code on a ROM at the correct memory address, and the system would boot it.

    There wasn’t anything required other than making sure the code was where the CPU would go looking for it, and then it’d handle it from there.

    Sure, booting an OS meant that you needed whatever booted the CPU to then chain into the OS bootloader and provide all the things the OS was expecting (BIOS functions, etc.), but the actual bootstrap from ‘off’ to ‘running code’ was literally just an EPROM burner away (there’s a toy sketch of that at the end of this comment).

    It’s a lot more complicated now, but users would, for the most part, not tolerate removing the ability to boot any OS they feel like, so there’s enough pressure that locked shit won’t migrate down to all consumer hardware.
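
    The toy sketch mentioned above: a few lines of Python standing in for what a 6502-era machine does at power-on, which is to read the two reset-vector bytes from ROM and start executing wherever they point. The vector address is the real 6502 one; the ROM contents are made up:

    ```python
    # Toy model of a 6502-style power-on: the CPU reads a fixed pair of bytes
    # (the reset vector at $FFFC/$FFFD) from whatever ROM is mapped there and
    # starts executing at that address. The ROM contents are made up.
    ROM_BASE = 0xE000
    rom = bytearray(0x2000)                        # 8 KiB ROM mapped at $E000-$FFFF

    entry = 0xE000                                 # where our "firmware" begins
    rom[0xFFFC - ROM_BASE] = entry & 0xFF          # reset vector, low byte
    rom[0xFFFD - ROM_BASE] = (entry >> 8) & 0xFF   # reset vector, high byte

    def read(addr: int) -> int:
        """Bus read; only the ROM window is modelled here."""
        return rom[addr - ROM_BASE]

    # The entire "boot process": fetch the vector, load the program counter, go.
    pc = read(0xFFFC) | (read(0xFFFD) << 8)
    print(f"reset vector -> PC = ${pc:04X}")       # prints $E000
    ```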


  • basic needs of the average office and home user

    I mean, ARM chips have been at that level of performance for at least a decade by now. Normal people’s most strenuous activity is watching YouTube, which every cellphone since, what, 2005? could do.

    power consumption in relation to computational power

    The thing is that’s very much not the actual situation for most people.

    Only Apple really has high performance, very low power ARM chips you can’t really outclass.

    Qualcomm’s stuff is within single-digit percentage points of the current-gen AMD and Intel chips in power usage, performance, and battery life.

    I mean, that’s a FANTASTIC achievement for a 1st gen product, but like, it’s not nearly as good as it should be.

    The problem is that the current tradeoff is that huge amounts of the software you’ve been using just does not work, and a huge portion of it might NEVER work, because nobody is going to invest time in making it behave.

    (Edit: assuming the software you need doesn’t work in the emulation layer, of course.) You might get Photoshop, but you won’t get that version of CS3 you actually own updated. You might get new games, but you probably won’t get that 10 year old one you like playing twice a year. And so on.

    The future might be ARM, but only Apple has a real hat in the ring, still.

    (Please someone make better ARM chips than Apple, thanks.)