  • The only true “roadblock” I have experienced was when running on the Raspberry Pi, where the CPU was too slow to do any transcoding at all, and the memory was too small (and non-upgradable) to run much else at the same time.

    As soon as I had migrated to a proper desktop (the i7-920) I could run basically everything I would regularly want. And from then on, upgrading was a piece of cake: shut the machine down, unplug, swap the parts, plug in, turn on. Linux happily booted up with no trouble on the new hardware.

    Since my first server used a classic BIOS and the later machines were UEFI, that step required a reinstall… But after the reinstall, I just copied all the contents of the root partition over, and it just worked.

    The main limiting factors for me have been the amount of memory, the number of SATA connectors for disks, and whether the hardware supported hardware transcoding.

    For memory, ensure the motherboard has 4 memory sockets; that makes it easy to start out with a bit of memory and upgrade later. For example, you could start out with 2x 4GB sticks for a total of 8GB, and then later, when you feel like you need more, buy 2x 8GB sticks. Now you have a total of 24GB.

    For SATA ports, ensure the motherboard has enough ports for your needs. I would also strongly recommend looking for a motherboard with at least 2 PCIe x16 slots, as that will allow you to add many more SATA or SAS ports via a SAS card.

    Hardware transcoding is far from a must. It’s only really necessary if you have a lot of media in formats the client devices don’t support. 95% of my library is h.264 in 1080p, which is supported on pretty much everything, so it will play directly and not require any transcoding. Most 1080p media is encoded in h.264, so it’s usually a non-issue. 4K media, however, often comes in HEVC (h.265), which many devices do not support. These files will require transcoding to be playable on such devices, but a CPU can still transcode them using “software transcoding”; it’s just much slower and less responsive. So I would consider hardware transcoding a nice convenience, but definitely not a must, and it depends entirely on the encoding of the media library.
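
    If you want to know where your own library stands, you can just count codecs. Below is a minimal sketch in Python, assuming ffprobe (part of FFmpeg) is installed; the /srv/media path and the file extensions are only placeholders for your setup:

    ```python
    #!/usr/bin/env python3
    """Count which video codecs a media library uses, to estimate how
    much of it would actually need transcoding. Requires ffprobe (ships
    with FFmpeg); the library path and extensions are just examples."""
    import subprocess
    from collections import Counter
    from pathlib import Path

    LIBRARY = Path("/srv/media")          # stand-in for your library
    EXTENSIONS = {".mkv", ".mp4", ".avi"}

    def video_codec(path: Path) -> str:
        # Ask ffprobe for the codec name of the first video stream.
        result = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=codec_name",
             "-of", "default=noprint_wrappers=1:nokey=1", str(path)],
            capture_output=True, text=True,
        )
        return result.stdout.strip() or "unknown"

    codecs = Counter(
        video_codec(f)
        for f in LIBRARY.rglob("*")
        if f.suffix.lower() in EXTENSIONS
    )

    for codec, count in codecs.most_common():
        print(f"{codec}: {count} files")
    ```

    If nearly everything comes back as h264, hardware transcoding will rarely trigger anyway; a large share of hevc files is the strongest argument for buying hardware that can transcode it.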

    EDIT: Oh, I just remembered… Beware of non-standard hardware, for example motherboards from Dell and IBM/Lenovo. These often come with non-standard fan mounts and headers, which means you can’t replace the fans. They also often have non-standard power supplies in non-standard form factors, which means that if the power supply dies it’s nearly impossible to replace, and when you upgrade your motherboard you are likely forced to replace the power supply as well; and since its size isn’t standard, the new power supply will not fit in the case… Many of their cases also use non-standard motherboard mounts, which means you are forced to replace the case when upgrading the motherboard… You can often find companies selling their old workstations dirt-cheap, which can be a great way to get started, but often these workstations are so non-standard that you practically can’t upgrade them… Often the only standard components in these are hard drives, SSDs, optical disc drives, memory, and any installed PCIe cards.


  • As long as it’s capable of booting into Linux, then you can start building a homelab…

    Initially I had a 2-bay Synology NAS, and a Raspberry Pi 3B… It was very modest, but enough to stream media to my TV and run a bunch of different stuff in docker containers.

    In my house, computer hardware is handed down. I buy something to upgrade my desktop, and whatever falls off that machine is handed down to my wife or my daughter’s machines, then finally it’s handed down to the server.

    At some point my old Core i7-920 ended up in the server. This was plenty to upgrade the server to running Kubernetes with even more stuff, and even software transcoding some media for streaming. Running BTRFS gave me the flexibility to add various used disks over time.

    At some point the CPU went bad, so I bought an upgrade for my desktop and handed my old CPU down the chain, which freed up an Intel Core i5-2400F for the server. At this point storage and memory started to become the main limiting factors, so I added a PCIe SAS card in IT mode to connect more disks.

    At that point my wife needed a faster CPU, so I bought a newer used CPU for her, and her old Intel Core i7-3770 was handed down to the server. That gave quite a boost in raw CPU power.

    I ended up with a spare Intel Core i5-7600 because the first motherboard I bought for my wife was dead. I found that I could buy a matching motherboard very cheaply, so I upgraded the server with it, which opened up proper hardware transcoding.

    I have since added 2 Intel NUCs to have a highly available control plane for my cluster.

    This is where my server is at right now, and it’s way beyond sufficient for the media streaming, photo library, various game servers, a lot of self-hosted smart home stuff, and all sorts of other random bits and pieces I want to run.

    My suggestion would be to start out by finding the cheapest possible option, and then learn what your needs are.

    What do you want your server to do? What software do you want to run? What hardware do you want to connect to it? All of this will evolve as you start using your server more and more, and you will learn what you need to buy to achieve what you want.


  • The OP made the argument that Zuckerberg wanted to know their passwords, such that if the users reused the same passwords elsewhere, then he would be able to log in there and check out their accounts.

    For example he could have seen a profile he was interested in, nabbed their password and looked into their email.

    Not that he would have needed their password to access their Facebook accounts, of course; he had godmode there and could just have accessed those accounts directly.

    I have not heard this rumor before, though I wouldn’t be completely surprised if it was true.


  • Are these restrictions set out by the ISP or the dorm?

    If you don’t do business with the ISP, then you don’t have to agree to and follow their terms.

    So as long as the dorm doesn’t have rules against setting up your own WiFi, you should be well within your rights to purchase an Internet connection from another provider. But since you are likely not allowed to get your own line installed, you are probably restricted to ISPs that provide service over the cellular network.

    Of course using a cellular connection will give you worse latencies for online games, but at least you can have your own WiFi with low latency for your VR.

    If you want to be nice, you could run as much of your internal network over ethernet as possible, so that you congest the airwaves as little as possible: perhaps only running the VR headset over WiFi, and maybe even only enabling the WiFi radio when you want to play VR.

    To lower the chance of someone complaining about your WiFi, you should configure it as a “hidden network”, such that it doesn’t broadcast an SSID, and therefore doesn’t show up when people are looking for WiFi networks to connect to.


    It kinda depends a bit on the user’s background… For someone who is used to Windows and how computers in general work, I would probably agree with you.

    But for people who are more phone/tablet native, I don’t think something like Fedora Silverblue is actually that bad of a choice. It comes natively with GNOME 3, which isn’t too dissimilar to Android or iOS. Updates are installed in one fell swoop with a reboot, just like on Android or iOS. Flatpaks behave much more like apps on Android or iOS: they are self-contained and don’t affect each other.

    I just set up my daughter’s (9 y/o) first school laptop and picked Fedora Silverblue. Apart from learning about the save icon and how to store files in a filesystem, she was pretty much instantaneously functional, having most of her prior computing experience on an Android phone.


  • I really don’t see much benefit to running two clusters.

    I’m also running single clusters with multiple ingress controllers both at home and at work.

    If you are concerned with blast radius, you should probably first look into setting up Network Policies to ensure that pods can’t talk to things they shouldn’t.
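
    As a starting point, a common pattern is a default-deny ingress policy per namespace, which you then open up only for the traffic you actually want. Here’s a minimal sketch using the official kubernetes Python client; the “media” namespace is just a placeholder, and the same policy is usually written as a YAML manifest instead:

    ```python
    # Minimal sketch: apply a default-deny ingress NetworkPolicy to one
    # namespace using the official kubernetes Python client.
    # The namespace name below is just an example.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    deny_all = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector = all pods
            policy_types=["Ingress"],  # no ingress rules listed = deny all ingress
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="media", body=deny_all
    )
    ```

    Keep in mind that NetworkPolicies are only enforced if your CNI supports them (Calico and Cilium do, for example); otherwise they are silently ignored.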

    There is of course still the risk of something escaping the container, but the risk is rather low in comparison. There are options out there for hardening the container runtime further.

    You might also look into adding things that can monitor the cluster for intrusions or prevent them. Stuff like running CrowdSec on your ingresses, and using Falco to watch for various malicious behaviour.


  • So as far as I understand, you have

    • Outer router (Comcast), which has WiFi enabled
    • Inner router (your own), which has WiFi enabled, and further meshes with other WiFi mesh devices (or is the mesh separate?)
    • A plain switch, for stuff you want cabled and fast

    Is that correct?

    Why not get the WiFi in the Comcast router disabled, and use your inner network exclusively, such that both WiFi and ethernet devices are on the same network?

    That’s what I did with my network, and I even got the ISP to put their modem/router into bridge mode, so it’s completely transparent.