• 1 Post
  • 35 Comments
Joined 2 years ago
Cake day: September 10th, 2023

  • I would price it out vs. an Aoostar WTR Max ($699 with a 6-drive case, a decent AMD mobile CPU, and a lot of expandability options): https://aoostar.com/products/aoostar-wtr-max-amd-r7-pro-8845hs-11-bays-mini-pc?variant=50067345932586 I bought one of those for an NVR setup. It runs Proxmox as part of my cluster, hosting a single VM that uses most of the system resources, with a Hailo-8 and an OCuLink-attached Intel GPU passed through to it, writing to 6 drives in a raidz. I also have all the NVMe slots filled for things not directly related to this project, like being another node in my Ceph cluster (rough sketch of the host-side layout below). It runs surprisingly cool and has been very stable so far, which left me impressed, especially with every NVMe slot and drive bay loaded (Exos drives). I'm not sure how loud it is because it sits on a shelf in a server rack with lots of other noisy gear in my basement.

    The reasons not to go this route: your priced-out option is way cheaper, or you're uncomfortable with the erratic nature of BIOS updates from budget Chinese manufacturers.
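    For a rough sense of the host-side layout mentioned above, it boils down to something like this on the Proxmox node (the disk IDs, VM ID, and PCI addresses are placeholders, not my literal config):

    ```sh
    # 6-bay raidz pool on the Proxmox host (placeholder disk IDs)
    zpool create nvrpool raidz \
      /dev/disk/by-id/ata-EXOS_DRIVE_1 /dev/disk/by-id/ata-EXOS_DRIVE_2 \
      /dev/disk/by-id/ata-EXOS_DRIVE_3 /dev/disk/by-id/ata-EXOS_DRIVE_4 \
      /dev/disk/by-id/ata-EXOS_DRIVE_5 /dev/disk/by-id/ata-EXOS_DRIVE_6

    # pass the Hailo-8 and the OCuLink-attached Intel GPU through to the NVR VM
    # (VM ID 100 and the PCI addresses are examples -- check yours with lspci)
    qm set 100 --hostpci0 0000:01:00.0,pcie=1   # Hailo-8 accelerator
    qm set 100 --hostpci1 0000:02:00.0,pcie=1   # Intel GPU
    ```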

  • You need to create a docker-compose.yml file. I tend to put everything in one directory per container, so I only have to move that directory somewhere else if I want to move the container to a different machine. Here's an example I use for Picard, showing NFS mounts and local bind mounts with paths relative to the directory the docker-compose.yml lives in. Put this in a directory, create the local bind-mount dirs in that same directory, adjust YOURPASS and the mounts/NFS shares, and it will keep working wherever you move the directory, as long as the host has Docker and an image is available for that system's architecture. (A quick usage sketch follows the compose file.)

    ```yaml
    version: '3'

    services:
      picard:
        image: mikenye/picard:latest
        container_name: picard
        environment:
          KEEP_APP_RUNNING: 1
          VNC_PASSWORD: YOURPASS
          GROUP_ID: 100
          USER_ID: 1000
          TZ: "UTC"
        ports:
          - "5810:5800"
        volumes:
          - ./picard:/config:rw
          - dlbooks:/downloads:rw
          - cleanedaudiobooks:/cleaned:rw
        restart: always

    volumes:
      dlbooks:
        driver_opts:
          type: "nfs"
          o: "addr=NFSSERVERIP,nolock,soft"
          device: ":NFSPATH"
      cleanedaudiobooks:
        driver_opts:
          type: "nfs"
          o: "addr=NFSSERVERIP,nolock,soft"
          device: ":OTHER NFSPATH"
    ```
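    Using it is basically this (example paths; older Docker installs use `docker-compose` instead of `docker compose`):

    ```sh
    cd ~/containers/picard      # wherever you keep the directory
    mkdir -p picard             # the local bind-mount dir referenced in the compose file
    docker compose up -d

    # moving the whole thing to another machine later:
    docker compose down
    rsync -a ~/containers/picard/ otherhost:~/containers/picard/
    ssh otherhost 'cd ~/containers/picard && docker compose up -d'
    ```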

  • Jellyfin has a spot for each library folder to specify a shared network folder, except everything just ignores it and streams from Jellyfin over HTTPS instead. Direct streaming should play from the specified network source, or at least be easily configurable to do so, for situations where the files sit on a NAS separate from the Docker instance. That way you avoid streaming the data from the NAS to the Jellyfin container on a different computer and then back out to the third computer/phone/whatever that is the actual client. This matters when the NAS has a beefy network connection but the virtualization server has much less, or is sharing it among many VMs and containers (e.g. I have 10 gig networking on my NAS and 2.5 gig on my virtualization servers, currently hamstrung to 1 gig while I wait for a 2.5 gig switch to show up). The correct settings to do this right are built into Jellyfin, and yet they snatched defeat from the jaws of victory (a common theme for Jellyfin, unfortunately).
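    To illustrate the data path I mean (the IP and paths below are placeholders): using the shared network folder would let a client open the file straight off the NAS, instead of everything funneling through the Jellyfin host's NIC.

    ```sh
    # what direct access to the library share would look like from a client on the LAN
    mount -t nfs 10.0.0.10:/export/media /mnt/media
    mpv /mnt/media/movies/some-movie.mkv

    # what actually happens today:
    # NAS (10 gig) -> Jellyfin host (1/2.5 gig NIC) -> client, all over HTTP(S)
    ```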