

Almost none now that I automated updates and a few other things with Kestra and Ansible. I need to figure out alerting in Wazuh and then it will probably drop to none.


What is KPW4?
That's not the feature I would port to Paperless. Paperless needs an O counter lol.


50 watts is maybe half of one of my 10 gig switches…
More like he buys a Powerball ticket in his country and the numbers win the equivalent prize in the lucky guy's country.


I am running Proxmox at a moderately sized corp. The lack of a real support contract almost kills it, which is too bad because it is a decent product.


Just came here to say this: it works on a $10-a-year RackNerd VPS for me, no problem. Matrix chugs on my much bigger VPS, although it is sharing that with a bunch of other things; overall it should have much more resources.


Those are puny mortal numbers… my backup nas is more than twice that…


I use rss-bridge for the popular stuff, but I've found rss-funnel to be nicer for creating my own scrapes (mostly taking RSS feeds that link to the website instead of the article and adding a link to the article mentioned on the website): https://github.com/shouya/rss-funnel


Pretty sure that title is firmly held by McAfee, even now.


Pretty much this. I don't even bother with watchtower anymore. I just run this script from cron, pointed at the directory where I keep the directories of my active docker containers and their compose files:

```shell
#!/bin/sh
for d in /home/USERNAME/stacks/*/; do
  (cd "$d" && docker compose pull && docker compose up -d --force-recreate)
done

for e in /home/USERNAME/dockge/; do
  (cd "$e" && docker compose pull && docker compose up -d --force-recreate)
done

docker image prune -a -f
```
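To schedule it, a crontab entry along these lines works; the script path and log location here are hypothetical examples, not anything from my actual setup:

```shell
# Run the update script nightly at 04:30; adjust paths to your setup.
30 4 * * * /home/USERNAME/bin/update-stacks.sh >> /home/USERNAME/update-stacks.log 2>&1
```

Logging the output somewhere is worth it so you can see which pull broke a container the next morning.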


Does yours have 8 SATA ports or dual external SFF-8088 ports per chance, and more RAM?
Never saw that on WireGuard once I found the better connections for my location, weird.
Because if you use relative bind mounts you can move a whole docker compose set of containers to a new host with docker compose stop, then rsync it over, then docker compose up -d.
Portability and backup are dead simple.
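Spelled out, the move is just three commands. This is a dry-run sketch that only prints the plan so you can review it first; the stack path and hostname are made-up placeholders:

```shell
#!/bin/sh
# Sketch of migrating a compose stack that uses only relative bind mounts.
# STACK and DEST are hypothetical placeholders -- adjust to your setup.
STACK="/home/USERNAME/stacks/picard"
DEST="newhost"

# Build the plan: stop the stack, copy the whole directory (compose file
# plus bind-mount data), then bring it up on the destination host.
PLAN="cd $STACK && docker compose stop
rsync -a $STACK/ $DEST:$STACK/
ssh $DEST 'cd $STACK && docker compose up -d'"

echo "$PLAN"
```

Stopping before the rsync matters: copying live container data (databases especially) can leave you with an inconsistent copy on the new host.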
You need to create a docker-compose.yml file. I tend to put everything in one dir per container, so I just have to move the dir somewhere else if I want to move that container to a different machine. Here's an example I use for Picard, with examples of NFS mounts and local bind mounts using paths relative to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind mount dirs in that same directory, and adjust YOURPASS and the mounts/NFS shares, and it will keep working wherever you move the directory, as long as the host has docker and an image available for its architecture.
```yaml
version: '3'
services:
  picard:
    image: mikenye/picard:latest
    container_name: picard
    environment:
      KEEP_APP_RUNNING: 1
      VNC_PASSWORD: YOURPASS
      GROUP_ID: 100
      USER_ID: 1000
      TZ: "UTC"
    ports:
      - "5810:5800"
    volumes:
      - ./picard:/config:rw
      - dlbooks:/downloads:rw
      - cleanedaudiobooks:/cleaned:rw
    restart: always

volumes:
  dlbooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":NFSPATH"
  cleanedaudiobooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":OTHER NFSPATH"
```
dockge is amazing for people who see the value in a GUI but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like Portainer does. If you decide you don't like dockge, you just go back to the CLI and do docker compose up -d --force-recreate.


Jellyfin has a spot for each library folder to specify a shared network folder, except everything just ignores the shared network folder and has Jellyfin stream it over HTTPS. Direct streaming should play from the specified network source, or at least be easily configurable to do so, for situations where the files are on a NAS separate from the docker instance, so that you avoid streaming the data from the NAS to the Jellyfin docker image on a different computer and then back out to the third computer/phone/whatever that is the client. This matters where the NAS has a beefy network connection but the virtualization server has much less, or is sharing it among many VMs/docker containers (e.g. I have 10 gig networking on my NAS and 2.5 gig on my virtualization servers, currently hamstrung to 1 gig while I wait for a 2.5 gig switch to show up). They have the correct settings to do this right built into Jellyfin, and yet they snatched defeat from the jaws of victory (a common theme for Jellyfin, unfortunately).
Sorry about that, my reply was from my phone and therefore terrible. Here’s the app: https://github.com/louislam/dockge
That’s the dude who was butt hurt about something this dude did: https://github.com/iamadamdev/bypass-paywalls-chrome
and so forked it and arguably does a better job, lol.
I would price it out vs an Aoostar WTR Max ($699 with a 6-drive case, a decent AMD mobile CPU, and a lot of expandability options): https://aoostar.com/products/aoostar-wtr-max-amd-r7-pro-8845hs-11-bays-mini-pc?variant=50067345932586

I bought one of those for an NVR setup. It runs Proxmox in my cluster, hosting a single VM that uses most of the system resources, has a Hailo-8 passed to it plus an Intel GPU passed through via OCuLink, and writes to 6 drives in a raidz. I also have the NVMe slots all filled for various things unrelated to this project directly, like being another node in my Ceph cluster. It runs surprisingly cool and very stable so far, leaving me very impressed, especially since I have it totally loaded in all NVMe slots and hard drive slots (Exos drives). I'm not sure how loud it is because it's in a server rack on a shelf with lots of other noisy gear in my basement.
The reasons not to go this route: your priced option is way cheaper, or you are uncomfortable with the erratic nature of cheap Chinese manufacturer BIOS updates.