• 0 Posts
  • 24 Comments
Joined 2 years ago
Cake day: September 25th, 2023

  • lorentz@feddit.it to Selfhosted@lemmy.world · Testing vs Prod · 2 months ago

    I don’t have a testing environment, but essentially all my services run in Docker, saving their data in a directory mounted from the local filesystem. The Dockerfile reads the SHA version of the image from an env file. I have a shell script (sketched below) which:

    1. Triggers a new btrfs snapshot of the volume containing everything
    2. Pulls the new Docker images and stores their hashes in the env file
    3. Restarts all the containers.
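
    A minimal sketch of what that script could look like, assuming a Docker Compose project under /srv/apps, an .env file that pins image digests, and the data on a btrfs volume mounted at /srv (all paths, variable names and images below are placeholders, not my exact setup):

    #!/bin/sh
    set -eu

    DATA_VOL=/srv                    # btrfs volume holding the service data (assumption)
    SNAP_DIR=/srv/.snapshots         # read-only snapshots are kept here (assumption)
    COMPOSE_DIR=/srv/apps            # compose project whose .env pins image digests (assumption)

    # 1. Snapshot the volume containing everything before touching the containers.
    btrfs subvolume snapshot -r "$DATA_VOL" "$SNAP_DIR/pre-update-$(date +%Y%m%d-%H%M%S)"

    # 2. Pull the new image and record its digest in the env file.
    docker pull nextcloud:stable                                  # placeholder image
    DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' nextcloud:stable)
    sed -i "s|^NEXTCLOUD_DIGEST=.*|NEXTCLOUD_DIGEST=$DIGEST|" "$COMPOSE_DIR/.env"

    # 3. Recreate the containers so they pick up the new digest.
    cd "$COMPOSE_DIR" && docker compose up -d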

    If a new Docker version is broken, rolling back is as simple as copying the old version into the env file and recreating the container. If data gets corrupted, I can just copy the last working state from an old snapshot.

    The whole OS is on a btrfs volume which is snapshotted regularly, so ideally if an update fucks it up beyond recovery I can always boot from a rescue image and restore an old snapshot. But I honestly feel this is an extra precaution: in all the years I’ve been running Debian on my computers, it has never become unbootable.


  • If you want to encrypt only the data partition, you can use an approach like https://michael.stapelberg.ch/posts/2023-10-25-my-all-flash-zfs-network-storage-build/#encrypted-zfs to unlock it at boot.

    TL;DR: store half of the decryption key on the computer and the other half online, and write a script that fetches the second half at boot and decrypts the drive. There is a time window where a thief could decrypt your data before you remove the online key, if they connect your computer to the network, but depending on your threat model that can be acceptable. You can also decrypt the root partition with a similar approach, but you need to store the script in the initramfs and it is not trivial.
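
    A minimal sketch of that unlock script, assuming a LUKS-encrypted data partition; the device name, mapper name, URL and key path are placeholders, not the ones from the linked post:

    #!/bin/sh
    set -eu

    LOCAL_HALF=/etc/keys/local-half                  # half of the key, stored on the server (assumption)
    REMOTE_HALF_URL=https://example.com/remote-half  # the other half, served over HTTPS (assumption)
    DEVICE=/dev/sdb1                                 # encrypted data partition (assumption)
    MAPPER=data                                      # name for the unlocked device (assumption)

    # Fetch the online half; if it has been taken offline, unlocking simply fails.
    REMOTE_HALF=$(curl -fsS "$REMOTE_HALF_URL")

    # Concatenate the two halves and feed the result to cryptsetup as the key.
    { cat "$LOCAL_HALF"; printf '%s' "$REMOTE_HALF"; } \
        | cryptsetup open "$DEVICE" "$MAPPER" --key-file=-

    mount "/dev/mapper/$MAPPER" /srv/data            # placeholder mount point

    Taking the remote half offline is then enough to block automatic decryption after a theft.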

    Another option I’ve seen suggested is storing the decryption key on a USB pen drive connected to the server with a long extension cord. The assumption is that a thief would unplug all the cables before stealing your server.



  • If security is one of your concerns, search for “HTTP client side certificates”. TL;DR: you can create certificates to authenticate the client and configure the server to allow connections only from trusted devices. It adds extra security because attackers cannot leverage known vulnerabilities in the services you host, since they are blocked at the HTTP level.

    It is a little difficult to find good and up-to-date documentation, but I managed to make it work with nginx. The downside is that Firefox mobile doesn’t support them, but Firefox on PC and Chrome have no issues.

    Of course you also want a server-side certificate; the easiest way is to get one from Let’s Encrypt.
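
    As a rough sketch of the certificate side, assuming openssl and placeholder file names and subjects, the client certificates could be created like this (the nginx part then boils down to pointing ssl_client_certificate at the CA file and setting ssl_verify_client on in the server block that terminates TLS):

    # 1. Create a private CA that the server will trust.
    openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
        -subj "/CN=my-private-ca" -keyout ca.key -out ca.crt

    # 2. Create a key and signing request for one client device.
    openssl req -newkey rsa:4096 -nodes \
        -subj "/CN=my-laptop" -keyout client.key -out client.csr

    # 3. Sign the client request with the CA.
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
        -CAcreateserial -days 825 -out client.crt

    # 4. Bundle key and certificate so the browser can import them.
    openssl pkcs12 -export -inkey client.key -in client.crt -out client.p12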


  • You can configure Caddy to listen on 80 and be a reverse proxy for both services, serving one site or the other depending on the name (you will need a second DNS entry pointing to the same IP). About not exposing 443: I really doubt that Caddy can automatically retrieve SSL certificates for you if it is not running on the default port. Check the documentation; if I’m right, either you open an empty website on 443 just for the sake of getting SSL certificates and manually configure the other port to use them, or you get the certificates via DNS validation (check the Let’s Encrypt documentation) and configure Caddy to use them.
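
    A minimal Caddyfile sketch of that name-based routing on port 80 (hostnames and upstream ports are placeholders; the http:// prefix keeps Caddy from trying to manage certificates on 443):

    http://site-a.example.com {
        reverse_proxy 127.0.0.1:8080
    }

    http://site-b.example.com {
        reverse_proxy 127.0.0.1:8081
    }

    If you later want HTTPS without exposing 443, the DNS challenge is the usual route, but it needs a Caddy build that includes the DNS provider plugin for your registrar.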


  • Could it be that the domain name has both IPv4 and IPv6 records, and depending on the network you reach one or the other? WireGuard can work over both protocols, but in my experience it doesn’t try both to see which one works (like browsers do). So if on the first try the DNS resolves to the “wrong” IP version, WireGuard cannot connect and doesn’t fall back to the alternative.
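
    A quick way to check, assuming the endpoint hostname is vpn.example.com (a placeholder):

    # Compare the address families the endpoint publishes.
    dig +short vpn.example.com A      # IPv4 record(s)
    dig +short vpn.example.com AAAA   # IPv6 record(s)

    # If both exist, pinning the WireGuard Endpoint to a literal IPv4 address
    # instead of the hostname sidesteps the lookup problem, e.g.:
    #   Endpoint = 203.0.113.10:51820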




  • A lot of technical aspects here, but IMHO the biggest drawback is liability. You would be offering free, internet-connected storage to a group of “random tech nerds”. Do you trust all of them to use it properly? Are you really sure that none of them will store and distribute illegal stuff with it? Do you know them in person, so you can point the police to them in case they come knocking at your door?


  • Yes, you can do it on your server with a simple iptables rule.

    I’m a little rusty, but something like this should work:

    iptables -t nat -A PREROUTING -d [your IP] -p tcp --dport 11500 -j DNAT --to-destination [your IP]:443

    You can find more information by searching for “iptables dnat”. What you are saying here is: in the PREROUTING chain of the nat table (i.e. before the kernel decides what to do with the packet), TCP connections to my IP on port 11500 must be forwarded to my IP on port 443.
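
    To double check that the rule is in place (plain iptables, nothing distro specific):

    iptables -t nat -L PREROUTING -n -v --line-numbers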


  • To automatically unlock encrypted drives I followed the approach described in https://michael.stapelberg.ch/posts/2023-10-25-my-all-flash-zfs-network-storage-build/#auto-crypto-unlock

    The password is split in half: one half on the server itself and the other half in a file on the web. During boot the server retrieves the second half via HTTP, concatenates the two halves and uses the result to unlock the drive. This way I can always remove the online key to block the automatic decryption.

    Another approach I’ve considered was to store the decryption keys on a USB drive connected with a long extension cable. The idea is that someone stealing your server likely won’t bother to grab the cables too.

    TPM is a different beast I haven’t studied yet, but my understanding is that it protects you in case someone steals your drives or tries to read them from another computer. As long as they are in your server, though, it will always decrypt them automatically. Therefore you delegate the safety of your data to all the software that starts at boot: your photos may still be fully encrypted at rest, so a thief cannot pull them off the disk directly, but if you have an open SMB share they can just boot your stolen server and copy them out from there.





  • I remember a blog post (which I cannot find right now) where the person split the decryption password in two: half stored on the server itself and half on a different HTTP server. An init script downloaded the second half to decrypt the drive. There is a small window of time, between when you realize the server is stolen and when you take the other half of the password offline, in which an attacker could decrypt your data. But if you want to protect against random thieves, this should be safe enough as long as the two servers are in different locations and not likely to be stolen together.


  • TPM solves a slightly different threat model: if you dispose of the drive, or if someone takes it out of your computer, it is fully encrypted and safe. But if someone steals your whole server, it can boot and decrypt the drive, so you have to trust that you have good passwords and protection for each service you run. Depending on what you want to protect against, this is either a great solution or suboptimal.





  • Yes, you are right, I already use DNS validation. It is just easier to request a single wildcard certificate for my domain and have all the subdomains I use for local services defined only in my local DNS. I cannot fully automate the certificate renewal because Namecheap requires allowlisting the IPs that may call its API, and my IP is dynamic, so renewing a single certificate saves me time. Also, the wildcard certificate is installed on a single machine, so it is not that I increase the attack surface much by not having different certificates for each virtual host.
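
    For reference, a single wildcard certificate via manual DNS validation can be requested roughly like this with certbot (example.com is a placeholder; the --manual flow means pasting a TXT record at the DNS provider on each renewal):

    # Request one certificate covering the bare domain and all subdomains.
    certbot certonly --manual --preferred-challenges dns \
        -d example.com -d '*.example.com'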