
  • Sure, I set it up in NixOS; this is the short form of that:

    1. Install Podman and passt + slirp4netns for networking
    2. Set up subuid and subgid: usermod --add-subuids 100000-165535 --add-subgids 100000-165535 johndoe
    3. I’m using quadlets, so we need to create those: $HOME/.config/containers/systemd/immich-database.container
    [Unit]
    Description=Immich Database
    Requires=immich-redis.service immich-network.service
    
    [Container]
    AutoUpdate=registry
    EnvironmentFile=${immich-config} # add your environment variables file here
    Image=registry.hub.docker.com/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0 # hash from the official docker-compose, has to be updated from time to time
    Label=registry
    Pull=newer # update to newest image, though this image is specified by hash and will never update to another version unless the hash is changed
    Network=immich.network # attach to the podman network
    UserNS=keep-id:uid=999,gid=999 # maps uid/gid 999 inside the container to the user running the service, so you can access the files in the volume without special handling; otherwise container root would map to your uid and uid 999 to some very high subuid you can't access without podman. Note: this modifies the image at runtime and may make the systemd service time out, so consider increasing the timeout on low-powered machines
    Volume=/srv/services/immich/database:/var/lib/postgresql/data # database persistence
    Volume=/etc/localtime:/etc/localtime:ro # timezone info
    Exec=postgres -c shared_preload_libraries=vectors.so -c 'search_path="$user", public, vectors' -c logging_collector=on -c max_wal_size=2GB -c shared_buffers=512MB -c wal_compression=on # also part of the official docker-compose, last time I checked anyway
    
    [Service]
    Restart=always
    

    $HOME/.config/containers/systemd/immich-ml.container

    [Unit]
    Description=Immich Machine Learning
    Requires=immich-redis.service immich-database.service immich-network.service
    
    [Container]
    AutoUpdate=registry
    EnvironmentFile=${immich-config} # same config as above
    Image=ghcr.io/immich-app/immich-machine-learning:release
    Label=registry
    Pull=newer # auto update on startup
    Network=immich.network
    Volume=/srv/services/immich/ml-cache:/cache # machine learning cache
    Volume=/etc/localtime:/etc/localtime:ro
    
    [Service]
    Restart=always
    

    $HOME/.config/containers/systemd/immich.network

    [Unit]
    Description=Immich network
    
    [Network]
    DNS=8.8.8.8
    Label=app=immich
    
    $HOME/.config/containers/systemd/immich-redis.container

    [Unit]
    Description=Immich Redis
    Requires=immich-network.service
    
    [Container]
    AutoUpdate=registry
    Image=registry.hub.docker.com/library/redis:6.2-alpine@sha256:eaba718fecd1196d88533de7ba49bf903ad33664a92debb24660a922ecd9cac8 # should probably change this to valkey at some point
    Label=registry
    Pull=newer # auto update on startup
    Network=immich.network
    Timezone=Europe/Berlin
    
    [Service]
    Restart=always
    

    $HOME/.config/containers/systemd/immich-server.container

    [Unit]
    Description=Immich Server
    Requires=immich-redis.service immich-database.service immich-network.service immich-ml.service
    
    [Container]
    AutoUpdate=registry
    EnvironmentFile=${immich-config} # same config as above
    Image=ghcr.io/immich-app/immich-server:release
    Label=registry
    Pull=newer # auto update on startup
    Network=immich.network
    PublishPort=127.0.0.1:2283:2283
    Volume=/srv/services/immich/upload:/usr/src/app/upload # I think you can put images here to import, though I've never used it
    Volume=/etc/localtime:/etc/localtime:ro # timezone info
    Volume=/srv/services/immich/library:/imageLibrary # here the images are stored once imported
    
    [Service]
    Restart=always
    
    [Install]
    WantedBy=multi-user.target default.target
    
    4. systemctl --user daemon-reload
    5. systemctl --user enable --now immich-server.service
    6. Enable linger so systemd user services run even when the user is logged off: loginctl enable-linger $USER
    7. Set up a reverse proxy like Caddy so you can access it at something simple like immich.mini-pc.localnet (a NixOS sketch of that follows below).
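
    Since this all lives in my NixOS config anyway, steps 1 and 7 can also be done declaratively. A minimal sketch using standard NixOS options, not my full config (the vhost name is the example from step 7):

    # configuration.nix: steps 1 and 7, declaratively
    virtualisation.podman.enable = true; # step 1; the nixpkgs podman package ships the rootless networking helpers, as far as I remember

    services.caddy = {
        enable = true;
        # step 7: the server container only publishes 127.0.0.1:2283 (see immich-server.container above)
        virtualHosts."immich.mini-pc.localnet".extraConfig = ''
            reverse_proxy 127.0.0.1:2283
        '';
    };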


  • Yeah, it works great and is very secure, but every time I create a new service it’s a lot of copy-paste boilerplate. Maybe I’ll put most of that into a Nix function at some point, but until then here’s an example n8n config, as loaded from the main NixOS file.

    I wrote this last night for testing purposes and just added comments. The config works, but n8n uses SQLite and probably needs some other stuff I haven’t had a chance to look into yet, so keep that in mind.
    Podman support in home-manager is also really new and doesn’t support pods (multiple containers sharing one loopback) and a few other things yet; most of that can be compensated for with extraPodmanArgs. Before this existed I used pure file definitions to write the quadlet/systemd configs (see the sketch after the config below), which was even more boilerplate, but also mostly copy-paste.

    Gaze into the boilerplate
    { config, pkgs, lib, ... }:
    
    {
        users.users.n8n = {
            # calculate sub{u,g}id using uid
            subUidRanges = [{
                startUid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
                count = 65536;
            }];
            subGidRanges = [{
                startGid = 100000 + 65536 * (config.users.users.n8n.uid - 999);
                count = 65536;
            }];
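            # worked example: for uid = 1000 this is 100000 + 65536 * (1000 - 999) = 165536,
            # so that user gets subuids 165536-231071; uid 1001 would start at 231072,
            # so adjacent users never get overlapping ranges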
            isNormalUser = true;
            linger = true; # start user services on system start; the first start after `nixos-rebuild switch` still has to be done manually for some reason though
            openssh.authorizedKeys.keys = config.users.users.root.openssh.authorizedKeys.keys; # lets the ssh keys that can log in as root log in as this user too
        };
        home-manager.users.n8n = { pkgs, ... }:
        let
            dir = config.users.users.n8n.home;
            data-dir = "${dir}/${config.users.users.n8n.name}-data"; # defines the path "/home/n8n/n8n-data" using evaluated home paths; could probably remove a lot of the redundant n8n definitions
        in
        {
            home.stateVersion = "24.11";
            systemd.user.tmpfiles.rules =
            let
                folders = [
                    "${data-dir}"
                    #"${data-dir}/data-volume-name-one" 
                ];
                formatted_folders = map (folder: "d ${folder} - - - -") folders; # format each path for systemd-tmpfiles so it gets created as a directory
            in formatted_folders;
    
            services.podman = {
                enable = true;
                containers = {
                    n8n-app = { # define a container; the service name is "podman-n8n-app.service", in case you need to make multiple containers depend on and run after each other
                        image = "docker.n8n.io/n8nio/n8n";
                        ports = [
                            "${config.local.users.users.n8n.listenIp}:${toString config.local.users.users.n8n.listenPort}:5678" # I'm using a self defined option to keep track of all ports and uids in a seperate file, these values just map to "127.0.0.1:30023:5678", a caddy does a reverse proxy there with the same option as the port.
                        ];
                        volumes = [
                            "${data-dir}:/home/node/.n8n" # the folder we created above
                        ];
                        userNS = "keep-id:uid=1000,gid=1000"; # n8n stores files as non-root inside the container so they end up as some high uid outside and the user which runs these containers can't read it because of that. This maps the user 1000 inside the container to the uid of the user that's running podman. Takes a lot of time to generate the podman image for a first run though so make sure systemd doesn't time out
                        environment = {
                            # MYHORSE = "amazing";
                        };
                        # there's also an environmentfile option for secret management, which works with sops if you set the owner of the secret/secret template
                        extraPodmanArgs = [
                            "--pull=newer" # always pull newer images when starting, I could make this declaritive but I haven't found a good way to automagically update the container hashes in my nix config at the push of a button.
                        ];
                        # a few more options exist that I didn't need here
                    };
                };
            };
        };
    }
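
    For comparison, the pure file definitions I mentioned above looked roughly like this; a hypothetical container just to show the shape, with quadlet turning the file into a systemd service exactly as in the Immich comment:

    # home-manager: write the quadlet by hand as a plain file under ~/.config/containers/systemd/
    xdg.configFile."containers/systemd/myapp.container".text = ''
        [Container]
        Image=docker.io/library/nginx:alpine # stand-in image
        PublishPort=127.0.0.1:8080:80

        [Install]
        WantedBy=default.target
    '';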
    
    

  • DNS turns a domain name into an IP address, which is then used to send data through your router. A DNS server is the server that does this conversion: www.google.com turns into an IP such as 1.2.3.4 (not Google’s actual IP).

    There are many DNS servers. Normally your local devices use your router as their DNS server; the router forwards queries to your ISP, whose servers resolve them through the global DNS hierarchy.

    Alternatively you could use Google’s DNS server (8.8.8.8) or Cloudflare’s (1.1.1.1), but if the one on your router works, just use it.

    A nameserver is the same thing as a DNS server.

    Tl;dr: set the router’s IP as your DNS server; that’s the only one you need.
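
    If you happen to be on NixOS like me, pinning that is one line; 192.168.1.1 below is a stand-in for whatever your router’s actual address is:

    networking.nameservers = [ "192.168.1.1" ]; # use the router as the only nameserver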


  • …that’s the valid response. Does ping www.google.com work, and does curl www.google.com return a bunch of text?

    If ping www.google.com doesn’t work, then your system isn’t using the correct DNS server, even though your local DNS server itself works (as the earlier dig showed).

    If curl works, then… you have a working internet connection; maybe check the browser settings for a proxy or something.



  • I use Podman via home-manager configs. I could run the services natively, but currently I have a dedicated user for each service, and that user runs the service’s podman containers. This way each service is securely isolated from the others and from the rest of the system. Maybe if/when NixOS supports good SELinux rules I’ll switch back to running things natively.
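
    In skeleton form the pattern is just this (a sketch; the n8n config above is a fully worked version):

    # one unprivileged user per service, each running rootless podman
    users.users.myservice = { # "myservice" is a placeholder name
        isNormalUser = true;
        linger = true; # user services start at boot
        # plus subUidRanges/subGidRanges as in the n8n example
    };
    home-manager.users.myservice = { pkgs, ... }: {
        home.stateVersion = "24.11";
        services.podman.enable = true; # containers are then defined per user
    };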