
NixOS NSpawn Container Management

This repo provides tools and NixOS modules to support Nix RFC 108 declarative and imperative container management.

  • nixos-nspawn tool for imperative container management compatible with non-NixOS systems.
  • nixos.containers module for declarative container management.
  • Unified implementation across both container types, allowing for safe migration between them.
  • nixos_nspawn is usable as a Python library, making it easy to build automation on top of it.

One Command Demo

You can try imperative containers on any system with a single command, provided these requirements are met:

  • Systemd version 256 or newer.
  • Nix package manager is available with flakes enabled.
  • Both /var/lib/machines and /etc/systemd/nspawn are writable.
$ sudo nix run github:m1cr0man/python-nixos-nspawn -- create --flake github:m1cr0man/python-nixos-nspawn#example example

nixos_nspawn.container.example: Building configuration from flake github:m1cr0man/python-nixos-nspawn#example
nixos_nspawn.container.example: Writing nspawn unit file
nixos_nspawn.container.example: Starting
Container example created successfully. Details:
Container example
  Unit File: /etc/systemd/nspawn/example.nspawn
  Imperative: True
  State: running
$ sudo machinectl shell example
[root@example:~]#

Check out the full documentation: https://m1cr0man.github.io/python-nixos-nspawn/

Development environment setup

This repository uses Nix. You can get started like so:

# Get the dev tools in your environment
nix develop
# (VS Code) Open the workspace file
code python-nixos-nspawn.code-workspace
# Build and run the project
nix run

Updating dependencies

# Update flake lockfile
nix flake update

Installation (NixOS)

The simplest way to get access to both imperative and declarative container management is to install this flake’s module and overlay like so:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    # STEP 1: Add this to your flake inputs
    nixos-nspawn.url = "github:m1cr0man/python-nixos-nspawn";
  };

  outputs = { self, nixpkgs, nixos-nspawn }: {
    nixosConfigurations = {
      myhost = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [
          # STEP 2: Add the flake's module to your host system
          nixos-nspawn.nixosModules.hypervisor
          {
            # STEP 3: Add the overlay to expose the nixos-nspawn package
            nixpkgs.overlays = [nixos-nspawn.overlays.default];
          }
        ];
      };
    };
  };
}

NixOS is currently lacking some critical features for imperative container management. You will need to incorporate the changes from NixOS PR #216025 to ensure that your containers can be started properly and do not get stopped unintentionally during switch-to-configuration. A quick workaround is to use the nixpkgs branch from that PR:

    nixpkgs.url = "github:m1cr0man/nixpkgs/rfc108-minimal";

Otherwise, you can simply install the nixos-nspawn package from this flake however you wish.
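For example, with the overlay from STEP 3 applied, a minimal sketch (assuming the overlay exposes the package as pkgs.nixos-nspawn, per the comment above):

{ pkgs, ... }:
{
  # The overlay makes the CLI available as a regular package.
  environment.systemPackages = [ pkgs.nixos-nspawn ];
}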

Installation (Nix on other distros)

Imperative NixOS containers can be used on any distribution, provided these requirements are met:

  • Systemd 256 or later.
  • Nix package manager is installed and available.
  • systemd-networkd is responsible for network configuration.
  • Both /var/lib/machines and /etc/systemd/nspawn are writable and persistent.

The easiest way to run the nixos-nspawn binary is:

nix run github:m1cr0man/python-nixos-nspawn -- --help
# For a _real_ test, try the example container
sudo nix run github:m1cr0man/python-nixos-nspawn -- create --flake github:m1cr0man/python-nixos-nspawn#example example

Imperative container management

The nixos-nspawn utility provided by this repo is responsible for all CRUD operations on systemd-nspawn NixOS containers.

Nspawn containers are particularly useful for breaking up large monolithic NixOS systems into many smaller container configurations. This has a number of advantages:

  • Speeds up nixos-rebuild evaluation for the host system.
  • Separates nixpkgs updates for groups of services, avoiding the “all-or-nothing” update procedure of classic NixOS deployments.
  • Allows for isolated testing of new services and configuration.

These containers can also be used on systems that are not NixOS. See the installation instructions on how to get prepared to run nixos-nspawn containers on other distributions.

Defining your container configuration

Similar to nixos-rebuild, you must define a Nix configuration either as a “classic” config or using flakes.

Flake-based configuration starter

You can use nixos-nspawn.lib.mkContainer provided by this repo to build containers:

{
  description = "The simplest flake for nixos-nspawn --flake";

  inputs = {
    nixpkgs.url = "nixpkgs";
    nixos-nspawn.url = "github:m1cr0man/python-nixos-nspawn";
  };

  outputs = { self, nixpkgs, nixos-nspawn }: {
    # nixosContainers is the key that nixos-nspawn looks for.
    nixosContainers = {
      mycontainer = nixos-nspawn.lib.mkContainer {
        inherit nixpkgs;
        name = "mycontainer";
        system = "x86_64-linux";
        modules = [
          {
            system.stateVersion = "26.05";
            services.nginx.enable = true;
            networking.firewall.allowedTCPPorts = [ 80 ];

            nixosContainer.bindMounts = [
              "/var/lib/host/path:/var/lib/container/path"
            ];
          }
          # You may import more .nix files as you wish.
          ./configuration.nix
        ];
      };
    };
  };
}

You can instantiate the container like so:

sudo nixos-nspawn create --flake .#mycontainer mycontainer

A full list of nixosContainer options is available in the options reference.

Config-based configuration starter

Although we strongly recommend using flakes, you can also use classic configuration.nix files:

# Just a regular configuration.nix
{
  system.stateVersion = "26.05";
  services.nginx.enable = true;
  networking.firewall.allowedTCPPorts = [ 80 ];
}

You can instantiate the container like so:

sudo nixos-nspawn create --config configuration.nix mycontainer

Switching to declarative containers

It is quite safe to move a container between imperative and declarative management, in either direction. The process involves:

  • Removing the container with nixos-nspawn remove $container.
  • Adding the container to your host’s nixos.containers configuration (see the declarative container docs).
  • Performing a nixos-rebuild on the host.

Since the same state directories are used for both kinds of containers, no other changes are required.
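As a rough sketch, assuming an imperative container named mycontainer:

# 1. Remove the imperative container. Its state directory is kept
#    (omitting --delete-state, which would erase it).
sudo nixos-nspawn remove mycontainer

# 2. Declare the same container in your host configuration, e.g.:
#    nixos.containers.mycontainer = { config = { ... }; };

# 3. Apply the new host configuration.
sudo nixos-rebuild switch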

Update, delete, rollback, and other operations

Check out the nixos-nspawn --help output for more documentation on common imperative operations.
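For instance, combining the update, list-generations, and rollback subcommands from the CLI reference:

# Rebuild the container from its flake and activate the new generation
sudo nixos-nspawn update --flake .#mycontainer mycontainer

# Inspect the available generations
sudo nixos-nspawn list-generations mycontainer

# Revert to the previous generation
sudo nixos-nspawn rollback mycontainer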

Declarative container management

Declarative containers have existed in NixOS for quite some time under the containers option. This repository provides RFC108-style containers which use systemd-networkd instead of the classic script-based networking.

Defining your container configuration

You should have already added the hypervisor module to your system during installation.

From here, you can simply declare NixOS containers in your host configuration like so:

{
  nixos.containers.mycontainer = {
    # This option houses the actual system configuration.
    config = {
      services.nginx.enable = true;
      networking.firewall.allowedTCPPorts = [ 80 ];
    };

    bindMounts = [
      "/var/lib/host/path:/var/lib/container/path"
    ];
  };
}

Upon nixos-rebuild, the container will be started. You can verify this with nixos-nspawn list or machinectl list.

Migration from the containers module

This repository aims to replace the containers module provided by nixpkgs/NixOS.

Migrating an existing container is straightforward. Consider the following configuration:

{
  containers.mycontainer = {
    # System configuration option is unchanged
    config = { pkgs, ... }: {
      services.nginx.enable = true;
      networking.firewall.allowedTCPPorts = [ 80 ];
    };

    # Old mount configuration
    bindMounts = {
      "/home" = {
        hostPath = "/home/alice";
        isReadOnly = false;
      };
    };

    # Old networking configuration
    hostAddress = "10.231.136.1";
    localAddress = "10.231.136.2";
  };
}

The migrated configuration looks like this:

  nixos.containers.mycontainer = {
    # Unchanged system configuration
    config = { pkgs, ... }: {
      services.nginx.enable = true;
      networking.firewall.allowedTCPPorts = [ 80 ];
    };

    # New mount configuration
    bindMounts = [
      "/home/alice:/home"
    ];

    # New networking configuration
    # The blank entry clears any inherited addresses.
    hostNetworkConfig.address = [ "" "10.231.136.1/28" ];
    containerNetworkConfig.address = [ "" "10.231.136.2/28" ];
  };

Networking

Understanding container networking in the context of systemd-nspawn can be a bit of a challenge. A lack of widespread use of either systemd-networkd or systemd-nspawn makes it difficult to determine the exact configuration you may be looking for when first configuring your containers. This document aims to help you understand the configuration possibilities and when you may want to use them.

This project does not attempt to abstract over the systemd.network configuration options beyond what NixOS provides. The hope is that one less level of abstraction makes the configuration easier to reason about, and removes the need to cover every possible use case here.

Check out the systemd.network(5) documentation and systemd.network.networks NixOS options for the full picture of configuration options available.

Host/Hypervisor Configuration

For correct operation of systemd-nspawn containers:

  • The host must be using systemd-networkd.
  • NAT and masquerading must be enabled.
  • IPv4Forwarding and IPv6Forwarding must be enabled on the external interface.
  • The ve- and vz- interfaces must be trusted by the host firewall, or the following ports must be allowed:
    • UDP 53: DNS
    • UDP 67 + 68: DHCPv4
    • UDP 546 + 547: DHCPv6
    • TCP + UDP 5355: LLMNR

For NixOS users, this configuration should suffice:

{
  networking.firewall.trustedInterfaces = [ "ve-+" "vz-+" "vb-+" ];
  networking.nat = {
    enableIPv6 = true;
    enable = true;
  };
}
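Alternatively, if you would rather not trust the container interfaces wholesale, here is a sketch that opens only the ports listed above, using the same ve-+/vz-+ interface patterns as the trusted-interface example:

{
  networking.firewall.interfaces."ve-+" = {
    allowedUDPPorts = [ 53 67 68 546 547 5355 ]; # DNS, DHCPv4, DHCPv6, LLMNR
    allowedTCPPorts = [ 5355 ];                  # LLMNR
  };
  networking.firewall.interfaces."vz-+" = {
    allowedUDPPorts = [ 53 67 68 546 547 5355 ];
    allowedTCPPorts = [ 5355 ];
  };
}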

The Default Config

This imperative container configuration will be our control throughout the rest of this guide:

# example.nix
{
  system.stateVersion = "26.05";

  # Configure a basic web server. HTTP only, no TLS.
  services.nginx = {
    enable = true;
    virtualHosts.localhost.default = true;
  };
  networking.firewall.allowedTCPPorts = [ 80 ];

  # Expose the port via your host's network
  # *IMPORTANT:* Even if your host firewall would usually block this,
  # systemd-nspawn will configure nftables such that it will
  # work anyway.
  nixosContainer.forwardPorts = [{ hostPort = 8181; containerPort = 80; }];
}

You can create this container by writing the above config to a file and running this command:

nixos-nspawn create --config example.nix example

Out of the box, this provides:

  • An IPv4 address for both the container and host.
  • An IPv6 link-local address for both the container and host.
  • IPv4 internet connectivity via NAT.
  • Container DNS hostname resolution on the host only (via nss-mymachines).
  • Connections to the host on port 8181 routed to your container (IPv4 Only).

All the following commands should work:

# Ping the container on IPv4
ping -4 -c1 example
# Ping the container on IPv6
# Note: If nscd/nsncd is enabled (default on NixOS), you need to specify the interface to use.
ping -6 -c1 -I ve-example example
# View the web server's homepage
curl -o- http://example
# Ping the internet from within the container
machinectl shell example $(which ping) -c1 nixos.org

IPv6

IPv6 technically works out of the box, but the link-local address is not very useful:

  • Port forwards only work on IPv4.
  • IPv6 internet connectivity within the container does not work.

If you see the error Destination unreachable: Beyond scope of source address, keep reading.

Global subnet delegation

If you have an IPv6 address block at your disposal, and you want your container to be reachable on the internet, you can delegate a subnet of that address block to your container and host.

Note on Routing: Your host’s upstream provider must be routing your subnet to your host’s external interface. If they rely on Neighbor Discovery (NDP) instead of static routing (common with providers like Hetzner), you may also need to enable IPv6ProxyNDP = "yes"; on your host’s main uplink interface.
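A sketch of that host-side tweak, assuming the standard NixOS systemd.network options and a hypothetical uplink interface named eth0:

{
  systemd.network.networks."10-uplink" = {
    matchConfig.Name = "eth0"; # Hypothetical external interface
    # Answer neighbor discovery for the delegated subnet on the uplink.
    networkConfig.IPv6ProxyNDP = true;
    # Depending on your provider, you may also need IPv6ProxyNDPAddress=
    # entries for the specific container addresses.
  };
}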

ULA (Private IPv6 with NAT)

A Unique Local Address (ULA) is equivalent to an IPv4 private subnet. This should be distinct from any other IPv6 addresses on your host.

Configuration

For both of the above cases, configuration is similar:

# example.nix
{
  # ... Below the default configuration ...
  nixosContainer.hostNetworkConfig.ipv6Prefixes = [{
    Prefix = "2001:1234:abcd:ef01::/64";
    # The host itself will need an address in this subnet to reach the container
    # and to serve a gateway. Assign will automatically pick an address to use.
    Assign = true;
  }];

  # Optional: Assign a static address to the container itself.
  # The gateway address will be resolved via router advertisement.
  nixosContainer.containerNetworkConfig = {
    address = [
      "2001:1234:abcd:ef01::2/64"
    ];
  };
}

Zones

Zones are a systemd-nspawn abstraction over the basic setup of a hub and spoke bridge network. A vz- prefixed interface will be created on the host side instead of a ve- interface. Zones allow for private inter-container networking on the same host.

You can specify a zone interface to use like so:

# example.nix
{
  # ... Below the default configuration ...
  nixosContainer.zone = "myzone";
}

The host side of zone configuration must be specified declaratively via nixos.containers.zones on the hypervisor. It cannot be configured via imperative container options.
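A minimal host-side sketch, assuming the zone entries accept the same attributes as systemd.network.networks (per the options reference below) and using a hypothetical subnet:

{
  nixos.containers.zones.myzone = {
    # Address the vz-myzone bridge so the host can reach zone members.
    networkConfig.Address = "10.100.0.1/24"; # Hypothetical subnet
  };
}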

Within the containers, you may also consider enabling some sort of multicast DNS solution:

  • Link-Local Multicast Name Resolution (LLMNR): Enabled in systemd-resolved by default; containers will be able to resolve each other via their hostnames. UDP 5355 must be opened on all containers (see the sketch below).
  • Multicast DNS (mDNS): Disabled in systemd-resolved by default; containers will be able to resolve each other via $hostname.local. UDP 5353 must be opened on all containers.
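For instance, a minimal container-side sketch for LLMNR (already enabled by default in systemd-resolved, so only the firewall port needs opening):

{
  # Allow other containers in the zone to resolve this container via LLMNR.
  networking.firewall.allowedUDPPorts = [ 5355 ];

  # For mDNS instead, open UDP 5353 and enable MulticastDNS in resolved:
  # services.resolved.extraConfig = "MulticastDNS=yes";
}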

Bridges

Bridges work similarly to zones, but the creation of the bridge is not managed by systemd-nspawn. You must create the bridge interface in advance of creating any containers which depend on it.

You can specify a bridge interface to use like so:

# example.nix
{
  # ... Below the default configuration ...
  nixosContainer.bridge = "mybridge";
}

Advanced configuration

For all other situations, you can directly configure systemd.network options. Some important info:

  • hostNetworkConfig configures the ve-$container or vz-$container interfaces as appropriate.
    • This is an alias for systemd.network.networks."20-$interface" in the host config.
  • containerNetworkConfig configures the host0 interface in the container.
    • This is an alias for systemd.network.networks."20-host0" in the container config.
  • You can directly configure the container’s systemd-nspawn options via systemd.nspawn.$name. This allows you to configure MACVLAN networking, or disable the VirtualEthernet interface for networking-free containers, as sketched below.
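As an illustration, a sketch that disables the virtual Ethernet link for a networking-free container, assuming the standard NixOS systemd.nspawn module syntax and a container named mycontainer:

{
  systemd.nspawn.mycontainer.networkConfig = {
    # No ve- pair is created; the container gets no private network of its own.
    VirtualEthernet = false;
  };
}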

Gotchas

  • nscd/nsncd (in fact, the nscd protocol itself) does not support passing around a scope ID, which is required to make IPv6 link-local routing work without specifying the interface manually. This is why ping -6 will resolve the IP but fail to ping the container unless -I ve-example is added. You can observe this behaviour on NixOS with the following commands, noting the %13 present in the last command’s output:
$ ping -6 -c1 example
PING example (fe80::a4cf:97ff:fe11:8c36) 56 data bytes

--- example ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

$ getent ahosts example
fe80::a4cf:97ff:fe11:8c36 STREAM example
fe80::a4cf:97ff:fe11:8c36 DGRAM
fe80::a4cf:97ff:fe11:8c36 RAW

$ LD_LIBRARY_PATH="$(nix eval --raw nixpkgs#systemd.outPath)/lib" getent -s mymachines ahosts example
fe80::a4cf:97ff:fe11:8c36%13 STREAM example
fe80::a4cf:97ff:fe11:8c36%13 DGRAM
fe80::a4cf:97ff:fe11:8c36%13 RAW
  • Firewall rules configured for NAT and port forwarding are added to the io.systemd.nat table. Its prerouting and output hooks have a priority of -99, compared to -100 for the nixos-nat table. This results in forwarded ports bypassing other host firewall rules. You can view this configuration with these commands:
$ nft -y list table ip io.systemd.nat
# Prints port forwarding mappings, NAT masquerade config, and filter chains.

$ nft -y list table ip6 io.systemd.nat
# Same as above but for IPv6

nixos-nspawn CLI reference

This is generated from the source code.

Usage:

nixos-nspawn [-h] [--unit-file-dir UNIT_FILE_DIR] [-v]
             {autostart,create,list,list-generations,remove,rollback,update} ...

NixOS imperative container manager v0.11.0

Positional arguments:

  • {autostart, create, list, list-generations, remove, rollback, update}: Command to execute

Optional arguments:

  • --unit-file-dir UNIT_FILE_DIR: Directory where Systemd nspawn container unit files are stored
  • -v, --verbose: Show build traces and other command activity

Usage of autostart:

nixos-nspawn autostart [-h] [--json] [-n]

Start all imperative containers on the system which are configured to start at boot time

Optional arguments of autostart:

  • --json: Output in JSON format
  • -n, --dry-run: Show which containers would be started

Usage of create:

nixos-nspawn create [-h] [--json] [--config CONFIG] [--profile PROFILE] [--flake FLAKE]
                    [--system SYSTEM] name

Create a container on the system

Positional arguments of create:

  • name: Container name

Optional arguments of create:

  • --json: Output in JSON format
  • --config CONFIG: Container configuration file
  • --profile PROFILE: Container system profile path
  • --flake FLAKE: Container configuration flake URL
  • --system SYSTEM: The host platform name. The default (x86_64-linux) is selected at compile time.

Usage of list:

nixos-nspawn list [-h] [--type {imperative,declarative}] [--json]

List containers on the system

Optional arguments of list:

  • --type {imperative,declarative}: Container type to filter by
  • --json: Output in JSON format

Usage of list-generations:

nixos-nspawn list-generations [-h] [--json] name

List configuration generations of a container

Positional arguments of list-generations:

  • name: Container name

Optional arguments of list-generations:

  • --json: Output in JSON format

Usage of remove:

nixos-nspawn remove [-h] [--delete-state] name

Remove a container from the system

Positional arguments of remove:

  • name: Container name

Optional arguments of remove:

  • --delete-state: Delete the container’s state directory

Usage of rollback:

nixos-nspawn rollback [-h] [--json] name

Roll back a container on the system

Positional arguments of rollback:

  • name: Container name

Optional arguments of rollback:

  • --json: Output in JSON format

Usage of update:

nixos-nspawn update [-h] [--json] [--config CONFIG] [--profile PROFILE] [--flake FLAKE]
                    [--strategy {reload,restart}] [--system SYSTEM] name

Update a container present on the system

Positional arguments of update:

  • name: Container name

Optional arguments of update:

  • --json: Output in JSON format
  • --config CONFIG: Container configuration file
  • --profile PROFILE: Container system profile path
  • --flake FLAKE: Container configuration flake path
  • --strategy {reload,restart}: Activation strategy to use to apply the update to the container. Leave blank to use strategy configured in the container’s configuration.
  • --system SYSTEM: The host platform name. The default (x86_64-linux) is selected at compile time.

Container Options

The container configuration options are broken into three sections: basic options (shared by both container types), imperative options, and declarative options.

Note that a few of the basic options are only available for declarative containers; these are marked as such.

Basic Options

activation.autoStart

Whether to enable starting the container on hypervisor boot.

Type: boolean

Default:

true

Example:

true

activation.reloadScript

Script to run when a container is supposed to be reloaded.

Type: null or absolute path

Default:

null

activation.strategy

Decide whether to restart or reload the container during activation.

dynamic checks whether the .nspawn unit has changed (apart from the init script); if so, the container is restarted, otherwise it is reloaded.

Type: one of “none”, “reload”, “restart”, “dynamic”

Default:

"dynamic"

bindMounts

Extra paths to bind into the container. These take the form of “hostPath:containerPath[:options]”.

Type: list of string

Default:

[ ]
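
Example (reusing the mount from the flake starter above):

[
  "/var/lib/host/path:/var/lib/container/path"
]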

bridge

Name of the networking bridge to connect the container to.

Type: null or string

Default:

null

config

NixOS configuration for the container. See configuration.nix(5) for available options.

Only available for declarative containers. Imperative containers can be configured as usual without this option.

Type: NixOS configuration

Default:

{ }

containerNetworkConfig

Extra options to pass to the configuration for the container’s host0 interface.

See systemd.network.networks for a full list of options.

If null, the defaults defined by systemd are used. This results in a network with DHCP, link-local addresses, and LLDP enabled, which is reachable from the host network.

Using this is preferred over adding options via systemd.network.networks.host0, as care has been taken to preserve the default host0 configuration from pkgs.systemd.

Type: null or (attribute set)

Default:

null

credentials

Credentials using the LoadCredential= feature from systemd.exec(5). These are passed to the container’s service manager and can be used by a service inside the container like so:

{
  systemd.services."service-name".serviceConfig.LoadCredential = "foo:foo";
}

where foo is the id of the credential passed to the container.

See also systemd-nspawn(1).

Type: list of (submodule)

Default:

[ ]

credentials.*.id

ID under which the credential can be referenced by services inside the container.

Type: string

credentials.*.path

Path or ID of the credential passed to the container.

Type: string
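
For example, a sketch passing a host file as a credential, with a hypothetical path:

credentials = [
  {
    id = "foo";
    path = "/run/secrets/foo"; # Hypothetical host-side path
  }
];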

declarative

Indicates whether this container is declarative or imperative.

Type: boolean

Default:

true

ephemeral

ephemeral means that the container’s rootfs will be wiped before every startup. See systemd.nspawn(5) for further context.

Type: boolean

Default:

false

Example:

true

forwardPorts

Define port forwarding from the container to the host. See the --port option of systemd-nspawn(1) for further information.

Type: list of (submodule)

Default:

[ ]

Example:

[
  { containerPort = 80; hostPort = 8080; protocol = "tcp"; }
]

forwardPorts.*.containerPort

Port to forward on the container-side. If null, the hostPort option will be used.

Type: null or 16 bit unsigned integer; between 0 and 65535 (both inclusive)

Default:

null

forwardPorts.*.hostPort

Source port on the host-side.

Type: 16 bit unsigned integer; between 0 and 65535 (both inclusive)

forwardPorts.*.protocol

Protocol specifier for the port-forward between host and container.

Type: one of “udp”, “tcp”

Default:

"tcp"

hostNetworkConfig

Extra options to pass to the configuration for the hypervisor’s network interface. This only applies to containers using private networking - that is, they are not assigned to a bridge or zone.

See systemd.network.networks for a full list of options.

If null, the defaults defined by systemd are used. This results in a network with a randomly assigned IPv4 subnet and an IPv6 link-local address. IPv4 NAT will be enabled, granting the container internet access.

Using this is preferred over adding options via systemd.network.networks, as care has been taken to preserve the default interface configuration from pkgs.systemd.

Type: null or (attribute set)

Default:

null
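
Example (the static host address mirrors the migration guide above):

hostNetworkConfig.address = [ "" "10.231.136.1/28" ];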

mountDaemonSocket

Whether to mount the host’s Nix daemon socket in the container.

Type: boolean

Default:

false

Example:

true

nixpkgs

Path to the nixpkgs checkout or channel to use for the container. If not provided, the nixpkgs from the current evaluation is used.

Only available for declarative containers.

Type: null or absolute path

Default:

null

sharedNix

Warning: Experimental setting! Expect things to break!

With this option disabled, only the needed store-paths will be mounted into the container rather than the entire store.

Type: boolean

Default:

true

systemCallFilter

Whether to filter system calls for the container. Corresponds to SystemCallFilter of systemd.exec(5).

Type: null or string

Default:

null

timeoutStartSec

Timeout for the startup of the container. Corresponds to DefaultTimeoutStartSec of systemd-system.conf(5).

Type: string

Default:

"90s"

userNamespacing

Whether to use user/group namespacing. This will also enable idmapping on core mounts. You may want to disable this if you run into boot issues related to idmap bind mounts.

Type: boolean

Default:

false

zone

Name of the networking zone defined by systemd.nspawn(5).

Type: null or string

Default:

null

Imperative Options

The container system can be configured as if it were a usual NixOS configuration. The following additional option is added for nixos-nspawn-specific settings:

nixosContainer

The container configuration. See container options for a full list of options.

Type: attribute set

Default:

{ }

Example:

bindMounts = [
  "/path/on/host:/path/on/container"
];

Declarative Options

nixos.containers.enableAutostartService

Whether to enable autostarting of imperative containers.

Type: boolean

Default:

true

Example:

true

nixos.containers.instances

Container configurations. See container options for a full list of options.

Type: attribute set of (attribute set)

Default:

{ }

Example:

mycontainer = {
  config = {
    services.openssh.enable = true;
  };
};

nixos.containers.zones

Extra configuration for networking zones for nspawn containers.

See systemd.network.networks for a full list of options.

If a container is defined using a zone not declared in this option, the defaults defined by systemd are used. This results in a network with DHCP, link-local addresses, and LLDP enabled, which is reachable from the host network.

Type: attribute set of (attribute set)

Default:

{ }