Vanilla Kubernetes on NixOS
December 2nd, 2025

Last year I was introduced to Kubernetes at university. After an initially steep learning curve, I learned to like the feeling of operating multiple computers at once and seeing every pod healthy in k9s. Naturally, after the course ended, I still had some things I wanted to experiment with. So I just had to install Kubernetes.
Having only touched running installations (I knew they were set up using Ansible and Terraform), I found myself staring at the services.kubernetes option in the NixOS GitHub repo, only a couple of weeks after I had installed NixOS on my desktop for the first time.
So, I spun up three arm64 VPSs on Hetzner — falkenberg, stadeln, and ronhof — and pushed the ISO generated from my Nix Config, including my public SSH key, onto them. The names are a nod to their physical locations: falkenberg is named after a place near its datacenter in Bavaria, while stadeln and ronhof are districts in Nuremberg (or rather Fürth) where the other two nodes are located.
The cost? Five euros a month per node. Not cheap… but bearable.
The NixOS Layer
I define my three hosts in a single default.nix, sharing a common kubernetes.nix module since the three nodes basically share everything except for their hostnames and IPs (somehow DHCP failed me). The file is part of my overarching Nix Configuration — of course, NeoVim has to be configured properly if I ever need to SSH into a worker node!
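The layout is roughly the following. This is a trimmed sketch rather than the real file: the interface name and addresses are placeholders, and ronhof is left out since it looks just like stadeln.

# default.nix (sketch): one attribute per host, everything shared lives in kubernetes.nix.
{
  falkenberg = { ... }: {
    imports = [ ./kubernetes.nix ];
    networking.hostName = "falkenberg";
    networking.interfaces.enp1s0.ipv4.addresses = [
      { address = "203.0.113.10"; prefixLength = 24; } # placeholder address
    ];
  };

  stadeln = { ... }: {
    imports = [ ./kubernetes.nix ];
    networking.hostName = "stadeln";
    networking.interfaces.enp1s0.ipv4.addresses = [
      { address = "203.0.113.11"; prefixLength = 24; } # placeholder address
    ];
  };

  # ronhof is identical to stadeln apart from its name and address.
}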
To keep the OS state in check across nodes, I use Comin. It runs on each node, polling my git repository and applying the configuration automatically. It’s the GitOps approach I learned to love from ArgoCD, but for bare metal. This means I don’t even need to run nixos-rebuild switch manually: I push to main, and within a minute the nodes are all up to date, be it with the latest Linux kernel, a Kubernetes update or a new trusted public SSH key.
services.comin = {
  enable = true;
  remotes = [{
    name = "origin";
    url = "https://github.com/m4r1vs/NixConfig.git";
    branches.main.name = "main";
  }];
};

Containers are Still Messy in 2025
When I first booted up the nodes and copied the controller’s secret to the worker nodes, I ran into one major issue: nothing was working.
Quickly, I figured out the culprit:
❯ kubectl logs -n kube-system coredns-66bbb957b6-rrvzb
exec /bin/coredns: exec format error

It’s always DNS — even if not directly this time.
Anyone who’s dipped their toes into containerization on an architecture other than 64-bit x86 has probably seen this error. The CoreDNS binary was not compiled for arm64. So I went digging.
It turned out that the image I was using was correctly labeled for my architecture, but somehow the x86 binary had landed in it. The fix was rather simple: I compiled CoreDNS for arm64, stuffed it into an image of my own and overrode the Nix config to point at that one instead:
services.kubernetes.addons.dns.coredns = {
  imageName = "mariusniveri/my-coredns";
  imageDigest = "...";
  finalImageTag = "latest";
  sha256 = "sha256-ID+qV6/knQDQ8leyq4r08uexPdDiu739Qeh/cBP0GfE=";
};

Another minor issue was the sandbox pause image used by containerd (I still don’t really know what it does. What does it do?). Somehow the pull policy of that image is set to Always, and the Docker Hub rate limits for anonymous users are way too strict. Again, everything is laid out transparently in the Nix repo and I knew exactly what to do:
virtualisation.containerd = {
  settings = lib.mkForce {
    plugins."io.containerd.grpc.v1.cri" = {
      sandbox_image = "registry.k8s.io/pause:3.10.1";
    };
  };
};

Bootstrapping ArgoCD and the Cluster
Here is where it gets interesting. I didn’t want to manually kubectl apply anything. I wanted the cluster to wake up with ArgoCD already running, pointed at my app of apps and ready to sync everything!
Reading the Nix source again, I figured that the appropriately named addonManager would be best suited for this kind of task. It simply takes Kubernetes manifests and makes sure they are applied. In the NixOS module, though, those manifests cannot be pointed to as YAML files directly; they need to be Nix attribute sets that are then converted into YAML.
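As a tiny illustration of what the module expects (a made-up example, not something from my config), a single namespace would be declared like this and get serialized and applied once the addon manager starts:

# Hypothetical bootstrap addon: a plain Nix attribute set that the module
# serializes into a manifest and applies when the addon manager starts.
services.kubernetes.addonManager.bootstrapAddons = {
  demo-namespace = {
    apiVersion = "v1";
    kind = "Namespace";
    metadata.name = "demo";
  };
};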
Annoyingly, the addon I want to install (ArgoCD) ships as a YAML file and not as a set of Nix attributes. Time for some lovely over-engineering: a little pipeline that turns the upstream YAML into exactly those attributes.
There probably is a cleaner way… there always is. But let’s not get hung up on this.
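To give an idea of the shape it took, here is a minimal sketch of what a resourceFromYAML-style function can look like. It is an illustration under my own assumptions (the Python/PyYAML conversion, the namespace defaulting and the naming scheme), not the actual code: render the multi-document YAML to JSON inside a small derivation, import it back into Nix, and emit one attribute per resource.

# Sketch only: converts a multi-document YAML manifest into an attribute set
# keyed by "<kind>-<name>", defaulting missing namespaces to `ns`.
{ pkgs, lib }:
{ path, ns }:
let
  # Render all YAML documents in the file into a single JSON array.
  json = pkgs.runCommand "manifest.json" {
    nativeBuildInputs = [ (pkgs.python3.withPackages (ps: [ ps.pyyaml ])) ];
  } ''
    python3 -c 'import json, sys, yaml; json.dump([d for d in yaml.safe_load_all(open(sys.argv[1])) if d], open(sys.argv[2], "w"))' ${path} $out
  '';
  resources = builtins.fromJSON (builtins.readFile json);
  # Add the requested namespace to resources that do not set one themselves.
  withNamespace = r:
    if r ? metadata && !(r.metadata ? namespace)
    then r // { metadata = r.metadata // { namespace = ns; }; }
    else r;
  nameOf = r: lib.toLower "${r.kind}-${r.metadata.name}";
in
builtins.listToAttrs (map (r: lib.nameValuePair (nameOf r) (withNamespace r)) resources)

Because the JSON is read back with builtins.readFile, the conversion happens at evaluation time (import-from-derivation), which is exactly what bootstrapAddons needs.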
One prompt to my then favourite LLM later, and I had a working Nix function resourceFromYAML that was good enough for the ArgoCD installation manifest. And just like that, we’re up and running with the app of apps configured:
addonManager = lib.mkIf isMaster {
  enable = true;
  bootstrapAddons = (resourceFromYAML {
    path = builtins.fetchurl {
      url = "https://raw.githubusercontent.com/argoproj/argo-cd/v3.2.0/manifests/install.yaml";
      sha256 = "...";
    };
    ns = "argocd";
  }) // {
    argo-namespace = {
      apiVersion = "v1";
      kind = "Namespace";
      metadata = {
        name = "argocd";
      };
    };
    cluster-bootstrap = {
      apiVersion = "argoproj.io/v1alpha1";
      kind = "Application";
      metadata = {
        name = "cluster-bootstrap";
        namespace = "argocd";
      };
      spec = {
        project = "default";
        source = {
          repoURL = "https://github.com/m4r1vs/argo-apps";
          targetRevision = "HEAD";
          path = "bootstrap";
          directory = {
            recurse = true;
          };
        };
        destination = {
          server = "https://kubernetes.default.svc";
          namespace = "bootstrap";
        };
        syncPolicy = {
          syncOptions = [
            "CreateNamespace=true"
          ];
          automated = {
            prune = true;
            allowEmpty = true;
            selfHeal = true;
          };
        };
      };
    };
  };
};

The Result
Now, when the cluster boots up, this is what happens:
- System Boot: The comin service starts, periodically checking my Nix config for updates and applying them.
- Control Plane: It starts etcd, generates the CA and certificates via easyCerts, and launches the kube-apiserver (the relevant options are sketched right after this list).
- Addons: The addonManager service kicks in. It sees the declarative definition of ArgoCD we fetched from the Internet and applies it.
- GitOps Takeover: ArgoCD is running, sees the cluster-bootstrap application that was just applied, and starts syncing.
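For completeness, the control-plane side boils down to a handful of options in the shared kubernetes.nix. The following is a sketch based on the standard services.kubernetes module options rather than my actual file; the isMaster helper (the same flag guarding the addonManager block above) and the port are assumptions.

# kubernetes.nix (sketch): shared by all three nodes, assuming falkenberg is
# the control plane. Option names come from the NixOS services.kubernetes
# module; the isMaster helper is illustrative.
{ config, lib, ... }:
let
  isMaster = config.networking.hostName == "falkenberg";
in
{
  services.kubernetes = {
    roles = [ "node" ] ++ lib.optional isMaster "master";
    masterAddress = "falkenberg";
    apiserverAddress = "https://falkenberg:6443";
    easyCerts = true; # generate the CA and per-component certificates
  };
}

Everything else (etcd, the kube-apiserver, the kubelet) is wired up by the module based on these roles.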
It is a completely hands-off, declarative process. The only state is the git repository and a single secret to connect the nodes. If I were to wipe the disks and redeploy, the cluster would reconstruct itself exactly as it was. Thank you, Eelco!
- k9scli.io
- github.com/NixOS/nixpkgs/tree/nixos-25.11/nixos/modules/services/cluster/kubernetes
- Eelco Dolstra, the inventor of the Nix language