May 2, 2026
A $40/Month Kubernetes Lab Architecture (Without the Cloud Markup)
Real Kubernetes for $40/month using cheap VPSes — not a single-node toy, not a managed-K8s wallet drain. Here's the topology, the distro choice, and what it can and can't do.

If you want to learn Kubernetes properly — multi-node behavior, real failure modes, networking that resembles production — the path most guides recommend is one of:
- A managed cluster (GKE/EKS/AKS) — minimum $70-100/month before you run anything useful
- A single-node Minikube or kind — fine for syntax practice, doesn't teach you anything about the distributed parts
- A homelab with three Mini PCs — $500+ upfront, plus power and noise
There's a fourth option that's better for most people: three cheap VPSes from a budget provider, K3s, around $40/month total. You get real multi-node behavior, real network partitions when you intentionally pull a node, real failover. You don't get HA-grade capacity, but for learning that's a feature.
The architecture
```
┌─────────────────────────────────────────────────────────┐
│ control-plane (k3s server)                              │
│ Hetzner CX22 / CX21 - 2 vCPU, 4GB RAM, ~$5/mo           │
│ Runs: k3s server (API server, scheduler,                │
│       controller-manager, SQLite datastore)             │
└─────────────────────────────────────────────────────────┘
                            │
                            │ k3s API (port 6443) over private network
                            ▼
┌──────────────────────┐        ┌──────────────────────┐
│ worker-1             │        │ worker-2             │
│ CX22, ~$5/mo         │        │ CX22, ~$5/mo         │
│ Runs: kubelet,       │        │ Runs: kubelet,       │
│ containerd, your     │        │ containerd, your     │
│ workloads            │        │ workloads            │
└──────────────────────┘        └──────────────────────┘
```
Plus: external object storage for stateful workloads
- Cloudflare R2: $0 ingress + ~$0.015/GB stored ≈ $1-3/mo for typical lab
- Backblaze B2: $6/TB/mo
Plus: registry for your images
- GitHub Container Registry (free for public, generous private allowance)
Total: $15/mo for the cluster + ~$2-5 for storage. Round to $20/mo for the bare minimum, $40/mo if you spec up to CX32s (4 vCPU, 8GB) for more headroom and faster builds.
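Wiring up GHCR is a one-time step. A sketch, assuming Docker locally and a GitHub personal access token with the `write:packages` scope (`GH_USER`, `GH_PAT`, and `myapp` are placeholders):

```shell
# Authenticate to GitHub Container Registry
# (GH_USER / GH_PAT are placeholders for your username and token)
echo "$GH_PAT" | docker login ghcr.io -u "$GH_USER" --password-stdin

# Tag a locally built image into your GHCR namespace and push it
docker tag myapp:latest ghcr.io/$GH_USER/myapp:latest
docker push ghcr.io/$GH_USER/myapp:latest
```

Images pushed this way default to private; flip them to public in the package settings if you want the cluster to pull without an imagePullSecret.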
Why K3s, not full Kubernetes
K3s is a lightweight Kubernetes distribution from Rancher (now SUSE). Single binary, embedded SQLite datastore by default (embedded etcd or an external database for multi-server setups), Traefik included as the default ingress controller, plus a local-path storage class. The server process idles at a few hundred MB of RAM, versus well over 1GB for a vanilla kubeadm control plane.
For a learning lab on small VPSes, this is the sweet spot. Everything you do in K3s translates directly to vanilla Kubernetes — same kubectl, same YAML, same RBAC, same operators. The differences (single-binary, embedded SQLite by default) only matter when you're operating it, not when you're learning it.
If you specifically want to learn cluster setup itself (kubeadm, certificate management, etcd operations), do one cluster with kubeadm, then switch to K3s for ongoing experimentation. Don't try to learn workloads and cluster operations at the same time.
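If you take that detour, the kubeadm path boils down to a few commands. A sketch, not a full guide — the pod CIDR and the Flannel CNI are illustrative choices, and kubeadm additionally expects a container runtime and ~2GB RAM per node already in place:

```shell
# On the control plane: bootstrap the cluster
# (pod CIDR shown matches Flannel's default)
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a CNI plugin -- Flannel shown as one common choice
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker: run the join command that kubeadm init printed, e.g.
# sudo kubeadm join CONTROL_PLANE_IP:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```

Doing this once makes you appreciate how much K3s's single installer script is hiding.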
Setup at a glance
```shell
# On the control-plane node
curl -sfL https://get.k3s.io | sh -
sudo cat /var/lib/rancher/k3s/server/node-token   # save this

# On each worker
curl -sfL https://get.k3s.io | K3S_URL=https://CONTROL_PLANE_IP:6443 \
    K3S_TOKEN=<token-from-above> sh -

# On your laptop (k3s.yaml is readable only by root on the server)
scp root@control-plane:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/CONTROL_PLANE_PUBLIC_IP/' ~/.kube/config
kubectl get nodes
```
If `kubectl get nodes` shows three nodes in Ready state (one server, two workers), you're done. About 15 minutes of work.
What this lab can do
- Real multi-node deployment behavior. Pods get scheduled across workers; you can see failures when one worker dies.
- Real ingress + TLS. Traefik comes pre-configured; cert-manager + Let's Encrypt for HTTPS.
- Real persistence with cloud storage. K3s's local-path storage class covers node-local PVCs; R2/B2 suit backups and S3-aware workloads, and an S3-compatible CSI driver can expose buckets as PVCs for read-mostly data (not databases).
- Real failure scenarios. Reboot a worker, watch pods reschedule. Kill the control plane, watch what stops working.
- Helm, operators, GitOps, service mesh experiments. All of it works at this scale.
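The multi-node scheduling behavior is visible immediately with a throwaway deployment. A minimal sketch (the `echo` name and `nginx:alpine` image are arbitrary):

```shell
# Create a small deployment and scale it beyond one node's worth of replicas
kubectl create deployment echo --image=nginx:alpine
kubectl scale deployment echo --replicas=6

# The NODE column shows pods spread across worker-1 and worker-2
kubectl get pods -o wide

# Simulate a worker failure and watch the pods reschedule
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data
kubectl get pods -o wide --watch

# Bring the node back into the scheduling pool
kubectl uncordon worker-1
```

Watching the NODE column change during a drain teaches more about scheduling than any diagram.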
What it can't do
Be honest with yourself about the limits:
- No HA control plane. One server node = one point of failure. K3s does support multi-server HA with embedded etcd — two more server nodes add ~$10/mo if you want it — but most learning happens fine without it.
- Limited memory headroom. A 4GB worker can host maybe 5-8 small pods comfortably. Java apps will eat the budget fast. Stick to Go/Rust/Node services for density.
- Network bandwidth between regions. If your VPSes are in different regions, latency between them will affect intra-cluster traffic. Pick the same region for all three.
- No GPU workloads. Budget VPSes don't have them. If you need GPUs, this lab isn't the place.
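The memory ceiling is much easier to live with if every workload declares resource requests and limits, so the scheduler can pack pods honestly into a 4GB worker. A sketch of the pattern — the deployment name is a placeholder and the numbers are illustrative, not recommendations:

```shell
# Give an existing deployment explicit requests (what the scheduler reserves)
# and limits (where the kubelet caps/OOM-kills the container)
kubectl set resources deployment/myapp \
  --requests=cpu=50m,memory=64Mi \
  --limits=cpu=250m,memory=128Mi

# Check how much of each node's allocatable memory is already spoken for
kubectl describe nodes | grep -A 5 "Allocated resources"
```

Without requests, every pod looks free to the scheduler and one node quietly absorbs everything until it OOMs.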
What to do once it's running
A loose progression that builds intuition:
- Deploy a stateless web app with `kubectl create deployment` and `kubectl expose`. Watch the pods spread across workers.
- Deploy with a Helm chart (try `bitnami/wordpress` or `traefik`). Compare the YAML it generates.
- Add ingress + cert-manager + Let's Encrypt. Get HTTPS working on your domain.
- Add Prometheus + Grafana. See what metrics are actually available.
- Intentionally break things. `kubectl drain worker-1`. Reboot a node. Fill a disk. Read what kubelet logs.
- Run a stateful workload (Postgres). Add persistent volumes. Practice restoring from backup.
Each step teaches a new layer. The lab can run all of them simultaneously without sweating.
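For the ingress + TLS step, the cert-manager side is mostly one ClusterIssuer resource. A sketch, assuming cert-manager is already installed in the cluster and that `you@example.com` is a placeholder for your own address:

```shell
# A Let's Encrypt ClusterIssuer using the HTTP-01 challenge,
# solved through K3s's bundled Traefik ingress
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
    - http01:
        ingress:
          class: traefik
EOF
```

With this in place, annotating an Ingress with `cert-manager.io/cluster-issuer: letsencrypt-prod` gets you a real certificate — use Let's Encrypt's staging server first to avoid rate limits while experimenting.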
The honest comparison
For learning Kubernetes specifically, this beats single-node options because it teaches the multi-node parts. It beats managed K8s because the cost barrier is gone — you can break things and fix them at $40/mo, not $400/mo. It beats a homelab because it's $0 upfront and you can tear it down when you're done.
If you're past learning and running real workloads, the calculus changes. But for the "I want to actually understand Kubernetes" phase, three small VPSes and K3s is the right tool.