May 2, 2026
Picking Homelab Hardware by Workload, Not Hype
Most homelab buying guides optimize for spec sheet bragging rights. The right way is to start from what you'll actually run — Plex, Home Assistant, a few self-hosted apps — and work backwards. Here's the workload-first method.
Open any homelab subreddit and you'll find someone asking what they should buy with their $2,000 budget. The replies are predictable: a mini PC build, an old EPYC server, a 12-bay Synology, the latest mid-range NAS. Each reply is enthusiastic, and each is right for the person who wrote it. None of them tells you what's right for you.
The question that should come first is the one nobody asks: what are you actually going to run on it?
The workload-first method
Before you spec a single component, write down the services you actually plan to run, with realistic resource needs. Not aspirational — actual. The list usually looks something like:
- Plex / Jellyfin — media server. Needs disk, CPU for transcoding (or a GPU/iGPU with hardware decode).
- Home Assistant — always-on, low resource. Hates downtime.
- Nextcloud — disk + occasional CPU spikes. Backups are the real consideration.
- A couple of self-hosted apps — n8n, Vaultwarden, a Postgres for personal projects. Each tiny on its own.
- Maybe Docker hosts for dev work — variable resource needs, often spiky (a rough sizing sketch follows this list).
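The list is small enough to keep as data. Here's a minimal Python sketch of the exercise; every number below is an illustrative assumption for a typical small setup, not a measured benchmark:

```python
# Workload list as data: rough steady-state estimates per service.
# All figures are illustrative assumptions, not measurements.
SERVICES = {
    # name             (cpu_cores, ram_gb, disk_gb)
    "jellyfin":        (2.0,  2.0,  4000),  # CPU spikes on transcode; media on disk
    "home-assistant":  (0.5,  1.0,  10),    # always-on, tiny footprint
    "nextcloud":       (1.0,  2.0,  500),   # disk plus occasional CPU spikes
    "n8n":             (0.5,  0.5,  5),
    "vaultwarden":     (0.25, 0.25, 1),
    "postgres":        (0.5,  1.0,  20),
}

cpu = sum(s[0] for s in SERVICES.values())
ram = sum(s[1] for s in SERVICES.values())
disk_tb = sum(s[2] for s in SERVICES.values()) / 1000
print(f"steady state: {cpu:.1f} cores, {ram:.1f} GB RAM, {disk_tb:.1f} TB disk")
# -> steady state: 4.8 cores, 6.8 GB RAM, 4.5 TB disk
```

Run it and the shape of the problem is obvious: the whole stack fits in a handful of cores and well under 8GB of RAM, with disk dominated by media.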
Most homelabs don't need a 256GB RAM, 64-core monster. They need 16-32GB of RAM, a CPU with hardware video decode, and disk that fits the media plus backups. The hype tier is overkill for the modal homelab and is bought for emotional reasons.
Workload categories and what they actually need
Storage-heavy (Plex/Jellyfin, Nextcloud, photo backup):
- Disk space matters more than CPU. Get a NAS or a build with enough drive bays for your media plus 2-3x growth.
- ZFS or mdadm for redundancy. Not because RAID is backup — it isn't — but because rebuilding a 16TB drive takes days, and you want the array to survive a second failure mid-rebuild.
- Hardware video decode (an Intel iGPU with QuickSync, or a low-end GPU) offloads transcoding from the CPU, cutting CPU load by roughly 90%.
Compute-heavy (occasional ML, video encoding, dev workloads):
- A modern CPU with a reasonable core count (8-16 cores is plenty for solo use). Single-thread performance still matters.
- 32-64GB RAM if you'll run VMs or LLMs locally.
- An NVMe drive for working storage; spinning disk for cold storage.
Always-on services (Home Assistant, monitoring, reverse proxy):
- Reliability over performance. A small, low-power machine that runs 24/7 without complaint.
- ECC RAM if you're paranoid. Not strictly necessary at homelab scale but cheap insurance.
- A hardware watchdog that auto-restarts the machine if it hangs.
Mixed (most homelabs): A combination. One always-on small box for the critical services, one bigger box for the heavy stuff that you can power down when you're not using it.
The decision matrix
A simple grid that filters most options:
| Use case | Primary need | Reasonable picks |
|---|---|---|
| Plex/Jellyfin only | Disk + iGPU | Mini PC w/ Intel N100/N305, USB-attached storage; or NAS w/ Intel CPU |
| Plex + Home Assistant + a few apps | Reliability + iGPU | Mini PC (Beelink, Minisforum) w/ Proxmox running multiple LXCs/VMs |
| Heavy Docker workloads + dev | CPU + RAM | Used Dell/HP small server, or a Ryzen mini-tower build, 32-64GB RAM |
| Storage-first NAS | Disks + redundancy | Synology/QNAP if you want it easy; TrueNAS Scale build if you want flexibility |
| Learning + experiments | Flexibility | Used enterprise (Dell R730, HP DL360) — loud, power-hungry, but cheap and capable |
The "best" answer for the modal homelab in 2026 is a Beelink/Minisforum N100 or N305 mini PC running Proxmox with USB or SATA-attached external storage. It draws 10-15W idle, handles 4K transcoding via QuickSync, and runs ten LXCs without breaking a sweat. It costs $250-400 depending on RAM/storage. It's boring and it's right.
The hype tier and when it's actually justified
Used enterprise gear (R730, R740, EPYC builds) gets a lot of love and it's not wrong — for a specific person. The justifications that hold up:
- You're learning enterprise systems for work
- You actually need 256GB+ of RAM (LLMs, big databases, 50+ VMs)
- Power and noise aren't issues (you have a basement, electricity is cheap)
The justifications that don't hold up:
- "You never know what you'll need" — you do know, mostly
- "It's only a bit more power" — it's 5-10x more power, and that adds up to hundreds of dollars a year
- "More cores are better" — for batch workloads sometimes; for the homelab modal use case, no
The hype tier is a great choice for the right person and a fast path to a noisy, power-hungry, eventually-unused machine for everyone else.
A sanity check before you buy
Write down your top 5 services. Sum up their realistic CPU and RAM needs. Add 50% headroom. Compare to the spec of the box you're considering.
If the box has 4x that headroomed number, you're overspending. If it has 0.8x, you're underspending. The right gear is in the 1.2-2x range — comfortable but not absurd.
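Here's the same check as a throwaway script. A minimal sketch: the 50% headroom and the 0.8x/1.2-2x thresholds come from the paragraph above, and the example inputs are assumptions, not recommendations:

```python
# Sanity check: sum realistic needs, add 50% headroom, score the candidate box.
# Thresholds follow the guidance above; inputs are illustrative assumptions.
def check(label: str, need: float, box: float) -> None:
    target = need * 1.5                  # realistic sum plus 50% headroom
    ratio = box / target
    if ratio < 0.8:
        call = "underspending: it'll feel tight"
    elif ratio < 1.2:
        call = "borderline"
    elif ratio <= 2.0:
        call = "right-sized"
    else:
        call = "overspending"
    print(f"{label}: {ratio:.1f}x headroomed need -> {call}")

# Using the ~5-core / ~7 GB workload list from earlier:
check("CPU cores", need=5, box=4)    # 0.5x -> underspending
check("RAM (GB)",  need=7, box=16)   # 1.5x -> right-sized
check("RAM (GB)",  need=7, box=128)  # 12.2x -> overspending
```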
This isn't sexy. It also doesn't end with you regretting your purchase six months in. The sexier route — buying the maxed-out enterprise tier or the latest mini PC trend — has a much higher regret rate.
What to do once you've picked
Once the hardware is decided, the deployment workflow is its own problem. The principles that work: one orchestration layer (Proxmox VE for the hypervisor + LXC/Docker for services); one storage strategy (ZFS pool + offsite backup); one networking setup (single VLAN to start, fancier later if needed). Keep it boring. Add complexity only when a real need surfaces.
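For a taste of what boring orchestration looks like, here's a hedged sketch that creates one small LXC through the Proxmox API using the proxmoxer Python client. The host, credentials, node name, VMID, and template path are all placeholders; treat it as a shape, not a recipe:

```python
# Create one unprivileged LXC on a Proxmox node via its HTTP API.
# Requires: pip install proxmoxer requests
# Host, credentials, node name, VMID, and template are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "192.168.1.10",            # placeholder Proxmox host
    user="root@pam",
    password="changeme",       # use an API token for anything real
    verify_ssl=False,
)

proxmox.nodes("pve").lxc.create(
    vmid=105,
    ostemplate="local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst",
    hostname="vaultwarden",
    cores=1,
    memory=512,                             # MB; tiny services stay tiny
    rootfs="local-zfs:8",                   # 8 GB on the ZFS pool
    net0="name=eth0,bridge=vmbr0,ip=dhcp",  # single VLAN, DHCP to start
    unprivileged=1,
)
```

One node, one pool, one bridge; every service is a small, disposable container you can recreate from a script.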
Server Compass exists for the deployment layer specifically — once you have the hardware running, deploying apps to it without manual SSH gymnastics is the next problem. But the order matters: hardware first (workload-driven), orchestration second (boring), apps third (whatever you actually want to use).