Homelabbing is fun, but it can get expensive very quickly. Using somewhat older hardware can reduce cost significantly, and more often than not, platforms from just a few years ago handle all the services you throw at them with ease, unless, of course, you’re planning to deploy local AI workloads and the like. That’s why I came up with a simple plan to repurpose my old Ryzen desktop. Sort of… There’s one catch, though: like all 5000X-series chips, my Ryzen 5 5600X doesn’t come with an iGPU, and keeping an RTX 3070 in there wastes a ton of power even at idle, while its VRAM is far too small to be actually useful for local AI tasks. I never did a lot of gaming on it either. So I decided to run an experiment: pick up a relatively cheap Ryzen 7 5700G and later sell off both the Ryzen 5 5600X and the RTX 3070.

The motherboard on hand is an ASUS ROG STRIX B550-F Gaming (Wi-Fi), powered by a Seasonic PRIME TX 750W. That power supply is absolutely overkill now, but it’s what is in there. In its silent mode it should easily power two servers like this one without the fan ever spinning up, helped by its fairly good efficiency, so that’s a plus…

For context: ECC memory, enterprise-grade SSDs and the like are great, but I don’t have them at my disposal, nor do I really think I need them. Requirements vary, and my server isn’t meant as a blueprint for anyone else. The guiding principle was simply to keep reusing the existing platform.

My initial plan, ideally, looked like this:

  • Proxmox VE

    • 1x ZFS mirror (2x Samsung 990 Pro 2TB): boot + VMs
  • TrueNAS Scale, virtualized

    • 1x ZFS mirror (2x Samsung 990 Evo 2TB): synced data (Syncthing) and photos (Immich)
    • 1x ZFS mirror (2x Crucial BX500 1TB, SATA): “leftovers” from other builds, TBD
    • 1x ZFS mirror, TBD (SATA): bulk storage

Two open questions remained:

  • Power consumption
  • Hardware virtualization / passthrough

Hardware Virtualization

Since the board is based on the B550 chipset, PCIe passthrough is heavily limited. While both CPU-connected slots (the PCIe x16 slot and the first M.2 slot) sit cleanly in their own IOMMU groups and can therefore be passed through properly, most of the chipset’s PCIe lanes are lumped into essentially one huge IOMMU group. A tiny bit simplified, of course, but you get the point. Because of this limitation, I quickly abandoned the virtualization plan. Proxmox now directly manages the pools and provides the network shares. Some would argue that virtualizing TrueNAS on a Proxmox node makes little sense anyway and that they would prefer this route regardless. (I would argue that separating storage management from the hypervisor, plus the nice TrueNAS UI, can be advantageous.)
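
Whether a device can actually be isolated for passthrough is easy to check on the host itself. Below is a minimal Python sketch (my own helper, not part of any Proxmox tooling) that walks /sys/kernel/iommu_groups and prints each group with its devices; on this board, the CPU-connected slots show up in their own groups, while most chipset devices end up sharing one.

```python
#!/usr/bin/env python3
"""List IOMMU groups and the PCI devices in each one (sketch for a Linux/Proxmox host)."""
import subprocess
from pathlib import Path

IOMMU_ROOT = Path("/sys/kernel/iommu_groups")

def device_name(pci_addr: str) -> str:
    # Resolve a human-readable name via lspci; fall back to the raw PCI address.
    try:
        out = subprocess.run(["lspci", "-s", pci_addr],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip() or pci_addr
    except (OSError, subprocess.CalledProcessError):
        return pci_addr

def main() -> None:
    if not IOMMU_ROOT.is_dir():
        raise SystemExit("No IOMMU groups found - is IOMMU enabled in UEFI and on the kernel command line?")
    for group in sorted(IOMMU_ROOT.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            print(f"  {device_name(dev.name)}")

if __name__ == "__main__":
    main()
```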

Actual Implementation

The motherboard provides the following options for attaching PCIe and SATA devices:

Slot/connector   Option a             Option b
m.2_1            Gen 4 x4 (CPU)       Gen 4 x4 (CPU)
m.2_2            Gen 3 x4 (Chipset)   -
SATA 1–4         SATA-6G              SATA-6G
SATA 5–6         -                    SATA-6G

Slot             Option 1             Option 2             Bifurcation
PCIe_1 (x16)     Gen 4 x16 (CPU)      Gen 4 x16 (CPU)      8x8 (8x4x4*)
PCIe_2 (x1)      Gen 3 x1 (Chipset)   -                    -
PCIe_3 (x1)      Gen 3 x1 (Chipset)   -                    -
PCIe_4 (x16)     Gen 3 x1 (Chipset)   Gen 3 x4 (Chipset)   -
PCIe_5 (x1)      Gen 3 x1 (Chipset)   -                    -

*) UEFI showed 8x4x4 as an option with the 5600X; the 5700G only supports 8x8.

Since I wanted to connect a total of four fast NVMe drives but the board only offers two M.2 slots (Option a), adapters are mandatory. Without a graphics card, PCIe_1 with bifurcation becomes the logical choice for hosting a couple of NVMe drives: with the 5600X, the board exposed bifurcation options for 8x8 and 8x4x4, but not 4x4x4x4; with the 5700G, only 8x8 is available. In practice, that means half of the lanes can be used effectively with a four-slot adapter board by populating only slots 1 and 3, leaving the other eight PCIe Gen 4 lanes permanently unused. Oh well.

For now, the four SATA ports in Option a are sufficient; if more are needed, PCIe-to-SATA adapters are here to help. Realistically, even a single Gen 3 PCIe lane (roughly 985 MB/s) is fine for up to four additional SATA HDDs. If an additional 10Gb NIC (or more) is needed, things get more difficult: you might end up running it in the second x16 slot at just one lane, assuming the NIC actually supports that (it should?!), and that single lane would cap it below 10Gb anyway.
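
As a quick sanity check on the HDD claim, here is a back-of-the-envelope calculation (with assumed, typical drive throughput numbers) comparing one Gen 3 lane against four HDDs running flat out:

```python
# Does one PCIe Gen 3 lane really cover four SATA HDDs? (assumed, typical numbers)
PCIE_GEN3_LANE_GTPS = 8.0          # 8 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130    # 128b/130b line encoding
lane_mb_s = PCIE_GEN3_LANE_GTPS * ENCODING_EFFICIENCY * 1000 / 8  # ~985 MB/s before protocol overhead

HDD_SEQ_MB_S = 250                 # optimistic sequential rate for a modern 3.5" HDD
print(f"Gen 3 x1 bandwidth:        ~{lane_mb_s:.0f} MB/s")
print(f"4 HDDs, all sequential:    ~{4 * HDD_SEQ_MB_S} MB/s")
# Only the worst case (all four drives streaming sequentially at once) comes
# close to saturating the lane; typical mixed workloads stay well below it.
```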

For my setup, that’s not an issue at all; I’m simply using an Intel I226-V 2.5Gb NIC in one of the x1 slots, currently linked at just 1Gb.

Power Consumption

After a fresh Proxmox install with all four NVMe drives and four DIMMs (128 GB DDR4 @ 3200 MHz) but no SATA drives yet, and with all relevant power-saving options enabled, idle consumption sits around 16–18W at the wall. That’s noticeably more than my ThinkCentre M920q, but this one server has enough compute, RAM and connectivity to effectively replace two of those.
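
The software side of those power-saving options can be checked quickly; the sketch below (assuming a standard Proxmox/Linux kernel, nothing board-specific) reads the CPU frequency governor and the PCIe ASPM policy from sysfs, two settings that noticeably influence idle draw on a build like this.

```python
#!/usr/bin/env python3
"""Quick look at two software-side power-saving knobs (a sketch, not a tuning guide)."""
from pathlib import Path

def read_sysfs(path: str) -> str:
    try:
        return Path(path).read_text().strip()
    except OSError:
        return "n/a"

# Governor of the first core (usually identical across all cores).
governor = read_sysfs("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor")
# The active ASPM policy is shown in [brackets], e.g. "default performance [powersave] powersupersave".
aspm = read_sysfs("/sys/module/pcie_aspm/parameters/policy")

print(f"CPU governor:     {governor}")
print(f"PCIe ASPM policy: {aspm}")
```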

In normal use, with two additional SATA SSDs installed (but still no HDDs) and several VMs and LXC containers running yet mostly idle (Syncthing, Home Assistant, Immich, GitLab, GitLab Runner, secondary DHCP and DNS servers, etc.), consumption hovers around 25–30W. With more VMs, power usage scales upward by a few watts fairly quickly. For example, running a more active service like Checkmk plus its agents in a couple of VMs and containers on a 1-minute check cycle may add something like 3–5W (though I haven’t measured this precisely). That’s why I’ve increased the cycle time to 5 minutes, which is absolutely fine for me.

Conclusion

Would I build or even recommend this setup from scratch if I didn’t already have some of the components? No. Depending on PCIe lane needs, I’d rather look at boards like the Minisforum BD795M, which offer a ton of compute at a fairly low price and may be even more power-efficient (depending on UEFI tuning, of course). If Reddit is correct, that board also supports 8x4x4 bifurcation, which allows for some flexibility. Additional SATA ports could then be added via an M.2-to-SATA adapter, for example, if required. Still, this server is definitely a step up from my ThinkCentres, both in capability and in sheer size…

By the way:

  • Without HDDs spinning, the server is of course absolutely quiet at any load, at least with the case open. Not too much of a surprise with this huge CPU cooler and a 65W TDP CPU. (This was intentional.)
  • The LEDs on the NIC as well as the NVMe adapter board are super bright. Why?