For the past several years, I’ve been updating a little self-hosting puzzle I made for myself to keep me busy. I have an old Power Mac G4 Cube shell. I’ve been designing, 3D printing, and releasing the completed designs for new skeletons that replace Apple’s core with one of my own, letting off-the-shelf parts fit into that little case and serve my home networking needs. I technically beat Apple to the internet with the first ARM-powered Mac, years ago.
I switched back to Intel for the most recent incarnation of this rig, but I’m definitely cheating a little at this point to get it all to fit in there. I’m not proud of this latest version, and I don’t think I ever released it publicly. The project gave me a fun little engineering challenge, but now I fear I can’t go further with it without an electrical engineering degree and several years of PCB design work under my belt. I need to build a big-boy server now. It’s time to move on.
This little server has always been quiet and energy efficient. It’s been stable and reliable. It’s just getting to the point where I can’t realistically fit all the parts I want into that shell anymore.
One of the things I want to do is declutter my equipment a bit, combining several things into one enclosure if possible. I’d like to move from my Firewalla Purple to something that doesn’t need the cloud at all, and replace more of my cloud-based things, including my home cameras, with self-hosted options. The way I want to do those things means I really can’t cram them into that box anymore.
My current plan is to build a PC with a Ryzen 7000- or 9000-series CPU with at least six cores, install Proxmox, put OPNsense in a VM with a few cores pinned to it and PCI passthrough to an Intel dual-port NIC, and install a Coral TPU for Frigate detection. I’d add an Intel GPU to handle media decode and encode, with Frigate managing my cameras and Jellyfin managing my media. I’d install Immich, qBittorrent, and i2pd, and if it’s going to be doing all that anyway, why not throw in a second AMD GPU just for rendering games in a Linux VM (accessed via PCI passthrough) and stream those to my TV via Sunshine? Just throw it all in that one box. Build it into a rack case, put it in the basement. Make it nice and clean and tidy. All in one piece.
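For what it’s worth, the OPNsense piece of that plan boils down to a pretty small Proxmox VM config. A rough sketch of what mine might look like; the VMID (100), PCI address, and pinned cores are all placeholders that depend on the actual board:

```
# /etc/pve/qemu-server/100.conf : hypothetical OPNsense VM
cores: 2
cpu: host
memory: 4096
machine: q35
# pin the vCPUs to two physical cores (Proxmox 7.3+ "affinity" option)
affinity: 4-5
# pass through both ports of the Intel dual-port NIC
hostpci0: 0000:03:00.0,pcie=1
hostpci1: 0000:03:00.1,pcie=1
scsi0: local-lvm:vm-100-disk-0,size=32G
net0: virtio=DE:AD:BE:EF:01:00,bridge=vmbr0
```

IOMMU has to be enabled in the BIOS, and the NIC needs to sit in its own IOMMU group for the passthrough to work cleanly.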
There are two problems with this. First, it’s going to be expensive, but that’s the lesser problem: I can spread the build out over time and add resources as they’re needed. It doesn’t have to do everything on day 1; the old stuff still works. The other issue is power consumption. This thing is NOT going to be as efficient as my ARM-based Firewalla Purple and the N100 LattePanda Mu in my G4 Cube. It will be able to do far more than those two on their own could ever hope to, but most of the time it won’t need all the muscle it has. It’ll just sit idle, consuming more energy to do the same shit.
I thought maybe I could skip the gaming requirement, modify something like a WTX Pro off Amazon to use an Intel Arc GPU for the transcoding and camera decoding, and figured that would work well enough. It would sip power, cost less, and do almost everything the other setup would, but be less versatile and more janky thanks to the modifications I’d have to make for the GPU. I looked at other Ryzen embedded boards and Intel-based NAS boards, and they all had something about them that made them impractical for this. Then I saw a video on YouTube today about someone going the other direction with his homelab due to the energy expense, breaking it all up into smaller, weaker hardware tied together with 2.5G Ethernet. A little N100-based NAS, a little ARM-based this and that, all connected through the network but each acting as its own independent box. Ugly, sloppy, more complex, but it used MUCH less power than one big box.
I figured maybe I could set up Proxmox to only spin up the gaming VM when I needed it, and when it’s shut down, power down the AMD GPU and maybe even disable the CPU cores I’d have pinned to that VM. The CPU cores probably wouldn’t save much power, though; it may even be more efficient to leave them available for the running containers rather than collapsing those smaller container loads onto the few remaining cores and clocking them up to compensate. I wasn’t sure of the math on this. According to ChatGPT it’s kind of a wash, but the AI is really only useful if I know enough to correct it or question it when it says something suspicious, so that doesn’t tell me much.
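The spin-up/spin-down part, at least, is automatable with Proxmox hookscripts. A rough sketch, assuming the gaming VM is VMID 200 and its pinned cores are 8-15 (both placeholder numbers); it would be attached with `qm set 200 --hookscript local:snippets/gaming-cores.sh`:

```
#!/bin/bash
# Hypothetical hookscript: Proxmox calls this with "<vmid> <phase>".
# Park the gaming VM's pinned cores after shutdown, restore them before boot.
VMID="$1"
PHASE="$2"
CORES="8 9 10 11 12 13 14 15"   # placeholder: cores pinned to the gaming VM

set_cores() {
  for c in $CORES; do
    echo "$1" > "/sys/devices/system/cpu/cpu$c/online"
  done
}

case "$PHASE" in
  pre-start) set_cores 1 ;;   # bring the cores back online before the VM boots
  post-stop) set_cores 0 ;;   # offline them once the VM is fully stopped
esac
exit 0
```

Whether parked cores actually save meaningful wattage versus just letting them idle in a deep C-state is exactly the math I’m unsure about.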
I’m just in the planning stages now. I considered Intel for the CPU, but the prices are higher, the chips aren’t as good as AMD’s per dollar, and I’d get a longer life out of an AM5 socket than out of Intel’s sockets, which change every time a board member sneezes. Plus, the AMDs are generally lower on power consumption.
I’m kind of thinking I’m getting carried away with this, and that the power draw won’t be all that major considering it’s just in my basement and not churning through heavy traffic in an enterprise environment. But I’ve only ever built my 24/7 stuff out of efficient, low-power hardware, so I’m not sure what I’m in for here. I know I can’t build an ARM server, because OPNsense isn’t supported there.
I need some outside opinions. I’m drowning in options here.
If you haven’t used Sunshine to play games yet, I’d first try it out with whatever equipment you have before going all-in. It sounds fucking cool on paper, but the whole experience wasn’t all that great for me. Not Sunshine’s fault, but the games I play are so latency-sensitive that it was barely playable.
Personally, if the games play well, I would just go for it.
Well, I’d be playing over the wired network with an Apple TV 4K in the living room. I don’t game much at all, but there are a few upcoming games I want to enjoy. Is there a list somewhere of which games dislike VMs?
Were you playing over Ethernet or WiFi? Latency over Ethernet is quite good. Unless I’m playing something like Smash, I never feel it.
I was talking more about whether they can personally tolerate it or not. I thought Factorio over Wi-Fi would be okay even with the inevitable latency, but it was slightly off in a way that I simply could not continue. Meanwhile, I’ve seen people playing ranked games of Rainbow 6 Siege with a similar setup.
I’ve also looked at an idea like this; what stopped me was anti-cheat engines. They’ll usually block playing in a VM, and there’s no real way to trick them into thinking you’re not in one.
I don’t agree that having multiple boxes for different purposes linked by a network is ugly, if that’s what works best for you. It’s often what you see in enterprise, and it should be plenty fast for your needs.
A full computer isn’t a bad idea either; if you want a GPU for transcoding, it could be good. Honestly, I’ve found that in most cases CPU transcoding with modern CPUs is quite good. If you go the GPU route, I’d simply look for a cheap used GPU; it should do the trick!
Nah, you can trick them, it just takes tweaking a ton of settings that most GUIs don’t expose, basically passing through all hardware IDs and blahblah to the VM, giving them real MAC addresses, etc. It’s a huge pain and would limit what options you can roll with in the VM settings since the options have to be compatible with the real hardware underneath, since that’s what the VM’s OS will see, but it’s possible.
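In Proxmox terms, the knobs involved look roughly like this (all values are placeholders, and whether any given anti-cheat is actually fooled is a separate question):

```
# hypothetical additions to the gaming VM's .conf
cpu: host,hidden=1                                  # hide the KVM signature from the guest
args: -cpu host,kvm=off,hv_vendor_id=notahypervi    # mask the Hyper-V vendor string
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0         # a "real-looking" MAC instead of a QEMU-prefixed one
```

Passing through real hardware (GPU, USB controller, even an NVMe drive) on top of this is what makes the guest look most like bare metal, which is exactly the "options have to match the real hardware" constraint mentioned above.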
Definitely easier to just have a separate gaming rig.
My concern with CPU transcoding is that it just won’t be able to keep up with all the concurrent streams, whereas a GPU will do so effortlessly. Intel GPUs have the best encoders, second only to Apple’s. AMD…has the worst.
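For reference, a typical Jellyfin-style transcode done entirely on an Arc card via QuickSync looks something like this with ffmpeg (assuming a QSV-enabled build; filenames and bitrate are made up), and an Arc card can run a stack of these concurrently:

```
# decode, scale, and encode all on the GPU via QSV
ffmpeg -init_hw_device qsv=hw -hwaccel qsv -hwaccel_output_format qsv \
       -i movie-4k.mkv \
       -vf 'scale_qsv=w=1920:h=1080' \
       -c:v h264_qsv -b:v 6M \
       -c:a copy movie-1080p.mkv
```

Since the frames never leave the GPU between decode, scale, and encode, the CPU stays nearly idle while this runs.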
Just a little apology to anyone who read this: I definitely rambled on here. I’m overrun with ideas and just don’t know how to trim it all down.