Virtualization & VM Management - Which Hypervisors I Use and What I Do With Them

I run a few different hypervisors across my home environment, depending mostly on what I am trying to do:

Where I Script - My Workstation

I spend most of my time, like many, at my personal workstation PC. It has more powerful hardware than my server does, and more importantly, has a monitor, keyboard, and mouse (my server operates headless, without any monitor or direct input devices attached). Since the server runs my "production" environment, I do not often test or build scripts directly on it. Instead, I use one of a few hypervisors I have installed on my workstation:

1. Service/Application Testing - VMware Workstation

I run a virtual machine with roughly quarter-scale hardware compared to my server (4 cores / 8GB RAM), running the same OS version and kernel as my "production" server. I use it to test third-party apps or services before deciding whether to run them on the server itself. This helps prevent unintentional interruptions to my lab's ongoing services (SMB share, media hosting, my always-on VM, etc.), and avoiding unnecessary reboots of the production server keeps downtime to a minimum.

2. Basic Script Testing - Windows Subsystem for Linux (WSL)

Finding WSL was a godsend for testing basic scripts - it is a tool built into the Windows 10 and 11 ecosystem that can run an entire Linux-based operating system as a contained application. Because it is integrated with Windows, file access is fairly straightforward: anything stored on the "host" PC's drives is available to the contained OS as mounted drives (located at /mnt/c , /mnt/d , etc.), and the "host" PC automatically adds links to the contained OS's storage in File Explorer. This lets me test-run whole scripts (or snippets) during development without needing to launch a 'traditional' virtual machine (which is both slower and more resource-intensive).
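As a quick illustration (the username, paths, and script name below are placeholders, assuming a default Ubuntu distro under WSL):

```bash
# Inside a WSL shell: Windows drives are auto-mounted under /mnt/<drive-letter>.
# "cleanup-test.sh" is a hypothetical script saved on the Windows C: drive.
cd /mnt/c/Users/jort/scripts
bash cleanup-test.sh

# Going the other way, the WSL distro's own filesystem is reachable from
# Windows File Explorer at a path like \\wsl$\Ubuntu\home\jort
```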

3. Legacy Software, New OS Testing, Misc. - Hyper-V

I use Hyper-V to manage what I consider my more "temporary" VMs. I have a virtualized Windows XP machine that I use to run legacy software (some older programs/games do not play well with more modern versions of Windows), as well as a handful of 2-core, 4GB-RAM "test bed" machines running alternative operating systems (so far I have explored Rocky Linux, CentOS, TrueNAS Core & Scale, Debian 12, and Windows Server, to name a few). While each hypervisor has its pros and cons, I find Hyper-V to be one of the more user-friendly options, which is why I use it for the majority of my 'disposable' testing.

My “Production” Server - HomeLab

The CPU in my lab server is an 8-core, 16-thread chip running at around 4GHz - far more than I feasibly need to operate a file server and media streaming (most of the encoding for the media stream is handled by the dedicated GPU). While planning out the services it would run full-time, I looked into what kind of CPU-heavy tasks I could run to maximize my "output-per-watt" and simulate a more realistic, enterprise-like "production" environment. Computing nonsense locally (just for the sake of computing something) seemed wasteful, and my research led me to the "crowdsourced computing" project space. After some comparison and testing on both my workstation and a spare Ubuntu 'guinea pig' system, I landed on Folding@Home, a collaborative public research project spearheaded by medical researchers at Stanford University.

Their software downloads raw data (in this case, protein structure data), mathematically processes it into a useful format using all available CPU power, and uploads the organized results back to the project's servers for future use. You can learn more about Folding@Home here on their website. You can also view my server's progress in real-time here!

I opted for VirtualBox to run this service primarily because of its command-line interface (tested against VMware Workstation). Running the virtual machine software is hardly a challenge for the hardware in my lab. Running the VM without overheating the CPU, however, was a massive challenge, mostly of my own making. The chassis holding my lab's hardware is a very old case from the late '90s - a time when components drew far less power and output far less heat - best described as a suffocating metal box. There is a full breakdown of the hardware solutions I implemented in My_Content (linked here), if you are interested. Once I had thermal performance "stabilized" at the hardware level, I started fine-tuning the virtual machine itself. Dedicating 6 cores to the VM was already proving more efficient than running all 16 threads at 50%, but those 6 'dedicated' cores were still running hot enough to throttle the rest of the CPU.

While Folding@Home is robust enough software to run its compute load, the processing options are very "all or nothing" - running F@H on bare metal would leverage all 16 threads at either 50% or 100% (depending on software configuration). Containing F@H inside a virtual machine granted me direct control over how many of my physical CPU cores to dedicate and, more importantly, how 'hard' to hit them. This approach also offered an unexpected advantage - VirtualBox appears to load-balance by leveraging individual host CPU cores dynamically (using the most 'available' core vs. dedicating 6 specific cores). This helped reduce interruptions to other running services, but running those cores at 100% utilization still produced more heat than the cooling system could dissipate. The best solution was to artificially limit how much of each core's processing capacity VirtualBox could access (until I can improve the cooling solution):

Adjusting the "maximum" CPU threshold in the hypervisor settings 'tricks' the virtualized operating system into thinking it still has full use of its CPU. Whenever a host core in use hits the 80% utilization cap, VirtualBox reports it to the virtualized OS as "100%". This allows the VM to run F@H at "100%" while the CPU cores themselves never cross 80% utilization at the system level.
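For reference, the same knobs are exposed through VirtualBox's VBoxManage CLI; a minimal sketch, with "folding-vm" standing in for my VM's actual name:

```bash
# The VM must be powered off for modifyvm changes to apply.
# Pin the VM to 6 virtual CPUs and cap each at 80% of a host core.
VBoxManage modifyvm "folding-vm" --cpus 6 --cpuexecutioncap 80

# Verify the settings took effect.
VBoxManage showvminfo "folding-vm" | grep -iE "Number of CPUs|CPU exec cap"
```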

*Finding the "sweet spot" core limit was a challenge - reducing core utilization helped with thermal performance, but came at the cost of increased processing time for each compute job F@H would run. Ultimately, the 80% utilization value keeps the CPU running a little below its throttle limit with a minimal (rudimentary testing indicated 7%-10%) increase in processing time.

The VM itself runs headless 24/7, managed either via CLI commands or through the VirtualBox GUI - which was the other feature that won VirtualBox my "contract" for service. Managing VMware machines via the CLI was not complicated, but I ran into configuration issues passing their GUI through an SSH connection (using X11 forwarding), while VirtualBox's CLI tools were equally robust **and** it passes GUI windows over SSH near-flawlessly. The machine can be started (with GUI or headless), stopped, paused, or rebooted right from the terminal, and whenever I need to interact directly with the VM's operating system, I can quickly open its GUI on my workstation and administer it over the LAN.
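A rough sketch of the day-to-day management commands (again with "folding-vm" as a placeholder name):

```bash
# Start the VM with no display attached - its normal 24/7 state.
VBoxManage startvm "folding-vm" --type headless

# See what is currently running.
VBoxManage list runningvms

# Pause, resume, or shut down cleanly from the terminal.
VBoxManage controlvm "folding-vm" pause
VBoxManage controlvm "folding-vm" resume
VBoxManage controlvm "folding-vm" acpipowerbutton   # asks the guest OS to shut down
```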

