For many years, a home IT lab was a “requirement” for any budding IT Pro – you needed a place to test out new software and learn. In some ways, this requirement has lessened with the rise of cloud computing, but many of our great DOJO contributors continue to use a home lab setup. In this article, we’ll hear from them about what their setups are, why they built them, the choices they made along the way, and what they plan for the future.
Andy Syrewicze
Altaro/Hornetsecurity Technical Evangelist – Microsoft MVP
Why do you have a lab?
The main reason I’ve always maintained a lab is to keep my skills current. Not only does my lab let me fill knowledge gaps in the technologies I already work with, it also lets me test new features and explore technologies I’ve never touched before. In doing this I can make sure I’m effective with and knowledgeable about current and emerging technologies. Plus… it’s just fun as well =)
How did I source my home lab?
I researched commonly used home lab equipment on the web, paired that with my working knowledge of the hardware industry, and settled on commodity SuperMicro gear that was cost-effective yet had some of the features I was looking for. Other bits and pieces I picked up over the years as needed. For example, I’ve recently been doing some work with Azure site-to-site VPNs and so purchased a Ubiquiti firewall capable of pairing with an Azure VPN gateway.
What’s your setup?
I have a 2-node hyper-converged cluster that is running either Storage Spaces Direct, Azure Stack HCI, or VMware vSAN at any given time.
Currently, each node has:
- 1 x 6-core Intel Xeon CPU
- 32GB of memory (soon to be upgraded to 64GB)
- 4 x 1TB HDDs for capacity storage
- 2 x 500GB NVMe drives for cache
- 1 x 250GB SSD for the host operating system disk
- 1 x Intel i350 1Gbps quad-port Ethernet adapter for management and compute traffic
- 1 x dual-port 10Gbps Mellanox ConnectX-3 for east/west storage traffic
Additionally, my physical lab has:
- 1 Cyberpower UPS with about 1 hour of runtime in case of power outages
- 1 ReadyNAS 316 with 4 x 1TB HDDs for backup storage
- 1 Ubiquiti UDM Pro for firewalling and layer-3 routing
- 2 Ubiquiti WAPs for wireless access in the house
- 2 NetGear ProSAFE switches wired in a redundant capacity
On top of that, I pair some Azure cloud resources with my lab and send private traffic over my site-to-site VPN between my UDM Pro and my Azure vNet (a sketch of the Azure side of that connection follows the list below). Services running in the cloud include:
- 1 x IaaS VM with AD Domain Services running on it
- 1 x storage account for Azure Files storage
- 1 x storage account for blob offsite backup storage
- 1 x container in Azure Container Instances running a Minecraft server for my son and his friends (HIGHLY critical workload I know…)
- Some basic Azure Arc services (I’ve been slowly working on this over the last few months)
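For readers curious about the Azure side of a site-to-site VPN like this, here is a minimal PowerShell sketch using the Az.Network module. All names, the public IP, and the address prefix are made up for illustration, and an existing virtual network gateway on the vNet is assumed:

```powershell
# Hypothetical names and addresses throughout; an Azure VPN gateway must already exist.
# Represent the on-prem firewall (the UDM Pro) as a "local network gateway" in Azure.
$local = New-AzLocalNetworkGateway -Name "home-udm-pro" -ResourceGroupName "lab-rg" `
    -Location "westeurope" -GatewayIpAddress "203.0.113.10" -AddressPrefix "192.168.0.0/24"

# Fetch the existing virtual network gateway on the Azure vNet.
$gw = Get-AzVirtualNetworkGateway -Name "lab-vnet-gw" -ResourceGroupName "lab-rg"

# Create the IPsec site-to-site connection; the same pre-shared key goes into the UDM Pro.
New-AzVirtualNetworkGatewayConnection -Name "home-to-azure" -ResourceGroupName "lab-rg" `
    -Location "westeurope" -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $local `
    -ConnectionType IPsec -SharedKey "replace-with-a-long-random-key"
```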
What services do you run and how do they interact with each other?
I mostly run virtualized workloads on the on-prem cluster. These are typically VMs, but I’ve started tinkering a bit with containers and Azure Kubernetes Service. The cluster also runs VMs for AD/DNS, DHCP, backup/DR, file services, and a few other critical gaming workloads for the end-users in the house! The cloud resources host backup AD/DNS components, file storage, and offsite storage for the on-prem backups. I also use Azure for the occasional large VM that I don’t have the resources to run on-prem.
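As a rough illustration of how a two-node hyper-converged cluster like this comes together on the Microsoft stack, here is a hedged PowerShell sketch. Node, cluster, and volume names are hypothetical, and the Failover Clustering feature is assumed to already be installed on both hosts:

```powershell
# Validate the hardware first, including the S2D-specific checks.
Test-Cluster -Node HV01, HV02 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"

# Build the cluster without claiming shared storage; S2D will pool the local drives.
New-Cluster -Name S2D-CL01 -Node HV01, HV02 -NoStorage -StaticAddress 192.168.0.50

# Claim the local NVMe and HDD drives, create the storage pool, and set up the cache tier.
Enable-ClusterStorageSpacesDirect

# Carve a resilient volume for VMs out of the pool.
New-Volume -FriendlyName "VMs" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 1TB
```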
What do you like and dislike about your setup?
I’ll start with the positive. I really like that my lab is hyper-converged as well as hybrid-cloud, with resources in Azure accessed via VPN.
There are two things I’d like to change about my setup:
- More memory for the compute nodes. When running VMware vSAN, vSAN itself and vCenter (required for vSAN) consume about 48GB of memory, which doesn’t leave much left over for VMs. Thankfully S2D and Azure Stack HCI don’t have this issue. Either way, memory is my next upgrade, coming soon.
- Upgraded Mellanox cards. Don’t get me wrong, the ConnectX-3s were amazing for their time, but they are starting to get quite outdated. More recent ConnectX cards would be preferred and better supported, but there is certainly a cost associated with them.
What does your roadmap look like?
As mentioned above, I’m likely to add more memory soon and potentially upgrade my storage NICs. Additionally, I’d like to add a 3rd node at some point, but that is quite a ways down the line.
Any horror stories to share?
Not really. I had one situation where I was away from the house on a work trip and the cluster rebooted due to an extended power outage. The OpenSM service, which runs the subnet manager for the storage network between the direct-connected Mellanox cards, didn’t start, so the storage network never came online. This meant that the core services for the house never came online either. Thankfully, the VPN to Azure remained up, and things in the house were able to use my Azure IaaS-hosted domain controller for DNS resolution until I got home.
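As a hedged sketch of the kind of fix that prevents a repeat, the subnet manager service can be set to start on its own after a reboot. The service name varies by Mellanox driver package, so “OpenSM” below is an assumption; verify it with Get-Service first:

```powershell
# Confirm the actual service name first - "OpenSM" is an assumption that varies by driver package.
Get-Service -Name "*OpenSM*"

# Delayed auto-start gives the NIC driver time to initialize after a cold boot.
sc.exe config "OpenSM" start= delayed-auto

# Restart the service automatically if it fails (reset the failure counter daily).
sc.exe failure "OpenSM" reset= 86400 actions= restart/60000

Start-Service -Name "OpenSM"
```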
Eric Siron
Senior System Administrator – Microsoft MVP
You may know Eric as a long-time DOJO contributor whose first articles for this site were written on stone tablets. He knows more about the inner workings of Hyper-V than anyone else I know.
All the technical articles that I write depend on first-hand tests and screenshots. My home lab provides the platform that I need while risking no production systems or proprietary data. Like the small business audience that I target, I have a small budget and long refresh cycles. It contained no cutting-edge technology when I originally obtained it, and it has fallen further behind in its four years of use. However, it still serves its purpose admirably.
Component Selection Decisions
Tight budgets lead to hard choices. Besides the cost constraint, I had to consider that my design needed to serve as a reproducible model. That ruled out perfectly viable savings approaches such as secondhand, refurbished, or clearance equipment. So, I used only new, commonly available, and inexpensive parts.
Architectural Design Decisions
Even on a budget, I believe that organizations need a strong computing infrastructure. To meet that goal, I designed a failover cluster with shared storage. As most of the items that I used now have superior alternatives at a similar or lower price, I will list only generic descriptions:
- 2x entry-level tower server-class computers with out-of-band module
  - 16 GB RAM
  - 2x small internal drives
  - 2x 2-port gigabit adapters
    - 1 port on each adapter for virtual networks
    - 1 port on each adapter for SMB and iSCSI
- 1x entry-level tower server-class computer (as shared storage)
  - 8 GB RAM
  - 4x large internal drives
  - 2 additional gigabit adapters for SMB and iSCSI
- 24-port switch
- Battery backup
All the technical articles that I have written in the last few years involved this lab build in some fashion.
Lab Configuration and Usage
Since the first day, I have used essentially the same configuration.
The two towers with an out-of-band module run Windows Server with Hyper-V and belong to a cluster. Each one hosts one of the lab’s domain controllers on mirrored internal storage.
The single tower with the large drive set acts as shared storage for the cluster. The drives are configured in RAID-5. Because this is a lab, it also holds the virtual machine backups.
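For anyone wanting to reproduce the shared-storage piece, here is a minimal PowerShell sketch of a Windows Server iSCSI target. The feature name is real, but the target name, initiator IQNs, and paths are hypothetical:

```powershell
# Install the iSCSI Target Server role on the storage tower.
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Define a target that only the two cluster nodes may connect to (IQNs are hypothetical).
New-IscsiServerTarget -TargetName "lab-cluster" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:node1.lab.local", "IQN:iqn.1991-05.com.microsoft:node2.lab.local"

# Back the target with a virtual disk on the RAID-5 volume and map it to the target.
New-IscsiVirtualDisk -Path "D:\iSCSI\csv1.vhdx" -SizeBytes 500GB
Add-IscsiVirtualDiskTargetMapping -TargetName "lab-cluster" -Path "D:\iSCSI\csv1.vhdx"
```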
I generally do not integrate cloud services with my lab, primarily because a lot of small businesses do not yet have a purpose for integration between on-premises servers and public clouds. I do use basic services that enhance the administrative quality of life without straining the budget, such as Azure Active Directory.
Lab Maintenance, Management, and Monitoring
Whenever possible and practical, I use PowerShell to manage my lab. When graphical tools provide better solutions, I use a mix of Windows Admin Center and the traditional MMC tools (Hyper-V Manager, Failover Cluster Manager, Active Directory Users and Computers, etc.). For monitoring, I use Nagios with alerts through a personal e-mail account. I back up my virtual machines with Altaro VM Backup.
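As an example of that PowerShell-first approach, a quick morning health check of a lab like this might look something like the following sketch (cluster and host names are hypothetical):

```powershell
# Are all cluster nodes up?
Get-ClusterNode -Cluster "LabCluster" | Format-Table Name, State

# Which VMs are not running, and where?
Get-VM -ComputerName Host1, Host2 |
    Where-Object State -ne 'Running' |
    Format-Table Name, State, Status, ComputerName

# How much free space is left on the cluster shared volumes?
Get-ClusterSharedVolume -Cluster "LabCluster" |
    Select-Object Name, @{ n = 'FreeGB'; e = { [math]::Round($_.SharedVolumeInfo.Partition.FreeSpace / 1GB, 1) } }
```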
Aside from Altaro, none of the tools that I use in the lab requires additional license purchases. For instance, I do not use any System Center products. I believe that this practice best matches my audience’s uses and constraints. Most paid tools are too expensive, too complex, too resource-hungry, and require too much maintenance of their own to justify use in small businesses.
I only reformat drives for operating system upgrades. The in-place upgrade has become more viable through the years, but I still see no reward for the risk. On general principle, I do not reload operating systems as a fix for anything less than drive failures or ransomware. Once I feel that Windows Server 2022 has had enough testing by others, these hosts will undergo their third ever reformat.
Pros and Cons of this Lab
Overall, this lab satisfies me. A few of the reasons that I like it:
- Low cost
- Stability
- Acceptable performance for typical small business daily functions
- Good balance of performance and capacity
- Ability to test the most common operations for a Microsoft-centric shop
Things that I would improve:
- The storage performs well enough for a regular small business, but I’m an impatient administrator
- Memory
- Network adapter capabilities
Theresa Miller
Principal Technologist at Cohesity and Microsoft MVP
Why do you have a lab?
I have had various forms of home labs over the years, for varying reasons. In fact, when I built my home, I made sure the house was hard-wired for internet, which shows how long I have been in the technology industry. At the time, hard wiring was the only way to distribute internet to all the rooms in a home; unlike today, where we have wireless and Wi-Fi extender options to help with network stability, Wi-Fi coverage in places like the outdoors, and additional security features. Back to the question at hand: my home lab is what enables me to do the IT community work I have done, including creating training courses, blogging, speaking at events, and more. As for when and why: I decided to get a home lab over 8 years ago and have continued to use every evolution of it for the same purpose, educating myself and others.
How did I source my home lab?
Initially, my home lab was sourced from end-of-life equipment that my employer let employees take once the storage was wiped, but eventually I transitioned to sourcing my hardware through a side business I have run for over 8 years. By purchasing a single Dell PowerEdge server, I was able to virtualize all of the servers I needed to run Active Directory and any Windows servers necessary at the time. Beyond that, my IT community involvement has given me access to the appropriate software licensing needed to support such an environment.
Over time my home lab has changed: my hardware reached end-of-life, and what was once set up in my basement is now hosted in the Azure cloud. Yep, I decommissioned my hardware and switched to the cloud.
What were your considerations and decision points for what you decided to purchase?
The transition to the cloud came about because dealing with end-of-life hardware had become a challenge, with ever-evolving hardware requirements leaving my equipment outdated for the latest software. Not only did it become time-consuming to manage, it also became too costly.
What’s your setup?
My setup today is in the Azure cloud, so the only hardware in my home is my internet router and the Eero Wi-Fi extenders needed to keep the network reliable. I find that running all-cloud keeps my backend infrastructure up to date. For storage, I use Azure managed disks (block-level storage volumes managed by Azure) on the servers I run, always with an eye on keeping resource consumption low.
What services do you run and how do they interact with each other?
My minimal environment consists of a Windows VM with Active Directory deployed, the Azure DNS service, and one additional basic VM that changes depending on the technology I am testing. The basic VM can grow to multiple VMs if the software being deployed requires it; in that scenario, I may also have SQL Server deployed. I try to keep the deployment simple but keep the core foundational elements in place, and wipe systems as needed. How do I manage all of this? I use cost management services that notify me if I hit the threshold I am willing to pay (a sketch of setting up such a budget follows below). At that point, I may need to decide which systems must stay online and what I can shut down, or whether I want to pay more that month.
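As a hedged sketch of that cost guardrail, a monthly budget with an email alert can be created with the Az.Consumption module; the name, amount, threshold, and address below are made up:

```powershell
# Alert at 80% of a hypothetical $75/month budget; StartDate must be the first of a month.
New-AzConsumptionBudget -Name "homelab-budget" -Amount 75 -Category Cost `
    -TimeGrain Monthly -StartDate (Get-Date -Day 1).Date `
    -ContactEmail "me@example.com" `
    -NotificationKey "80percent" -NotificationEnabled -NotificationThreshold 80
```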
What do you like and dislike about your setup?
I am really happy with my setup since moving to a cloud model, because maintaining the hardware, including the cost of electricity, had become time-consuming. While the cost of the cloud virtual machines keeps me from having a large-scale deployment, I am OK with that. It’s fun to tear down and bring online whatever I need when I am looking to try something new.
What does your roadmap look like?
My roadmap is strictly focused on what technology to try out next, and I find that I make these decisions based on technology that I cross paths with that is interesting in that moment. It could be something new, or something that has been around for some time that I may need to dive deeper into for a project or just for new learning and sharing.
Any horror stories to share?
I don’t have any horror stories to share when it comes to my home lab. I have adapted as needed from on-premises hardware in my home to a cloud model that has allowed me to be agile and keep my learning and technology sharing ongoing.
Paul Schnackenburg
IT Consultant & DOJO Editor
Finally, here are some words from me.
If you’re starting out in IT today, you probably don’t realize the importance of having a home IT lab setup. But when the cloud was just a faint promise, if you wanted to practice on your own, further your skills, or try something out, you had to have your own hardware to do it on. Early on I used VMware Workstation to spin up VMs, but there are limits to what you can fit on one machine, especially when you need multiple VMs running simultaneously, and 15 years ago RAM was a lot more expensive (and came with far fewer GB) than it is today.
After some years I realized that I needed separate machines to practice setting up Hyper-V clusters, Live Migration and the like, so I bought the first parts of my setup back in 2012, starting with three “servers”. I couldn’t justify the cost of real servers, so I got desktop-class motherboards, Intel i5 CPUs and 32 GB of RAM for three servers. One became a storage server, running Windows Server 2012 as an iSCSI target (again, I didn’t have the budget for a real iSCSI SAN), and the other two became VM hosting nodes in the cluster. Connectivity came from Intel 4-port 1 Gb/s NICs, offering decent bandwidth between nodes. A few years later I added two more nodes and a separate domain controller PC. The backend storage for Hyper-V VM disks was changed over to an SMB 3 file server once Hyper-V supported this (a sketch of that kind of share follows below). All throughout this time, I was writing articles on Hyper-V and System Center for various outlets, and this setup served as my test bed for several different applications and systems. From an “investment” point of view, it made perfect sense to have these systems in place.
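Here is a minimal sketch of the SMB 3 share side of that change, run on the file server; the domain, host computer accounts, and path are hypothetical. Hyper-V hosts access VM storage with their computer accounts, which need both share and NTFS permissions:

```powershell
# Share a folder for Hyper-V VM storage; note the $ suffix on the host computer accounts.
New-SmbShare -Name "VMStore" -Path "D:\VMStore" `
    -FullAccess "LAB\HV01$", "LAB\HV02$", "LAB\Domain Admins"

# On newer builds, mirror the share permissions down to the NTFS ACL in one step.
Set-SmbPathAcl -ShareName "VMStore"
```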
I also worked as a part-time teacher, and because we were only given “hand-me-down” hardware for the first few years of Hyper-V and VMware becoming mainstream and part of the curriculum, I opted to house the servers on a desk in our hardware lab. That way my students could experiment with Live Migration and similar features, and through my own VPN connection to the boxes, I could access the cluster after hours to test new software and write articles.
In early 2016 this cluster was three nodes and one storage server, but two things happened. First, Windows Server 2016 offered a new option, Storage Spaces Direct (S2D), and I outfitted all four servers with two 1 TB HDDs and two 120 GB SSDs (small by today’s standards, but this is now eight years ago). These were all consumer-grade (again: budget) and wouldn’t have been supported for production, especially not connected to desktop-class hardware, but they did allow me (and my students) to explore S2D and VM high availability.
The other thing that happened was that Chelsio, makers of high-end Remote Direct Memory Access (RDMA) / iWARP 10/25/40 Gb/s Ethernet hardware, offered me some NICs in exchange for writing a few reviews. So, two nodes in the cluster were outfitted with a two-port 40 Gb/s card, and the other two with a two-port 10 Gb/s card. Initially, I did testing with the cabling running directly between two nodes, but this didn’t allow for a full, four-node cluster, so I purchased a Dell X4012, a 12-port 10 Gb/s switch. The two 10 Gb/s NICs used two cables each for a total bandwidth of 20 Gb/s, while the 40 Gb/s NICs came with “spider” cables with a 40 Gb/s interface at the server end and four 10 Gb/s cables connected to the switch for a total bandwidth of 40 Gb/s. This was ample for the S2D configuration and gave blazing-fast Live Migrations, storage traffic and other east-west flows.
Dell X4012 10Gb/s switch
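If you run SMB Direct over RDMA NICs like these, it is worth confirming that RDMA traffic is actually flowing; a quick sketch (the adapter name filter is hypothetical):

```powershell
# Is RDMA enabled on the Chelsio adapters?
Get-NetAdapterRdma -Name "Chelsio*" | Format-Table Name, Enabled

# Does SMB see RDMA-capable interfaces?
Get-SmbClientNetworkInterface | Where-Object RdmaCapable

# Are live SMB connections actually using them?
Get-SmbMultichannelConnection | Format-Table -AutoSize
```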
In late 2020 I left the teaching job, so the whole cluster was mothballed in my home office for a year and a half; over the last month I’ve been resurrecting it (after purchasing an Ikea bookshelf to hold it all). Currently, it’s running Windows Server 2022 Datacenter. Each upgrade has been a complete wipe and reinstall of Windows Server (Desktop Experience; Server Core is just too hard to troubleshoot).
Trying to revive this old hardware has taught me two things: first, the “fun” of wrestling with misbehaving (or just plain old) hardware was a lot more attractive when I was younger; and second, the cloud is SO much better for this stuff. Hence my home lab was mothballed for so long and I didn’t really miss it.
I use Windows Admin Center to manage it all, and I’ll also use various Azure cloud services for backup etc. to test them out.
My only “horror story” (apart from all the silly, day-to-day mistakes we all make) was during the wipe and reinstall to Windows Server 2019: I used the wrong product key and ended up with four Windows Server Standard nodes, which don’t support Storage Spaces Direct.
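A quick edition check before enabling S2D would have caught that; a small sketch with hypothetical node names:

```powershell
# Datacenter edition is required for Storage Spaces Direct.
Get-ComputerInfo -Property WindowsProductName, OsOperatingSystemSKU

# Or check every node of an existing cluster remotely.
Invoke-Command -ComputerName HV01, HV02 {
    (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').EditionID
}
```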
What’s your Homelab Setup (and do you even need one)?
As you can see, home labs come in many shapes and sizes. If you’re a budding IT Pro today and you’re wondering if a home lab is right for you, consider very carefully the use cases it would fulfil for you. I see some trainers and IT Pros opting for laptops with large amounts of storage and memory and virtualizing everything on a single PC; certainly that covers many use cases. But if your employers are still mostly on-premises and supporting server clusters is still part of your daily life, nothing beats having two or three physical cluster nodes to test and troubleshoot on. Expect to pay a few thousand US dollars (or the equivalent in your currency), and balance the extra cost of “real” servers against the savings, but added time investment, of building your own PCs.
If you’re considering setting up a machine or two for your home lab, I have the following recommendations. Select cases that allow for upgrades and changes in the future; you never know what you’ll need to install and test. Don’t spend money on expensive, server-grade hardware unless you have to; your home lab is unlikely to be mission-critical. Go for fewer nodes: it’s easy to fit out a cost-effective machine today with 64 GB, 128 GB or even more RAM, giving you plenty of room for running VMs. And use SSDs (or NVMe) for all storage if you can afford it; HDDs are just too slow.
And don’t forget the power of hosting your lab in the cloud, making it easy to rebuild and scale up and down, with a lower initial cost but a monthly subscription to keep an eye on instead.