So you decided to build a lab, now what?
Building a VMware home lab is one of the best ways to learn VMware. I’ve been a huge advocate of “whitebox” home labs, and some of my most popular posts are the ones based on home lab builds. A home lab allows you to gain hands-on experience without disrupting a production environment. We all try to avoid those RGEs (Resume Generating Events), right? There’s nothing like a place where you can start a fire and not have to worry about putting it out!
A VMware lab can come in many forms, ranging from full rackmount servers running ESXi to small PCs running hosted hypervisors. You can even run nested ESXi inside VMware Workstation. The trouble is finding the sweet spot between cost and functionality. Nobody wants to run a full-blown rack of servers anymore. Or maybe they do? I remember the good old pre-virtualization days with a full 42U rack of servers in my basement. Nowadays I tend to go for the low power stuff. Something with what I call a high WAF (Wife Acceptance Factor).
Before we go any further, you have to realize that what we are doing is in no way, shape, or form supported by VMware! None, zero, zilch. You cannot expect them to support home lab hardware; their partner ecosystem ensures that everything has been tested and will be fully supported. The good news, though, is that there is a great community out there for support. If you’re stuck, it won’t be for long. The trick here is to find hardware that works with ESXi. That can be hard, as the only hardware VMware lists is typically server-grade gear. My home lab has some serious uptime, so it is indeed possible to find hardware that is perfectly stable!
The general rule of thumb is to stick with Intel-based boards. I don’t want to fully rule out AMD setups either, but Intel boards generally have Intel-based NICs, and you will have far better luck with out-of-the-box driver support on Intel NICs. With AMD labs (and any lab setup with Realtek network adapters) you will generally have to inject what’s called a VIB into your image. A VIB is a driver or piece of software packaged for ESXi. It’s not a hard process at all and is well documented over at v-front.de. Below is a great video that covers the process in good detail.
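If you do end up needing to inject a driver, the ESXi-Customizer-PS script from v-front.de turns the whole thing into a one-liner from a PowerCLI-enabled PowerShell session. Here’s a minimal sketch; the script filename, the flags, and the net55-r8168 Realtek package name are assumptions that may differ depending on the script version and your NIC:

```powershell
# Assumes VMware PowerCLI is installed and ESXi-Customizer-PS has been downloaded from v-front.de.
# -v65  : build an ESXi 6.5 image
# -vft  : pull packages from the V-Front Online Depot
# -load : add the Realtek 8168 driver package (swap for whatever your NIC needs)
.\ESXi-Customizer-PS.ps1 -v65 -vft -load net55-r8168

# The script produces a customized installer ISO that you can write to a USB stick and boot from.
```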
To keep our builds as simple as possible, I will walk you through a couple of different builds and link to hardware that works well for an ESXi lab. We’ll start with an Intel NUC lab. One of the reasons the Intel NUCs are popular is that they are ridiculously small and pack a lot of power. The other big reason people love the NUCs is that you can install vanilla ESXi and the drivers just work with vSphere 6.5. You do not have to inject any VIB files into the installer. Download ESXi from VMware and install it, as long as you’re using a 5th or 6th generation NUC!
To keep these builds simple we will only be building a single host in the parts lists below.
The Intel NUC Lab
The NUC is a fantastic little box. Small enough that you could take it in your airplane carry-on baggage if you had to. The low power draw helps the electric bill as well. The biggest downside is the single NIC. Although there are some reports of people getting the StarTech USB network adapter to work, I have not tested it.
So if you choose to go with a NUC, you’ll want to pick up a switch that’s capable of vLANs, since you’ll need them!
Parts List:
- 1 x Intel NUC 6th Gen NUC6i3SYH (Intel i3) or 1x Intel NUC 6th Gen NUC6i5SYH (Intel i5)
- Be sure to update to at least BIOS v44. NVMe devices can be used after the BIOS has been updated; on anything earlier you’ll wonder why they keep disappearing or showing as unavailable.
- 2 x 16GB DDR4 Modules (for a total of 32GB memory)
- 1x Samsung NVMe 128GB SSD (for a vSAN build, this can be used as a cache disk.)
- 1x Samsung 500GB SSD (for a vSAN build this will be used for vSAN capacity disk.)
- 1x Cisco SG300-10 Port Managed Gigabit Switch
- Because the NUC only has a single network adapter, with no real way of expanding or adding additional adapters, you’ll ideally want to set up a few vLANs to segment your network traffic. This switch works great for that; a minimal PowerCLI sketch of the VLAN setup follows this list.
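Once the switch port facing the NUC is trunked, the host side is just a handful of VLAN-tagged port groups on the default vSwitch. Here’s a minimal PowerCLI sketch; the host address, vSwitch name, port group names, and VLAN IDs are placeholders for illustration:

```powershell
# Assumes VMware PowerCLI is installed; you'll be prompted for credentials.
Connect-VIServer -Server 192.168.1.50            # example NUC management IP

$vmhost  = Get-VMHost -Name 192.168.1.50
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0   # the default vSwitch on the single uplink

# Carve the one physical uplink into VLAN-tagged port groups per traffic type
New-VirtualPortGroup -VirtualSwitch $vswitch -Name 'Management' -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vswitch -Name 'vMotion'    -VLanId 20
New-VirtualPortGroup -VirtualSwitch $vswitch -Name 'vSAN'       -VLanId 30
New-VirtualPortGroup -VirtualSwitch $vswitch -Name 'VM-Network' -VLanId 40
```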
The Shuttle DS87 Lab
These Shuttle boxes aren’t the biggest hosts you can build, but I recommend them for two huge reasons: small size and low power. They also have dual gigabit network connections built in, so there’s no need for any additional cards! All of this equates to what I call a high wife acceptance factor, so that’s a plus. These boxes consume a fraction of the power a full desktop uses; my Kill A Watt meter reads about 44 watts at around 75% usage. I recommend running at least two of them so you can create a cluster.
Parts List:
The Host/Processor
We have to start our build with the Shuttle DS87.
For this build you have two options: go all out, or keep it simple and cheap. My opinion is to keep costs down and buy more hosts rather than putting a lot of money into just one or two large hosts. I’d rather have three medium-sized hosts in a lab any day of the week. You might also save a bit of money with an i5 over an i7.
- The all-out option (8 logical cores per host with Hyper-Threading enabled): Intel Core i7-4770S 3.10 GHz Processor – Socket H3 LGA-1150 – Quad-core (4 Core)
- The keep-it-simple-and-cheap option (4 logical cores per host with Hyper-Threading enabled): Intel Core i3 (2 Core) 3.70 GHz Processor
The biggest difference in price between the two comes down to quad-core vs. dual-core. Hyper-Threading doubles the logical core count that ESXi sees, so the quad-core i7 shows up as 8 logical cores in total.
The Memory
In terms of memory, you really want to max out the host here. It tops out at 16GB of RAM across the entire host: the system has two 204-pin SO-DIMM slots that support DDR3-1333/1600 with a maximum of 8GB per DIMM. This also happens to be a good price point these days, as two 8GB sticks are one of the cheapest options out there. Whatever you do, don’t go below 8GB total.
- Crucial 16GB Kit (8GBx2) DDR3-1600 MT/s (PC3-12800) 204-Pin SODIMM Notebook Memory
- G.Skill Ripjaws Series Laptop Memory F3-1600C9D-16GRSL 16GB (2 x 8G) DDR3 SO-DIMM
- If you want to do 8GB memory per host: Corsair Vengeance 8GB (2x4GB) DDR3 1600 MHz (PC3 12800) Laptop Memory (Again, memory is important so think twice before only doing 8GB!)
The Disk
You will install ESXi 6.5 to the USB stick (or to an SD card in the integrated SD slot if you prefer); the 1TB 7200 RPM SATA disk will be used as a vSAN capacity disk, and the 128GB mSATA drive will be used for caching. If you’re not planning on using vSAN, you really only need the USB stick. A rough PowerCLI sketch of claiming the disks into a vSAN disk group follows the parts list below.
- HGST Travelstar 7K1000 2.5-Inch 1TB 7200 RPM SATA III 32MB Cache (for vSAN capacity)
- Transcend 128 GB SATA III 6Gb/s MSA370 mSATA SSD (for vSAN cache)
- Samsung 32GB USB Stick (or any other USB stick with 8GB or so you have laying in the drawer. We seem to have a pile laying around these days.)
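As mentioned above, once ESXi is running from the USB stick, the mSATA SSD and the 1TB spinner can be claimed into a vSAN disk group. Here’s a rough PowerCLI sketch; the host name and naa.* device names are placeholders, and it assumes the host already sits in a vSAN-enabled cluster:

```powershell
# Assumes VMware PowerCLI is connected to vCenter or the host
$vmhost = Get-VMHost -Name 192.168.1.60          # example Shuttle host

# List the local disks so you can grab their canonical names
# (the ~128GB device is the mSATA cache SSD, the ~1TB device is the capacity disk)
Get-ScsiLun -VMHost $vmhost -LunType disk | Select-Object CanonicalName, CapacityGB

# Claim them into a vSAN disk group (replace the naa.* names with your own devices)
New-VsanDiskGroup -VMHost $vmhost `
    -SsdCanonicalName      'naa.0000000000000001' `
    -DataDiskCanonicalName 'naa.0000000000000002'
```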
Additional VIBs
With the Shuttle DS87, the two built-in network adapters do not work with ESXi 6.x out of the box, so you will have to use the ESXi-Customizer script above to inject the Realtek driver VIB into your ESXi image before you install ESXi.
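Once the customized image is installed, it’s worth double-checking that the driver VIB made it in and that both NICs are visible. Here’s a quick sketch via PowerCLI and esxcli; the r8168 package name is an assumption based on the common Realtek driver, so adjust the filter to match whatever VIB you injected:

```powershell
# Assumes VMware PowerCLI is connected to the host
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 192.168.1.60) -V2

# List installed VIBs and look for the injected Realtek driver
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Name -like '*r8168*' }

# Confirm both onboard NICs were detected
$esxcli.network.nic.list.Invoke()
```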
To wrap up…
You’ll want to remember that VMware has a free 60-day evaluation license. They also have a great option for home labs called the VMUG EVALExperience, which is similar to what Microsoft TechNet used to be in its heyday: you pay roughly $200/year for keys that you’re allowed to run solely for lab purposes.
The two lab setups above are just a place to start; there is a lot of hardware out there that works with ESXi. The NUC takes 32GB of memory, which is a big positive, but you only get one network adapter. The Shuttle builds max out at 16GB of memory per host but give you two network adapters out of the box.
It’s time to get building!
61 thoughts on "Building a vSphere 6.5 Home Lab"
HyperThreading doesn’t really give you double the cores, as HyperThreading (as the name states) handles simultaneous scheduling of threads. So, a processor with 4 physical cores with HyperThreading (or any form of SMT or CMT) will appear as 8 logical cores for scheduling purposes.
When one scheduled thread is stalled (say: blocking I/O), it will move on to the next scheduled thread. It will switch back when the second scheduled thread is stalled or completes execution.
It may duplicate a small portion of a core, but it doesn’t duplicate the entire core; thus, it can’t be 100% parallel multi-threading. If it does actually duplicate the entire core, then it would just be considered a proper 8 physical core processor.
AMD did futz with the idea of dual execution cores with shared non-core components to sit somewhere between HyperThreading and just plain doubling cores.
Hello, yes that is correct. Hyper-Threading is not a 100% real core; it helps with CPU scheduling. You generally won’t get 1:1 performance like you would with an actual core.
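For what it’s worth, you can see exactly what ESXi reports with a quick PowerCLI query. A minimal sketch (the host name is a placeholder):

```powershell
# Physical cores vs. logical threads as reported by the host
$vmhost = Get-VMHost -Name 192.168.1.60          # example host
$cpu    = $vmhost.ExtensionData.Hardware.CpuInfo
"{0} physical cores, {1} logical threads" -f $cpu.NumCpuCores, $cpu.NumCpuThreads
```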
Hi,
I have one server and installed ESXi 6.5 and vCenter 6.5. Then I installed ASAv, created the ASA interfaces, and connected them to vSwitches. Then I connected a host to vSwitch1 (DMZ) and another host to vSwitch2 (inside), and created an ACL to allow traffic to the hosts. Each host can reach (ping) the ASA interface BUT NOT the other host. Devices within the same vSwitch CAN communicate. There are no VLANs configured, and the firewall config seems fine, as I can trace a packet through it while testing.
I’m suspecting something to do with the vSwitch config. What could be the problem with my setup?
Promiscuous mode settings might be a possibility?
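If you want to rule that out quickly, here’s a minimal PowerCLI sketch that checks the security policy on the standard vSwitches and, if needed, relaxes it on the vSwitch backing the virtual firewall. The vSwitch name is an example; adjust to your own layout:

```powershell
# Check the current security policy on all standard vSwitches
Get-VirtualSwitch -VMHost (Get-VMHost) -Standard |
    Get-SecurityPolicy |
    Select-Object VirtualSwitch, AllowPromiscuous, ForgedTransmits, MacChanges

# Allow promiscuous mode and forged transmits on the DMZ vSwitch
# (virtual firewalls and similar appliances often need this)
Get-VirtualSwitch -VMHost (Get-VMHost) -Standard -Name vSwitch1 |
    Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true
```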
I am trying to do the same, but I cannot find the vCenter appliance in evaluation mode.
vCenter installs in eval mode by default; there is no need to do anything.
I have a DS81 at home. Will that VIB work on a DS81 as well for 6.5?
Hi Bill! Yes, that VIB should work! Anything with a RTL8111x has worked in my testing.
Double check the Crucial link 😉
D’oh! Had a duplicate. Thanks! I fixed it.
Hi,
quick question: will the DH110 work with vSphere 6.5 without problems?
I have not tested those; they use a different network adapter. It looks like some users have had success with it though. https://www.v-front.de/2015/08/a-fix-for-intel-i211-and-i350-adapters.html
Thanks for the answer Ryan, I’m planning to build my first home lab. It will definitely be TWO hosts. I was thinking about the DH110 but now I’m also considering the DQ170. Some more questions:
Q1: If we are going for the low-cost option, the only difference between the DS87 and DS81L is POWER:
DS81L: 84W Power Adapter, Output: 12V 7A DC
DS87: 90W Power Adapter, Output: 19V 4.47A DC
Will the DS81L serve well? It’s much cheaper than the DS87.
Q2: Do I really need to care about the differences in STORAGE?
Full-size Mini-PCIe socket (only supports mSATA) + half-size Mini-PCIe socket
vs
M.2 2260 Type M key socket + half-size Mini-PCIe socket
Q3:
I read that Realtek network interfaces are not good for VMware, so maybe 2x DQ170 will serve better?
Q4:
If I’m going to have TWO hosts, do I really need to care about the DS81L/DS87 maximum RAM of 16GB?
I have not tested the DS81L yet but it looks like it would work ok. I will have to add it to my list of stuff to try. 🙂
The storage question is a wide-open one. Some people will want vSAN, and then you’ll have to consider that. Others just want to point it at a small NAS.
Realtek NICs work quite well once you get that VIB installed into the ISO. I’ve never run into any real issues with them besides the driver not being built into the image.
Yes, trust me, you’ll want more memory. 🙂 You can never have enough. 16GB is generally the lowest I would go these days even with two hosts.
Thank you for your support! That last answer solved my doubts: I need 32GB for a single host, so the DQ170 will be my choice 🙂
“Nowadays I tend to go for the low power stuff. Something with what I call a high WAF (Wife Acceptance Factor)” HAhahaHAHAHAHA DIED…
I purchased 2 Intel NUC 6th gen i5s. Will VMware support installing ESXi on the SD card, or will I have to install it on a USB stick instead? I would like to leave the 2 internal drives for vSAN.
This is my setup:
2 – Intel NUC 6th Gen NUC6i5SYH (Intel i5)
2 – SAMSUNG 950 PRO M.2 2280 512GB PCI-Express 3.0 x4 Internal Solid State Drive (SSD)
2 – SAMSUNG 850 EVO 2.5″ 1TB SATA III 3-D Vertical Internal Solid State Drive (SSD)
2 – Crucial 32GB Kit (16GBx2) DDR4 2133 MT/s (PC4-17000) SODIMM 260-Pin Memory
2 – SanDisk 32GB Class 4 SDHC Flash Memory Card
Any advice will be greatly appreciated.
Danny, yes you should be able to install it on a USB stick. Officially VMware won’t support these systems but it should work just fine.
Danny, instead of the SDHC cards I would buy 2x SanDisk Ultra Fit 16GB: fast USB 3.0 and unobtrusive.
Ryan,
Could you please share a similar article using blades (of course, cheap ones) that can help?
Thanks
I have not tried using blades yet in any home lab scenario. I will have to look into it.
Maybe w. clearcube please? 🙂
Thanks for the nice write-up Ryan. Personally, I would not recommend spending money on a NUC/mini PC that takes less than 32GB of RAM. You can buy a single 16GB stick and stack up more later as required if you don’t need it all up front.
Yes, it really depends on budget and what everyone wants to do. The good thing: We have multiple ways to get it done! 🙂
Can an HP ProLiant G7 server be used for this kind of lab? Buying a G7 is cheaper in Singapore where I live.
I am mainly interested in learning NSX since I am from a networking background. Any specific recommendations for setting up a lab for learning NSX?
Yes, you can use a G7 in a lab. If you’re looking for official support you would have to check the HCL. Not sure if you’ll get official support with a G7. I know the G6 is not supported.
nice!
My highly transportable lab for customer training and demos is a Shuttle R8. It has the advantage of supporting 64GB, and I run it with a 4GHz i7. This allows me to run the vCenter appliance, a Windows AD, DNS, DHCP, SQL, file share, and email server, as well as vRA (with IaaS, external vRO and vRB4C), vROps, NSX, and Log Insight at the same time, with lots of room to spare and no CPU ready time. The onboard NIC and the board are all supported out of the box, no VIB injection needed.
Total cost with SSDs (3TB): 1,800 EUR.
Hi Daniel,
I am very interested in building a home lab with a Shuttle R8. Can you please share more details about your setup? I have one NUC already but I find it limiting.
Thanks,
Mislav
I have not used the Shuttle R8 yet, but have used the larger cube style Shuttle units in the past and they worked great! I don’t see why it would not work looking at the specs.
Hi Ryan, thanks for the article. It is very useful.
I have just one question. Is vLAN = VLAN?
If so, then any VLAN capable switch should do the job I think.
Yep, VLAN = vLAN. I get so used to typing vEVERYTHING. Haha.
This is an excellent post Ryan! Just wanted to stop by and show support and say thank you for including my video tutorial in this post!
-virtualex-
You bet! Great video!
I do love these little form factor boxes for labs, but they can be pricey. I was actually going to use the NUCs or Gigabyte BRIX, but I ended up building my own lab box for much cheaper, with much more horsepower than what these boxes are capable of. If you have the time and knowledge, do yourselves a favor and get a good ASUS motherboard for about 120 bucks; there are ASUS boards that can hold up to 64GB of DDR4. Get an NVMe M.2 drive for the primary OS and an SSD for secondary storage. Make sure to get an Intel i7 of at least 4.3GHz, and the cases are cheap, roughly 35 bucks. You end up with an extremely powerful lab box that can do vSAN, VCSA, etc. for right around $750 to $900 max… This way you can also add more RAM or the new Intel Optane card as well. Whatever works for you, but my way is the most bang for your buck.
I can’t believe people still spend money on servers with a 16/32GB RAM limit when you can go with a Supermicro server on Xeon D processors or use second-hand rackmount servers, e.g. a Dell R710.
This may be useful for home labbers who want to test MS products, like Windows Server or Exchange Server. However, if you want to play with more complex nested vSphere/vSAN/NSX labs or test VMware products like vRNI, you will need quite a few NUC servers.
I would say for a comfortable home lab you need at least 128GB of RAM nowadays. With top-spec NUCs you would need at least four servers, whereas you can get the same 128GB in one Supermicro server, and you have a range of CPUs to choose from, anywhere from 4 to 16 cores.
So, I would say NUCs are useful for specific use cases, but the general recommendation is to go with servers that can take more than 64GB of RAM.
I would tend to agree, but some people simply want to spin up a small environment. More RAM is always better! I also have rackmount systems (an R710 and HP DL380 G6s), but those systems, spec’d up to 128GB, use significantly more power than the smaller hosts, can be louder, and take up more space.
You described me to a tee. I spin up 1-3 servers for learning things like Puppet and Ansible, for testing my code, or for setting up a lightweight web server or GitLab server for my own use.
I like the Intel NUCs, but when one breaks like mine just did, it’s not easy to get replacement parts to fix it.
So, while I’m waiting until Monday (two days from now) for Intel support to look at my warranty ticket, I’m in the market for something small and quiet like a NUC, but with a user-replaceable motherboard/CPU. I’ll consider your Shuttle setup.
Thanks again for the article!
You could also get a top-of-the-line laptop: quad-, six-, or eight-core, with at least 16/32GB of RAM.
Run VMware Workstation Pro, load 2 ESXi hosts, and run the vCenter Server Appliance for management plus a Windows Server 2012/2016 VM for Active Directory integration.
Then add guests under the respective ESXi hosts as you like. Enable HA for recovery/redundancy. Done.
I love it, I do the same thing on my laptop for a mobile lab! It works great when traveling. 16GB of memory would be the absolute minimum I would go with. Jason Fenech is actually working on another post that lays that type of lab out.
I have an ASUS with a 2TB SSD, a 400GB HDD, and 16GB of RAM. Are you saying I can create a complete lab with just my notebook?
Thanks for the reply!
You’d have to nest your ESXi hosts inside of Workstation, but yes.
Dennis, have a look at this;
https://www.altaro.com/vmware/vsphere-home-lab-free/
Hi Ryan,
Great article. I am just starting to learn VMware.
Questions:
Do you still recommend the NUC for 6.7 for a home lab?
Can I pass on local storage if I go with a cheap Synology solution?
What limitations are there with only one NIC?
Thanks again,
-Brad
The NUC is still a great option. I still find local storage useful, but technically you don’t need it if you opt for a Synology. As far as NICs go, with only a single NIC VLANs are the best way to segment your traffic, but you can also use a USB NIC; the VMware Flings page has a driver for it.
Hi.
Thanks
I bought an HPE ProLiant DL380 G7. Please help me create a VMware home lab step by step (VCP-DCV, VCP-NV and vSAN)?
The server’s hardware:
1- 128 GB Memory
2- 2*CPU x5670
3- 8* 300GB Hard disk 15k
Please give me your best suggestion.
Thanks for helping me.
Hello, that is a good start. My recommendation would be to take a USB stick and install ESXi to that and then use the HDs you have as datastores. That should get you started!
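If it helps, turning those local disks into VMFS datastores can be scripted as well. A rough PowerCLI sketch; the host name and the naa.* device name are placeholders, so list the disks first and substitute your own:

```powershell
# Assumes VMware PowerCLI is connected to the host
$vmhost = Get-VMHost -Name dl380g7.lab.local     # example host name

# List the local disks so you can grab their canonical names
Get-ScsiLun -VMHost $vmhost -LunType disk | Select-Object CanonicalName, CapacityGB

# Format one of the 300GB disks as a VMFS datastore (replace the naa.* name with your own)
New-Datastore -VMHost $vmhost -Name 'local-ds01' -Path 'naa.0000000000000003' -Vmfs
```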
Hey Ryan, it’s been 1.5 years now since you wrote this post; what would you currently recommend for a slim system to run VMware on?
To be fair, even now in 2021, Intel NUCs are still a widely popular choice for lean lab hardware.
An alternative is to buy a second-hand server (tower or rack), which offers better bang for the buck, but it takes up space, uses more electricity, and can be noisy in a small place.
I’m a newbie to virtualization and would like to build a home lab. What do you recommend for hardware, and how much? Can I run both Workstation Pro and vSphere on the same machine? Do you even advise that? I was looking to use an Intel 6 or 8 core chip, an ASUS motherboard, 64GB of RAM, and one or two NVMe drives for storage, or at least that’s what I’ve been told. Is that sufficient or is that overkill? Any help would be appreciated. Thank you.
You could run vSphere in a VM in Workstation (installed on your Windows PC) or install vSphere directly on a lab server (such as a NUC…).
Now, if you’re asking whether you can dual-boot Windows + ESXi, that’s probably not the best scenario. I’d probably install ESXi on a USB key and boot from it whenever I need it. But still, a strange setup.
It’s hard to recommend something as it will depend on your needs and budget. 64GB might be way too much if you just want to play around with Horizon and vSphere, but it would not be enough if you wanted to run vRA, Tanzu, …
The configuration you mention sounds good, although you can save a few bucks by going for regular SSDs instead of NVMe.
Hi Ryan,
Thanks for this info! Would there be any advantages/disadvantages to using a BOXNUC7I7BNH over the i3/i5 models? Just wondering why you didn’t mention using an i7? Also, would you see any advantage to using Intel Optane memory?
I’m mainly trying to build a lab to learn VMware, Puppet, Git, and RedHat.
Hi Jason,
Core i7 processors are usually overkill for lab purposes. An i3 is fine for most things; you get fewer physical cores but you have HT enabled. Though if you’re going to push it a bit, an i5 is maybe the best balance, with more physical cores and turbo mode, if you can spend the extra bucks.
Optane memory is also a bit overkill. You are probably better off sticking to standard SSDs and you’ll be fine.
Hello,
I plan to build a home lab, probably with Shuttle hardware.
Would you have a description of how you used/configured the two LAN interfaces of the Shuttle?
Would it be possible to have a view/drawing of the network config?
thanks a lot,
yvo
Hi. The best option is probably to put both vmnics in the same vSwitch so you get redundancy.
Then you can set vmnic0 as active and vmnic1 as standby on the management/VM portgroup and the opposite for vMotion for instance.
You can also tie iSCSI vmkernels to each vmnic but I’d say NFS is probably a better choice in this case.
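A minimal PowerCLI sketch of that active/standby split; the host, vSwitch, and port group names are examples, so adjust them to your own setup:

```powershell
$vmhost  = Get-VMHost -Name 192.168.1.70         # example Shuttle host
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name vSwitch0

# Management/VM traffic: vmnic0 active, vmnic1 standby
Get-VirtualPortGroup -VirtualSwitch $vswitch -Name 'Management Network' |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1

# vMotion: the opposite order, so each uplink carries something under normal conditions
Get-VirtualPortGroup -VirtualSwitch $vswitch -Name 'vMotion' |
    Get-NicTeamingPolicy |
    Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0
```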
Been using OptiPlex desktops for years for VMware labs, mostly because it’s what I can get my hands on. I skipped v6.0, but am trying 6.5 and man is it giving me problems on the OptiPlex 780/790/1710… actually, it’s not ESXi that’s the problem… it runs seemingly stable on them… it’s getting the Photon-based appliances to run, which seems even more odd since you would think they would be somewhat isolated. Even got an LSI SAS card that’s supported, thinking it was the SATA controller, but no.
The vCSA, with or without an embedded PSC, fails consistently at inconsistent spots in the install or configuration process, leading me to believe it has to be hardware-specific. The dependence on an FQDN and the hacks to get it to work on IP only are simple, yet ridiculous. Would love to identify the culprit.
This is a pretty old comment but it would be valuable to share the root cause if you identified it.
Have you checked the installation log to see if it gives any insight? You could also check the vmkernel log on the host to see if there’s NIC flapping or something like that.
Might sound like a silly question, but do you have a stable connection to the host? (Wi-Fi drops or the like)
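If you do dig into it, pulling the vmkernel log over PowerCLI is a quick way to look for link flaps or storage resets around the failure time. A minimal sketch, assuming Get-Log accepts -VMHost in your PowerCLI version and that the host name below is replaced with your own:

```powershell
# Grab the vmkernel log entries from the host and scan the tail for link-state noise
$log = Get-Log -VMHost (Get-VMHost -Name 192.168.1.80) -Key vmkernel
$log.Entries | Select-Object -Last 200 | Select-String -Pattern 'link', 'nic', 'reset'
```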