Lab problems with Intel NUC 11th Generation hardware running VMware ESXi 7.0.1

This is a placeholder post that will be updated as and when new fixes or resolutions are found. It isn’t intended to provide detailed analysis of the problems outlined, simply to document the areas where bugs or ‘gotchas’ are located.

I have recently acquired several Intel NUC 11th Generation units (NUC11TNHv50L) for my lab/testing environment, which are being deployed into an existing vSAN/NSX-T environment as a workload domain. The release of these latest NUCs seems to have generated a lot of interest, with community members discussing how well they fit with NSX-T (thanks to the dual 2.5 Gbit/s Intel I225-LM NICs which come in the Pro version). However, there are a couple of limitations that make this less than a smooth ride currently.

Community networking driver and workarounds

Out of the box these NUCs are not supported with VMware ESXi and rely upon the Community Networking Driver Fling. Therefore, before purchasing these devices for your home lab, be aware that this fling:

  • Requires a custom ESXi image to be created which includes the Community Networking Driver
  • Does not support jumbo frames (i.e. >1500 byte MTU) – which in my view rules out any serious use with the NSX-T Geneve protocol, which typically needs at least a 1600 byte MTU
  • Causes the network interface to become disconnected (link-layer communication fails) if the configured MTU is greater than 1500, and the link only recovers after a reboot (a quick way to spot interfaces configured above 1500 is sketched after this list)
  • Seems to cause a purple screen (PSOD) failure when the second NIC is connected (I have not yet been able to pin down the exact circumstances)
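For anyone wanting to check their own hosts against the MTU limitation above, here is a minimal pyVmomi sketch that lists every vmkernel interface’s MTU and flags anything above 1500. The vCenter name and credentials are placeholders for my lab, and pyVmomi isn’t part of the fling itself, so treat this as an illustration rather than a supported tool.

```python
# Minimal sketch: list each host's vmkernel interface MTUs and flag anything
# above 1500, which the fling-driven NICs cannot carry. The hostname and
# credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vnic in host.config.network.vnic:
            mtu = vnic.spec.mtu or 1500  # unset MTU means the default of 1500
            flag = "  <-- exceeds fling limit" if mtu > 1500 else ""
            print(f"{host.name} {vnic.device} MTU={mtu}{flag}")
finally:
    Disconnect(si)
```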

Currently I am overcoming the NSX-T frame size issue by using StarTech USB 3.1 1 Gbit/s network adapters, but these require an additional fling to be installed. As a compromise it’s not too bad, since the two Thunderbolt/USB-C ports on these NUCs allow up to two additional 1 Gbit/s interfaces to be attached. So I am configuring my ESXi hosts as:

1 x Onboard Intel I225-LM at 2.5 Gbit/s – dvSwitch 1 (Management, vSAN)

1 x StarTech USB 3.1 adapter at 1 Gbit/s – dvSwitch 2 (NSX-T, vMotion)
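To confirm which uplink is which once both adapters are attached, the same pyVmomi connection pattern can list each host’s physical NICs with driver and negotiated link speed – the onboard I225-LM should show 2500 Mb/s and the StarTech adapter 1000 Mb/s. Again, the vCenter address and credentials are placeholders:

```python
# Sketch: print each host's physical NICs with driver name and link speed,
# to tell the onboard I225-LM (2500 Mb/s) from the StarTech USB adapter
# (1000 Mb/s). Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for pnic in host.config.network.pnic:
            speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0  # 0 = link down
            print(f"{host.name} {pnic.device} driver={pnic.driver} {speed} Mb/s")
finally:
    Disconnect(si)
```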

Power off and shut down

In addition, it seems that when a ‘Shut down’ of an ESXi host is performed the system ignores the BIOS power setting (e.g. remain off, power on, etc.) and immediately restarts the operating system back to a running condition (almost as if a reboot had been chosen instead of a shut down). This is strange behaviour which needs further experimentation and makes shutting down your lab a lot more time consuming; however, it can currently be worked around by:

  1. Shut down the ESXi instances individually using the host UI/vCenter (a scripted version of this step is sketched after these steps)
  2. Watch the power light on the front panel (assuming no screen is attached) – when the power light turns off for approximately 0.5s the actual power off is taking place, before the unit turns itself back on again
  3. At this point pull the power supply out of the back of the NUC and plug it back in a couple of seconds later – the unit will remain off instead of rebooting (even if the BIOS setting is ‘power on’ after loss of power)
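For step 1, rather than clicking through the UI for every host, a short pyVmomi script can issue the shut down requests in turn. This is a rough, lab-only sketch: it ignores vSAN shutdown ordering and maintenance mode, uses force=True, and the vCenter name and credentials are placeholders.

```python
# Rough lab-only sketch of step 1: request a shut down of each host via the
# vSphere API. force=True skips the maintenance-mode requirement, which is
# only sensible in a lab you are deliberately powering off. Connection
# details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(f"Shutting down {host.name} ...")
        WaitForTask(host.ShutdownHost_Task(force=True))
finally:
    Disconnect(si)
```

Even with this, you still need to watch the power light and pull the plug on each unit as described in steps 2 and 3.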

It’s getting hot in here

Fan speed and temperature readings are missing from the ESXi hardware sensors. This is not a new issue, but despite the integrated 3D graphics now being on-chip there still seems to be very little sensor information exposed to the operating system (presumably by Intel). In my bookcase vSAN/NSX-T environment it’s becoming a ‘hot topic’ to say the least ;-). Both the new and older NUCs are doing fine on the Balanced performance/fan speed setting, and do a good job of spinning the fan up and down whenever the CPU turbo feature engages (up to 4.1 GHz on my units), but it would be good to be able to view this more empirically than just watching how many windows need to be opened!
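If you want to see exactly what (little) is exposed, the hardware health sensors can be dumped through the same vSphere API. On these NUCs I would expect the fan and temperature entries to be absent or empty; the connection details are placeholders as before.

```python
# Sketch: dump any fan/temperature sensors a host reports through the
# vSphere API. On the NUCs the expectation is that few or none appear.
# Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        health = host.runtime.healthSystemRuntime
        sensors = (health.systemHealthInfo.numericSensorInfo
                   if health and health.systemHealthInfo else [])
        wanted = [s for s in sensors if s.sensorType in ("fan", "temperature")]
        if not wanted:
            print(f"{host.name}: no fan/temperature sensors reported")
        for s in wanted:
            # currentReading is scaled by 10 ** unitModifier per the API docs
            value = s.currentReading * (10 ** s.unitModifier)
            print(f"{host.name} {s.name}: {value} {s.baseUnits}")
finally:
    Disconnect(si)
```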

Good resources to check out for all things NUC are William Lam and Florian Grehl.
