Is VMware Networking a Nightmare?

Posted by: Admin Post Date: March 19, 2020


The direct answer to the question above is no: VMware networking is definitely not a nightmare. But it can look a little different from the traditional way of doing things. There are many well-written books on Validated Designs for VMware environments that you should definitely consult, but the minimum you need to get started can be found in this article.

Networking in VMware

Let's start with something a little confusing: in the VMware world, a server's physical interfaces are called VMNICs, and a VM's virtual interfaces are called VNICs. Yes, you got that right. This means that network adapter port 0 on your server will be called vmnic0 in the VMware world, and the virtual adapters you assign to your VMs are called VNICs. Then there are PortGroups (used to set policies such as VLAN assignment) and VMkernel interfaces (IP-assigned interfaces used by the ESXi hypervisor itself). Multiple VNICs can belong to the same PortGroup, and each VMkernel interface must also belong to a PortGroup; a single VNIC, however, cannot belong to two different PortGroups. Finally, there are vSwitches, to which your various PortGroups belong and which use the VMNICs as uplinks. Note that a PortGroup can belong to only one vSwitch, but a vSwitch can have multiple uplinks.
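The hierarchy above (vSwitch → uplink → PortGroup) can be sketched with the esxcli commands on an ESXi host. The names and the VLAN ID here (vSwitch1, vmnic1, PG-App, VLAN 100) are example values, not anything prescribed:

```shell
# Create a standard vSwitch and attach a physical NIC (vmnic1) as its uplink
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1

# Create a PortGroup on that vSwitch and tag it with VLAN 100
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=PG-App
esxcli network vswitch standard portgroup set --portgroup-name=PG-App --vlan-id=100

# VM VNICs are attached to PG-App through each VM's settings
# (vSphere Client or PowerCLI), not through esxcli
```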

See? I told you it was easy 😉

Layer 3 (or IP interfaces)

Here is an interesting fact: as far as its operation is concerned, the ESXi host (VMware's bare-metal hypervisor) can possess only a single default gateway, no matter how many IP interfaces it holds. Whenever you try to assign a gateway on another interface, it asks you to override the previously set one. This is not a problem, nor is it a flaw. Keep in mind that the VMs can have their own individual gateways based on the PortGroup they belong to.
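You can see this single-gateway behavior from the command line as well. A minimal sketch, assuming an example gateway of 192.168.10.1 (adding a new default route replaces the effect of the old one rather than coexisting with it):

```shell
# Show the host's current IPv4 routing table, including the default route
esxcli network ip route ipv4 list

# Point the single default gateway at 192.168.10.1 (example address)
esxcli network ip route ipv4 add --network=default --gateway=192.168.10.1
```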

For each hypervisor, you need a few IP addresses to perform specific tasks. In a typical environment you would need the following:

  • VMkernel for Management (IP address used for remote management of the host)
  • VMkernel for vMotion (used to perform VM hot migration across hosts)
  • VMkernel for Provisioning or Storage (if you use IP-based storage such as iSCSI or NFS, this interface communicates with the target storage system)

These will be assigned IP addresses and will each belong to a PortGroup. You may stack more functions on the same IP (or VMkernel interface), but it is advisable to keep these three networks separate. In that case, which one gets the default gateway (i.e., should be routable)? You guessed right: it is the management interface that needs to be routable (so that you can manage the hosts from distant networks). This is because you only need to maintain Layer 2 connectivity for vMotion (very bursty traffic) and for iSCSI/NFS traffic (you do not want to route storage traffic, knowing that your VMs are running from there).
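Putting the pieces together, the management and vMotion VMkernel interfaces could be laid out as below. Everything here — PortGroup names, VLAN 20, and the 192.168.10.x/172.16.20.x addresses — is illustrative; only the management network is routed, while vMotion stays in its own Layer 2 VLAN:

```shell
# Management VMkernel (routable; this network holds the default gateway)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=PG-Mgmt
esxcli network ip interface add --interface-name=vmk0 --portgroup-name=PG-Mgmt
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static \
    --ipv4=192.168.10.10 --netmask=255.255.255.0

# vMotion VMkernel (Layer 2 only, isolated in its own VLAN 20)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=PG-vMotion
esxcli network vswitch standard portgroup set --portgroup-name=PG-vMotion --vlan-id=20
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=PG-vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
    --ipv4=172.16.20.10 --netmask=255.255.255.0

# Mark vmk1 as the interface that carries vMotion traffic
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```

The storage VMkernel would follow the same pattern as vMotion: its own PortGroup, its own non-routed VLAN, and its own vmk interface.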

vMotion Across Data Centers

If vMotion is not routable, you might wonder how you would perform vMotion across data centers. The technology that answers this question is Data Center Interconnect (DCI). Many vendors have their own implementations and protocols. What you need to know is that DCI usually lets you tunnel your Layer 2 traffic to the destination, where it is decapsulated and thus appears as if bridged.

Standard and Distributed Switches

Another interesting capability is extending the vSwitch configuration across multiple hosts for consistency and ease of management. This is accomplished through Distributed vSwitches (DvSwitches) and DvPortGroups, which are created and managed centrally in vCenter rather than on each host individually.
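While DvSwitches themselves are configured from vCenter, each host keeps its own view of the distributed switches it participates in, which you can inspect locally:

```shell
# List the VMware distributed switches this host is a member of,
# along with their uplinks and client port groups
esxcli network vswitch dvs vmware list
```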


Keep your VMware networking simple and place the default gateway on the VMkernel interface holding the management network. As much as possible, isolate storage and vMotion traffic from all other traffic, and keep each of them in its own non-routed VLAN. With this, your VMs will feel snappy and your network will be efficient.
