VMware NSX-T Management Plane
The VMware NSX-T management plane provides a single API entry point to the system; it persists user configuration, handles user queries, and performs operational tasks on all management, control, and data plane nodes in the system.
The key points to understand about the management plane in a VMware NSX-T environment are as follows:
- Serves as the single entry point for user configuration, via the RESTful API (CMP, automation) or the VMware NSX-T user interface.
- Provides universal connectivity, consistent security enforcement, and operational visibility through object management and inventory collection, across multiple compute domains: up to 16 vCenter Servers, container orchestrators (PKS and OpenShift), and public clouds (AWS and Azure).
- Retrieves the desired configuration in addition to system information.
- Stores the desired configuration in its database. The VMware NSX-T Manager persists the final configuration requested by the user; this configuration is then pushed to the control plane to be realized as the effective configuration (see the API sketch after this list).
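To make the single API entry point concrete, here is a minimal Python sketch that declares a segment through the NSX-T Policy API and reads the persisted intent back. The manager address, credentials, and segment name are hypothetical placeholders; treat this as an illustration under those assumptions, not a production script.

```python
import requests

# Hypothetical values: substitute your own NSX Manager FQDN and credentials.
NSX_MANAGER = "https://nsx-mgr.example.com"
AUTH = ("admin", "password")

# Declare desired state through the Policy API. The Manager persists this
# intent in its database and pushes it toward the control plane.
segment = {"display_name": "web-segment"}
resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    json=segment,
    auth=AUTH,
    verify=False,  # lab only: skips self-signed certificate validation
)
resp.raise_for_status()

# Read the persisted configuration back from the same single entry point.
print(requests.get(
    f"{NSX_MANAGER}/policy/api/v1/infra/segments/web-segment",
    auth=AUTH, verify=False,
).json())
```

The PATCH stores the desired configuration and the GET retrieves it, matching the store and retrieve responsibilities listed above.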
The NSX-T management plane (MP) automatically creates the structure connecting the service router (SR) to the distributed router (DR).
The MP allocates a VNI and creates a transit segment, then configures a port on both the SR and the DR, connecting them to the transit segment. The MP then automatically allocates unique IP addresses for the SR and the DR.
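To observe this automatically created plumbing, the sketch below lists logical routers and their ports through the NSX-T Manager API. The manager address and credentials are placeholders, and exactly how the MP-created transit ports and their auto-allocated addresses are surfaced may vary by NSX-T version.

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical address
AUTH = ("admin", "password")

# List logical routers, then inspect the ports the MP created on each one.
routers = requests.get(f"{NSX_MANAGER}/api/v1/logical-routers",
                       auth=AUTH, verify=False).json()["results"]
for router in routers:
    ports = requests.get(
        f"{NSX_MANAGER}/api/v1/logical-router-ports",
        params={"logical_router_id": router["id"]},
        auth=AUTH, verify=False,
    ).json()["results"]
    for port in ports:
        # resource_type distinguishes uplink, downlink, and link ports;
        # subnets shows the addresses the MP allocated automatically.
        print(router["display_name"], port["resource_type"],
              port.get("subnets"))
```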
VMware NSX Manager Appliance
Instances of the VMware NSX Manager and VMware NSX Controller are bundled in a virtual machine called the NSX Manager appliance: the NSX manager, NSX policy manager, and NSX controller elements co-exist in a common VM.
Three NSX appliance VMs are required for cluster availability. Because the NSX-T Manager stores all of its information in a database that is immediately synchronized across the cluster, configuration and read operations can be performed on any appliance.
Each appliance has a dedicated IP address, and its manager can be accessed directly or through a load balancer. Optionally, the three appliances can be configured to maintain a virtual IP address, which is serviced by one appliance elected from among the three.
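A minimal sketch of this, assuming a hypothetical virtual IP FQDN: a cluster-status query sent to the VIP is answered by whichever appliance currently services it, and it reads the same synchronized database as any individual node.

```python
import requests

# Hypothetical VIP FQDN; an individual appliance IP would work identically.
NSX_VIP = "https://nsx-vip.example.com"
AUTH = ("admin", "password")

status = requests.get(f"{NSX_VIP}/api/v1/cluster/status",
                      auth=AUTH, verify=False).json()
# Overall management cluster health, e.g. "STABLE".
print(status.get("mgmt_cluster_status", {}).get("status"))
```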
VMware NSX-T Control Plane
Let’s talk about the control plane now. The set of objects that the control plane deals with includes VIFs, logical networks, logical ports, logical routers, IP addresses, and so on.
The control plane disseminates topology information reported by the data plane elements and pushes stateless configuration down to the forwarding engines.
VMware NSX-T splits the control plane into two parts:
- Central Control Plane (CCP) – The CCP is implemented as a cluster of virtual machines called CCP nodes. The cluster form factor provides both redundancy and scalability of resources. The CCP is logically separated from all data plane traffic, meaning any failure in the control plane does not affect existing data plane operations. User traffic does not pass through the CCP Cluster.
- Local Control Plane (LCP) – The LCP runs on transport nodes. It is adjacent to the data plane it controls and is connected to the CCP. The LCP is responsible for programming the forwarding entries and firewall rules of the data plane (see the sketch after this list).
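One way to check that the LCP on each transport node has realized the configuration pushed down through the CCP is to query the transport node state via the Manager API. A minimal sketch, with the same placeholder address and credentials as above:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical address
AUTH = ("admin", "password")

# For each transport node, report whether its configuration is realized.
nodes = requests.get(f"{NSX_MANAGER}/api/v1/transport-nodes",
                     auth=AUTH, verify=False).json()["results"]
for node in nodes:
    state = requests.get(
        f"{NSX_MANAGER}/api/v1/transport-nodes/{node['id']}/state",
        auth=AUTH, verify=False,
    ).json()
    print(node["display_name"], state.get("state"))  # e.g. "success"
```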
VMware NSX-T Data Plane
The data plane performs stateless forwarding and transformation of packets based on tables populated by the control plane, reports topology information back to the control plane, and maintains packet-level statistics.
There are two main types of transport nodes in VMware NSX-T:
- Hypervisor Transport Nodes: Hypervisor transport nodes are hypervisors prepared and configured for VMware NSX-T. The N-VDS provides network services to the virtual machines running on those hypervisors. VMware NSX-T currently supports VMware ESXi and KVM hypervisors. The N-VDS implementation for KVM is based on Open vSwitch (OVS) and is platform independent.
- Edge Nodes: VMware NSX-T Edge nodes are service appliances dedicated to running centralized network services that cannot be distributed to the hypervisors. They can be instantiated as bare metal appliances or in a virtual machine form factor. They are grouped in one or several clusters, each representing a pool of capacity (see the sketch after this list).
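To see how that pool of capacity is organized, the sketch below lists edge clusters and their member transport nodes through the Manager API, again with a placeholder address and credentials:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # hypothetical address
AUTH = ("admin", "password")

# List edge clusters and the edge transport nodes that form each pool.
clusters = requests.get(f"{NSX_MANAGER}/api/v1/edge-clusters",
                        auth=AUTH, verify=False).json()["results"]
for cluster in clusters:
    members = [m["transport_node_id"] for m in cluster.get("members", [])]
    print(cluster["display_name"], members)
```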