Nicolas Vermandé (VCDX#055) is Practice Lead for Private Cloud & Infrastructure at Kelway, a VMware partner. Nicolas covers the Software-Defined Data Center.
This is Part 2 in a series of posts that describes a specific use case for VMware NSX in the context of Disaster Recovery. Here’s part 1,
Deploying the environment
Now let’s have a closer look at how to create this environment. The following picture represents the vSphere logical architecture and the associated IP scheme…
… and the network mappings:
First of all, you have to create three vSphere clusters within the same vCenter: one Management Cluster and two Compute Clusters, along with two distinct VDS. Both Compute Clusters will be connected to the same VDS; one cluster will represent DC1 and the other will represent DC2. The second VDS will connect to the Management and vMotion networks. You also have to create several VLANs: one VLAN for the VTEPs, used as the outer dot1q tag to transport VXLAN frames; two external transit VLANs to allow the ESGs to peer with your IP core; and VLANs for traditional vSphere functions, such as Management, vMotion and IP storage if required.
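Before creating anything in vCenter, it can help to capture the VLAN and subnet plan as data and sanity-check it. A minimal sketch: the VLAN IDs and the Management/vMotion/VTEP subnets below are hypothetical placeholders (only the two transit subnets come from this lab's routing table), so substitute your own values.

```python
# Hypothetical lab VLAN/subnet plan -- replace the IDs and subnets
# with your own; only the /30 transit subnets match this lab.
import ipaddress

vlan_plan = {
    "management":  {"vlan": 10, "subnet": "10.0.10.0/24"},
    "vmotion":     {"vlan": 20, "subnet": "10.0.20.0/24"},
    "vtep":        {"vlan": 30, "subnet": "10.0.30.0/24"},
    "dc1_transit": {"vlan": 40, "subnet": "192.168.6.0/30"},
    "dc2_transit": {"vlan": 50, "subnet": "192.168.14.0/30"},
}

def find_overlaps(plan):
    """Return pairs of network names whose subnets overlap."""
    nets = [(name, ipaddress.ip_network(v["subnet"])) for name, v in plan.items()]
    return [(a, b) for i, (a, na) in enumerate(nets)
            for b, nb in nets[i + 1:] if na.overlaps(nb)]

assert find_overlaps(vlan_plan) == []  # no overlapping subnets in the plan
```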
Note: As this lab has been created for educational purposes, it is clearly not aligned with NSX design considerations for a production environment. I’ll probably dedicate another blog post to that topic.
Now let’s get our hands dirty. I’ll assume that you already have the NSX Manager deployed, as well as three controllers. All of these virtual appliances should be placed in the Management Cluster and connected to the Management VDS. For the sake of simplicity, you can use the same Management VLAN for both ESXi and NSX component management.
The first step after deploying the controllers is to install the ESXi VIBs: go to the NSX vCenter plugin, then under Installation, select the Host Preparation tab. Select your Compute Clusters and click Install.
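If you prefer to check preparation status from a script rather than the Host Preparation tab, the NSX Manager exposes a REST endpoint for network fabric status. The endpoint below is the NSX-v call as I remember it, so verify it against the API guide for your version; the manager address, credentials and cluster MoRef ID (`domain-c7`) are placeholders.

```python
# Sketch: build an authenticated GET against the NSX Manager REST API
# to read a cluster's host-preparation (network fabric) status.
# Endpoint and identifiers are assumptions -- check your API guide.
import base64
import urllib.request

def build_status_request(manager, cluster_moid, user, password):
    """Return a urllib Request for the nwfabric status of one cluster."""
    url = "https://{}/api/2.0/nwfabric/status?resource={}".format(manager, cluster_moid)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": "Basic " + token})

req = build_status_request("nsxmgr.lab.local", "domain-c7", "admin", "secret")
# urllib.request.urlopen(req)  # uncomment to actually query the manager
```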
Once done, click Configure under the VXLAN section to configure the VXLAN networking:
The VLAN field is the outer VLAN ID for your VXLAN overlay. Create a new IP pool named VTEP and use it as the reference pool for your VTEP configuration. Note that if you select “Load Balance – SRCID” or “Load Balance – SRCMAC” as the teaming policy, two VTEPs will be created within the same IP pool. This means that if you want your VTEPs to reside in two different subnets, you have to use a DHCP server. Another thing I noticed: be sure to create the appropriate number of VDS uplinks BEFORE preparing the hosts, or the NSX Manager may not see the right number of uplinks when you want to deploy multiple VTEPs.
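The teaming-policy note above can be summarized as a tiny helper: the SRCID/SRCMAC load-balancing policies create one VTEP per VDS uplink, while the other policies create a single VTEP. The policy strings mirror the UI labels and the mapping is an approximation, so verify it in your environment.

```python
# Approximate rule of thumb for how many VTEPs NSX creates per host,
# based on the teaming policy chosen during VXLAN configuration.
def expected_vteps(teaming_policy, num_uplinks):
    multi_vtep_policies = {"Load Balance - SRCID", "Load Balance - SRCMAC"}
    return num_uplinks if teaming_policy in multi_vtep_policies else 1

assert expected_vteps("Load Balance - SRCID", 2) == 2   # one VTEP per uplink
assert expected_vteps("Failover", 2) == 1               # single VTEP
```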
The next step is to configure the Segment ID range, which represents your pool of available VNIs. As we will be using unicast transport mode, we don’t need to configure any Multicast Group.
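A quick way to sanity-check a candidate Segment ID pool before typing it in: NSX-v accepts segment IDs between 5000 and 16777215. The 5000–5999 range below is an arbitrary lab-sized example, not a recommendation.

```python
# Validate a Segment ID (VNI) pool against the NSX-v allowed range
# and return how many logical switches it can back.
VNI_MIN, VNI_MAX = 5000, 16777215

def validate_segment_pool(start, end):
    if not (VNI_MIN <= start <= end <= VNI_MAX):
        raise ValueError("segment pool must fall within {}-{}".format(VNI_MIN, VNI_MAX))
    return end - start + 1

assert validate_segment_pool(5000, 5999) == 1000  # room for 1000 logical switches
```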
Then you can go under Logical Network Preparation > Transport Zones. Add two Transport Zones, as we’ll be simulating two distinct datacenters. Select Unicast as the Control Plane Mode.
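For repeatability, the same transport zones can also be created through the NSX Manager REST API (`POST /api/2.0/vdn/scopes`). The `vdnScope` XML layout below, including its oddly nested `cluster` elements, is from the NSX-v API as I recall it; confirm the element names against your version's API guide. The cluster MoRef `domain-c7` is a placeholder.

```python
# Sketch of the vdnScope creation payload for one transport zone.
# Schema reproduced from memory of the NSX-v API -- verify before use.
def transport_zone_xml(name, cluster_moid, mode="UNICAST_MODE"):
    return (
        "<vdnScope>"
        "<name>{name}</name>"
        "<clusters><cluster><cluster>"
        "<objectId>{moid}</objectId>"
        "</cluster></cluster></clusters>"
        "<controlPlaneMode>{mode}</controlPlaneMode>"
        "</vdnScope>"
    ).format(name=name, moid=cluster_moid, mode=mode)

payload = transport_zone_xml("tz_dc1", "domain-c7")
```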
Each simulated datacenter should end up with its own transport zone, as shown below:
Now it’s time to create the Logical Switches. In the Network & Security pane, go to Logical Switches. In the right pane click the “+” icon. Give it a name, and select the first Transport Zone.
Create a second Logical Switch, linked to the second Transport Zone. As the two Logical Switches are in two different Transport Zones, they will be completely isolated, with no way to connect them to the same Logical Router.
For the sake of completeness and to match the initial design, you can create a second Logical Switch in each datacenter. The additional Logical Switches to create are those connecting the Logical Routers to the upstream Edge Gateways. Name those Logical Switches dc1_transit and dc2_transit.
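The full set of Logical Switches for this lab can likewise be scripted: POST a `virtualWireCreateSpec` payload to `/api/2.0/vdn/scopes/{scopeId}/virtualwires` on the NSX Manager, once per switch, using the scope ID of the matching transport zone. The schema is from the NSX-v API docs as I recall them, and the tenant ID is an arbitrary label, so double-check before relying on it.

```python
# Sketch of the logical switch creation payload (NSX-v virtualwires API).
# Switch names follow this lab's convention; tenantId is a free label.
def logical_switch_xml(name, tenant_id="lab"):
    return (
        "<virtualWireCreateSpec>"
        "<name>{name}</name>"
        "<tenantId>{tenant}</tenantId>"
        "<controlPlaneMode>UNICAST_MODE</controlPlaneMode>"
        "</virtualWireCreateSpec>"
    ).format(name=name, tenant=tenant_id)

payloads = [logical_switch_xml(n) for n in
            ("dc1_ls01", "dc1_ls02", "dc1_transit",
             "dc2_ls01", "dc2_ls02", "dc2_transit")]
```

Remember that the dc1_* switches must be created in the DC1 transport zone's scope and the dc2_* switches in DC2's, mirroring the isolation described above.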
The next components to be deployed are the Logical Routers. In the Networking & Security pane, go to NSX Edges. In the right pane, click the “+” icon. Select Logical Router; you can enable HA if you wish (I know it’s kind of weird to configure the DLR under the NSX Edge menu…).
Configure the credentials and, at the “Configure deployment” step, click the “+” icon under NSX Edge Appliances. Select your first datacenter cluster and the appropriate Datastore, Host and Folder.
Then configure the management interface by clicking Select, next to “Connected To”. You should select a Distributed Portgroup, not a Logical Switch.
Next, go under Configure interfaces of this NSX Edge and click the “+” icon. Give the interface a name and select internal as the interface type. Connect the interface to the first Logical Switch in DC1 and configure its IP address and subnet mask. Repeat the steps to connect a second internal interface to dc1_ls02.
As you can imagine, the Uplink interface type will be used to connect the Logical Router interface to the dc1_transit Logical Switch. Add this interface and configure its IP address and subnet mask. It is worth noting that in the case of an internal LIF, the IP address given must be the default gateway for the VMs belonging to that particular Logical Switch.
Here is a screenshot of what you should have as the final configuration:
You can then click Next, Next, Finish. Repeat the same operations to create a second Logical Router, but this time in the second datacenter. The Cluster/Resource Pool parameter must be set to dc2 so you have access to the NSX components available in that specific Transport Zone. Here is a screenshot of what you should have in the end:
The last components to be deployed are the NSX Edge Gateways, which connect to the Logical Routers Uplink LIF through the transit Logical Switch. The Edge Gateways must have both a VXLAN interface (the internal interface connecting to the Logical Router) and a VLAN interface, connecting to the external, physical network.
To deploy an Edge Gateway, go to NSX Edges and click the “+” icon. Select Edge Services Gateway as the install type, enable HA and give the gateway a name.
Click Next and configure the credentials. Then click Next and select the appliance size. Compact is fine for a lab; larger appliances support a higher number of adjacencies, firewall rules, etc.
Then click the “+” icon under NSX Edge Appliances and select dc1 as the Cluster/Resource Pool and the appropriate Datastore, Host and Folder.
Click Next, then the “+” icon to add an interface. Give the interface a name and connect it to the dc1_transit network as an internal interface. Configure the IP address and subnet mask, click OK and repeat the procedure to create an Uplink interface connected to a Portgroup that represents the external network (it can be a VDS or VSS Portgroup).
The end result should look like this:
Click Next and configure a default gateway if you wish; however, it’s not strictly necessary in our scenario. You can then click Next, Next and Finish to deploy the Edge Gateway in the first datacenter. Repeat the deployment procedure for the second datacenter by selecting dc2 as the cluster/resource pool so you can connect the appliance to the NSX components available in the second Transport Zone.
Before activating dynamic routing protocols within the NSX environment, we must configure an external device to establish routing adjacencies with the Edge Gateways in both simulated datacenters. You can use a physical device, but if you want to deploy this in your home lab or don’t have access to a physical device, I recommend using a Vyatta virtual appliance. It has decent routing capabilities and OSPF configuration is pretty straightforward. I’m using VC6.6R1 in my lab.
Your external routing device should have two interfaces: one connecting to the DC1 Edge Gateway external interface network and one connecting to the DC2 Edge Gateway external interface network. Refer to the global topology diagram for IP addresses and subnets. Here is a screenshot of my Vyatta hardware configuration (I’ve added a third VNIC to connect to the management network so I can SSH into the appliance):
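Beyond the VM hardware, the routing side of the Vyatta appliance can be configured along these lines. This is a sketch: the interface-to-network assignment and transit addresses follow this lab's topology (the ESGs sit at 192.168.6.2 and 192.168.14.2), but the ethX names and the router-id are assumptions to adapt to your setup.

```
configure
set interfaces ethernet eth0 address 192.168.6.1/30
set interfaces ethernet eth1 address 192.168.14.1/30
set protocols ospf parameters router-id 1.1.1.1
set protocols ospf area 0 network 192.168.6.0/30
set protocols ospf area 0 network 192.168.14.0/30
commit
save
```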
Now let’s see how to activate OSPF on the Logical Router and the Edge Gateway:
Under Networking & Security, go to NSX Edges and double-click the Logical Router for DC1. Go to Manage > Global Configuration and click Edit next to Dynamic Routing Configuration. Set a custom Router ID and click Save (don’t tick the OSPF box).
Then go to OSPF and click Edit next to OSPF Configuration. You have to set two IP addresses: the Protocol Address, used to establish adjacencies with neighbors, and the Forwarding Address, the actual address used by the ESXi kernel module in the data plane. They must be part of the same subnet, and the Forwarding Address must be the IP address of the Logical Router Uplink interface you configured previously.
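The same-subnet rule is easy to check before publishing. A tiny helper, using illustrative addresses from the dc1 transit range (192.168.7.0/29) in this lab:

```python
# Check that the DLR Forwarding Address and Protocol Address share a
# subnet, as NSX requires. Example addresses are illustrative picks.
import ipaddress

def same_subnet(forwarding, protocol, prefixlen):
    net = ipaddress.ip_interface("{}/{}".format(forwarding, prefixlen)).network
    return ipaddress.ip_address(protocol) in net

assert same_subnet("192.168.7.2", "192.168.7.3", 29)        # valid pair
assert not same_subnet("192.168.7.2", "192.168.6.3", 29)    # different subnets
```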
Click on the “+” icon under Area Definitions and add Area 0.
Then go to Area to Interface Mapping and add the transit vNIC to Area 0. Don’t add the Logical Switch internal LIFs to Area 0, as they’re not participating in the OSPF process. Instead, the Logical Switch routing information is redistributed into OSPF (redistribute connected routes). Don’t forget to Publish Changes.
Repeat the same procedure for the second Logical Router in DC2.
To activate OSPF within the Edge Gateways, configure the Router ID and tick the OSPF box. There is no need to split the control plane from the data plane: the Edge Gateway is a virtual appliance and, as such, doesn’t rely on any kernel module installed on the ESXi host. Another difference is that you have to add both the transit and external network interfaces to Area 0.
Note that if you want to ping the DLR and the ESG from your external network, you’ll have to modify the appropriate firewall rules, as both components may have a default deny rule on their local firewall.
If you have configured everything correctly, you should see OSPF information about all routes on the external routing device:
vyatta@vyatta:~$ sh ip route ospf
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
       I - ISIS, B - BGP, > - selected route, * - FIB route

O>* 192.168.0.0/24 [110/1] via 192.168.6.2, eth0, 00:06:08
O>* 192.168.1.0/24 [110/1] via 192.168.6.2, eth0, 00:06:08
O   192.168.6.0/30 [110/10] is directly connected, eth0, 00:22:31
O>* 192.168.7.0/29 [110/11] via 192.168.6.2, eth0, 00:11:38
O>* 192.168.10.0/24 [110/1] via 192.168.14.2, eth1, 00:00:21
O>* 192.168.11.0/24 [110/1] via 192.168.14.2, eth1, 00:00:21
O   192.168.14.0/30 [110/10] is directly connected, eth1, 00:22:31
O>* 192.168.15.0/29 [110/11] via 192.168.14.2, eth1, 00:00:38
By default, the Vyatta appliance adds a cost of 10 to its OSPF interfaces and the ESG adds a cost of 1. This is customizable, as are the OSPF Hello and Dead intervals.
Hopefully you’ve got everything working now!
The next post will focus on the very cool part: how to use Python and pyVmomi to perform both NSX and vSphere tasks to move things around.