Automate your VMware Validated Design NSX-V Distributed Firewall Configuration

A few weeks back I mentioned on Twitter that I was working on automating the VMware Validated Design NSX-V Distributed Firewall configuration in my lab. (I admit it took longer than I had planned!) Currently this is a manual post-deployment step once VMware Cloud Builder has completed the deployment. This will likely be picked up by Cloud Builder in a future release, but for now it’s a manual, and somewhat tedious, but required, step!

Full details on the manual steps required for this configuration can be found here. Please take the time to understand what these rules are doing before implementing them.

So in an effort to make this post-configuration step a little less painful I set out to automate it. I’ve played with the NSX-V API in the past and found it much easier to interact with by using PowerNSX, rather than leveraging Postman and the API directly. PowerNSX is the unofficial, official automation tool for NSX. Hats off to VMware engineers Nick Bradford, Dale Coghlan & Anthony Burke for creating and documenting this tool. Anthony also published a FREE book on Automating NSX for vSphere with PowerNSX. More on that here.

Disclaimer: This script is not officially supported by VMware. Use at your own risk & test in a development/lab environment before using in production.

I’ve posted the script to GitHub here as it’s a bit lengthy! There may be a more efficient way to do some parts of it, and if anyone wants to contribute please feel free!

As with a lot of the scripts I create, it is menu based and has two main options:

  1. Create DFW exclusions, IP Sets & Security Groups
  2. Create DFW Rules

The reason I split it into two distinct operations is to allow you to inspect the exclusion list, IP Sets & Security Groups before creating the firewall rules. This will ensure that you don’t lock yourself out of vCenter by creating an incorrect rule.

Required Software

  • PowerCLI
    • The script will check for PowerCLI and, if not found, will attempt to install the latest version from the PowerShell Gallery
    • Currently tested on Windows only
    • If you don’t have internet access you can manually install PowerCLI by opening a PowerShell console as administrator and running:
    • Find-Module -Name VMware.PowerCLI | Install-Module
  • PowerNSX
    • The script will check for PowerNSX and, if not found, will attempt to install the latest version from the PowerShell Gallery (a minimal sketch of this check-and-install logic follows this list)
    • Currently tested on Windows only
    • If you don’t have internet access you can manually install PowerNSX by opening a PowerShell console as administrator and running:
    • Find-Module -Name PowerNSX | Install-Module
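
For reference, the module check is roughly along these lines (a minimal sketch, not the exact code from the repo):

# Minimal sketch of a PowerCLI/PowerNSX check-and-install (not the exact code from the script)
foreach ($module in @("VMware.PowerCLI", "PowerNSX")) {
    if (-not (Get-Module -ListAvailable -Name $module)) {
        Write-Host "$module not found, attempting to install from the PowerShell Gallery..."
        Install-Module -Name $module -Scope CurrentUser -Force
    }
    else {
        Write-Host "$module is already installed."
    }
}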

Required Variables

Before you can run the script you need to edit the User Variables to provide the following (a hypothetical example follows this list):

  • Target vCenter details
    • Required to establish a PowerCLI connection with vCenter Server. This is used when updating the DFW exclusion list
  • Target NSX Manager details
    • Required to establish a connection with NSX Manager to configure the DFW
  • IP Addresses for the various SDDC components
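
As a rough illustration, the kind of detail involved looks like this (the variable names, values and connection calls below are placeholders and examples rather than the script’s exact code, and Connect-NsxServer parameters can vary between PowerNSX versions):

# Hypothetical user variables - names and values are placeholders, edit for your environment
$vcServer    = "sfo01m01vc01.sfo01.rainpole.local"    # management vCenter Server FQDN
$vcUser      = "administrator@vsphere.local"
$vcPassword  = "VMware1!"
$nsxManager  = "sfo01m01nsx01.sfo01.rainpole.local"   # NSX Manager FQDN
$nsxUser     = "admin"
$nsxPassword = "VMware1!"
# ...plus variables for the IP addresses of the various SDDC components

# Connect to vCenter (used for the DFW exclusion list) and to NSX Manager for the DFW config
Connect-VIServer -Server $vcServer -User $vcUser -Password $vcPassword
Connect-NsxServer -NsxServer $nsxManager -Username $nsxUser -Password $nsxPassword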

Hopefully you will find this useful!

What not to do when your Platform Services Controllers are Load Balanced!

I needed to do some validation around vRealize Operations Manager & vRealize Orchestrator for an upcoming VVD release and a physical lab environment was made available. The environment is a dual-region VVD deployment. Upon verifying that I had access to all the components I needed, it became obvious there was an issue with SSO in the primary region (SFO). Browsing to the web client for the SFO management vCenter I was seeing this:

As I mentioned, this is a VVD deployment and per VVD guidelines there are two Platform Services Controllers (PSCs) behind an NSX load balancer per region. Like so: (Diagram from the VMware Validated Design 5.0 Architecture & Design guide)

Like any good (lazy!) IT person, the first thing I did was Google the error to find the quick fix! That led me to this Communities post, which had some suggestions around disk space etc., none of which were relevant to my issue. Running the following on the PSCs and vCenters showed that some services were not starting:

service-control --status

Restarting the services didn’t help. Next up I checked the usual suspects:

  • NTP
  • DNS
  • SSL Certificates

All of the above looked OK. Next I turned my attention to the load balancer. Because the vCenter Web Client was inaccessible I was not able to access the load balancer settings through the UI, so I turned to the NSX API using Postman.

To connect to the NSX Manager that is associated with the load balancer, you need to configure a Postman session with basic authentication and enter the NSX Manager admin username & password.

To retrieve information on the load balancer you need to run the following GET:

https://sfo01m01nsx01.sfo01.rainpole.local/api/4.0/edges/edge-1/loadbalancer/config
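
If you prefer PowerShell over Postman, the same call can be made with Invoke-RestMethod and a basic authentication header, roughly like this (the FQDN and credentials below are examples, and the property path into the response may vary slightly by NSX version):

# Build a basic authentication header for the NSX Manager admin account (example credentials)
$nsxManager = "sfo01m01nsx01.sfo01.rainpole.local"
$authBytes  = [System.Text.Encoding]::ASCII.GetBytes("admin:VMware1!")
$headers    = @{ Authorization = "Basic " + [System.Convert]::ToBase64String($authBytes) }

# Retrieve the full load balancer configuration from the edge
# (on PowerShell 7 you can add -SkipCertificateCheck for self-signed lab certificates)
$uri = "https://$nsxManager/api/4.0/edges/edge-1/loadbalancer/config"
$lbConfig = Invoke-RestMethod -Uri $uri -Method Get -Headers $headers

# Quick look at each pool member's condition
$lbConfig.loadBalancer.pool.member | Select-Object name, ipAddress, condition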

I won’t post the full response from the above command as it’s lengthy, but scanning through it I noticed that the condition of each load balancer pool member was disabled. In the immortal words of Bart Simpson:




The response above is from a more targeted API call to /pools/pool-1.

Now I don’t know how it got into this state (maybe someone was doing some Jenga-style doomsday testing, pulling one brick at a time until the tower crashes!) but this certainly looked to be the cause of the issue. So I figured the quickest fix would be to do a PUT API call to NSX with the condition set to enabled for the pool members and I’d be all set. Not so easy!

Running the following PUT appears to work temporarily (running a GET at the same time confirms this).
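
An equivalent PowerShell sketch of that call (the original was run from Postman; this reuses the $nsxManager and $headers from the earlier GET, assumes pool-1 is the pool in question, and the XML element names follow the NSX-V load balancer API, so adjust if your response differs):

# Fetch the current pool config, flip each member's condition to enabled, and PUT it back
$poolUri = "https://$nsxManager/api/4.0/edges/edge-1/loadbalancer/config/pools/pool-1"
[xml]$pool = (Invoke-WebRequest -Uri $poolUri -Method Get -Headers $headers -UseBasicParsing).Content
foreach ($member in $pool.pool.member) { $member.condition = "enabled" }
Invoke-RestMethod -Uri $poolUri -Method Put -Headers $headers -Body $pool.OuterXml -ContentType "application/xml"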

But the change does not get fully applied; the conditions revert to disabled after about 30 seconds with the below error:

So to apply the change to the load balancer, NSX requires a handoff with the PSC that it is mapped to… in this case that’s the load balanced PSC that is not functional, so the command fails.

So it was clear I needed to get at least one PSC operational before I could attempt to make a change. Time to play with some DNS redirects to “fool” the PSC services into starting.

As my PSCs are set up in HA mode behind a load balancer, the SSO endpoint URL is https://sfo01psc01.sfo01.rainpole.local, which both PSCs will respond from. So to get my first PSC up I changed the IP for sfo01psc01.sfo01.rainpole.local in DNS to point to the first PSC’s IP.
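
If the zone lives on a Windows DNS server, that repoint can be scripted with the DnsServer module, something like this (the IP here is just a placeholder for the first PSC’s address):

# Repoint the load balancer VIP FQDN at the first PSC (placeholder IP - use your own)
$zoneName = "sfo01.rainpole.local"
$record   = "sfo01psc01"
$pscIp    = "172.16.11.61"

Remove-DnsServerResourceRecord -ZoneName $zoneName -Name $record -RRType "A" -Force
Add-DnsServerResourceRecordA -ZoneName $zoneName -Name $record -IPv4Address $pscIp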

So now, pings to the load balancer VIP FQDN sfo01psc01.sfo01.rainpole.local respond from the first PSC’s IP.

Next I set a static entry in /etc/hosts on each of my PSCs and vCenters to do the same, as I’ve seen vCenter in particular cache DNS entries in its local dnsmasq.

The next step was to stop & start all services on each PSC:

service-control --stop --all

service-control --start --all

And hey presto, the services started! I ran the same on vCenter and its services also started. This allowed me to go in and modify the load balancer pools to set the members to enabled.

Once the load balancer was back as it should be it was just a case of removing the /etc/hosts entries on each VM and reverting the DNS server change to point the load balancer FQDN back to its correct IP address.

For completeness I restarted all the services on each appliance in the order mentioned above.

Moral of the story? Don’t disable both nodes in a load balancer pool!

Now onwards with the original testing I needed to do!

NSX IPSec VPN between datacenters (multi site/region)

I’m doing some lab work with my team at the moment and we were gifted some hardware to do some multi-region validation. Both systems (a VxRack SDDC & a VxRail) are in two separate datacenters, and both are using private IP addressing that is not routable between datacenters. As part of the validation we need both systems to be able to communicate with each other, however we don’t control the inter-lab switching to put in place the necessary routes to enable this. Rather than go through a change control process with the keepers of that gate, we decided to get creative and have some fun (and hopefully learn something!) by setting up an NSX IPSec VPN between the labs.

Disclaimer: There are many better ways to do this for a permanent lab setup (i.e. BGP to the core with routes) but this was done on borrowed kit that was never initially designed with inter-lab routing as a requirement, with no direct control of the inter-lab switches, and we would also like to put it back the way we found it so don’t want to make sweeping architectural changes!

Continue reading “NSX IPSec VPN between datacenters (multi site/region)”

vRA Network and Security Inventory Data Collection Failed

I’ve been playing around with Dell EMC RP4VM & vRA and needed to set up cross-vCenter NSX in my lab. I’m not going to go into that setup as there are many blogs on the subject. What I will cover is an error I hit when trying to do Network and Security Inventory data collections on one of my NSX endpoints. The error from the DEM logs in vRA (Infrastructure > Monitoring > Log) was as follows:


Workflow 'vSphereVCNSInventory' failed with the following exception:
'object' does not contain a definition for 'clusters'

After digging around for VMware KBs and blogs on the subject and coming up empty-handed, I went back to review my entire setup and discovered I had missed adding a vCenter cluster to the universal transport zone on the offending NSX endpoint, which is my DR site.
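
If you’d rather check and fix this from PowerNSX than click through the vSphere Web Client, something along these lines should work (this assumes the Add-NsxTransportZoneMember cmdlet available in current PowerNSX builds, and the transport zone and cluster names here are examples only):

# List transport zones and eyeball which clusters each one spans
Get-NsxTransportZone

# Add the missing DR cluster to the universal transport zone (example names)
Get-NsxTransportZone -Name "Universal-Transport-Zone" |
    Add-NsxTransportZoneMember -Cluster (Get-Cluster -Name "lax01-m01-mgmt01")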

Once I rectified this, the Network and Security Inventory Data Collection worked as expected.

Backup NSX Manager

I was playing around with NSX today and found that you can enable backups of the NSX Manager from within the administration page. While using a backup solution like Avamar or VMware vSphere Data Protection (VDP) is preferred for backing up your VMs, this is a quick and easy way to back up your NSX Manager config that enables you to quickly restore in the case of losing the NSX Manager (or screwing it up by changing something!!)

Continue reading “Backup NSX Manager”