Deploy VMware Cloud Foundation Cloud Builder using the vSphere Terraform Provider

As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I’ve used this provider in the past to deploy the NSX Manager appliance.

Deploy Cloud Builder with the vSphere Terraform Provider

As before, you first need to define your provider configuration:

# providers.tf
 
terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "2.5.1"
    }
  }
}
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

Then we define our variables:

# variables.tf
 
# vSphere Infrastructure Details
variable "data_center" { default = "sfo-m01-dc01" }
variable "cluster" { default = "sfo-m01-cl01" }
variable "vds" { default = "sfo-m01-vds01" }
variable "datastore" { default = "vsanDatastore" }
variable "compute_pool" { default = "sfo-m01-cl01" }
variable "compute_host" {default = "sfo01-m01-esx01.sfo.rainpole.io"}
variable "vsphere_server" {default = "sfo-m01-vc01.sfo.rainpole.io"}
 
# vCenter Credential Variables
variable "vsphere_user" {}
variable "vsphere_password" {}
 
# Cloud Builder Deployment
variable "mgmt_pg" { default = "sfo-m01-vds01-pg-mgmt" }
variable "vm_name" { default = "sfo-cb01" }
variable "local_ovf_path" { default = "F:\\binaries\\VMware-Cloud-Builder-4.5.2.0-22223457_OVF10.ova" }
variable "ip0" { default = "172.16.225.66" }
variable "netmask0" { default = "255.255.255.0" }
variable "gateway" { default = "172.16.225.1" }
variable "dns" { default = "172.16.225.4" }
variable "domain" { default = "sfo.rainpole.io" }
variable "ntp" { default = "ntp.sfo.rainpole.io" }
variable "searchpath" { default = "sfo.rainpole.io" }
variable "ADMIN_PASSWORD" { default = "VMw@re1!" }
variable "ROOT_PASSWORD" { default = "VMw@re1!" }
variable "hostname" { default = "sfo-cb01.sfo.rainpole.io" }

Note the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo.

# terraform.tfvars
 
# vSphere Provider Credentials
vsphere_user     = "administrator@vsphere.local"
vsphere_password = "VMw@re1!"
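
To keep these values out of version control, the corresponding .gitignore entry is a single pattern. The state file entries below are my own habit rather than a requirement, since Terraform state can also contain secrets:

# .gitignore
 
*.tfvars
*.tfstate
*.tfstate.backup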

Now that we have all of our variables defined, we can create our main.tf to perform the deployment. As part of this we first need to gather some data from the target vCenter Server, so that we know where to deploy the appliance.

# main.tf
 
# Data source for vCenter Datacenter
data "vsphere_datacenter" "datacenter" {
  name = var.data_center
}
 
# Data source for vCenter Cluster
data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Datastore
data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Portgroup
data "vsphere_network" "mgmt" {
  name          = var.mgmt_pg
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Resource Pool. In our case we will use the root resource pool
data "vsphere_resource_pool" "pool" {
  name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for ESXi host to deploy to
data "vsphere_host" "host" {
  name          = var.compute_host
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for the OVF to read the required OVF Properties
data "vsphere_ovf_vm_template" "ovfLocal" {
  name             = var.vm_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  host_system_id   = data.vsphere_host.host.id
  local_ovf_path   = var.local_ovf_path
  ovf_network_map = {
    "Network 1" = data.vsphere_network.mgmt.id
  }
}
 
# Deployment of VM from Local OVA
resource "vsphere_virtual_machine" "cb01" {
  name                 = var.vm_name
  datacenter_id        = data.vsphere_datacenter.datacenter.id
  datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
  host_system_id       = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
  resource_pool_id     = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
  num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
  num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
  memory               = data.vsphere_ovf_vm_template.ovfLocal.memory
  guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id
  scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type
 
  wait_for_guest_net_timeout = 5
 
  ovf_deploy {
    allow_unverified_ssl_cert = true
    local_ovf_path            = var.local_ovf_path
    disk_provisioning         = "thin"
    ovf_network_map           = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
  }
  vapp {
    properties = {
      "ip0"               = var.ip0,
      "netmask0"          = var.netmask0,
      "gateway"          = var.gateway,
      "dns"             = var.dns,
      "domain"           = var.domain,
      "ntp"              = var.ntp,
      "searchpath"       = var.searchpath,
      "ADMIN_USERNAME"  = "admin",
      "ADMIN_PASSWORD"           = var.ADMIN_PASSWORD,
      "ROOT_PASSWORD"       = var.ROOT_PASSWORD,
      "hostname"           = var.hostname
    }
  }
  lifecycle {
    ignore_changes = [
      #vapp # Enable this to ignore all vapp properties if the plan is re-run
      vapp[0].properties["ADMIN_PASSWORD"],
      vapp[0].properties["ROOT_PASSWORD"],
      host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
    ]
  }
}
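
Optionally, you can add a small outputs.tf so a successful apply finishes by echoing where to reach the appliance. This output is my own addition rather than anything the provider requires:

# outputs.tf
 
output "cloud_builder_url" {
  description = "URL of the deployed Cloud Builder appliance"
  value       = format("https://%s", var.hostname)
}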

Now we can run the following to initialise Terraform and download the required vSphere provider:

terraform init 

Once the provider is initialised, we can create a Terraform plan to ensure our configuration is valid.

terraform plan -out=DeployCB

Now that we have a valid configuration, we can apply our plan to deploy the Cloud Builder appliance.

terraform apply DeployCB
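
Once the VCF bring-up is complete (more on that in the next section), the Cloud Builder appliance has served its purpose. Because it is managed by this Terraform configuration, it can be removed with:

terraform destroy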

VMware Cloud Foundation Terraform Provider: Create a New VCF Instance

Following on from my VMware Cloud Foundation Terraform Provider introduction post, I wanted to start by using the provider to create a new VCF instance (in other words, perform a VCF bring-up).

As of writing this post I am using version 0.5.0 of the provider.

First off, we need to define some variables to be used in our plan. Here is a copy of the variables.tf I am using. For reference, I am using the default values from the VCF Planning & Preparation Workbook for my configuration. Note the "sensitive = true" setting on the password and license key variables, which stops them from showing up on the console and in logs.

variable "cloud_builder_username" {
  description = "Username to authenticate to CloudBuilder"
  default = "admin"
}

variable "cloud_builder_password" {
  description = "Password to authenticate to CloudBuilder"
  default = "VMw@re1!"
  sensitive = true
}

variable "cloud_builder_host" {
  description = "Fully qualified domain name or IP address of the CloudBuilder"
  default = "sfo-cb01.sfo.rainpole.io"
}

variable "sddc_manager_root_user_password" {
  description = "Root user password for the SDDC Manager VM. Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length"
  default = "VMw@re1!"
  sensitive = true
}

variable "sddc_manager_secondary_user_password" {
  description = "Second user (vcf) password for the SDDC Manager VM.  Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length."
  default = "VMw@re1!"
  sensitive = true
}

variable "vcenter_root_password" {
  description = "root password for the vCenter Server Appliance (8-20 characters)"
  default = "VMw@re1!"
  sensitive = true
}

variable "nsx_manager_admin_password" {
  description = "NSX admin password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "nsx_manager_audit_password" {
  description = "NSX audit password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "nsx_manager_root_password" {
  description = " NSX Manager root password. Password should have 1) At least eight characters, 2) At least one lower-case letter, 3) At least one upper-case letter 4) At least one digit 5) At least one special character, 6) At least five different characters , 7) No dictionary words, 6) No palindromes"
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "esx_host1_pass" {
  description = "Password to authenticate to the ESXi host 1"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host2_pass" {
  description = "Password to authenticate to the ESXi host 2"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host3_pass" {
  description = "Password to authenticate to the ESXi host 3"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host4_pass" {
  description = "Password to authenticate to the ESXi host 4"
  default = "VMw@re1!"
  sensitive = true
}

variable "nsx_license_key" {
  description = "NSX license to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "vcenter_license_key" {
  description = "vCenter license to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "vsan_license_key" {
  description = "vSAN license key to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "esx_license_key" {
  description = "ESXi license key to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}
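
As an aside, you do not have to keep real passwords as defaults in variables.tf; Terraform will also read values for these variables from TF_VAR_-prefixed environment variables, so they can be supplied at run time instead. For example:

export TF_VAR_cloud_builder_password='VMw@re1!'
export TF_VAR_nsx_manager_root_password='VMw@re1!VMw@re1!'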

Next, we need our main.tf file that contains what we want to do: in this case, perform a VCF bring-up. For now I'm using a mix of variables from the above variables.tf file and hard-coded values in my main.tf to achieve my goal. I will follow up with some better practices in a later post.

terraform {
  required_providers {
    vcf = {
      source  = "vmware/vcf"
      version = "0.5.0"
    }
  }
}
provider "vcf" {
  cloud_builder_host = var.cloud_builder_host
  cloud_builder_username = var.cloud_builder_username
  cloud_builder_password = var.cloud_builder_password
  allow_unverified_tls = true
}

resource "vcf_instance" "sddc_1" {
  instance_id = "sfo-m01"
  dv_switch_version = "7.0.3"
  skip_esx_thumbprint_validation = true
  management_pool_name = "sfo-m01-np"
  ceip_enabled = false
  esx_license = var.esx_license_key
  task_name = "workflowconfig/workflowspec-ems.json"
  sddc_manager {
    ip_address = "172.16.11.59"
    hostname = "sfo-vcf01"
    root_user_credentials {
      username = "root"
      password = var.sddc_manager_root_user_password
    }
    second_user_credentials {
      username = "vcf"
      password = var.sddc_manager_secondary_user_password
    }
  }
  ntp_servers = [
    "172.16.11.4"
  ]
  dns {
    domain = "sfo.rainpole.io"
    name_server = "172.16.11.4"
    secondary_name_server = "172.16.11.5"
  }
  network {
    subnet = "172.16.11.0/24"
    vlan_id = "1611"
    mtu = "1500"
    network_type = "MANAGEMENT"
    gateway = "172.16.11.1"
  }
  network {
    subnet = "172.16.13.0/24"
    include_ip_address_ranges {
      start_ip_address = "172.16.13.101"
      end_ip_address = "172.16.13.108"
    }
    vlan_id = "1613"
    mtu = "8900"
    network_type = "VSAN"
    gateway = "172.16.13.1"
  }
  network {
    subnet = "172.16.12.0/24"
    include_ip_address_ranges {
      start_ip_address = "172.16.12.101"
      end_ip_address = "172.16.12.104"
    }
    vlan_id = "1612"
    mtu = "8900"
    network_type = "VMOTION"
    gateway = "172.16.12.1"
  }
  nsx {
    nsx_manager_size = "medium"
    nsx_manager {
      hostname = "sfo-m01-nsx01a"
      ip = "172.16.11.72"
    }
    root_nsx_manager_password = var.nsx_manager_root_password
    nsx_admin_password = var.nsx_manager_admin_password
    nsx_audit_password = var.nsx_manager_audit_password
    overlay_transport_zone {
      zone_name = "sfo-m01-overlay-tz"
      network_name = "sfo-m01-overlay"
    }
    vip = "172.16.11.71"
    vip_fqdn = "sfo-m01-nsx01"
    license = var.nsx_license_key
    transport_vlan_id = 1614
  }
  vsan {
    license = var.vsan_license_key
    datastore_name = "sfo-m01-vsan"
  }
  dvs {
    mtu = 8900
    nioc {
      traffic_type = "VSAN"
      value = "HIGH"
    }
    nioc {
      traffic_type = "VMOTION"
      value = "LOW"
    }
    nioc {
      traffic_type = "VDP"
      value = "LOW"
    }
    nioc {
      traffic_type = "VIRTUALMACHINE"
      value = "HIGH"
    }
    nioc {
      traffic_type = "MANAGEMENT"
      value = "NORMAL"
    }
    nioc {
      traffic_type = "NFS"
      value = "LOW"
    }
    nioc {
      traffic_type = "HBR"
      value = "LOW"
    }
    nioc {
      traffic_type = "FAULTTOLERANCE"
      value = "LOW"
    }
    nioc {
      traffic_type = "ISCSI"
      value = "LOW"
    }
    dvs_name = "SDDC-Dswitch-Private"
    vmnics = [
      "vmnic0",
      "vmnic1"
    ]
    networks = [
      "MANAGEMENT",
      "VSAN",
      "VMOTION"
    ]
  }
  cluster {
    cluster_name = "sfo-m01-cl01"
    cluster_evc_mode = ""
    resource_pool {
      name = "Mgmt-ResourcePool"
      type = "management"
    }
    resource_pool {
      name = "Network-ResourcePool"
      type = "network"
    }
    resource_pool {
      name = "Compute-ResourcePool"
      type = "compute"
    }
    resource_pool {
      name = "User-RP"
      type = "compute"
    }
  }
  psc {
    psc_sso_domain = "vsphere.local"
    admin_user_sso_password = "VMw@re1!"
  }
  vcenter {
    vcenter_ip = "172.16.11.70"
    vcenter_hostname = "sfo-m01-vc01"
    license = var.vcenter_license_key
    root_vcenter_password = var.vcenter_root_password
    vm_size = "tiny"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.101"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx01"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.102"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx02"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.103"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx03"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.104"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx04"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
}

Once the above is defined, you can run the following to initialise Terraform and create your plan:

terraform init
terraform plan -out=vcf-bringup

Once the plan command completes without errors, you can run the following to start the VCF bring-up:

terraform apply .\vcf-bringup

All going well, this should result in a successful VMware Cloud Foundation bring-up.
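
If you want to follow the bring-up in more detail than the Terraform console output provides, you can tail the bring-up log on the Cloud Builder appliance (the log paths are covered in the next section). This assumes SSH access as the admin user:

ssh admin@sfo-cb01.sfo.rainpole.io
tail -f /var/log/vmware/vcf/bringup/vcf-bringup-debug.log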

Where Are My VMware Cloud Foundation 5.x Logs?

From time to time we all need to look at logs, whether it's for a failed operation or to trace who did what and when. In VMware Cloud Foundation there are many different logs, each serving a different purpose. It's not always clear which log you should look at for a given operation, so here is a useful reference table.

| Log Type | VM Location | Log Location |
|---|---|---|
| Bringup | Cloud Builder | /var/log/vmware/vcf/bringup/vcf-bringup-debug.log |
| Licensing | SDDC Manager | /var/log/vmware/vcf/operationsmanager/operationsmanager.log |
| Network Pool | SDDC Manager | /var/log/vmware/vcf/commonsvcs/vcf-commonsvcs.log |
| Host Commission/Decommission | SDDC Manager | /var/log/vmware/vcf/operationsmanager/operationsmanager.log |
| VI (WLD domain) | SDDC Manager | /var/log/vmware/vcf/domainmanager/domainmanager.log |
| vRLI | SDDC Manager | /var/log/vmware/vcf/domainmanager/domainmanager.log |
| vROps | SDDC Manager | /var/log/vmware/vcf/domainmanager/domainmanager.log |
| vRA | SDDC Manager | /var/log/vmware/vcf/domainmanager/domainmanager.log |
| vRSLCM Deployment | SDDC Manager | /var/log/vmware/vcf/domainmanager/domainmanager.log |
| vRSLCM Operations | vRSLCM | /var/log/vrlcm/vmware_vrlcm.log |
| LCM | SDDC Manager | /var/log/vmware/vcf/lcm/lcm.log |
| API Login | SDDC Manager | /var/log/vmware/vcf/commonsvcs/vcf-commonsvcs.log |
| SoS | SDDC Manager | /var/log/vmware/vcf/sddc-support/vcf-sos-svcs.log |
| Certificate Operations | SDDC Manager | /var/log/vmware/vcf/operationsmanager/operationsmanager.log |
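
As an example of putting the table to use, if a workload domain operation fails you can SSH to the SDDC Manager VM and follow the domain manager log while the task runs; filtering for errors is just one illustrative approach:

tail -f /var/log/vmware/vcf/domainmanager/domainmanager.log | grep -i error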

PowerShell Script to Configure an NSX-T Load Balancer for the vRealize Suite & Workspace ONE Access

As part of my role in the VMware Hyper-converged Business Unit (HCIBU), I spend a lot of time working with new product versions, testing integrations for next-gen VMware Validated Designs and Cloud Foundation. A lot of my focus is on Cloud Operations and Automation (vROps, vRLI, vRA, etc.), so I regularly need to deploy environments to perform integration testing. I will typically leverage existing automation where possible and tend to create my own when I find gaps. One such gap was the ability to use PowerShell to interact with the NSX-T API. Anyone who is familiar with setting up a load balancer for the vRealize Suite in NSX-T knows there are a lot of manual clicks required, so I set about creating some PowerShell functions to make the process a little less tedious and to speed up getting my environments set up, so I could get to the testing faster.

There is comprehensive NSX-T API documentation posted on code.vmware.com, which I used to decipher the API endpoints required to complete each of the following tasks:

  • Create the Load Balancer
  • Create the Service Monitors
  • Create the Application Profiles
  • Create the Server Pools
  • Create the Virtual Servers

The result is a PowerShell module with a function for each of the above and a corresponding JSON file that is read in for the settings for each function. I have included a sample JSON file to get you started. Just substitute your values.
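
To give a flavour of the pattern each function follows (read settings from JSON, then call the NSX-T REST API), here is a minimal sketch. The endpoint shown is the NSX-T Manager API call for creating a load balancer service; the JSON property names are illustrative rather than the module's exact schema, and -SkipCertificateCheck requires PowerShell 7:

# Illustrative sketch only - the JSON layout here is not the module's exact schema
$settings = Get-Content -Path .\lbSettings.json | ConvertFrom-Json
$auth     = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("admin:$($settings.nsxPassword)"))
$headers  = @{ Authorization = "Basic $auth"; "Content-Type" = "application/json" }
 
# Create the load balancer service via the NSX-T Manager API
$body = @{ display_name = $settings.lbName; size = "SMALL" } | ConvertTo-Json
Invoke-RestMethod -Uri "https://$($settings.nsxManager)/api/v1/loadbalancer/services" `
  -Method Post -Headers $headers -Body $body -SkipCertificateCheck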

Note: You must already have a Tier-1 gateway and its associated segments created. (I'll add that functionality when I get a chance!)

The PowerShell module, sample JSON file, and script are posted to GitHub.

NSX IPSec VPN between datacenters (multi site/region)

I'm doing some lab work with my team at the moment, and we were gifted some hardware to do some multi-region validation. Both systems (a VxRack SDDC and a VxRail) are in two separate datacenters, and both use private IP addressing that is not routable between the datacenters. As part of the validation, the two systems need to be able to communicate with each other; however, we don't control the inter-lab switching, so we can't simply put the necessary routes in place. Rather than go through a change control process with the keepers of that gate, we decided to get creative, have some fun (and hopefully learn something!), and set up an NSX IPSec VPN between the labs.

Disclaimer: There are many better ways to do this for a permanent lab setup (e.g. BGP to the core with the appropriate routes), but this was done on borrowed kit that was never designed with inter-lab routing as a requirement, we had no direct control over the inter-lab switches, and we would like to put everything back the way we found it, so we didn't want to make sweeping architectural changes!
