Upgrading VCF 5.2 to 9.0 – Part 9 – Deactivate Enhanced Link Mode (ELM)

The final part of this series on Upgrading VCF 5.2 to 9.0 is to deactivate Enhanced Link Mode (ELM) on the VCF instance that we have upgraded. ELM has been around forever, but with VCF 9.0 it is now deprecated, and you can no longer deploy vCenter instances in an ELM ring. VCF 9.0 introduces a new concept of vCenter linking which, along with VCF SSO (enabled by the VCF Identity Broker, VIDB), replaces the functionality previously provided by ELM. I will cover vCenter linking and VCF SSO in a later post, but before you can take advantage of VCF SSO, you must first deactivate ELM.

Deactivating ELM means each vCenter in the ELM ring becomes a standalone vCenter with its own isolated SSO domain. It is an all-or-nothing operation, meaning you cannot selectively remove one vCenter from the ELM ring. Once you perform the operation, all vCenters are removed from the ELM ring, and each vCenter uses its own instance of vsphere.local.

Important Note: Taking offline snapshots of all vCenter instances in the ELM ring is recommended in case you need to revert.
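
If you want to script those snapshots rather than take them by hand, a rough PowerCLI sketch along these lines can help. The VM name and ESXi host below are placeholders from my lab, so adapt them to wherever your vCenter Server appliances actually run, and repeat for each vCenter in the ELM ring.

# Connect to the ESXi host (or another vCenter) that runs the vCenter Server VM
Connect-VIServer -Server sfo01-m01-esx01.sfo.rainpole.io -User root -Password VMw@re1!

# Shut down the vCenter appliance guest OS and wait for it to power off
$vcsaVm = Get-VM -Name "sfo-m01-vc01"
Stop-VMGuest -VM $vcsaVm -Confirm:$false
while ((Get-VM -Name $vcsaVm.Name).PowerState -ne 'PoweredOff') { Start-Sleep -Seconds 10 }

# Take the offline snapshot, then power the appliance back on
New-Snapshot -VM $vcsaVm -Name "Pre-ELM-Deactivation" -Description "Offline snapshot before deactivating ELM"
Start-VM -VM $vcsaVm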

To deactivate ELM, we will use the SDDC Manager API.

  • Browse to SDDC Manager and click on Developer Center.
  • Navigate to Domains and, under GET /v1/domains, click Execute.
  • Expand the response and locate the id of the management domain.
  • Expand POST /v1/domains/{id}/validations, enter the management domain id in the id field and the following JSON in the body, and click Execute.
{
  "breakElmSpec": {
      "isReconcileWorkflow": false
  }
}
  • Expand GET /v1/domains/{id}/validations/{validation_id}, replace {id} with the management domain ID and {validation_id} with the validation ID from the previous step, and click Execute. Expand the response to ensure the validation is successful.

To break ELM across all VCF domains in the VCF instance, expand PATCH /v1/domains/{id}, enter the management domain id in the id field and the following JSON in the body, and click Execute.

{
  "breakElmSpec": {
      "isReconcileWorkflow": false
  }
}

Locate the task ID in the response.

To monitor the task progress, expand GET /v1/tasks/{id}, enter the task id in the id field, and click Execute.
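
If you would rather script the whole flow than click through the Developer Center, a rough PowerShell sketch using the same SDDC Manager endpoints might look like the following. The endpoints mirror the steps above, but the credentials are placeholders and the response property names (accessToken, id, and so on) are assumptions based on the standard SDDC Manager API, so verify them against your own output.

$sddcManagerFqdn = "sfo-vcf01.sfo.rainpole.io"

# Request an API token from SDDC Manager (add -SkipCertificateCheck on PowerShell 7 for self-signed certs)
$tokenBody = @{ username = "administrator@vsphere.local"; password = "VMw@re1!" } | ConvertTo-Json
$token = (Invoke-RestMethod -Method POST -Uri "https://$sddcManagerFqdn/v1/tokens" -ContentType "application/json" -Body $tokenBody).accessToken
$headers = @{ Authorization = "Bearer $token" }

# Locate the management domain id
$mgmtDomainId = ((Invoke-RestMethod -Method GET -Uri "https://$sddcManagerFqdn/v1/domains" -Headers $headers).elements | Where-Object { $_.type -eq "MANAGEMENT" }).id

# Validate the break ELM operation, then check the validation result
$breakElmSpec = '{ "breakElmSpec": { "isReconcileWorkflow": false } }'
$validation = Invoke-RestMethod -Method POST -Uri "https://$sddcManagerFqdn/v1/domains/$mgmtDomainId/validations" -Headers $headers -ContentType "application/json" -Body $breakElmSpec
Invoke-RestMethod -Method GET -Uri "https://$sddcManagerFqdn/v1/domains/$mgmtDomainId/validations/$($validation.id)" -Headers $headers

# Execute the break ELM operation and monitor the resulting task
$task = Invoke-RestMethod -Method PATCH -Uri "https://$sddcManagerFqdn/v1/domains/$mgmtDomainId" -Headers $headers -ContentType "application/json" -Body $breakElmSpec
Invoke-RestMethod -Method GET -Uri "https://$sddcManagerFqdn/v1/tasks/$($task.id)" -Headers $headers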

Upgrading VCF 5.2 to 9.0 – Part 8 – Deploy VCF Fleet Management Components

VCF 9.0 introduced the concept of a VCF fleet, which is defined as:

An environment that is managed by a single set of fleet-level management components – VCF Operations & VCF Automation. A VCF fleet contains one or more VCF Instances and may contain one or more standalone vCenter instances, managed by the VCF Operations instance for the fleet. The management domain of the first VCF Instance in the VCF fleet typically hosts the fleet-level management components.

When deploying a new VCF fleet, you get the option to deploy the fleet-level management components using the VCF installer. Because I am upgrading from VCF 5.2, where I did not have Aria Operations or Aria Automation, I need to deploy new instances of each component (if I had pre-existing instances, they could be upgraded instead). You can deploy them manually from OVA; however, there is a new SDDC Manager API that automates the process using a JSON payload.

The API can be accessed via the SDDC Manager Developer Center, under VCF Management Components.

The JSON payload to deploy VCF Operations (including a collector and the fleet management appliance) and VCF Automation is as follows. (Note: This spec is for a simple, single-node deployment of the fleet management components, where VCF Operations and VCF Automation are deployed to an NSX overlay segment and the VCF Operations collector is deployed to the management DVPG.)

 {
    "vcfOperationsFleetManagementSpec": {
        "hostname": "flt-fm01.rainpole.io",
        "rootUserPassword": "VMw@re1!VMw@re1!",
        "adminUserPassword": "VMw@re1!VMw@re1!",
        "useExistingDeployment": false
    },
    "vcfOperationsSpec": {
        "nodes": [
            {
                "hostname": "flt-ops01a.rainpole.io",
                "rootUserPassword": "VMw@re1!VMw@re1!",
                "type": "master"
            }
        ],
        "useExistingDeployment": false,
        "applianceSize": "medium",
        "adminUserPassword": "VMw@re1!VMw@re1!"
    },
    "vcfOperationsCollectorSpec": {
        "hostname": "sfo-opsc01.sfo.rainpole.io",
        "rootUserPassword": "VMw@re1!VMw@re1!",
        "applianceSize": "small"
    },
    "vcfAutomationSpec": {
        "hostname": "flt-auto01.rainpole.io",
        "adminUserPassword": "VMw@re1!VMw@re1!",
        "useExistingDeployment": false,
        "ipPool": [
            "192.168.11.51",
            "192.168.11.52"
        ],
        "internalClusterCidr": "250.0.0.0/15",
        "vmNamePrefix": "flt-auto01"
    },
    "vcfInstanceName": "San Francisco VCF01",
    "vcfMangementComponentsInfrastructureSpec": {
        "localRegionNetwork": {
            "networkName": "sfo-m01-cl01-vds01-pg-vm-mgmt",
            "subnetMask": "255.255.255.0",
            "gateway": "10.11.10.1"
        },
        "xRegionNetwork": {
            "networkName": "xint-m01-seg01",
            "subnetMask": "255.255.255.0",
            "gateway": "192.168.11.1"
        }
    }
}

Validate your JSON payload using the POST /v1/vcf-management-components/validations API.

Executing this will return a validation id. Copy this id so you can monitor the validation.

Check the status of the validation task using GET /v1/vcf-management-components/validations/{validationId} until its resultStatus is SUCCEEDED.

Now, submit the same JSON payload to POST /v1/vcf-management-components, and go grab a coffee!
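
For reference, a hedged PowerShell sketch of that validate-then-deploy flow, assuming an authenticated $headers object (as in the token example in the ELM deactivation section above) and the payload saved locally as fleet-management.json, might look like this. The IN_PROGRESS status value used in the loop, and the id property on the deployment response, are assumptions, so adjust them to what your responses actually report.

$sddcManagerFqdn = "sfo-vcf01.sfo.rainpole.io"
$payload = Get-Content -Raw .\fleet-management.json

# Submit the payload for validation and capture the validation id
$validation = Invoke-RestMethod -Method POST -Uri "https://$sddcManagerFqdn/v1/vcf-management-components/validations" -Headers $headers -ContentType "application/json" -Body $payload

# Poll until the validation finishes, then confirm resultStatus is SUCCEEDED
Do {
    Start-Sleep -Seconds 30
    $validationStatus = Invoke-RestMethod -Method GET -Uri "https://$sddcManagerFqdn/v1/vcf-management-components/validations/$($validation.id)" -Headers $headers
} Until ($validationStatus.resultStatus -ne 'IN_PROGRESS')
$validationStatus.resultStatus

# Submit the same payload for deployment and note the task id in the response (property name assumed)
$deployment = Invoke-RestMethod -Method POST -Uri "https://$sddcManagerFqdn/v1/vcf-management-components" -Headers $headers -ContentType "application/json" -Body $payload
$deployment.id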

Once the deployment completes, you should have a VCF Operations instance to manage your fleet, along with a VCF Automation instance for the consumption layer.

Retrieve VCF Operations Appliance Root Password from the VMware Aria Suite Lifecycle Locker

When you deploy a component using VMware Aria Suite Lifecycle, it stores the credentials in its locker. If you need to SSH to a VCF Operations appliance and you don't know the root password, you can retrieve it from the VMware Aria Suite Lifecycle locker. To do this, query the Aria Suite Lifecycle API for a list of locker entries using basic auth.

GET https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords?from=0&size=10

From the response, locate the corresponding vmid for the VCF Operations appliance:

{
    "vmid": "a789765f-6cfc-497a-8273-9d8bff2684a5",
    "tenant": "default",
    "alias": "VCF-flt-ops01a.rainpole.io-rootUserPassword",
    "password": "PASSWORD****",
    "createdOn": 1737740091124,
    "lastUpdatedOn": 1737740091124,
    "referenced": true
}

Query the Aria Suite Lifecycle locker for the decrypted password, again with basic auth, passing the Aria Suite Lifecycle root password in the payload body.

POST https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords/a789765f-6cfc-497a-8273-9d8bff2684a5/decrypted

# Body (Aria Suite Lifecycle root password)
{
  "rootPassword": "VMw@re1!VMw@re1!"
}

If all goes well, it should return the decrypted password:

{
    "passwordVmid": "a789765f-6cfc-497a-8273-9d8bff2684a5",
    "password": "u!B1U9#Q5L^o2Vqer@6f"
}
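
If you prefer to drive this from PowerShell rather than a REST client, a rough sketch of the two locker calls might look like the following. The admin@local account used for basic auth is an assumption, as is parsing the list response by eye, so double-check both against your environment.

$lcmFqdn = "flt-fm01.rainpole.io"

# Basic auth header for the Aria Suite Lifecycle admin account (admin@local assumed)
$pair = "admin@local:VMw@re1!VMw@re1!"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair)) }

# List locker entries and note the vmid of the VCF Operations root password entry
Invoke-RestMethod -Method GET -Uri "https://$lcmFqdn/lcm/locker/api/v2/passwords?from=0&size=10" -Headers $headers

# Decrypt that entry, passing the Aria Suite Lifecycle root password in the body
$vmid = "a789765f-6cfc-497a-8273-9d8bff2684a5"
$body = @{ rootPassword = "VMw@re1!VMw@re1!" } | ConvertTo-Json
(Invoke-RestMethod -Method POST -Uri "https://$lcmFqdn/lcm/locker/api/v2/passwords/$vmid/decrypted" -Headers $headers -ContentType "application/json" -Body $body).password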

PowerCLI Module For VMware Cloud Foundation: Bringup Using an Existing JSON

This is the 2nd post in a series on the native PowerCLI Module For VMware Cloud Foundation (VCF). If you haven’t seen the previous post, it is available here:

  1. PowerCLI Module For VMware Cloud Foundation: Introduction

This post will focus on the Cloud Builder module to perform a bringup of a VCF instance. For this example, I am using a pre-populated JSON file. I will do a follow-up post on how to create the spec from scratch.

To get started, we need a Cloud Builder connection:

Connect-VcfCloudBuilderServer -Server sfo-cb01.sfo.rainpole.io -User admin -Password VMw@re1!VMw@re1!

If you have a pre-populated JSON spec, you can simply do the following to perform a validation using the Cloud Builder API:

$sddcSpec = (Get-Content -Raw .\sfo-m01-bringup-spec.json)
Invoke-VcfCbValidateBringupSpec -SddcSpec $sddcSpec

And once the validation passes, do the following to start the bringup:

Invoke-VcfCbStartBringup -SddcSpec $sddcSpec

Bringup is a long-running task, but you can monitor the status using something like this:

# Retrieve the bringup task id
$bringupTaskId = (Invoke-VcfCbGetBringupTasks).elements.Id

# Poll the status of the task until it is no longer in progress
Do {
    $bringupTask = Invoke-VcfCbGetBringupTaskByID -id $bringupTaskId
    # Wait before checking again
    Start-Sleep -Seconds 60
}
Until ($bringupTask.Status -ne 'IN_PROGRESS')

PowerCLI Module For VMware Cloud Foundation: Introduction

As you are no doubt aware, I am a fan of PowerShell and PowerCLI. Since my early days working with VMware products, whether it was vCenter, vCloud Director or VMware Cloud Foundation (VCF), I have always leveraged PowerCLI to get the job done. Up until recently, there was no native PowerCLI support for the VMware Cloud Foundation API, which is why I started the open-source PowerVCF project almost 5 years ago! PowerVCF has grown and matured as new maintainers came onboard. Open-source projects are a great way to deliver functionality to our customers that is not yet available in officially supported channels. Since the release of PowerCLI 13.1, I am delighted to say that we now have officially supported, native PowerCLI modules for VMware Cloud Foundation.

Two distinct modules are now part of PowerCLI: one for the Cloud Builder API and one for the SDDC Manager API.

Install-Module -Name VMware.Sdk.Vcf.CloudBuilder
Install-Module -Name VMware.Sdk.Vcf.SddcManager

The cmdlets for each module are too many to list here, but to see what's available once you have them installed, do the following:

Get-Command -Module VMware.Sdk.Vcf.CloudBuilder
Get-Command -Module VMware.Sdk.Vcf.SddcManager

You will see from the output that the cmdlets are primarily broken into two types:

  • Initialize-Vcf<xyz>
    • Used to gather information and generate input specs
  • Invoke-Vcf<xyz>
    • Used to execute the API request with an input spec

Each module also has a connect/disconnect cmdlet, which can be used in the following way:

Connect-VcfCloudBuilderServer -Server sfo-cb01.sfo.rainpole.io -User admin -Password VMw@re1!VMw@re1!

This connection object is then stored in $defaultCloudBuilderConnections

Connect-VcfSddcManagerServer -Server sfo-vcf01.sfo.rainpole.io -User administrator@vsphere.local -Password VMw@re1!VMw@re1!

This connection object is then stored in $defaultSddcManagerConnections

Note: If you are working in a lab environment with untrusted certs you can pass -IgnoreInvalidCertificate to each of the above commands.

Once you have an active connection, you can begin to query the API. The example below returns a list of all hosts from SDDC Manager. One thing you will notice, if you are a PowerVCF user, is that you will need to parse the response a little more than you needed to with the PowerVCF cmdlet Get-VCFHost.

Running Invoke-VcfGetHosts returns a list of host elements, so to get at the host details you need to expand the elements property of the response. If you would like to filter the response to just the hosts from a specific workload domain, you first need the Id of the workload domain (in this case sfo-m01), and you can then request a filtered list of hosts for that domain. The sketch below walks through each of these steps.
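
This is a rough sketch of those steps rather than a definitive example; the cmdlet and parameter names beyond Invoke-VcfGetHosts (Invoke-VcfGetDomains, -DomainId) and the property names (elements, name, id) are assumptions, so verify them against Get-Command and Get-Help output for your module version.

# Return the full list of hosts; the host details are nested under the elements property
(Invoke-VcfGetHosts).elements

# Find the id of the sfo-m01 workload domain (cmdlet and property names assumed)
$domainId = ((Invoke-VcfGetDomains).elements | Where-Object { $_.name -eq "sfo-m01" }).id

# Return only the hosts that belong to that domain (-DomainId parameter assumed)
(Invoke-VcfGetHosts -DomainId $domainId).elements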

Hopefully, this introduction was helpful. I will put together a series of blogs over the next few weeks covering some of the main VCF operations, such as bringup, commissioning hosts, deploying workload domains, etc. As always, comments and feedback are welcome. Please let me know what your experience is with the new modules, and I can feed it back to the engineering team.

Cleanup Failed Credential Tasks in VMware Cloud Foundation

I have covered how to clean up general failed tasks in VMware Cloud Foundation in a previous post. Another type of task that can be in a failed state is a credential rotation operation. Credential operations can fail for a number of reasons (for example, the underlying component being unreachable at the time of the operation), and this type of failed task is a blocking task – i.e. you cannot perform another credential operation until you clean up or cancel the failed one. The script below leverages the PowerVCF cmdlet Get-VCFCredentialTask to discover failed credential tasks and Stop-VCFCredentialTask to clean them up. As with all scripts, please test thoroughly in a lab before using it in production.

# Script to cleanup failed credential tasks in SDDC Manager
# Written by Brian O'Connell - Staff II Solutions Architect @ VMware

# User Variables
# SDDC Manager FQDN. This is the target that is queried for failed tasks
$sddcManagerFQDN = "sfo-vcf01.sfo.rainpole.io"
# SDDC Manager API User. This is the user that is used to query for failed tasks. Must have the SDDC Manager ADMIN role
$sddcManagerAPIUser = "administrator@vsphere.local"
$sddcManagerAPIPassword = "VMw@re1!"

# DO NOT CHANGE ANYTHING BELOW THIS LINE
#########################################
# Set TLS to 1.2 to avoid certificate mismatch errors
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# Install PowerVCF if not already installed
if (!(Get-InstalledModule -Name PowerVCF -MinimumVersion 2.4.0 -ErrorAction SilentlyContinue)) {
    Install-Module -Name PowerVCF -MinimumVersion 2.4.0 -Force
}

# Request a VCF Token using PowerVCF
Request-VCFToken -fqdn $sddcManagerFQDN -username $sddcManagerAPIUser -password $sddcManagerAPIPassword

# Retrieve a list of failed credential task IDs
$failedTaskIDs = @((Get-VCFCredentialTask -status "Failed").id)

# Cleanup the failed tasks
Foreach ($taskID in $failedTaskIDs) {
    Stop-VCFCredentialTask -id $taskID
    # Verify the task was deleted
    Try {
        $verifyTaskDeleted = (Get-VCFCredentialTask -id $taskID)
        if (!$verifyTaskDeleted) {
            Write-Output "Task ID $taskID Deleted Successfully"
        }
    }
    catch {
        Write-Error "Something went wrong. Please check your SDDC Manager state"
    }
}

Install HashiCorp Terraform on a PhotonOS Appliance

HashiCorp Terraform is not currently available in the Photon OS repository. If you would like to install Terraform on a Photon OS appliance, you can use the script below. Note: The versions of Go and Terraform included are current at the time of writing. Thanks to my colleague Ryan Johnson, who shared this method with me some time ago for another project.

#!/usr/bin/env bash

# Versions
GO_VERSION="1.21.4"
TERRAFORM_VERSION="1.6.3"

# Arch
if [[ $(uname -m) == "x86_64" ]]; then
  LINUX_ARCH="amd64"
elif [[ $(uname -m) == "aarch64" ]]; then
  LINUX_ARCH="arm64"
fi

# Directory
if ! [[ -d ~/code ]]; then
  mkdir ~/code
fi

# Go
wget -q -O go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz https://golang.org/dl/go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
tar -C /usr/local -xzf go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
PATH=$PATH:/usr/local/go/bin
go version
rm go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
export GOPATH=${HOME}/code/go

# HashiCorp
wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
rm ./*.zip

Terraform Module for Deploying VMware Cloud Foundation VI Workload Domains

I have been working a lot with Terraform lately, and in particular with the Terraform Provider for VMware Cloud Foundation. As I covered previously, the provider is still in development but is available to be tested and used in your VMware Cloud Foundation instances.

I spent this week at VMware Explore in Barcelona and have been talking with our customers about their automation journey and what tools they are using for configuration management. Terraform came up in almost every conversation, as did the topic of Terraform modules specifically. Terraform modules are essentially a set of standard configuration files that can be reused for consistent, repeatable deployments. In an effort to standardise my VI Workload Domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload Domains.

The module is available on GitHub here and is also published to the Terraform registry here. Below is an example of using the module to deploy a VI Workload domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types etc, all you need to do is pass variable values.

# main.tf

module "vidomain" {
source= "LifeOfBrianOC/vidomain"
version = "0.1.0"

sddc_manager_fqdn     = "sfo-vcf01.sfo.rainpole.io"
sddc_manager_username = "administrator@vsphere.local"
sddc_manager_password = "VMw@re1!"
allow_unverified_tls  = "true"

network_pool_name                     = "sfo-w01-np"
network_pool_storage_gateway          = "172.16.13.1"
network_pool_storage_netmask          = "255.255.255.0"
network_pool_storage_mtu              = "8900"
network_pool_storage_subnet           = "172.16.13.0"
network_pool_storage_type             = "VSAN"
network_pool_storage_vlan_id          = "1633"
network_pool_storage_ip_pool_start_ip = "172.16.13.101"
network_pool_storage_ip_pool_end_ip   = "172.16.13.108"

network_pool_vmotion_gateway          = "172.16.12.1"
network_pool_vmotion_netmask          = "255.255.255.0"
network_pool_vmotion_mtu              = "8900"
network_pool_vmotion_subnet           = "172.16.12.0"
network_pool_vmotion_vlan_id          = "1632"
network_pool_vmotion_ip_pool_start_ip = "172.16.12.101"
network_pool_vmotion_ip_pool_end_ip   = "172.16.12.108"

esx_host_storage_type = "VSAN"
esx_host1_fqdn        = "sfo01-w01-esx01.sfo.rainpole.io"
esx_host1_username    = "root"
esx_host1_pass        = "VMw@re1!"
esx_host2_fqdn        = "sfo01-w01-esx02.sfo.rainpole.io"
esx_host2_username    = "root"
esx_host2_pass        = "VMw@re1!"
esx_host3_fqdn        = "sfo01-w01-esx03.sfo.rainpole.io"
esx_host3_username    = "root"
esx_host3_pass        = "VMw@re1!"
esx_host4_fqdn        = "sfo01-w01-esx04.sfo.rainpole.io"
esx_host4_username    = "root"
esx_host4_pass        = "VMw@re1!"

vcf_domain_name                    = "sfo-w01"
vcf_domain_vcenter_name            = "sfo-w01-vc01"
vcf_domain_vcenter_datacenter_name = "sfo-w01-dc01"
vcenter_root_password              = "VMw@re1!"
vcenter_vm_size                    = "small"
vcenter_storage_size               = "lstorage"
vcenter_ip_address                 = "172.16.11.130"
vcenter_subnet_mask                = "255.255.255.0"
vcenter_gateway                    = "172.16.11.1"
vcenter_fqdn                       = "sfo-w01-vc01.sfo.rainpole.io"
vsphere_cluster_name               = "sfo-w01-cl01"
vds_name                           = "sfo-w01-cl01-vds01"
vsan_datastore_name                = "sfo-w01-cl01-ds-vsan01"
vsan_failures_to_tolerate          = "1"
esx_vmnic0                         = "vmnic0"
vmnic0_vds_name                    = "sfo-w01-cl01-vds01"
esx_vmnic1                         = "vmnic1"
vmnic1_vds_name                    = "sfo-w01-cl01-vds01"
portgroup_management_name          = "sfo-w01-cl01-vds01-pg-mgmt"
portgroup_vsan_name                = "sfo-w01-cl01-vds01-pg-vsan"
portgroup_vmotion_name             = "sfo-w01-cl01-vds01-pg-vmotion"
esx_license_key                    = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
vsan_license_key                   = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"

nsx_vip_ip                    = "172.16.11.131"
nsx_vip_fqdn                  = "sfo-w01-nsx01.sfo.rainpole.io"
nsx_manager_admin_password    = "VMw@re1!VMw@re1!"
nsx_manager_form_factor       = "small"
nsx_license_key               = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
nsx_manager_node1_name        = "sfo-w01-nsx01a"
nsx_manager_node1_ip_address  = "172.16.11.132"
nsx_manager_node1_fqdn        = "sfo-w01-nsx01a.sfo.rainpole.io"
nsx_manager_node1_subnet_mask = "255.255.255.0"
nsx_manager_node1_gateway     = "172.16.11.1"
nsx_manager_node2_name        = "sfo-w01-nsx01b"
nsx_manager_node2_ip_address  = "172.16.11.133"
nsx_manager_node2_fqdn        = "sfo-w01-nsx01b.sfo.rainpole.io"
nsx_manager_node2_subnet_mask = "255.255.255.0"
nsx_manager_node2_gateway     = "172.16.11.1"
nsx_manager_node3_name        = "sfo-w01-nsx01c"
nsx_manager_node3_ip_address  = "172.16.11.134"
nsx_manager_node3_fqdn        = "sfo-w01-nsx01c.sfo.rainpole.io"
nsx_manager_node3_subnet_mask = "255.255.255.0"
nsx_manager_node3_gateway     = "172.16.11.1"
geneve_vlan_id                = "1634"
}

Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First, we initialise the environment, which will pull the required module version:

terraform init

Then create and apply the plan:

terraform plan -out=create-vi-wld
terraform apply create-vi-wld

Deploy VMware Cloud Foundation Cloud Builder using the vSphere Terraform Provider

As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I’ve used this provider in the past to deploy the NSX Manager appliance.

Check out the other posts on Terraform with VMware Cloud Foundation on this blog.

Deploy Cloud Builder with the vSphere Terraform Provider

As before, you first need to define your provider configuration:

# providers.tf
 
terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "2.5.1"
    }
  }
}
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

Then we define our variables:

# variables.tf
 
# vSphere Infrastructure Details
variable "data_center" { default = "sfo-m01-dc01" }
variable "cluster" { default = "sfo-m01-cl01" }
variable "vds" { default = "sfo-m01-vds01" }
variable "datastore" { default = "vsanDatastore" }
variable "compute_pool" { default = "sfo-m01-cl01" }
variable "compute_host" {default = "sfo01-m01-esx01.sfo.rainpole.io"}
variable "vsphere_server" {default = "sfo-m01-vc01.sfo.rainpole.io"}
 
# vCenter Credential Variables
variable "vsphere_user" {}
variable "vsphere_password" {}
 
# Cloud Builder Deployment
variable "mgmt_pg" { default = "sfo-m01-vds01-pg-mgmt" }
variable "vm_name" { default = "sfo-cb01" }
variable "local_ovf_path" { default = "F:\\binaries\\VMware-Cloud-Builder-4.5.2.0-22223457_OVF10.ova" }
variable "ip0" { default = "172.16.225.66" }
variable "netmask0" { default = "255.255.255.0" }
variable "gateway" { default = "172.16.225.1" }
variable "dns" { default = "172.16.225.4" }
variable "domain" { default = "sfo.rainpole.io" }
variable "ntp" { default = "ntp.sfo.rainpole.io" }
variable "searchpath" { default = "sfo.rainpole.io" }
variable "ADMIN_PASSWORD" { default = "VMw@re1!" }
variable "ROOT_PASSWORD" { default = "VMw@re1!" }
variable "hostname" { default = "sfo-cb01.sfo.rainpole.io" }

Note that the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo.

# terraform.tfvars
 
# vSphere Provider Credentials
vsphere_user     = "administrator@vsphere.local"
vsphere_password = "VMw@re1!"

Now that we have all of our variables defined, we can create our main.tf to perform the deployment. As part of this, we first need to gather some data from the target vCenter Server so we know where to deploy the appliance.

# main.tf
 
# Data source for vCenter Datacenter
data "vsphere_datacenter" "datacenter" {
  name = var.data_center
}
 
# Data source for vCenter Cluster
data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Datastore
data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Portgroup
data "vsphere_network" "mgmt" {
  name          = var.mgmt_pg
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Resource Pool. In our case we will use the root resource pool
data "vsphere_resource_pool" "pool" {
  name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for ESXi host to deploy to
data "vsphere_host" "host" {
  name          = var.compute_host
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for the OVF to read the required OVF Properties
data "vsphere_ovf_vm_template" "ovfLocal" {
  name             = var.vm_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  host_system_id   = data.vsphere_host.host.id
  local_ovf_path   = var.local_ovf_path
  ovf_network_map = {
    "Network 1" = data.vsphere_network.mgmt.id
  }
}
 
# Deployment of VM from Local OVA
resource "vsphere_virtual_machine" "cb01" {
  name                 = var.vm_name
  datacenter_id        = data.vsphere_datacenter.datacenter.id
  datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
  host_system_id       = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
  resource_pool_id     = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
  num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
  num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
  memory               = data.vsphere_ovf_vm_template.ovfLocal.memory
  guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id
  scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type
 
  wait_for_guest_net_timeout = 5
 
  ovf_deploy {
    allow_unverified_ssl_cert = true
    local_ovf_path            = var.local_ovf_path
    disk_provisioning         = "thin"
    ovf_network_map   = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
 
  }
  vapp {
    properties = {
      "ip0"               = var.ip0,
      "netmask0"          = var.netmask0,
      "gateway"          = var.gateway,
      "dns"             = var.dns,
      "domain"           = var.domain,
      "ntp"              = var.ntp,
      "searchpath"       = var.searchpath,
      "ADMIN_USERNAME"  = "admin",
      "ADMIN_PASSWORD"           = var.ADMIN_PASSWORD,
      "ROOT_PASSWORD"       = var.ROOT_PASSWORD,
      "hostname"           = var.hostname
    }
  }
  lifecycle {
    ignore_changes = [
      #vapp # Enable this to ignore all vapp properties if the plan is re-run
      vapp[0].properties["ADMIN_PASSWORD"],
      vapp[0].properties["ROOT_PASSWORD"],
      host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
    ]
  }
}

Now we can run the following to initialise Terraform and the required vSphere provider:

terraform init 

Once the provider is initialised, we can then create a Terraform plan to ensure our configuration is valid.

terraform plan -out=DeployCB

Now that we have a valid configuration, we can apply our plan to deploy the Cloud Builder appliance.

terraform apply DeployCB

VMware Cloud Foundation Terraform Provider: Create a New VCF Instance

Following on from my VMware Cloud Foundation Terraform Provider introduction post here, I wanted to start by using it to create a new VCF instance (or perform a VCF bring-up).

As of writing this post I am using version 0.5.0 of the provider.

First off, we need to define some variables to be used in our plan. Here is a copy of the variables.tf I am using. For reference, I am using the default values from the VCF Planning & Preparation Workbook for my configuration. Note "sensitive = true" on the password and licence key variables to stop them from showing up on the console and in logs.

variable "cloud_builder_username" {
  description = "Username to authenticate to CloudBuilder"
  default = "admin"
}

variable "cloud_builder_password" {
  description = "Password to authenticate to CloudBuilder"
  default = "VMw@re1!"
  sensitive = true
}

variable "cloud_builder_host" {
  description = "Fully qualified domain name or IP address of the CloudBuilder"
  default = "sfo-cb01.sfo.rainpole.io"
}

variable "sddc_manager_root_user_password" {
  description = "Root user password for the SDDC Manager VM. Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length"
  default = "VMw@re1!"
  sensitive = true
}

variable "sddc_manager_secondary_user_password" {
  description = "Second user (vcf) password for the SDDC Manager VM.  Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length."
  default = "VMw@re1!"
  sensitive = true
}

variable "vcenter_root_password" {
  description = "root password for the vCenter Server Appliance (8-20 characters)"
  default = "VMw@re1!"
  sensitive = true
}

variable "nsx_manager_admin_password" {
  description = "NSX admin password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "nsx_manager_audit_password" {
  description = "NSX audit password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "nsx_manager_root_password" {
  description = " NSX Manager root password. Password should have 1) At least eight characters, 2) At least one lower-case letter, 3) At least one upper-case letter 4) At least one digit 5) At least one special character, 6) At least five different characters , 7) No dictionary words, 6) No palindromes"
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "esx_host1_pass" {
  description = "Password to authenticate to the ESXi host 1"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host2_pass" {
  description = "Password to authenticate to the ESXi host 2"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host3_pass" {
  description = "Password to authenticate to the ESXi host 3"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host4_pass" {
  description = "Password to authenticate to the ESXi host 4"
  default = "VMw@re1!"
  sensitive = true
}

variable "nsx_license_key" {
  description = "NSX license to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "vcenter_license_key" {
  description = "vCenter license to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "vsan_license_key" {
  description = "vSAN license key to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "esx_license_key" {
  description = "ESXi license key to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

Next, we need our main.tf file that contains what we want to do – in this case, perform a VCF bring-up. For now, I'm using a mix of variables from the above variables.tf file and hard-coded values in my main.tf to achieve my goal. I will follow up with some better practices in a later post.

terraform {
  required_providers {
    vcf = {
      source = "vmware/vcf"
    }
  }
}
provider "vcf" {
  cloud_builder_host = var.cloud_builder_host
  cloud_builder_username = var.cloud_builder_username
  cloud_builder_password = var.cloud_builder_password
  allow_unverified_tls = true
}

resource "vcf_instance" "sddc_1" {
  instance_id = "sfo-m01"
  dv_switch_version = "7.0.3"
  skip_esx_thumbprint_validation = true
  management_pool_name = "sfo-m01-np"
  ceip_enabled = false
  esx_license = var.esx_license_key
  task_name = "workflowconfig/workflowspec-ems.json"
  sddc_manager {
    ip_address = "172.16.11.59"
    hostname = "sfo-vcf01"
    root_user_credentials {
      username = "root"
      password = var.sddc_manager_root_user_password
    }
    second_user_credentials {
      username = "vcf"
      password = var.sddc_manager_secondary_user_password
    }
  }
  ntp_servers = [
    "172.16.11.4"
  ]
  dns {
    domain = "sfo.rainpole.io"
    name_server = "172.16.11.4"
    secondary_name_server = "172.16.11.5"
  }
  network {
    subnet = "172.16.11.0/24"
    vlan_id = "1611"
    mtu = "1500"
    network_type = "MANAGEMENT"
    gateway = "172.16.11.1"
  }
  network {
    subnet = "172.16.13.0/24"
    include_ip_address_ranges {
      start_ip_address = "172.16.13.101"
      end_ip_address = "172.16.13.108"
    }
    vlan_id = "1613"
    mtu = "8900"
    network_type = "VSAN"
    gateway = "172.16.13.1"
  }
  network {
    subnet = "172.16.12.0/24"
    include_ip_address_ranges {
      start_ip_address = "172.16.12.101"
      end_ip_address = "172.16.12.104"
    }
    vlan_id = "1612"
    mtu = "8900"
    network_type = "VMOTION"
    gateway = "172.16.12.1"
  }
  nsx {
    nsx_manager_size = "medium"
    nsx_manager {
      hostname = "sfo-m01-nsx01a"
      ip = "172.16.11.72"
    }
    root_nsx_manager_password = var.nsx_manager_root_password
    nsx_admin_password = var.nsx_manager_admin_password
    nsx_audit_password = var.nsx_manager_audit_password
    overlay_transport_zone {
      zone_name = "sfo-m01-overlay-tz"
      network_name = "sfo-m01-overlay"
    }
    vip = "172.16.11.71"
    vip_fqdn = "sfo-m01-nsx01"
    license = var.nsx_license_key
    transport_vlan_id = 1614
  }
  vsan {
    license = var.vsan_license_key
    datastore_name = "sfo-m01-vsan"
  }
  dvs {
    mtu = 8900
    nioc {
      traffic_type = "VSAN"
      value = "HIGH"
    }
    nioc {
      traffic_type = "VMOTION"
      value = "LOW"
    }
    nioc {
      traffic_type = "VDP"
      value = "LOW"
    }
    nioc {
      traffic_type = "VIRTUALMACHINE"
      value = "HIGH"
    }
    nioc {
      traffic_type = "MANAGEMENT"
      value = "NORMAL"
    }
    nioc {
      traffic_type = "NFS"
      value = "LOW"
    }
    nioc {
      traffic_type = "HBR"
      value = "LOW"
    }
    nioc {
      traffic_type = "FAULTTOLERANCE"
      value = "LOW"
    }
    nioc {
      traffic_type = "ISCSI"
      value = "LOW"
    }
    dvs_name = "SDDC-Dswitch-Private"
    vmnics = [
      "vmnic0",
      "vmnic1"
    ]
    networks = [
      "MANAGEMENT",
      "VSAN",
      "VMOTION"
    ]
  }
  cluster {
    cluster_name = "sfo-m01-cl01"
    cluster_evc_mode = ""
    resource_pool {
      name = "Mgmt-ResourcePool"
      type = "management"
    }
    resource_pool {
      name = "Network-ResourcePool"
      type = "network"
    }
    resource_pool {
      name = "Compute-ResourcePool"
      type = "compute"
    }
    resource_pool {
      name = "User-RP"
      type = "compute"
    }
  }
  psc {
    psc_sso_domain = "vsphere.local"
    admin_user_sso_password = "VMw@re1!"
  }
  vcenter {
    vcenter_ip = "172.16.11.70"
    vcenter_hostname = "sfo-m01-vc01"
    license = var.vcenter_license_key
    root_vcenter_password = var.vcenter_root_password
    vm_size = "tiny"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.101"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx01"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.102"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx02"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.103"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx03"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.104"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx04"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
}

Once the above is defined, you can run the following to create your Terraform plan:

terraform init
terraform plan -out=vcf-bringup

Once there are no errors from the above plan command, you can run the following to start the VCF bring-up:

terraform apply .\vcf-bringup

All going well, this should result in a successful VMware Cloud Foundation bring-up.
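
The apply will run for quite a while, so if you want more visibility than the Terraform output gives you, you can also watch the bring-up task directly from the Cloud Builder API, for example with the PowerCLI Cloud Builder module covered in the PowerCLI Module for VMware Cloud Foundation posts above. This is just an optional check; the Id and Status properties come from that earlier example, and you can add -IgnoreInvalidCertificate if your lab uses self-signed certificates.

Connect-VcfCloudBuilderServer -Server sfo-cb01.sfo.rainpole.io -User admin -Password VMw@re1!

# Retrieve the bring-up task and check its status while Terraform applies
(Invoke-VcfCbGetBringupTasks).elements | Select-Object Id, Status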