Quick Tip: Retrieve vSphere Supervisor Control Plane SSH Password

SSH access to the Supervisor Control Plane VMs uses an auto-generated password. The password for the system user on these VMs must be retrieved from the associated vCenter Server.

IMPORTANT: Direct access to the vSphere Supervisor Control Plane VMs should be used with caution and for troubleshooting purposes only.

SSH to the vCenter Server as the root user, and enter the Bash shell by typing shell.

Run the following to retrieve the password:

root@sfo-w01-vc01 [ ~ ]# /usr/lib/vmware-wcp/decryptK8Pwd.py
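
The script prints the Supervisor cluster ID, an IP, and the decrypted password. The output will look something like the following (all values below are placeholders):

Cluster: domain-c8:00000000-0000-0000-0000-000000000000
IP: 172.16.10.10
PWD: s0Me-gener@ted-pwd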

You should now be able to SSH to the IP of the vSphere Supervisor Control Plane VM (the IP returned is the floating IP, or FIP) with the username root.

Upgrading VCF 5.2 to 9.0 – Part 4 – Upgrade NSX

Once your upgrade binaries are downloaded, the next step is to upgrade NSX. Navigate again to Workload Domains > Management Workload Domain > Updates, click Run Precheck, and ensure all prechecks pass.

Once the pre-check passes, click Configure Update.

On the Introduction page, click Next.

On the NSX Edge Clusters pane, you can choose to upgrade all NSX Edge Clusters, or select specific NSX Edge Clusters to upgrade. In my case, I only have one NSX Edge Cluster. Click Next.

On the Upgrade Options pane, you have the option to Enable sequential upgrade of NSX Edge clusters. Click Next.

On the Review pane, review the choices made and click Run Precheck.

Although it is called a precheck, this step also copies the upgrade bundle over to NSX Manager. During the copy, progress will sit at 66% completed for a while, so don't panic.

Once it completes, review any errors & warnings before proceeding, and click Back to Updates.

Click Schedule Update.

On the Review pane, click Next.

On the Schedule Update pane, select either Upgrade Now or Schedule Update to choose a future start date & time, check the box “I have reviewed the precheck report and have verified that the update is safe to apply”, and click Finish.

To monitor the status, click View Status.

Once the NSX upgrade completes, you can move on with the next step of upgrading vCenter.

Quick Tip: No products found in Aria Lifecycle Manager with VCF 5.2.1

VCF 5.2.1 ships with Aria Lifecycle Manager 8.18. When you attempt to deploy an environment, you will be met with the following error:

No content found corresponding to SDDC Manager version 5.2.1 This could be due to version incompatibility between VMware Aria Suite Lifecycle and SDDC Manager.

The reason is that you need a product support pack (pspak) for Aria LCM 8.18 – specifically, VMware Aria Suite Lifecycle 8.18.0 Product Support Pack 3. See this KB for details on which product support pack maps to which release.

Download the pack from the Broadcom support site and log into Aria LCM. Navigate to Lifecycle Operations > Settings > Product Support Pack and click Upload.

Take a snapshot of Aria LCM, then click Select file, select the product support pack, and click Import.

Monitor the upload process in the Requests pane. Once the upload completes, navigate back to the Product Support Pack screen, where the support pack will now be shown. Click Apply Version & Submit. Aria LCM will restart its services during the install.

Once the install completes, you should now see the list of available products when creating an environment.

Retrieve VCF Operations Appliance Root Password from the VMware Aria Suite Lifecycle Locker

When you deploy a component using VMware Aria Suite Lifecycle, it stores the credentials in its locker. If you need to SSH to a VCF Operations appliance and you don't know the root password, you can retrieve it from the VMware Aria Suite Lifecycle locker. To do this, query the Aria Suite Lifecycle API for a list of locker entries using basic auth:

GET https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords?from=0&size=10
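
For example, with curl (a minimal sketch; this assumes the default admin@local account, and the FQDN is from my lab):

curl -k -u 'admin@local:VMw@re1!' \
  'https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords?from=0&size=10'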

From the response, locate the vmid corresponding to the VCF Operations appliance:

{
    "vmid": "a789765f-6cfc-497a-8273-9d8bff2684a5",
    "tenant": "default",
    "alias": "VCF-flt-ops01a.rainpole.io-rootUserPassword",
    "password": "PASSWORD****",
    "createdOn": 1737740091124,
    "lastUpdatedOn": 1737740091124,
    "referenced": true
}

Next, query the Aria Suite Lifecycle locker for the decrypted password, again with basic auth, passing the Aria Suite Lifecycle root password in the request body:

POST https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords/a789765f-6cfc-497a-8273-9d8bff2684a5/decrypted

# BODY (Aria Suite Lifecycle root password)
{
  "rootPassword": "VMw@re1!VMw@re1!"
}
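
The same call as a curl sketch (again assuming the admin@local account, with the vmid returned above):

curl -k -u 'admin@local:VMw@re1!' -X POST \
  -H 'Content-Type: application/json' \
  -d '{"rootPassword": "VMw@re1!VMw@re1!"}' \
  'https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords/a789765f-6cfc-497a-8273-9d8bff2684a5/decrypted'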

If all goes well, it should return the decrypted password:

{
    "passwordVmid": "a789765f-6cfc-497a-8273-9d8bff2684a5",
    "password": "u!B1U9#Q5L^o2Vqer@6f"
}

Quick Tip: Renew SDDC Manager VMCA Certificate

I got a question from someone internally asking whether it is possible to renew the VMCA-signed certificate on SDDC Manager in a VCF instance. For context, out of the box the SDDC Manager certificate is signed by the VMCA on the management domain vCenter Server, but there is no supported way to renew that certificate. So before the VMCA certificate expires, you must replace it with a certificate signed by your internal CA or by an external third-party CA.

That said, it is possible to leverage VMCA to renew the cert on SDDC Manager. Here are some notes I had from doing this previously in the lab.

Disclaimer: This is not officially supported by VMware/Broadcom, use at your own risk.

First, generate a CSR for SDDC Manager in the normal way using the SDDC Manager UI.

Download the CSR as sfo-vcf01.sfo.rainpole.io.csr.

SSH to the management vCenter Server and do the following:

    mkdir /tmp/certs
    # upload the CSR (sfo-vcf01.sfo.rainpole.io.csr) to /tmp/certs
    cd /tmp/certs
    vi /tmp/certs/cert.cfg

    # cert.cfg contents, replacing the FQDN appropriately
    [ req ]
    req_extensions = v3_req

    [ v3_req ]
    extendedKeyUsage = serverAuth, clientAuth
    authorityKeyIdentifier=keyid,issuer
    authorityInfoAccess = caIssuers;URI:https://sfo-m01-vc01.sfo.rainpole.io/afd/vecs/ca

    # save /tmp/certs/cert.cfg

    On the management vCenter Server, generate the cert

    openssl x509 -req -days 365 -in sfo-vcf01.sfo.rainpole.io.csr -out sfo-vcf01.sfo.rainpole.io.crt -CA /var/lib/vmware/vmca/root.cer -CAkey /var/lib/vmware/vmca/privatekey.pem -extensions v3_req -CAcreateserial -extfile cert.cfg
    

    Create a certificate chain

    cat sfo-vcf01.sfo.rainpole.io.crt>>sfo-vcf01.sfo.rainpole.io.chain.pem
    cat /var/lib/vmware/vmca/root.cer>>sfo-vcf01.sfo.rainpole.io.chain.pem
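
    As an optional sanity check, you can verify the signed certificate against the VMCA root before installing it:

    openssl verify -CAfile /var/lib/vmware/vmca/root.cer sfo-vcf01.sfo.rainpole.io.crt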
    

    SSH to SDDC Manager to install the cert

    su
    cp /etc/ssl/private/vcf_https.key /etc/ssl/private/old_vcf_https.key
    mv /var/opt/vmware/vcf/commonsvcs/workdir/vcf_https.key /etc/ssl/private/vcf_https.key
    cp /etc/ssl/certs/vcf_https.crt /etc/ssl/certs/old_vcf_https.crt
    rm /etc/ssl/certs/vcf_https.crt

    # SCP sfo-vcf01.sfo.rainpole.io.chain.pem to /etc/ssl/certs/, then:

    mv /etc/ssl/certs/sfo-vcf01.sfo.rainpole.io.chain.pem /etc/ssl/certs/vcf_https.crt
    chmod 644 /etc/ssl/certs/vcf_https.crt
    chmod 640 /etc/ssl/private/vcf_https.key
    nginx -t && systemctl reload nginx

You should now have renewed your VMCA-signed certificate on SDDC Manager.
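
To confirm, you can check the certificate now being served and its new validity dates with standard OpenSSL tooling:

    openssl s_client -connect sfo-vcf01.sfo.rainpole.io:443 </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates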

Adding LDAP Users to vSphere SSO Groups Using PowerShell

I got a query from a customer asking how to add a user from an LDAP directory to an SSO group programmatically. As far as I am aware there is no support for this in native PowerCLI, but there is an open-source module called VMware.vSphere.SsoAdmin which can be used to achieve the goal. I checked with my colleague Gary Blake, and he had an example in the Power Validated Solutions module that I was able to reference.

First off, you need to install the VMware.vSphere.SsoAdmin module. This can be done from the PowerShell Gallery:

    Install-Module VMware.vSphere.SsoAdmin

Once it is installed, you can run the following to add an LDAP user to an SSO group:

    $vcFqdn = 'sfo-m01-vc01.sfo.rainpole.io'
    $vcUser = 'administrator@vsphere.local'
    $vcPassword = 'VMw@re1!'
    $ldapDomain = 'sfo.rainpole.io'
    $ldapUser = 'ldap_user'
    $ssoDomain = 'vsphere.local'
    $ssoGroup = 'administrators'
    
    $ssoConnection = Connect-SsoAdminServer -Server $vcFqdn -User $vcUser -Password $vcPassword -SkipCertificateCheck
    $targetGroup = Get-SsoGroup -Domain $ssoDomain -Name $ssoGroup -Server $ssoConnection
    $ldapUserToAdd = Get-SsoPersonUser -Domain $ldapDomain -Name $ldapUser -Server $ssoConnection
    $ldapUserToAdd | Add-UserToSsoGroup -TargetGroup $targetGroup

Running the code above results in the LDAP user being added to the SSO administrators group.

Deploy VMware Cloud Foundation Cloud Builder using the vSphere Terraform Provider

As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I’ve used this provider in the past to deploy the NSX Manager appliance.

Check out the other posts on Terraform with VMware Cloud Foundation here.

Deploy Cloud Builder with the vSphere Terraform Provider

As before, you first need to define your provider configuration:

    # providers.tf
     
    terraform {
      required_providers {
        vsphere = {
          source  = "hashicorp/vsphere"
          version = "2.5.1"
        }
      }
    }
    provider "vsphere" {
      user                 = var.vsphere_user
      password             = var.vsphere_password
      vsphere_server       = var.vsphere_server
      allow_unverified_ssl = true
    }

Then we define our variables:

    # variables.tf
     
    # vSphere Infrastructure Details
    variable "data_center" { default = "sfo-m01-dc01" }
    variable "cluster" { default = "sfo-m01-cl01" }
    variable "vds" { default = "sfo-m01-vds01" }
    variable "datastore" { default = "vsanDatastore" }
    variable "compute_pool" { default = "sfo-m01-cl01" }
    variable "compute_host" {default = "sfo01-m01-esx01.sfo.rainpole.io"}
    variable "vsphere_server" {default = "sfo-m01-vc01.sfo.rainpole.io"}
     
    # vCenter Credential Variables
    variable "vsphere_user" {}
    variable "vsphere_password" {}
     
    # Cloud Builder Deployment
    variable "mgmt_pg" { default = "sfo-m01-vds01-pg-mgmt" }
    variable "vm_name" { default = "sfo-cb01" }
    variable "local_ovf_path" { default = "F:\\binaries\\VMware-Cloud-Builder-4.5.2.0-22223457_OVF10.ova" }
    variable "ip0" { default = "172.16.225.66" }
    variable "netmask0" { default = "255.255.255.0" }
    variable "gateway" { default = "172.16.225.1" }
    variable "dns" { default = "172.16.225.4" }
    variable "domain" { default = "sfo.rainpole.io" }
    variable "ntp" { default = "ntp.sfo.rainpole.io" }
    variable "searchpath" { default = "sfo.rainpole.io" }
    variable "ADMIN_PASSWORD" { default = "VMw@re1!" }
    variable "ROOT_PASSWORD" { default = "VMw@re1!" }
    variable "hostname" { default = "sfo-cb01.sfo.rainpole.io" }

Note that the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo.

    # terraform.tfvars
     
    # vSphere Provider Credentials
    vsphere_user     = "administrator@vsphere.local"
    vsphere_password = "VMw@re1!"
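
As an alternative to a tfvars file, Terraform also reads any environment variable prefixed with TF_VAR_ as a variable value, which keeps credentials out of files entirely. A shell sketch (variable names match the variables.tf above):

    export TF_VAR_vsphere_user='administrator@vsphere.local'
    export TF_VAR_vsphere_password='VMw@re1!'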

Now that we have all of our variables defined, we can define our main.tf to perform the deployment. As part of this, we first need to gather some data from the target vCenter Server so we know where to deploy the appliance.

    # main.tf
     
    # Data source for vCenter Datacenter
    data "vsphere_datacenter" "datacenter" {
      name = var.data_center
    }
     
    # Data source for vCenter Cluster
    data "vsphere_compute_cluster" "cluster" {
      name          = var.cluster
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for vCenter Datastore
    data "vsphere_datastore" "datastore" {
      name          = var.datastore
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for vCenter Portgroup
    data "vsphere_network" "mgmt" {
      name          = var.mgmt_pg
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for vCenter Resource Pool. In our case we will use the root resource pool
    data "vsphere_resource_pool" "pool" {
      name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for ESXi host to deploy to
    data "vsphere_host" "host" {
      name          = var.compute_host
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for the OVF to read the required OVF Properties
    data "vsphere_ovf_vm_template" "ovfLocal" {
      name             = var.vm_name
      resource_pool_id = data.vsphere_resource_pool.pool.id
      datastore_id     = data.vsphere_datastore.datastore.id
      host_system_id   = data.vsphere_host.host.id
      local_ovf_path   = var.local_ovf_path
      ovf_network_map = {
        "Network 1" = data.vsphere_network.mgmt.id
      }
    }
     
    # Deployment of VM from Local OVA
    resource "vsphere_virtual_machine" "cb01" {
      name                 = var.vm_name
      datacenter_id        = data.vsphere_datacenter.datacenter.id
      datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
      host_system_id       = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
      resource_pool_id     = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
      num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
      num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
      memory               = data.vsphere_ovf_vm_template.ovfLocal.memory
      guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id
      scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type
     
      wait_for_guest_net_timeout = 5
     
      ovf_deploy {
        allow_unverified_ssl_cert = true
        local_ovf_path            = var.local_ovf_path
        disk_provisioning         = "thin"
        ovf_network_map   = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
     
      }
      vapp {
        properties = {
          "ip0"            = var.ip0,
          "netmask0"       = var.netmask0,
          "gateway"        = var.gateway,
          "dns"            = var.dns,
          "domain"         = var.domain,
          "ntp"            = var.ntp,
          "searchpath"     = var.searchpath,
          "ADMIN_USERNAME" = "admin",
          "ADMIN_PASSWORD" = var.ADMIN_PASSWORD,
          "ROOT_PASSWORD"  = var.ROOT_PASSWORD,
          "hostname"       = var.hostname
        }
      }
      lifecycle {
        ignore_changes = [
          #vapp # Enable this to ignore all vapp properties if the plan is re-run
          vapp[0].properties["ADMIN_PASSWORD"],
          vapp[0].properties["ROOT_PASSWORD"],
          host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
        ]
      }
    }

Now we can run the following to initialise Terraform and the required vSphere provider:

    terraform init 

Once the provider is initialised, we can create a Terraform plan to ensure our configuration is valid:

    terraform plan -out=DeployCB

Now that we have a valid configuration, we can apply our plan to deploy the Cloud Builder appliance:

    terraform apply DeployCB
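
Once the apply completes, you can confirm Terraform is now tracking the appliance by inspecting the state (standard Terraform commands):

    terraform state list
    terraform state show vsphere_virtual_machine.cb01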

VMware Cloud Foundation Terraform Provider: Create a New VCF Instance

Following on from my VMware Cloud Foundation Terraform Provider introduction post here, I wanted to start by using it to create a new VCF instance (or perform a VCF bring-up).

As of writing this post, I am using version 0.5.0 of the provider.

First off, we need to define some variables to be used in our plan. Here is a copy of the variables.tf I am using. For reference, I am using the default values in the VCF Planning & Preparation Workbook for my configuration. Note “sensitive = true” on the password and licence key variables to stop them from showing up on the console and in logs.

    variable "cloud_builder_username" {
      description = "Username to authenticate to CloudBuilder"
      default = "admin"
    }
    
    variable "cloud_builder_password" {
      description = "Password to authenticate to CloudBuilder"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "cloud_builder_host" {
      description = "Fully qualified domain name or IP address of the CloudBuilder"
      default = "sfo-cb01.sfo.rainpole.io"
    }
    
    variable "sddc_manager_root_user_password" {
      description = "Root user password for the SDDC Manager VM. Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "sddc_manager_secondary_user_password" {
      description = "Second user (vcf) password for the SDDC Manager VM.  Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length."
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "vcenter_root_password" {
      description = "root password for the vCenter Server Appliance (8-20 characters)"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_manager_admin_password" {
      description = "NSX admin password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
      default = "VMw@re1!VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_manager_audit_password" {
      description = "NSX audit password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
      default = "VMw@re1!VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_manager_root_password" {
      description = " NSX Manager root password. Password should have 1) At least eight characters, 2) At least one lower-case letter, 3) At least one upper-case letter 4) At least one digit 5) At least one special character, 6) At least five different characters , 7) No dictionary words, 6) No palindromes"
      default = "VMw@re1!VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host1_pass" {
      description = "Password to authenticate to the ESXi host 1"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host2_pass" {
      description = "Password to authenticate to the ESXi host 2"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host3_pass" {
      description = "Password to authenticate to the ESXi host 3"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host4_pass" {
      description = "Password to authenticate to the ESXi host 4"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_license_key" {
      description = "NSX license to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }
    
    variable "vcenter_license_key" {
      description = "vCenter license to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }
    
    variable "vsan_license_key" {
      description = "vSAN license key to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }
    
    variable "esx_license_key" {
      description = "ESXi license key to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }

Next, we need a main.tf file that contains what we want to do: in this case, perform a VCF bring-up. For now, I’m using a mix of variables from the above variables.tf file and hard-coded values in my main.tf to achieve my goal. I will follow up with some better practices in a later post.

    terraform {
      required_providers {
        vcf = {
          source = "vmware/vcf"
        }
      }
    }
    provider "vcf" {
      cloud_builder_host = var.cloud_builder_host
      cloud_builder_username = var.cloud_builder_username
      cloud_builder_password = var.cloud_builder_password
      allow_unverified_tls = true
    }
    
    resource "vcf_instance" "sddc_1" {
      instance_id = "sfo-m01"
      dv_switch_version = "7.0.3"
      skip_esx_thumbprint_validation = true
      management_pool_name = "sfo-m01-np"
      ceip_enabled = false
      esx_license = var.esx_license_key
      task_name = "workflowconfig/workflowspec-ems.json"
      sddc_manager {
        ip_address = "172.16.11.59"
        hostname = "sfo-vcf01"
        root_user_credentials {
          username = "root"
          password = var.sddc_manager_root_user_password
        }
        second_user_credentials {
          username = "vcf"
          password = var.sddc_manager_secondary_user_password
        }
      }
      ntp_servers = [
        "172.16.11.4"
      ]
      dns {
        domain = "sfo.rainpole.io"
        name_server = "172.16.11.4"
        secondary_name_server = "172.16.11.5"
      }
      network {
        subnet = "172.16.11.0/24"
        vlan_id = "1611"
        mtu = "1500"
        network_type = "MANAGEMENT"
        gateway = "172.16.11.1"
      }
      network {
        subnet = "172.16.13.0/24"
        include_ip_address_ranges {
          start_ip_address = "172.16.13.101"
          end_ip_address = "172.16.13.108"
        }
        vlan_id = "1613"
        mtu = "8900"
        network_type = "VSAN"
        gateway = "172.16.13.1"
      }
      network {
        subnet = "172.16.12.0/24"
        include_ip_address_ranges {
          start_ip_address = "172.16.12.101"
          end_ip_address = "172.16.12.104"
        }
        vlan_id = "1612"
        mtu = "8900"
        network_type = "VMOTION"
        gateway = "172.16.12.1"
      }
      nsx {
        nsx_manager_size = "medium"
        nsx_manager {
          hostname = "sfo-m01-nsx01a"
          ip = "172.16.11.72"
        }
        root_nsx_manager_password = var.nsx_manager_root_password
        nsx_admin_password = var.nsx_manager_admin_password
        nsx_audit_password = var.nsx_manager_audit_password
        overlay_transport_zone {
          zone_name = "sfo-m01-overlay-tz"
          network_name = "sfo-m01-overlay"
        }
        vip = "172.16.11.71"
        vip_fqdn = "sfo-m01-nsx01"
        license = var.nsx_license_key
        transport_vlan_id = 1614
      }
      vsan {
        license = var.vsan_license_key
        datastore_name = "sfo-m01-vsan"
      }
      dvs {
        mtu = 8900
        nioc {
          traffic_type = "VSAN"
          value = "HIGH"
        }
        nioc {
          traffic_type = "VMOTION"
          value = "LOW"
        }
        nioc {
          traffic_type = "VDP"
          value = "LOW"
        }
        nioc {
          traffic_type = "VIRTUALMACHINE"
          value = "HIGH"
        }
        nioc {
          traffic_type = "MANAGEMENT"
          value = "NORMAL"
        }
        nioc {
          traffic_type = "NFS"
          value = "LOW"
        }
        nioc {
          traffic_type = "HBR"
          value = "LOW"
        }
        nioc {
          traffic_type = "FAULTTOLERANCE"
          value = "LOW"
        }
        nioc {
          traffic_type = "ISCSI"
          value = "LOW"
        }
        dvs_name = "SDDC-Dswitch-Private"
        vmnics = [
          "vmnic0",
          "vmnic1"
        ]
        networks = [
          "MANAGEMENT",
          "VSAN",
          "VMOTION"
        ]
      }
      cluster {
        cluster_name = "sfo-m01-cl01"
        cluster_evc_mode = ""
        resource_pool {
          name = "Mgmt-ResourcePool"
          type = "management"
        }
        resource_pool {
          name = "Network-ResourcePool"
          type = "network"
        }
        resource_pool {
          name = "Compute-ResourcePool"
          type = "compute"
        }
        resource_pool {
          name = "User-RP"
          type = "compute"
        }
      }
      psc {
        psc_sso_domain = "vsphere.local"
        admin_user_sso_password = "VMw@re1!"
      }
      vcenter {
        vcenter_ip = "172.16.11.70"
        vcenter_hostname = "sfo-m01-vc01"
        license = var.vcenter_license_key
        root_vcenter_password = var.vcenter_root_password
        vm_size = "tiny"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.101"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx01"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.102"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx02"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.103"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx03"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.104"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx04"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
    }

Once the above is defined, you can run the following to create your Terraform plan:

    terraform init
    terraform plan -out=vcf-bringup

Once the plan command completes without errors, you can run the following to start the VCF bring-up:

    terraform apply .\vcf-bringup

All going well, this should result in a successful VMware Cloud Foundation bring-up.

VMware Cloud Foundation Terraform Provider: Introduction

HashiCorp Terraform has become an industry-standard infrastructure-as-code and desired-state configuration tool for managing on-premises and cloud-based entities. If you are not familiar with Terraform, I’ve covered some early general learnings in posts here & here. The internal engineering team is working on a Terraform provider for VCF, so I decided to give it a spin to review its capabilities and test drive it in the lab.

First off, here are the VCF operations the provider is capable of supporting today:

• Deploying a new VCF instance (bring-up)
• Commissioning hosts
• Creating network pools
• Deploying a new VI Workload Domain
• Creating clusters
• Expanding clusters
• Adding users

New functionality is being added every week, and as with all new initiatives like this, customer consumption and adoption will drive innovation and progress.

The GitHub repo contains some great example files to get you started. I am going to do a few blog posts on what I’ve learned so far, but for now, here are the important links you need if you would like to take a look at the provider.

If you want to get started by using the examples, take a look here.