QuickTip: Renew SDDC Manager VMCA Certificate

I got a question from someone internally asking whether it is possible to renew the VMCA-signed certificate on SDDC Manager in a VCF instance. For context, out of the box the SDDC Manager certificate is signed by the VMCA on the management domain vCenter Server, but there is no supported way to renew that certificate. So before the VMCA certificate expires, you must replace it with a certificate signed by your internal CA or by an external third-party CA.

That said, it is possible to leverage VMCA to renew the cert on SDDC Manager. Here are some notes I had from doing this previously in the lab.

Disclaimer: This is not officially supported by VMware/Broadcom, use at your own risk.

First, generate a CSR for SDDC Manager in the normal way using the SDDC Manager UI.

Download the CSR as sfo-vcf01.sfo.rainpole.io.csr.

SSH to the management vCenter Server and do the following:

    mkdir /tmp/certs
    # upload the CSR to /tmp/certs (e.g. via SCP)
    cd /tmp/certs
    vi /tmp/certs/cert.cfg
    
    # cert.cfg contents replacing FQDN appropriately
    [ req ]
    req_extensions = v3_req
    
    [ v3_req ]
    extendedKeyUsage = serverAuth, clientAuth
    authorityKeyIdentifier=keyid,issuer
    authorityInfoAccess = caIssuers;URI:https://sfo-m01-vc01.sfo.rainpole.io/afd/vecs/ca
    
Save /tmp/certs/cert.cfg.

On the management vCenter Server, generate the certificate:

    openssl x509 -req -days 365 -in sfo-vcf01.sfo.rainpole.io.csr -out sfo-vcf01.sfo.rainpole.io.crt -CA /var/lib/vmware/vmca/root.cer -CAkey /var/lib/vmware/vmca/privatekey.pem -extensions v3_req -CAcreateserial -extfile cert.cfg
    

Create a certificate chain:

    cat sfo-vcf01.sfo.rainpole.io.crt >> sfo-vcf01.sfo.rainpole.io.chain.pem
    cat /var/lib/vmware/vmca/root.cer >> sfo-vcf01.sfo.rainpole.io.chain.pem
    

SSH to SDDC Manager and install the certificate:

    su
    cp /etc/ssl/private/vcf_https.key /etc/ssl/private/old_vcf_https.key
    mv /var/opt/vmware/vcf/commonsvcs/workdir/vcf_https.key /etc/ssl/private/vcf_https.key
    cp /etc/ssl/certs/vcf_https.crt /etc/ssl/certs/old_vcf_https.crt
    rm /etc/ssl/certs/vcf_https.crt
    
SCP sfo-vcf01.sfo.rainpole.io.chain.pem to /etc/ssl/certs/ and then run:
    
    mv /etc/ssl/certs/sfo-vcf01.sfo.rainpole.io.chain.pem /etc/ssl/certs/vcf_https.crt
    chmod 644 /etc/ssl/certs/vcf_https.crt
    chmod 640 /etc/ssl/private/vcf_https.key
    nginx -t && systemctl reload nginx
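
Optionally, you can confirm that SDDC Manager is now serving the renewed certificate. The snippet below is a quick, unofficial sketch that uses standard .NET classes from PowerShell to pull the certificate presented on port 443 and show its issuer and validity dates; the FQDN is the example one used throughout this post.

    # Retrieve and inspect the certificate currently served by SDDC Manager on port 443
    $sddcFqdn = 'sfo-vcf01.sfo.rainpole.io'
    $tcpClient = [System.Net.Sockets.TcpClient]::new($sddcFqdn, 443)
    $sslStream = [System.Net.Security.SslStream]::new($tcpClient.GetStream(), $false, { $true })
    $sslStream.AuthenticateAsClient($sddcFqdn)
    [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($sslStream.RemoteCertificate) |
        Select-Object Subject, Issuer, NotBefore, NotAfter
    $sslStream.Dispose()
    $tcpClient.Dispose()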

You should now have renewed your VMCA-signed certificate on SDDC Manager.

PowerCLI Module For VMware Cloud Foundation: Introduction

As you are no doubt aware, I am a fan of PowerShell and PowerCLI. Since my early days working with VMware products, whether it was vCenter, vCloud Director, or VMware Cloud Foundation (VCF), I have always leveraged PowerCLI to get the job done. Until recently, there was no native PowerCLI support for the VMware Cloud Foundation API, which is why I started the open-source PowerVCF project almost 5 years ago! PowerVCF has grown and matured as new maintainers came onboard. Open-source projects are a great way to deliver functionality to our customers that is not yet available in officially supported channels. Since the release of PowerCLI 13.1, I am delighted to say that we now have officially supported, native PowerCLI modules for VMware Cloud Foundation.

Two distinct modules are now part of PowerCLI: one for the Cloud Builder API and one for the SDDC Manager API.

    Install-Module -Name VMware.Sdk.Vcf.CloudBuilder
    Install-Module -Name VMware.Sdk.Vcf.SddcManager

There are too many cmdlets in each module to list here, but to see what is available once you have them installed, run the following:

    Get-Command -Module VMware.Sdk.Vcf.CloudBuilder
    Get-Command -Module VMware.Sdk.Vcf.SddcManager

You will see from the output that the cmdlets are primarily broken into two types:

• Initialize-Vcf<xyz>
  • Used to gather information and generate input specs
• Invoke-Vcf<xyz>
  • Used to execute the API request with an input spec

Each module also has a connect/disconnect cmdlet, which can be used in the following way:

    Connect-VcfCloudBuilderServer -Server sfo-cb01.sfo.rainpole.io -User admin -Password VMw@re1!VMw@re1!

This connection object is then stored in $defaultCloudBuilderConnections.

    Connect-VcfSddcManagerServer -Server sfo-vcf01.sfo.rainpole.io -User administrator@vsphere.local -Password VMw@re1!VMw@re1!

This connection object is then stored in $defaultsddcManagerConnections.

Note: If you are working in a lab environment with untrusted certificates, you can pass -IgnoreInvalidCertificate to each of the above commands.

Once you have an active connection, you can begin to query the API. The example below returns a list of all hosts from SDDC Manager. One thing you will notice, if you are a PowerVCF user, is that you will need to parse the response a little more than you needed to with the PowerVCF cmdlet Get-VCFHost.

Running Invoke-VcfGetHosts will return a list of host elements.
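
For example, you can capture the response in a variable so you can inspect it. This is a minimal sketch; the returned object is assumed to expose the host list via an Elements property, mirroring the elements field in the API response.

    # Returns the response object for GET /v1/hosts
    $hostsResponse = Invoke-VcfGetHosts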

So to parse the response, you can do something like this, which will return the details of all hosts:
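
Something along these lines should work; note that the property names (Elements, Fqdn, Status, Domain) are assumptions based on the fields the API returns and may differ slightly in the SDK models.

    # Expand the host elements and show a few useful fields
    $hostsResponse.Elements | Select-Object Fqdn, Status, @{ Name = 'DomainId'; Expression = { $_.Domain.Id } }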

But let's say you would like to filter the response to just the hosts from a specific workload domain. You first need the ID of the workload domain, in this case sfo-m01.
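
Assuming the domains endpoint follows the same Invoke-Vcf<xyz> naming pattern (Invoke-VcfGetDomains here is an assumption, as are the Elements/Name/Id property names), something like this should return the ID:

    # Look up the ID of the sfo-m01 domain by name
    $domainId = ((Invoke-VcfGetDomains).Elements | Where-Object { $_.Name -eq 'sfo-m01' }).Id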

And you can then get a filtered list of hosts for that domain:
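
For example, by filtering the host list client-side on the domain ID retrieved above (same assumptions about property names as before):

    # Hosts that belong to the sfo-m01 workload domain
    $hostsResponse.Elements | Where-Object { $_.Domain.Id -eq $domainId } | Select-Object Fqdn, Status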

Hopefully this introduction was helpful. I will put together a series of blog posts over the next few weeks covering some of the main VCF operations, such as bring-up, commissioning hosts, deploying workload domains, etc. As always, comments and feedback are welcome. Please let me know what your experience is with the new modules and I can feed it back to the engineering team.

Adding LDAP Users to vSphere SSO Groups Using PowerShell

I got a query from a customer asking how to add a user from an LDAP directory to an SSO group programmatically. There is no support for this in native PowerCLI that I am aware of, but there is an open-source module called VMware.vSphere.SsoAdmin which can be used to achieve the goal. I checked with my colleague Gary Blake, and he had an example in the Power Validated Solutions module that I was able to reference.

First off, you need to install the VMware.vSphere.SsoAdmin module. This can be done from the PowerShell Gallery.

    Install-Module VMware.vSphere.SsoAdmin

Once it is installed, you can run the following to add an LDAP user to an SSO group:

    $vcFqdn = 'sfo-m01-vc01.sfo.rainpole.io'
    $vcUser = 'administrator@vsphere.local'
    $vcPassword = 'VMw@re1!'
    $ldapDomain = 'sfo.rainpole.io'
    $ldapUser = 'ldap_user'
    $ssoDomain = 'vsphere.local'
    $ssoGroup = 'administrators'
    
    $ssoConnection = Connect-SsoAdminServer -Server $vcFqdn -User $vcUser -Password $vcPassword -SkipCertificateCheck
    $targetGroup = Get-SsoGroup -Domain $ssoDomain -Name $ssoGroup -Server $ssoConnection
    $ldapUserToAdd = Get-SsoPersonUser -Domain $ldapDomain -Name $ldapUser -Server $ssoConnection
    $ldapUserToAdd | Add-UserToSsoGroup -TargetGroup $targetGroup

Running the code above results in the LDAP user being added to the SSO administrators group.
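
If you have several users to add, the same cmdlets can be wrapped in a loop and the session closed when you are finished. A quick sketch; the user names below are placeholders, and it assumes the module's Disconnect-SsoAdminServer cmdlet for closing the connection.

    # Placeholder list of LDAP users to add to the same SSO group
    $ldapUsers = @('ldap_user1', 'ldap_user2')
    foreach ($user in $ldapUsers) {
        Get-SsoPersonUser -Domain $ldapDomain -Name $user -Server $ssoConnection |
            Add-UserToSsoGroup -TargetGroup $targetGroup
    }
    # Close the SSO admin session
    Disconnect-SsoAdminServer -Server $ssoConnection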

Cleanup Failed Credential Tasks in VMware Cloud Foundation

I have covered how to clean up general failed tasks in SDDC Manager in a previous post. Another type of task that can be in a failed state is a credential rotation operation. Credential operations can fail for a number of reasons (the underlying component is unreachable at the time of the operation, etc.), and this type of failed task is a blocking task, i.e. you cannot perform another credential operation until you clean up or cancel the failed task. The script below leverages the PowerVCF cmdlet Get-VCFCredentialTask to discover failed credential tasks and Stop-VCFCredentialTask to clean them up. As with all scripts, please test thoroughly in a lab before using it in production.

    # Script to cleanup failed credential tasks in SDDC Manager
    # Written by Brian O'Connell - Staff II Solutions Architect @ VMware
    #User Variables
    # SDDC Manager FQDN. This is the target that is queried for failed tasks
    $sddcManagerFQDN = "sfo-vcf01.sfo.rainpole.io"
    # SDDC Manager API User. This is the user that is used to query for failed tasks. Must have the SDDC Manager ADMIN role
    $sddcManagerAPIUser = "administrator@vsphere.local"
    $sddcManagerAPIPassword = "VMw@re1!"
    # DO NOT CHANGE ANYTHING BELOW THIS LINE
    #########################################
    # Set TLS to 1.2 to avoid certificate mismatch errors
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    # Install PowerVCF if not already installed
    if (!(Get-InstalledModule -Name PowerVCF -MinimumVersion 2.4.0 -ErrorAction SilentlyContinue)) {
        Install-Module -Name PowerVCF -MinimumVersion 2.4.0 -Force
    }
    # Request a VCF Token using PowerVCF
    Request-VCFToken -fqdn $sddcManagerFQDN -username $sddcManagerAPIUser -password $sddcManagerAPIPassword
    # Retrieve a list of failed credential tasks
    $failedTaskIDs = @()
    $ids = (Get-VCFCredentialTask -status "Failed").id
    foreach ($id in $ids) {
        $failedTaskIDs += ,$id
    }
    # Cleanup the failed tasks
    foreach ($taskID in $failedTaskIDs) {
        Stop-VCFCredentialTask -id $taskID
        # Verify the task was deleted
        Try {
            $verifyTaskDeleted = (Get-VCFCredentialTask -id $taskID)
            if (!$verifyTaskDeleted) {
                Write-Output "Task ID $taskID Deleted Successfully"
            }
        } catch {
            Write-Error "Something went wrong. Please check your SDDC Manager state"
        }
    }

Install HashiCorp Terraform on a PhotonOS Appliance

HashiCorp Terraform is not currently available in the Photon OS repositories. If you would like to install Terraform on a PhotonOS appliance, you can use the script below. Note: The versions of Go and Terraform that I have included are current at the time of writing. Thanks to my colleague Ryan Johnson, who shared this method with me some time ago for another project.

    #!/usr/bin/env bash
    
    # Versions
    GO_VERSION="1.21.4"
    TERRAFORM_VERSION="1.6.3"
    
    # Arch
    if [[ $(uname -m) == "x86_64" ]]; then
      LINUX_ARCH="amd64"
    elif [[ $(uname -m) == "aarch64" ]]; then
      LINUX_ARCH="arm64"
    fi
    
    # Directory
    if ! [[ -d ~/code ]]; then
      mkdir ~/code
    fi
    
    # Go
    wget -q -O go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz https://golang.org/dl/go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
    tar -C /usr/local -xzf go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
    PATH=$PATH:/usr/local/go/bin
    go version
    rm go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
    export GOPATH=${HOME}/code/go
    
    # HashiCorp
    wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
    unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
    rm ./*.zip

Terraform Module for Deploying VMware Cloud Foundation VI Workload Domains

I have been working a lot with Terraform lately, and in particular with the Terraform Provider For VMware Cloud Foundation. As I covered previously, the provider is still in development but is available to be tested and used in your VMware Cloud Foundation instances.

I spent this week at VMware Explore in Barcelona and have been talking with our customers about their automation journey and what tools they are using for configuration management. Terraform came up in almost all conversations, and the topic of Terraform modules specifically. Terraform modules are essentially a set of standard configuration files that can be used for consistent, repeatable deployments. In an effort to standardise my VI Workload Domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload Domains.

The module is available on GitHub here and is also published to the Terraform Registry here. Below is an example of using the module to deploy a VI Workload Domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types etc., all you need to do is pass variable values.

    # main.tf
    
    module "vidomain" {
    source= "LifeOfBrianOC/vidomain"
    version = "0.1.0"
    
    sddc_manager_fqdn     = "sfo-vcf01.sfo.rainpole.io"
    sddc_manager_username = "administrator@vsphere.local"
    sddc_manager_password = "VMw@re1!"
    allow_unverified_tls  = "true"
    
    network_pool_name                     = "sfo-w01-np"
    network_pool_storage_gateway          = "172.16.13.1"
    network_pool_storage_netmask          = "255.255.255.0"
    network_pool_storage_mtu              = "8900"
    network_pool_storage_subnet           = "172.16.13.0"
    network_pool_storage_type             = "VSAN"
    network_pool_storage_vlan_id          = "1633"
    network_pool_storage_ip_pool_start_ip = "172.16.13.101"
    network_pool_storage_ip_pool_end_ip   = "172.16.13.108"
    
    network_pool_vmotion_gateway          = "172.16.12.1"
    network_pool_vmotion_netmask          = "255.255.255.0"
    network_pool_vmotion_mtu              = "8900"
    network_pool_vmotion_subnet           = "172.16.12.0"
    network_pool_vmotion_vlan_id          = "1632"
    network_pool_vmotion_ip_pool_start_ip = "172.16.12.101"
    network_pool_vmotion_ip_pool_end_ip   = "172.16.12.108"
    
    esx_host_storage_type = "VSAN"
    esx_host1_fqdn        = "sfo01-w01-esx01.sfo.rainpole.io"
    esx_host1_username    = "root"
    esx_host1_pass        = "VMw@re1!"
    esx_host2_fqdn        = "sfo01-w01-esx02.sfo.rainpole.io"
    esx_host2_username    = "root"
    esx_host2_pass        = "VMw@re1!"
    esx_host3_fqdn        = "sfo01-w01-esx03.sfo.rainpole.io"
    esx_host3_username    = "root"
    esx_host3_pass        = "VMw@re1!"
    esx_host4_fqdn        = "sfo01-w01-esx04.sfo.rainpole.io"
    esx_host4_username    = "root"
    esx_host4_pass        = "VMw@re1!"
    
    vcf_domain_name                    = "sfo-w01"
    vcf_domain_vcenter_name            = "sfo-w01-vc01"
    vcf_domain_vcenter_datacenter_name = "sfo-w01-dc01"
    vcenter_root_password              = "VMw@re1!"
    vcenter_vm_size                    = "small"
    vcenter_storage_size               = "lstorage"
    vcenter_ip_address                 = "172.16.11.130"
    vcenter_subnet_mask                = "255.255.255.0"
    vcenter_gateway                    = "172.16.11.1"
    vcenter_fqdn                       = "sfo-w01-vc01.sfo.rainpole.io"
    vsphere_cluster_name               = "sfo-w01-cl01"
    vds_name                           = "sfo-w01-cl01-vds01"
    vsan_datastore_name                = "sfo-w01-cl01-ds-vsan01"
    vsan_failures_to_tolerate          = "1"
    esx_vmnic0                         = "vmnic0"
    vmnic0_vds_name                    = "sfo-w01-cl01-vds01"
    esx_vmnic1                         = "vmnic1"
    vmnic1_vds_name                    = "sfo-w01-cl01-vds01"
    portgroup_management_name          = "sfo-w01-cl01-vds01-pg-mgmt"
    portgroup_vsan_name                = "sfo-w01-cl01-vds01-pg-vsan"
    portgroup_vmotion_name             = "sfo-w01-cl01-vds01-pg-vmotion"
    esx_license_key                    = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
    vsan_license_key                   = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
    
    nsx_vip_ip                    = "172.16.11.131"
    nsx_vip_fqdn                  = "sfo-w01-nsx01.sfo.rainpole.io"
    nsx_manager_admin_password    = "VMw@re1!VMw@re1!"
    nsx_manager_form_factor       = "small"
    nsx_license_key               = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
    nsx_manager_node1_name        = "sfo-w01-nsx01a"
    nsx_manager_node1_ip_address  = "172.16.11.132"
    nsx_manager_node1_fqdn        = "sfo-w01-nsx01a.sfo.rainpole.io"
    nsx_manager_node1_subnet_mask = "255.255.255.0"
    nsx_manager_node1_gateway     = "172.16.11.1"
    nsx_manager_node2_name        = "sfo-w01-nsx01b"
    nsx_manager_node2_ip_address  = "172.16.11.133"
    nsx_manager_node2_fqdn        = "sfo-w01-nsx01b.sfo.rainpole.io"
    nsx_manager_node2_subnet_mask = "255.255.255.0"
    nsx_manager_node2_gateway     = "172.16.11.1"
    nsx_manager_node3_name        = "sfo-w01-nsx01c"
    nsx_manager_node3_ip_address  = "172.16.11.134"
    nsx_manager_node3_fqdn        = "sfo-w01-nsx01c.sfo.rainpole.io"
    nsx_manager_node3_subnet_mask = "255.255.255.0"
    nsx_manager_node3_gateway     = "172.16.11.1"
    geneve_vlan_id                = "1634"
    }

Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First, we initialise the environment, which will pull the required module version:

    terraform init

Then create and apply the plan:

    terraform plan -out=create-vi-wld
    terraform apply create-vi-wld

Deploy VMware Cloud Foundation Cloud Builder using the vSphere Terraform Provider

As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I have used this provider in the past to deploy the NSX Manager appliance.

Check out the other posts on Terraform with VMware Cloud Foundation here:

Deploy Cloud Builder with the vSphere Terraform Provider

As before, you first need to define your provider configuration:

    # providers.tf
     
    terraform {
      required_providers {
        vsphere = {
          source  = "hashicorp/vsphere"
          version = "2.5.1"
        }
      }
    }
    provider "vsphere" {
      user                 = var.vsphere_user
      password             = var.vsphere_password
      vsphere_server       = var.vsphere_server
      allow_unverified_ssl = true
    }

Then we define our variables:

    # variables.tf
     
    # vSphere Infrastructure Details
    variable "data_center" { default = "sfo-m01-dc01" }
    variable "cluster" { default = "sfo-m01-cl01" }
    variable "vds" { default = "sfo-m01-vds01" }
    variable "datastore" { default = "vsanDatastore" }
    variable "compute_pool" { default = "sfo-m01-cl01" }
    variable "compute_host" {default = "sfo01-m01-esx01.sfo.rainpole.io"}
    variable "vsphere_server" {default = "sfo-m01-vc01.sfo.rainpole.io"}
     
    # vCenter Credential Variables
    variable "vsphere_user" {}
    variable "vsphere_password" {}
     
    # Cloud Builder Deployment
    variable "mgmt_pg" { default = "sfo-m01-vds01-pg-mgmt" }
    variable "vm_name" { default = "sfo-cb01" }
    variable "local_ovf_path" { default = "F:\\binaries\\VMware-Cloud-Builder-4.5.2.0-22223457_OVF10.ova" }
    variable "ip0" { default = "172.16.225.66" }
    variable "netmask0" { default = "255.255.255.0" }
    variable "gateway" { default = "172.16.225.1" }
    variable "dns" { default = "172.16.225.4" }
    variable "domain" { default = "sfo.rainpole.io" }
    variable "ntp" { default = "ntp.sfo.rainpole.io" }
    variable "searchpath" { default = "sfo.rainpole.io" }
    variable "ADMIN_PASSWORD" { default = "VMw@re1!" }
    variable "ROOT_PASSWORD" { default = "VMw@re1!" }
    variable "hostname" { default = "sfo-cb01.sfo.rainpole.io" }

Note that the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo.

    # terraform.tfvars
     
    # vSphere Provider Credentials
    vsphere_user     = "administrator@vsphere.local"
    vsphere_password = "VMw@re1!"

Now that we have all of our variables defined, we can define our main.tf to perform the deployment. As part of this, we first need to gather some data from the target vCenter Server so we know where to deploy the appliance.

    # main.tf
     
    # Data source for vCenter Datacenter
    data "vsphere_datacenter" "datacenter" {
      name = var.data_center
    }
     
    # Data source for vCenter Cluster
    data "vsphere_compute_cluster" "cluster" {
      name          = var.cluster
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for vCenter Datastore
    data "vsphere_datastore" "datastore" {
      name          = var.datastore
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for vCenter Portgroup
    data "vsphere_network" "mgmt" {
      name          = var.mgmt_pg
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for vCenter Resource Pool. In our case we will use the root resource pool
    data "vsphere_resource_pool" "pool" {
      name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for ESXi host to deploy to
    data "vsphere_host" "host" {
      name          = var.compute_host
      datacenter_id = data.vsphere_datacenter.datacenter.id
    }
     
    # Data source for the OVF to read the required OVF Properties
    data "vsphere_ovf_vm_template" "ovfLocal" {
      name             = var.vm_name
      resource_pool_id = data.vsphere_resource_pool.pool.id
      datastore_id     = data.vsphere_datastore.datastore.id
      host_system_id   = data.vsphere_host.host.id
      local_ovf_path   = var.local_ovf_path
      ovf_network_map = {
        "Network 1" = data.vsphere_network.mgmt.id
      }
    }
     
    # Deployment of VM from Local OVA
    resource "vsphere_virtual_machine" "cb01" {
      name                 = var.vm_name
      datacenter_id        = data.vsphere_datacenter.datacenter.id
      datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
      host_system_id       = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
      resource_pool_id     = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
      num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
      num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
      memory               = data.vsphere_ovf_vm_template.ovfLocal.memory
      guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id
      scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type
     
      wait_for_guest_net_timeout = 5
     
      ovf_deploy {
        allow_unverified_ssl_cert = true
        local_ovf_path            = var.local_ovf_path
        disk_provisioning         = "thin"
        ovf_network_map   = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
     
      }
      vapp {
        properties = {
          "ip0"               = var.ip0,
          "netmask0"          = var.netmask0,
          "gateway"          = var.gateway,
          "dns"             = var.dns,
          "domain"           = var.domain,
          "ntp"              = var.ntp,
          "searchpath"       = var.searchpath,
          "ADMIN_USERNAME"  = "admin",
          "ADMIN_PASSWORD"           = var.ADMIN_PASSWORD,
          "ROOT_PASSWORD"       = var.ROOT_PASSWORD,
          "hostname"           = var.hostname
        }
      }
      lifecycle {
        ignore_changes = [
          #vapp # Enable this to ignore all vapp properties if the plan is re-run
          vapp[0].properties["ADMIN_PASSWORD"],
          vapp[0].properties["ROOT_PASSWORD"],
          host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
        ]
      }
    }

Now we can run the following to initialise Terraform and the required vSphere provider:

    terraform init 

Once the provider is initialised, we can create a Terraform plan to ensure our configuration is valid:

    terraform plan -out=DeployCB

Now that we have a valid configuration, we can apply our plan to deploy the Cloud Builder appliance:

    terraform apply DeployCB

VMware Cloud Foundation 5.1 is GA!

While VMware Explore EMEA is in full swing, VMware Cloud Foundation 5.1 went GA! As of yesterday, you can review the 5.1 design guide to see the exciting additions in this release. Some of the main highlights are listed below.

• Support for vSAN ESA: vSAN ESA is an alternative, single-tier architecture designed from the ground up for NVMe-based platforms to deliver higher performance with more predictable I/O latencies, higher space efficiency, per-object data services, and native, high-performance snapshots.
• Non-DHCP option for Tunnel Endpoint (TEP) IP assignment: SDDC Manager now provides the option to select static or DHCP-based IP assignment for host TEPs in stretched clusters and L3-aware clusters.
• vSphere Distributed Services Engine for Ready Nodes: AMD Pensando and NVIDIA BlueField-2 DPUs are now supported. Offloading the vSphere Distributed Switch (VDS) and NSX network and security functions to the hardware provides significant performance improvements for low-latency and high-bandwidth applications. NSX distributed firewall processing is also offloaded from the server CPUs to the network silicon.
• Multi-pNIC/Multi-vSphere Distributed Switch UI enhancements: VCF users can configure complex networking configurations, including more vSphere Distributed Switch and NSX switch-related configurations, through the SDDC Manager UI.
• Distributed virtual port group separation for management domain appliances: Enables traffic isolation between management VMs (such as SDDC Manager, NSX Manager, and vCenter) and ESXi management VMkernel interfaces.
• Support for vSphere Lifecycle Manager images in the management domain: VCF users can deploy the management domain using vSphere Lifecycle Manager (vLCM) images during new VCF instance deployment.
• Mixed-mode support for workload domains: A VCF instance can exist in a mixed BOM state where the workload domains are on different VCF 5.x versions. Note: The management domain should be on the highest version in the instance.
• Asynchronous update of the pre-check files: The upgrade pre-checks can be updated asynchronously with new pre-checks using a pre-check file provided by VMware.
• Workload domain NSX integration: Support for multiple NSX-enabled VDSs for distributed firewall use cases.
• Tier-0/Tier-1 optional for VCF Edge clusters: When creating an Edge cluster with the VCF API, the Tier-0 and Tier-1 gateways are now optional.
• VCF Edge nodes support static or pooled IPs: When creating or expanding an Edge cluster using the VCF APIs, Edge node TEP configuration may come from an NSX IP pool or be specified statically as in earlier releases.
• Support for mixed license deployment: A combination of keyed and keyless licenses can be used within the same VCF instance.
• Integration with Workspace ONE Broker: Provides identity federation and SSO across vCenter, NSX, and SDDC Manager. VCF administrators can add Okta to Workspace ONE Broker as a Day-N operation using the SDDC Manager UI.
• VMware vRealize rebranding: VMware recently renamed the vRealize Suite of products to the VMware Aria Suite. See the Aria naming updates blog post for more details.
• VMware Validated Solutions: All VMware Validated Solutions are updated to support VMware Cloud Foundation 5.1. Visit VMware Validated Solutions for the updated guides.

VMware Cloud Foundation Terraform Provider: Create a New VCF Instance

Following on from my VMware Cloud Foundation Terraform Provider introduction post here, I wanted to start by using it to create a new VCF instance (or perform a VCF bring-up).

As of writing this post, I am using version 0.5.0 of the provider.

First off, we need to define some variables to be used in our plan. Here is a copy of the variables.tf I am using. For reference, I am using the default values in the VCF Planning & Preparation Workbook for my configuration. Note "sensitive = true" on the password and license key variables to stop them from showing up on the console and in logs.

    variable "cloud_builder_username" {
      description = "Username to authenticate to CloudBuilder"
      default = "admin"
    }
    
    variable "cloud_builder_password" {
      description = "Password to authenticate to CloudBuilder"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "cloud_builder_host" {
      description = "Fully qualified domain name or IP address of the CloudBuilder"
      default = "sfo-cb01.sfo.rainpole.io"
    }
    
    variable "sddc_manager_root_user_password" {
      description = "Root user password for the SDDC Manager VM. Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "sddc_manager_secondary_user_password" {
      description = "Second user (vcf) password for the SDDC Manager VM.  Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length."
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "vcenter_root_password" {
      description = "root password for the vCenter Server Appliance (8-20 characters)"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_manager_admin_password" {
      description = "NSX admin password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
      default = "VMw@re1!VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_manager_audit_password" {
      description = "NSX audit password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
      default = "VMw@re1!VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_manager_root_password" {
      description = " NSX Manager root password. Password should have 1) At least eight characters, 2) At least one lower-case letter, 3) At least one upper-case letter 4) At least one digit 5) At least one special character, 6) At least five different characters , 7) No dictionary words, 6) No palindromes"
      default = "VMw@re1!VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host1_pass" {
      description = "Password to authenticate to the ESXi host 1"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host2_pass" {
      description = "Password to authenticate to the ESXi host 2"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host3_pass" {
      description = "Password to authenticate to the ESXi host 3"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "esx_host4_pass" {
      description = "Password to authenticate to the ESXi host 4"
      default = "VMw@re1!"
      sensitive = true
    }
    
    variable "nsx_license_key" {
      description = "NSX license to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }
    
    variable "vcenter_license_key" {
      description = "vCenter license to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }
    
    variable "vsan_license_key" {
      description = "vSAN license key to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }
    
    variable "esx_license_key" {
      description = "ESXi license key to be used"
      default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
      sensitive = true
    }

Next, we need our main.tf file that contains what we want to do, in this case performing a VCF bring-up. For now, I am using a mix of variables from the above variables.tf file and hard-coded values in my main.tf to achieve my goal. I will follow up with some better practices in a later post.

    terraform {
      required_providers {
        vcf = {
          source = "vmware/vcf"
        }
      }
    }
    provider "vcf" {
      cloud_builder_host = var.cloud_builder_host
      cloud_builder_username = var.cloud_builder_username
      cloud_builder_password = var.cloud_builder_password
      allow_unverified_tls = true
    }
    
    resource "vcf_instance" "sddc_1" {
      instance_id = "sfo-m01"
      dv_switch_version = "7.0.3"
      skip_esx_thumbprint_validation = true
      management_pool_name = "sfo-m01-np"
      ceip_enabled = false
      esx_license = var.esx_license_key
      task_name = "workflowconfig/workflowspec-ems.json"
      sddc_manager {
        ip_address = "172.16.11.59"
        hostname = "sfo-vcf01"
        root_user_credentials {
          username = "root"
          password = var.sddc_manager_root_user_password
        }
        second_user_credentials {
          username = "vcf"
          password = var.sddc_manager_secondary_user_password
        }
      }
      ntp_servers = [
        "172.16.11.4"
      ]
      dns {
        domain = "sfo.rainpole.io"
        name_server = "172.16.11.4"
        secondary_name_server = "172.16.11.5"
      }
      network {
        subnet = "172.16.11.0/24"
        vlan_id = "1611"
        mtu = "1500"
        network_type = "MANAGEMENT"
        gateway = "172.16.11.1"
      }
      network {
        subnet = "172.16.13.0/24"
        include_ip_address_ranges {
          start_ip_address = "172.16.13.101"
          end_ip_address = "172.16.13.108"
        }
        vlan_id = "1613"
        mtu = "8900"
        network_type = "VSAN"
        gateway = "172.16.13.1"
      }
      network {
        subnet = "172.16.12.0/24"
        include_ip_address_ranges {
          start_ip_address = "172.16.12.101"
          end_ip_address = "172.16.12.104"
        }
        vlan_id = "1612"
        mtu = "8900"
        network_type = "VMOTION"
        gateway = "172.16.12.1"
      }
      nsx {
        nsx_manager_size = "medium"
        nsx_manager {
          hostname = "sfo-m01-nsx01a"
          ip = "172.16.11.72"
        }
        root_nsx_manager_password = var.nsx_manager_root_password
        nsx_admin_password = var.nsx_manager_admin_password
        nsx_audit_password = var.nsx_manager_audit_password
        overlay_transport_zone {
          zone_name = "sfo-m01-overlay-tz"
          network_name = "sfo-m01-overlay"
        }
        vip = "172.16.11.71"
        vip_fqdn = "sfo-m01-nsx01"
        license = var.nsx_license_key
        transport_vlan_id = 1614
      }
      vsan {
        license = var.vsan_license_key
        datastore_name = "sfo-m01-vsan"
      }
      dvs {
        mtu = 8900
        nioc {
          traffic_type = "VSAN"
          value = "HIGH"
        }
        nioc {
          traffic_type = "VMOTION"
          value = "LOW"
        }
        nioc {
          traffic_type = "VDP"
          value = "LOW"
        }
        nioc {
          traffic_type = "VIRTUALMACHINE"
          value = "HIGH"
        }
        nioc {
          traffic_type = "MANAGEMENT"
          value = "NORMAL"
        }
        nioc {
          traffic_type = "NFS"
          value = "LOW"
        }
        nioc {
          traffic_type = "HBR"
          value = "LOW"
        }
        nioc {
          traffic_type = "FAULTTOLERANCE"
          value = "LOW"
        }
        nioc {
          traffic_type = "ISCSI"
          value = "LOW"
        }
        dvs_name = "SDDC-Dswitch-Private"
        vmnics = [
          "vmnic0",
          "vmnic1"
        ]
        networks = [
          "MANAGEMENT",
          "VSAN",
          "VMOTION"
        ]
      }
      cluster {
        cluster_name = "sfo-m01-cl01"
        cluster_evc_mode = ""
        resource_pool {
          name = "Mgmt-ResourcePool"
          type = "management"
        }
        resource_pool {
          name = "Network-ResourcePool"
          type = "network"
        }
        resource_pool {
          name = "Compute-ResourcePool"
          type = "compute"
        }
        resource_pool {
          name = "User-RP"
          type = "compute"
        }
      }
      psc {
        psc_sso_domain = "vsphere.local"
        admin_user_sso_password = "VMw@re1!"
      }
      vcenter {
        vcenter_ip = "172.16.11.70"
        vcenter_hostname = "sfo-m01-vc01"
        license = var.vcenter_license_key
        root_vcenter_password = var.vcenter_root_password
        vm_size = "tiny"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.101"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx01"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.102"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx02"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.103"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx03"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
      host {
        credentials {
          username = "root"
          password = "VMw@re1!"
        }
        ip_address_private {
          subnet = "255.255.255.0"
          cidr = ""
          ip_address = "172.16.11.104"
          gateway = "172.16.11.1"
        }
        hostname = "sfo01-m01-esx04"
        vswitch = "vSwitch0"
        association = "SDDC-Datacenter"
      }
    }

Once the above is defined, you can run the following to create your Terraform plan:

    terraform init
    terraform plan -out=vcf-bringup

Once there are no errors from the plan command, you can run the following to start the VCF bring-up:

    terraform apply .\vcf-bringup

All going well, this should result in a successful VMware Cloud Foundation bring-up.

VMware Cloud Foundation Terraform Provider: Introduction

HashiCorp Terraform has become an industry-standard infrastructure-as-code and desired-state configuration tool for managing on-premises and cloud-based entities. If you are not familiar with Terraform, I have covered some early general learnings on Terraform in posts here & here. The internal engineering team is working on a Terraform provider for VCF, so I decided to give it a spin to review its capabilities and test drive it in the lab.

First off, here are the VCF operations the provider is capable of supporting today:

• Deploying a new VCF instance (bring-up)
• Commissioning hosts
• Creating network pools
• Deploying a new VI Workload Domain
• Creating clusters
• Expanding clusters
• Adding users

New functionality is being added every week, and as with all new initiatives like this, customer consumption and adoption will drive innovation and progress.

The GitHub repo contains some great example files to get you started. I am going to do a few blog posts on what I have learned so far, but for now, here are the important links you need if you would like to take a look at the provider.

If you want to get started by using the examples, take a look here.