Quick Tip: Retrieve vSphere Supervisor Control Plane SSH Password

SSH access to the Supervisor Control Plane VMs uses an auto-generated password. The password for the system user on these VMs must be retrieved from the associated vCenter Server.

IMPORTANT: Direct access to the vSphere Supervisor Control Plane VMs should be used with caution and for troubleshooting purposes only.

SSH to the vCenter Server as the root user, and enter the bash shell by typing shell.

Run the following to retrieve the password:

root@sfo-w01-vc01 [ ~ ]# /usr/lib/vmware-wcp/decryptK8Pwd.py

You should now be able to SSH to the IP of the vSphere Supervisor Control Plane VM (the IP returned is the floating IP, or FIP) as the root user.
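If you want to script this retrieval, the output of decryptK8Pwd.py can be parsed for the FIP and password. Below is a minimal, hypothetical Python sketch; it assumes the script prints "IP: <address>" and "PWD: <password>" lines, so verify the exact output format on your vCenter build before relying on it.

```python
# Hypothetical helper: parse decryptK8Pwd.py output for the floating IP and
# password. Assumes the script emits "IP: <address>" and "PWD: <password>"
# lines; verify this against the actual output on your vCenter build.
def parse_decrypt_output(text: str) -> dict:
    creds = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("IP:"):
            creds["ip"] = line.split(":", 1)[1].strip()
        elif line.startswith("PWD:"):
            creds["password"] = line.split(":", 1)[1].strip()
    return creds
```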

Add VMKernels for vSphere Replication using PowerCLI

When configuring vSphere Replication between two vCenter Servers, you need to add a dedicated VMkernel adapter to each host in the source and target vCenter Server clusters. Depending on the number of hosts per cluster, this can be a time-consuming manual task. Here is a quick script leveraging PowerCLI to retrieve the hosts from a specified cluster and loop through them, adding a dedicated vSphere Replication VMkernel adapter to each.

# Source vCenter VMkernel variables
$vCenterServer = "sfo-m01-vc01.sfo.rainpole.io"
$vCenterServerUser = "administrator@vsphere.local"
$vCenterServerPassword = "VMw@re1!VMw@re1!"
$clusterName = "sfo-m01-cl01"
$PortGroupName = "sfo-m01-cl01-vds01-pg-vlr"
$VLANID = 1116
$VSwitch = "sfo-m01-cl01-vds01"
$VMKIP = "10.11.16."  # last octet will be incremented
$lastOctetStart = 101
$SubnetMask = "255.255.255.0"
$mtu = 9000

# Connect to vCenter
Connect-VIServer -Server $vCenterServer -user $vCenterServerUser -password $vCenterServerPassword

# Get ESXi hosts in the cluster
$ESXiHosts = Get-Cluster -Name $clusterName | Get-VMHost

# Loop through each host and add an adapter with vSphere Replication Services enabled
$index = $lastOctetStart
foreach ($ESXi in $ESXiHosts) {
    Write-Host "Processing host: $ESXi"

    # Define the VMkernel IP for this host
    $VMKIPAddress = "$VMKIP$index"
    $index++

    # Add the VMkernel adapter
    Write-Host "Adding VMkernel adapter $VMKIPAddress to $ESXi"
    $vmk = New-VMHostNetworkAdapter -VMHost $ESXi -VirtualSwitch $VSwitch -PortGroup $PortGroupName -IP $VMKIPAddress -SubnetMask $SubnetMask -VMotionEnabled $false -Mtu $mtu
    # Tag the new adapter for vSphere Replication and replication NFC traffic
    $vnicMgr = Get-View -Id $ESXi.ExtensionData.ConfigManager.VirtualNicManager
    $vnicMgr.SelectVnicForNicType('vSphereReplication', $vmk.Name)
    $vnicMgr.SelectVnicForNicType('vSphereReplicationNFC', $vmk.Name)

    Write-Host "VMkernel adapter added successfully on $ESXi"
}

# Disconnect from vCenter
Disconnect-VIServer -Confirm:$false

Retrieve VCF Operations Appliance Root Password from the VMware Aria Suite Lifecycle Locker

When you deploy a component using VMware Aria Suite Lifecycle, it stores the credentials in its locker. If you need to SSH to a VCF Operations appliance and you don't know the root password, you can retrieve it from the VMware Aria Suite Lifecycle locker. To do this, query the Aria Suite Lifecycle API for a list of locker entries using basic authentication.

GET https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords?from=0&size=10

From the response, locate the vmid corresponding to the VCF Operations appliance:

{
    "vmid": "a789765f-6cfc-497a-8273-9d8bff2684a5",
    "tenant": "default",
    "alias": "VCF-flt-ops01a.rainpole.io-rootUserPassword",
    "password": "PASSWORD****",
    "createdOn": 1737740091124,
    "lastUpdatedOn": 1737740091124,
    "referenced": true
}

Next, query the Aria Suite Lifecycle locker for the decrypted password, again with basic authentication, passing the Aria Suite Lifecycle root password in the request body.

#BODY (Aria Suite Lifecycle root password)
{
  "rootPassword": "VMw@re1!VMw@re1!"
}

POST https://flt-fm01.rainpole.io/lcm/locker/api/v2/passwords/a789765f-6cfc-497a-8273-9d8bff2684a5/decrypted

If all goes well, it should return the decrypted password:

{
    "passwordVmid": "a789765f-6cfc-497a-8273-9d8bff2684a5",
    "password": "u!B1U9#Q5L^o2Vqer@6f"
}
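The two locker calls above are straightforward to script. Here is a minimal sketch using only the Python standard library; the endpoints and request body are taken from the examples above, while the function names and the unverified-TLS handling are my own assumptions for a lab environment.

```python
# Sketch of the two Aria Suite Lifecycle locker API calls using only the
# Python standard library. Hostname, credentials, and vmid are placeholders
# matching the examples in the post.
import base64
import json
import ssl
import urllib.request

def locker_url(host, vmid=None):
    """Build the locker API URL: list entries, or decrypt one entry by vmid."""
    base = f"https://{host}/lcm/locker/api/v2/passwords"
    return f"{base}/{vmid}/decrypted" if vmid else f"{base}?from=0&size=10"

def basic_auth_header(user, password):
    """Standard HTTP basic auth header from a username and password."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def decrypt_locker_password(host, user, password, vmid, lcm_root_password):
    """POST to the decrypted endpoint and return the plain-text password."""
    body = json.dumps({"rootPassword": lcm_root_password}).encode()
    req = urllib.request.Request(
        locker_url(host, vmid),
        data=body,
        headers={**basic_auth_header(user, password),
                 "Content-Type": "application/json"},
        method="POST",
    )
    # Lab appliances often use self-signed certs; skip verification here only.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read())["password"]
```

To find the vmid first, issue a GET against locker_url(host) with the same basic auth header and scan the returned entries for the alias you need.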

PowerCLI Module For VMware Cloud Foundation: Bringup Using an Existing JSON

This is the second post in a series on the native PowerCLI Module For VMware Cloud Foundation (VCF). If you haven't seen the previous post, it is available here:

  1. PowerCLI Module For VMware Cloud Foundation: Introduction

This post will focus on the Cloud Builder module to perform a bringup of a VCF instance. For this example, I am using a pre-populated JSON file. I will do a follow-up post on how to create the spec from scratch.

To get started, we need a Cloud Builder connection.

Connect-VcfCloudBuilderServer -Server sfo-cb01.sfo.rainpole.io -User admin -Password VMw@re1!VMw@re1!

If you have a pre-populated JSON spec, you can simply do the following to perform a validation using the Cloud Builder API:

$sddcSpec = (Get-Content -Raw .\sfo-m01-bringup-spec.json)
Invoke-VcfCbValidateBringupSpec -SddcSpec $sddcSpec

And once the validation passes, do the following to start the bringup:

Invoke-VcfCbStartBringup -SddcSpec $sddcSpec

Bringup is a long-running task, but you can monitor the status using something like this:

# Retrieve the bringup task id
$bringupTaskId = (Invoke-VcfCbGetBringupTasks).elements.Id

# Poll the status of the task until it is no longer in progress
Do {
    $bringupTask = Invoke-VcfCbGetBringupTaskByID -id $bringupTaskId
    Start-Sleep -Seconds 60   # wait between polls to avoid hammering the API
}
Until ($bringupTask.Status -ne 'IN_PROGRESS')

Quick Tip: Renew SDDC Manager VMCA Certificate

I got a question from someone internally asking whether it is possible to renew the VMCA-signed certificate on SDDC Manager in a VCF instance. For context, out of the box the SDDC Manager certificate is signed by the VMCA on the management domain vCenter Server, but there is no supported way to renew that certificate. So before the VMCA certificate expires, you must replace it with a CA-signed certificate from your internal CA or from an external third-party CA.

That said, it is possible to leverage VMCA to renew the cert on SDDC Manager. Here are some notes I had from doing this previously in the lab.

Disclaimer: This is not officially supported by VMware/Broadcom, use at your own risk.

First, generate a CSR for SDDC Manager in the normal way using the SDDC Manager UI.

Download the CSR as sfo-vcf01.sfo.rainpole.io.csr

SSH to the Management vCenter Server and do the following:

    mkdir /tmp/certs
    # upload the CSR to /tmp/certs
    cd /tmp/certs
    vi /tmp/certs/cert.cfg
    
    # cert.cfg contents (replace the FQDN as appropriate)
    [ req ]
    req_extensions = v3_req
    
    [ v3_req ]
    extendedKeyUsage = serverAuth, clientAuth
    authorityKeyIdentifier=keyid,issuer
    authorityInfoAccess = caIssuers;URI:https://sfo-m01-vc01.sfo.rainpole.io/afd/vecs/ca
    
    Save /tmp/certs/cert.cfg

    On the management vCenter Server, generate the cert

    openssl x509 -req -days 365 -in sfo-vcf01.sfo.rainpole.io.csr -out sfo-vcf01.sfo.rainpole.io.crt -CA /var/lib/vmware/vmca/root.cer -CAkey /var/lib/vmware/vmca/privatekey.pem -extensions v3_req -CAcreateserial -extfile cert.cfg
    

    Create a certificate chain

    cat sfo-vcf01.sfo.rainpole.io.crt >> sfo-vcf01.sfo.rainpole.io.chain.pem
    cat /var/lib/vmware/vmca/root.cer >> sfo-vcf01.sfo.rainpole.io.chain.pem
    

    SSH to SDDC Manager to install the cert

    su
    cp /etc/ssl/private/vcf_https.key /etc/ssl/private/old_vcf_https.key
    mv /var/opt/vmware/vcf/commonsvcs/workdir/vcf_https.key /etc/ssl/private/vcf_https.key
    cp /etc/ssl/certs/vcf_https.crt /etc/ssl/certs/old_vcf_https.crt
    rm /etc/ssl/certs/vcf_https.crt
    
    SCP sfo-vcf01.sfo.rainpole.io.chain.pem to /etc/ssl/certs/
    
    mv /etc/ssl/certs/sfo-vcf01.sfo.rainpole.io.chain.pem /etc/ssl/certs/vcf_https.crt
    chmod 644 /etc/ssl/certs/vcf_https.crt
    chmod 640 /etc/ssl/private/vcf_https.key
    nginx -t && systemctl reload nginx

You should now have renewed your VMCA-signed certificate on SDDC Manager.

Adding LDAP Users to vSphere SSO Groups Using PowerShell

I got a query from a customer asking how to add a user from an LDAP directory to an SSO group programmatically. There is no support for this in native PowerCLI that I am aware of, but there is an open-source module called VMware.vSphere.SsoAdmin which can be used to achieve the goal. I checked with my colleague Gary Blake, and he had an example in the Power Validated Solutions module that I was able to reference.

First off, you need to install the VMware.vSphere.SsoAdmin module. This can be done from the PowerShell Gallery.

    Install-Module VMware.vSphere.SsoAdmin

Once it is installed, you can run the following to add an LDAP user to an SSO group:

    $vcFqdn = 'sfo-m01-vc01.sfo.rainpole.io'
    $vcUser = 'administrator@vsphere.local'
    $vcPassword = 'VMw@re1!'
    $ldapDomain = 'sfo.rainpole.io'
    $ldapUser = 'ldap_user'
    $ssoDomain = 'vsphere.local'
    $ssoGroup = 'administrators'
    
    $ssoConnection = Connect-SsoAdminServer -Server $vcFqdn -User $vcUser -Password $vcPassword -SkipCertificateCheck
    $targetGroup = Get-SsoGroup -Domain $ssoDomain -Name $ssoGroup -Server $ssoConnection
    $ldapUserToAdd = Get-SsoPersonUser -Domain $ldapDomain -Name $ldapUser -Server $ssoConnection
    $ldapUserToAdd | Add-UserToSsoGroup -TargetGroup $targetGroup

Running the code above results in the LDAP user being added to the SSO administrators group.

Cleanup Failed Credential Tasks in VMware Cloud Foundation

I have covered how to clean up general failed tasks in VMware Cloud Foundation in a previous post. Another type of task that can be in a failed state is a credential rotation operation. Credential operations can fail for a number of reasons (for example, the underlying component is unreachable at the time of the operation), and this type of failed task is a blocking task: you cannot perform another credential operation until you clean up or cancel the failed one. The script below leverages the PowerVCF cmdlet Get-VCFCredentialTask to discover failed credential tasks and Stop-VCFCredentialTask to clean them up. As with all scripts, please test thoroughly in a lab before using it in production.

    # Script to cleanup failed credential tasks in SDDC Manager
    # Written by Brian O'Connell - Staff II Solutions Architect @ VMware

    # User Variables
    # SDDC Manager FQDN. This is the target that is queried for failed tasks
    $sddcManagerFQDN = "sfo-vcf01.sfo.rainpole.io"
    # SDDC Manager API user. This is the user that is used to query for failed tasks. Must have the SDDC Manager ADMIN role
    $sddcManagerAPIUser = "administrator@vsphere.local"
    $sddcManagerAPIPassword = "VMw@re1!"

    # DO NOT CHANGE ANYTHING BELOW THIS LINE
    #########################################
    # Set TLS to 1.2 to avoid certificate mismatch errors
    [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
    # Install PowerVCF if not already installed
    if (!(Get-InstalledModule -Name PowerVCF -MinimumVersion 2.4.0 -ErrorAction SilentlyContinue)) {
        Install-Module -Name PowerVCF -MinimumVersion 2.4.0 -Force
    }
    # Request a VCF token using PowerVCF
    Request-VCFToken -fqdn $sddcManagerFQDN -username $sddcManagerAPIUser -password $sddcManagerAPIPassword
    # Retrieve a list of failed credential tasks
    $failedTaskIDs = @()
    $ids = (Get-VCFCredentialTask -status "Failed").id
    Foreach ($id in $ids) {
        $failedTaskIDs += ,$id
    }
    # Cleanup the failed tasks
    Foreach ($taskID in $failedTaskIDs) {
        Stop-VCFCredentialTask -id $taskID
        # Verify the task was deleted
        Try {
            $verifyTaskDeleted = (Get-VCFCredentialTask -id $taskID)
            if (!$verifyTaskDeleted) {
                Write-Output "Task ID $taskID Deleted Successfully"
            }
        }
        catch {
            Write-Error "Something went wrong. Please check your SDDC Manager state"
        }
    }

Install HashiCorp Terraform on a Photon OS Appliance

HashiCorp Terraform is not currently available in the Photon OS repository. If you would like to install Terraform on a Photon OS appliance, you can use the script below. Note: the versions for Go and Terraform that I have included are current at the time of writing. Thanks to my colleague Ryan Johnson, who shared this method with me some time ago for another project.

    #!/usr/bin/env bash
    
    # Versions
    GO_VERSION="1.21.4"
    TERRAFORM_VERSION="1.6.3"
    
    # Arch
    if [[ $(uname -m) == "x86_64" ]]; then
      LINUX_ARCH="amd64"
    elif [[ $(uname -m) == "aarch64" ]]; then
      LINUX_ARCH="arm64"
    fi
    
    # Directory
    if ! [[ -d ~/code ]]; then
      mkdir ~/code
    fi
    
    # Go
    wget -q -O go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz https://golang.org/dl/go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
    tar -C /usr/local -xzf go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
    PATH=$PATH:/usr/local/go/bin
    go version
    rm go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
    export GOPATH=${HOME}/code/go
    
    # HashiCorp
    wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
    unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
    rm ./*.zip

Terraform Module for Deploying VMware Cloud Foundation VI Workload Domains

I have been working a lot with Terraform lately, and in particular the Terraform Provider For VMware Cloud Foundation. As I covered previously, the provider is still in development but is available to be tested and used in your VMware Cloud Foundation instances.

I spent this week at VMware Explore in Barcelona talking with our customers about their automation journey and the tools they are using for configuration management. Terraform came up in almost every conversation, and specifically the topic of Terraform modules. Terraform modules are essentially a set of standard configuration files that can be reused for consistent, repeatable deployments. In an effort to standardise my VI Workload domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload domains.

The module is available on GitHub here and is also published to the Terraform Registry here. Below is an example of using the module to deploy a VI Workload domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types etc., all you need to do is pass variable values.

    # main.tf
    
    module "vidomain" {
    source  = "LifeOfBrianOC/vidomain"
    version = "0.1.0"
    
    sddc_manager_fqdn     = "sfo-vcf01.sfo.rainpole.io"
    sddc_manager_username = "administrator@vsphere.local"
    sddc_manager_password = "VMw@re1!"
    allow_unverified_tls  = "true"
    
    network_pool_name                     = "sfo-w01-np"
    network_pool_storage_gateway          = "172.16.13.1"
    network_pool_storage_netmask          = "255.255.255.0"
    network_pool_storage_mtu              = "8900"
    network_pool_storage_subnet           = "172.16.13.0"
    network_pool_storage_type             = "VSAN"
    network_pool_storage_vlan_id          = "1633"
    network_pool_storage_ip_pool_start_ip = "172.16.13.101"
    network_pool_storage_ip_pool_end_ip   = "172.16.13.108"
    
    network_pool_vmotion_gateway          = "172.16.12.1"
    network_pool_vmotion_netmask          = "255.255.255.0"
    network_pool_vmotion_mtu              = "8900"
    network_pool_vmotion_subnet           = "172.16.12.0"
    network_pool_vmotion_vlan_id          = "1632"
    network_pool_vmotion_ip_pool_start_ip = "172.16.12.101"
    network_pool_vmotion_ip_pool_end_ip   = "172.16.12.108"
    
    esx_host_storage_type = "VSAN"
    esx_host1_fqdn        = "sfo01-w01-esx01.sfo.rainpole.io"
    esx_host1_username    = "root"
    esx_host1_pass        = "VMw@re1!"
    esx_host2_fqdn        = "sfo01-w01-esx02.sfo.rainpole.io"
    esx_host2_username    = "root"
    esx_host2_pass        = "VMw@re1!"
    esx_host3_fqdn        = "sfo01-w01-esx03.sfo.rainpole.io"
    esx_host3_username    = "root"
    esx_host3_pass        = "VMw@re1!"
    esx_host4_fqdn        = "sfo01-w01-esx04.sfo.rainpole.io"
    esx_host4_username    = "root"
    esx_host4_pass        = "VMw@re1!"
    
    vcf_domain_name                    = "sfo-w01"
    vcf_domain_vcenter_name            = "sfo-w01-vc01"
    vcf_domain_vcenter_datacenter_name = "sfo-w01-dc01"
    vcenter_root_password              = "VMw@re1!"
    vcenter_vm_size                    = "small"
    vcenter_storage_size               = "lstorage"
    vcenter_ip_address                 = "172.16.11.130"
    vcenter_subnet_mask                = "255.255.255.0"
    vcenter_gateway                    = "172.16.11.1"
    vcenter_fqdn                       = "sfo-w01-vc01.sfo.rainpole.io"
    vsphere_cluster_name               = "sfo-w01-cl01"
    vds_name                           = "sfo-w01-cl01-vds01"
    vsan_datastore_name                = "sfo-w01-cl01-ds-vsan01"
    vsan_failures_to_tolerate          = "1"
    esx_vmnic0                         = "vmnic0"
    vmnic0_vds_name                    = "sfo-w01-cl01-vds01"
    esx_vmnic1                         = "vmnic1"
    vmnic1_vds_name                    = "sfo-w01-cl01-vds01"
    portgroup_management_name          = "sfo-w01-cl01-vds01-pg-mgmt"
    portgroup_vsan_name                = "sfo-w01-cl01-vds01-pg-vsan"
    portgroup_vmotion_name             = "sfo-w01-cl01-vds01-pg-vmotion"
    esx_license_key                    = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
    vsan_license_key                   = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
    
    nsx_vip_ip                    = "172.16.11.131"
    nsx_vip_fqdn                  = "sfo-w01-nsx01.sfo.rainpole.io"
    nsx_manager_admin_password    = "VMw@re1!VMw@re1!"
    nsx_manager_form_factor       = "small"
    nsx_license_key               = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
    nsx_manager_node1_name        = "sfo-w01-nsx01a"
    nsx_manager_node1_ip_address  = "172.16.11.132"
    nsx_manager_node1_fqdn        = "sfo-w01-nsx01a.sfo.rainpole.io"
    nsx_manager_node1_subnet_mask = "255.255.255.0"
    nsx_manager_node1_gateway     = "172.16.11.1"
    nsx_manager_node2_name        = "sfo-w01-nsx01b"
    nsx_manager_node2_ip_address  = "172.16.11.133"
    nsx_manager_node2_fqdn        = "sfo-w01-nsx01b.sfo.rainpole.io"
    nsx_manager_node2_subnet_mask = "255.255.255.0"
    nsx_manager_node2_gateway     = "172.16.11.1"
    nsx_manager_node3_name        = "sfo-w01-nsx01c"
    nsx_manager_node3_ip_address  = "172.16.11.134"
    nsx_manager_node3_fqdn        = "sfo-w01-nsx01c.sfo.rainpole.io"
    nsx_manager_node3_subnet_mask = "255.255.255.0"
    nsx_manager_node3_gateway     = "172.16.11.1"
    geneve_vlan_id                = "1634"
    }

Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First, initialise the environment, which will pull the required module version:

    terraform init

Then create and apply the plan:

    terraform plan -out=create-vi-wld
    terraform apply create-vi-wld