HashiCorp Terraform is not currently available in the Photon OS repositories. If you would like to install Terraform on a Photon OS appliance, you can use the script below. Note: the Go and Terraform versions included were current at the time of writing. Thanks to my colleague Ryan Johnson, who shared this method with me some time ago for another project.
#!/usr/bin/env bash
# Versions
GO_VERSION="1.21.4"
TERRAFORM_VERSION="1.6.3"

# Arch
if [[ $(uname -m) == "x86_64" ]]; then
    LINUX_ARCH="amd64"
elif [[ $(uname -m) == "aarch64" ]]; then
    LINUX_ARCH="arm64"
fi

# Directory
if ! [[ -d ~/code ]]; then
    mkdir ~/code
fi

# Go
wget -q -O go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz https://golang.org/dl/go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
tar -C /usr/local -xzf go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
PATH=$PATH:/usr/local/go/bin
go version
rm go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
export GOPATH=${HOME}/code/go

# HashiCorp
wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
rm ./*.zip
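The script installs Go and Terraform into /usr/local, so it needs to run as root (the default user on a Photon OS appliance). Assuming you have saved it as install-terraform.sh:

chmod +x install-terraform.sh
./install-terraform.sh
terraform version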
I have been working a lot with Terraform lately, in particular the Terraform Provider for VMware Cloud Foundation. As I covered previously, the provider is still in development but is available to be tested and used in your VMware Cloud Foundation instances.
I spent this week at VMware Explore in Barcelona talking with customers about their automation journeys and the tools they are using for configuration management. Terraform came up in almost every conversation, and Terraform modules in particular. A Terraform module is a set of standard configuration files that can be reused for consistent, repeatable deployments. In an effort to standardise my VI Workload Domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload domains.
The module is available on GitHub here and is also published to the Terraform Registry here. Below is an example of using the module to deploy a VI Workload Domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types, validation, and so on, all you need to do is pass in variable values.
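The shape of the call is a standard module block. In the sketch below, the source, version, and input names are illustrative assumptions; check the module's GitHub or registry page for the exact values it expects:

# main.tf
module "vi_workload_domain" {
  # Placeholders - use the source/version strings from the module's registry page
  source  = "<namespace>/<module-name>/<provider>"
  version = "<module-version>"

  # Illustrative inputs - the module defines the real variable names and types
  sddc_manager_fqdn     = "sfo-vcf01.sfo.rainpole.io"
  sddc_manager_username = var.sddc_manager_username
  sddc_manager_password = var.sddc_manager_password
  domain_name           = "sfo-w01"
  # ...plus the cluster, network, and licensing inputs the module requires
}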
Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First we initialise the environment, which pulls the required module version:
terraform init
Then create and apply the plan:
terraform plan -out=create-vi-wld
terraform apply create-vi-wld
As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I’ve used this provider in the past to deploy the NSX Manager appliance.
Check out the other posts on Terraform with VMware Cloud Foundation here:
Note that the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo.
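As a minimal sketch, assuming the credential variables are named vsphere_user and vsphere_password (match these to your variables.tf), the two files would look like this:

# terraform.tfvars
vsphere_user     = "administrator@vsphere.local"
vsphere_password = "VMw@re1!"

# .gitignore
*.tfvars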
Now that we have all of our variables defined, we can define our main.tf to perform the deployment. As part of this, we first need to gather some data from the target vCenter Server so we know where to deploy the appliance.
# main.tf

# Data source for vCenter Datacenter
data "vsphere_datacenter" "datacenter" {
  name = var.data_center
}

# Data source for vCenter Cluster
data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for vCenter Datastore
data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for vCenter Portgroup
data "vsphere_network" "mgmt" {
  name          = var.mgmt_pg
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for vCenter Resource Pool. In our case we will use the root resource pool
data "vsphere_resource_pool" "pool" {
  name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for ESXi host to deploy to
data "vsphere_host" "host" {
  name          = var.compute_host
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for the OVF to read the required OVF properties
data "vsphere_ovf_vm_template" "ovfLocal" {
  name             = var.vm_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  host_system_id   = data.vsphere_host.host.id
  local_ovf_path   = var.local_ovf_path
  ovf_network_map = {
    "Network 1" = data.vsphere_network.mgmt.id
  }
}

# Deployment of VM from local OVA
resource "vsphere_virtual_machine" "cb01" {
  name                 = var.vm_name
  datacenter_id        = data.vsphere_datacenter.datacenter.id
  datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
  host_system_id       = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
  resource_pool_id     = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
  num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
  num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
  memory               = data.vsphere_ovf_vm_template.ovfLocal.memory
  guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id
  scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type

  wait_for_guest_net_timeout = 5

  ovf_deploy {
    allow_unverified_ssl_cert = true
    local_ovf_path            = var.local_ovf_path
    disk_provisioning         = "thin"
    ovf_network_map           = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
  }

  vapp {
    properties = {
      "ip0"            = var.ip0,
      "netmask0"       = var.netmask0,
      "gateway"        = var.gateway,
      "dns"            = var.dns,
      "domain"         = var.domain,
      "ntp"            = var.ntp,
      "searchpath"     = var.searchpath,
      "ADMIN_USERNAME" = "admin",
      "ADMIN_PASSWORD" = var.ADMIN_PASSWORD,
      "ROOT_PASSWORD"  = var.ROOT_PASSWORD,
      "hostname"       = var.hostname
    }
  }

  lifecycle {
    ignore_changes = [
      # vapp # Enable this to ignore all vapp properties if the plan is re-run
      vapp[0].properties["ADMIN_PASSWORD"],
      vapp[0].properties["ROOT_PASSWORD"],
      host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
    ]
  }
}
Now we can run the following to initialise Terraform and the required vSphere provider:
terraform init
Once the provider is initialised, we can create a Terraform plan to ensure our configuration is valid.
terraform plan -out=DeployCB
Now that we have a valid configuration, we can apply our plan to deploy the Cloud Builder appliance:
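terraform apply DeployCB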
Virtual NVMe isn’t a new concept; it has been around since the vSphere 6.x days. As part of some lab work, I needed to automate adding an NVMe controller and some devices to a VM. This can be accomplished using the PowerCLI cmdlets for the vSphere API.
# NVMe using PowerCLI
$vcenterFQDN = "sfo-m01-vc01.sfo.rainpole.io"
$vcenterUsername = "administrator@vsphere.local"
$vcenterPassword = "VMw@re1!"
$vms = @("sfo01-w01-esx01","sfo01-w01-esx02","sfo01-w01-esx03","sfo01-w01-esx04")

# Install the required module
Install-Module VMware.Sdk.vSphere.vCenter.Vm

# Connect to vCenter
Connect-VIServer -Server $vcenterFQDN -User $vcenterUsername -Password $vcenterPassword

# Add an NVMe controller to each VM
foreach ($vmName in $vms) {
    $VmHardwareAdapterNvmeCreateSpec = Initialize-VmHardwareAdapterNvmeCreateSpec -Bus 0 -PciSlotNumber 0
    Invoke-CreateVmHardwareAdapterNvme -Vm (Get-VM $vmName).ExtensionData.MoRef.Value -VmHardwareAdapterNvmeCreateSpec $VmHardwareAdapterNvmeCreateSpec
}

# Add an NVMe device to each VM (capacity is in bytes; 274877906944 bytes = 256 GiB)
foreach ($vmName in $vms) {
    $VmHardwareDiskVmdkCreateSpec = Initialize-VmHardwareDiskVmdkCreateSpec -Capacity 274877906944
    $VmHardwareDiskCreateSpec = Initialize-VmHardwareDiskCreateSpec -Type "NVME" -NewVmdk $VmHardwareDiskVmdkCreateSpec
    Invoke-CreateVmHardwareDisk -Vm (Get-VM $vmName).ExtensionData.MoRef.Value -VmHardwareDiskCreateSpec $VmHardwareDiskCreateSpec
}
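To confirm the new devices landed, a quick check with the standard PowerCLI disk cmdlets works, for example:

# Verify the new disks on each VM
foreach ($vmName in $vms) {
    Get-VM $vmName | Get-HardDisk | Select-Object Parent, Name, CapacityGB
}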
I was chatting with my colleague Paudie O’Riordan yesterday about PowerVCF, as he was doing some testing internally, and he mentioned that a great addition would be the ability to find, and clean up, failed tasks in SDDC Manager. Some use cases for this: cleaning up an environment before handing it off to a customer, or before recording a demo.
Currently there isn’t a supported public API to delete a failed task, so you have to run a curl command on SDDC Manager with the task ID. Getting a list of failed tasks and then running a command to delete each one can take time. See Martin Gustafson’s post on how to do it manually here.
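For reference, the command (run on the SDDC Manager appliance itself, and used later in the script) takes this shape, where <task-id> is the ID of the failed task:

curl -X DELETE 127.0.0.1/tasks/registrations/<task-id>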
I took a look at our existing code for retrieving tasks (and discovered a bug in the logic that is now fixed in PowerVCF 2.1.5!) and we already have the ability to specify -status, so requesting a list of tasks with -status "failed" returns exactly what we need. I put the script below together to retrieve a list of failed tasks, loop through them, and delete them. The script requires the following inputs:
SDDC Manager FQDN. This is the target that is queried for failed tasks
SDDC Manager API User. This is the user that is used to query for failed tasks. Must have the SDDC Manager ADMIN role
Password for the above user
Password for the SDDC Manager appliance vcf user. This is used to run the task deletion; it is not tracked in the credentials DB, so we need to pass it in.
Once the above variables are populated, the script does the following:
Checks for PowerVCF (minimum version 2.1.5) and installs if not present
Requests an API token from SDDC Manager
Queries SDDC Manager for the management domain vCenter Server details
Uses the management domain vCenter Server details to retrieve the SDDC Manager VM name
Queries SDDC Manager for a list of tasks in a failed state
Loops through the list of failed tasks and deletes them from SDDC Manager
Verifies the task is no longer present
Here is the script. It is also published here if you would like to enhance it:
# Script to cleanup failed tasks in SDDC Manager
# Written by Brian O'Connell - Staff Solutions Architect @ VMware

# User Variables
# SDDC Manager FQDN. This is the target that is queried for failed tasks
$sddcManagerFQDN = "lax-vcf01.lax.rainpole.io"
# SDDC Manager API user. This user is used to query for failed tasks and must have the SDDC Manager ADMIN role
$sddcManagerAPIUser = "administrator@vsphere.local"
$sddcManagerAPIPassword = "VMw@re1!"
# Password for the SDDC Manager appliance vcf user. This is used to run the task deletion
$sddcManagerVCFPassword = "VMw@re1!"

# DO NOT CHANGE ANYTHING BELOW THIS LINE
#########################################

# Set TLS to 1.2 to avoid certificate mismatch errors
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

# Install PowerVCF if not already installed
if (!(Get-InstalledModule -Name PowerVCF -MinimumVersion 2.1.5 -ErrorAction SilentlyContinue)) {
    Install-Module -Name PowerVCF -MinimumVersion 2.1.5 -Force
}

# Request a VCF token using PowerVCF
Request-VCFToken -fqdn $sddcManagerFQDN -username $sddcManagerAPIUser -password $sddcManagerAPIPassword

# Disconnect all connected vCenters to ensure only the desired vCenter is available
if ($defaultviservers) {
    foreach ($server in $defaultviservers) {
        Disconnect-VIServer -Server $server -Confirm:$false
    }
}

# Retrieve the management domain vCenter Server FQDN and credentials
$vcenterFQDN = ((Get-VCFWorkloadDomain | Where-Object {$_.type -eq "MANAGEMENT"}).vcenters.fqdn)
$vcenterUser = (Get-VCFCredential -resourceType "PSC").username
$vcenterPassword = (Get-VCFCredential -resourceType "PSC").password

# Retrieve the SDDC Manager VM name
if ($vcenterFQDN) {
    Write-Output "Getting SDDC Manager VM Name"
    Connect-VIServer -Server $vcenterFQDN -User $vcenterUser -Password $vcenterPassword | Out-Null
    $sddcmVMName = ((Get-VM * | Where-Object {$_.Guest.Hostname -eq $sddcManagerFQDN}).Name)
}

# Retrieve a list of failed tasks
$failedTaskIDs = @()
$ids = (Get-VCFTask -status "Failed").id
foreach ($id in $ids) {
    $failedTaskIDs += $id
}

# Cleanup the failed tasks
foreach ($taskID in $failedTaskIDs) {
    $scriptCommand = "curl -X DELETE 127.0.0.1/tasks/registrations/$taskID"
    Write-Output "Deleting Failed Task ID $taskID"
    $output = Invoke-VMScript -ScriptText $scriptCommand -VM $sddcmVMName -GuestUser "vcf" -GuestPassword $sddcManagerVCFPassword
    # Verify the task was deleted
    Try {
        $verifyTaskDeleted = (Get-VCFTask -id $taskID)
        if ($verifyTaskDeleted -eq "Task ID Not Found") {
            Write-Output "Task ID $taskID Deleted Successfully"
        }
    }
    catch {
        Write-Error "Something went wrong. Please check your SDDC Manager state"
    }
}
Disconnect-VIServer -Server $vcenterFQDN -Confirm:$false
I’ve recently been doing a lot of work with VMware Site Recovery Manager (SRM) and vSphere Replication (vSR) with VMware Cloud Foundation. Earlier this year we (Ken Gould and I) published an early access design for Site Protection & Recovery for VMware Cloud Foundation 4.2. We have been working to refresh and enhance this design for a new release. Part of this effort includes adding some automation to assist with the manual steps and speed up time to deploy. SRM and vSR do not have publicly documented VAMI APIs, so we set about automating the configuration with a little reverse engineering.
As with most APIs, whether public or private, you must authenticate before you can run an API workflow, so the first task is figuring out how authentication works. Typically, if you hit F12 in your browser, you get a developer console that exposes what goes on behind the scenes in a browser session. To inspect the process, use the browser to perform a manual login and review the Headers and Response tabs in the developer view. This exposes the Request URL to use, the method (POST), and the required headers (accept: application/json).
The Response tab shows a sessionId, which can be passed in the headers of further configuration API calls as dr.config.service.sessionid.
With the above information, you can use an API client like Postman to retrieve a sessionId, using the captured Request URL and headers, with your VAMI admin username and password in JSON format in the body payload.
You can also use the same information to retrieve a sessionId using PowerShell. Below is a minimal sketch; the appliance name, the request URL path, and the body property names are placeholders to replace with the values captured in your own developer-tools session, and -SkipCertificateCheck requires PowerShell 7+.
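# Placeholders: replace the appliance name, URL path, and body property names with those captured in the browser
$appliance = "sfo-m01-srm01.sfo.rainpole.io"
$body = @{
    "admin_username" = "admin"
    "admin_password" = "VMw@re1!"
} | ConvertTo-Json
# Assumes the standard VAMI port 5480
$response = Invoke-RestMethod -Method POST -Uri "https://${appliance}:5480/<request-url-path>" -Headers @{ "accept" = "application/json" } -Body $body -ContentType "application/json" -SkipCertificateCheck
# The returned sessionId is passed on subsequent calls via the dr.config.service.sessionid header
$headers = @{ "accept" = "application/json"; "dr.config.service.sessionid" = $response.sessionId }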
Within a VMware Cloud Foundation instance, SDDC Manager is used to manage the lifecycle of passwords (or credentials). While it provides the ability to rotate them (either scheduled or manually), there is currently no easy way to check when a particular password is due to expire, which can lead to appliance root passwords expiring and causing all sorts of issues. The ability to monitor expiry is being worked on, but as a stop-gap I put together the script below, which leverages PowerVCF and a currently undocumented API for validating credentials.
The script has a function called Get-VCFPasswordExpiry that accepts the following parameters:
-fqdn (FQDN of the SDDC Manager)
-username (SDDC Manager Username – Must have the ADMIN role)
-password (SDDC Manager password)
-resourceType (Optional parameter to specify a resource type. If not passed, all resources are checked; if passed (e.g. VCENTER), only that resource type is checked. See the example invocation below.)
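For instance, checking only vCenter Server credentials would look like this (the connection values are illustrative):

Get-VCFPasswordExpiry -fqdn "lax-vcf01.lax.rainpole.io" -username "administrator@vsphere.local" -password "VMw@re1!" -resourceType VCENTER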
PowerVCF is a requirement. If you don’t already have it, run the following:
Install-Module -Name PowerVCF
The code takes a while to run, as it needs to do the following to check password expiry:
Connect to SDDC Manager to retrieve an API token
Retrieve a list of all credentials
Use the resourceID of each credential to perform a credential validation
Wait for the validation to complete
Parse the results for the expiry details
Add all the results to an array and present them in a table (kudos to Ken Gould for assistance with the presentation of this piece!). A condensed sketch of the loop follows.
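This sketch assumes PowerVCF is loaded and a token has been requested; the property path for the resource ID is an assumption, and the validation call itself is the undocumented API described above, so it is shown only as a placeholder:

$allCredentials = Get-VCFCredential
foreach ($credential in $allCredentials) {
    # Assumed property path for the resource ID on the credential object
    $resourceId = $credential.resource.resourceId
    # POST a credential validation for $resourceId via the undocumented API, poll the
    # returned task until it completes, then read the expiry details from the response:
    # $validationTaskResponse.validationChecks.passwordDetails.numberOfDaysToExpiry
}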
In this example script, I am returning all non-SERVICE user accounts regardless of expiry (SERVICE account passwords are system managed). You could get more granular by adding something like this to only display accounts with passwords due to expire in less than 14 days:
if ($validationTaskResponse.validationChecks.passwordDetails.numberOfDaysToExpiry -lt 14) {
    Write-Output "Password for username $($validationTaskResponse.validationChecks.username) expires in $($validationTaskResponse.validationChecks.passwordDetails.numberOfDaysToExpiry) days"
}
From time to time we all need to look at logs, whether it’s for a failed operation or to trace who did what, and when. In VMware Cloud Foundation there are many different logs, each serving a different purpose. It’s not always clear which log you should look at for each operation, so here is a useful reference table.
vRealize Suite Lifecycle Manager (vRSLCM) is a one-stop shop for lifecycle management (LCM) of your VMware vRealize Suite (vRA, vRB, vROps, vRLI). VMware Validated Designs leverage this via Cloud Builder for initial SDDC deployment, but it also covers upgrades from a single interface, reducing the need to jump between interfaces by bringing all LCM tasks into a single UI. This doesn’t come without its challenges, however, as vRSLCM is now responsible for aggregating all the install/upgrade logs and presenting them in a coherent manner to the user… which isn’t always the case. vRSLCM logs activity in /var/log/vlcm/vrlcm-server.log, but at best you get something like this:
Which, let’s face it, isn’t very helpful… or is it? At first glance it’s just a job ID, but thanks to @leahy_s in the VMware CMBU, I can now make this job ID give me more information in a much more structured way, similar to tail -f. Here’s how:
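One way to follow a specific job, assuming <job-id> is the ID from the log entry above, is to tail the server log and filter for that ID:

tail -f /var/log/vlcm/vrlcm-server.log | grep --line-buffered "<job-id>"

The --line-buffered flag keeps grep from holding output back while the log is still being written.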
From time to time a host may be unmanageable from vCenter / the web client, or you may only have console access. In my case, I was bringing up a Dell EMC VxRail. During initial bring-up, the ESXi hosts do not get a management IP if you do not have DHCP available, so management with the web client is not possible. I do have iDRAC access, though, so I can access the console. I needed to see where the VxRail Manager VM was running, as it comes up during an election process between the hosts. With console access, it is still possible to manage VMs using vim-cmd.
To discover all VMs on a host, run the following:
vim-cmd vmsvc/getallvms
Once you have the output, you can use the Vmid to manipulate the power state of a VM:
vim-cmd vmsvc/power.get 2
In my case, the VM I wanted was powered off. You can run the following to power it on:
vim-cmd vmsvc/power.on 2
And there you have it. Simple VM management using vim-cmd. Explore what else you can leverage it for here