HashiCorp Terraform is not currently available in the Photon OS repository. If you would like to install Terraform on a Photon OS appliance, you can use the script below. Note: The versions for Go and Terraform that I have included are current at the time of writing. Thanks to my colleague Ryan Johnson, who shared this method with me some time ago for another project.
#!/usr/bin/env bash
# Versions
GO_VERSION="1.21.4"
TERRAFORM_VERSION="1.6.3"
# Arch
if [[ $(uname -m) == "x86_64" ]]; then
LINUX_ARCH="amd64"
elif [[ $(uname -m) == "aarch64" ]]; then
LINUX_ARCH="arm64"
fi
# Directory
if ! [[ -d ~/code ]]; then
mkdir ~/code
fi
# Go
wget -q -O go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz https://golang.org/dl/go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
tar -C /usr/local -xzf go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
PATH=$PATH:/usr/local/go/bin
go version
rm go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
export GOPATH=${HOME}/code/go
# HashiCorp
wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
rm ./*.zip
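One thing to note: the script only adds Go to the PATH of the shell it runs in, while Terraform is unzipped into /usr/local/bin, which is already on the PATH. If you also want the go binary available in future sessions, persist the export (assuming bash is your login shell):
echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bash_profile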
I have been working a lot with Terraform lately, and in particular with the Terraform Provider for VMware Cloud Foundation. As I covered previously, the provider is still in development but is available to be tested and used in your VMware Cloud Foundation instances.
I spent this week at VMware Explore in Barcelona and have been talking with our customers about their automation journey and what tools they are using for configuration management. Terraform came up in almost all conversations, and the topic of Terraform modules specifically. Terraform modules are essentially a reusable set of standard configuration files that can be used for consistent, repeatable deployments. In an effort to standardise my VI Workload domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload domains.
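If you haven't built a module before, it helps to know that a module is just a directory of .tf files with a defined set of inputs and outputs. A typical layout (generic, not specific to my module) looks like this:
my-module/
├── main.tf       # resources and data sources
├── variables.tf  # input variables the caller must/can set
├── outputs.tf    # values exported back to the caller
└── versions.tf   # required Terraform and provider versions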
The module is available on GitHub here and is also published to the Terraform Registry here. Below is an example of using the module to deploy a VI Workload domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types and so on, all you need to do is pass variable values.
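The original example block isn't reproduced here, but calling the module follows the standard Terraform module pattern sketched below. The source address, version, and input names are placeholders (the module defines its own variables), so check the registry page for the exact inputs:
module "vi_workload_domain" {
  source  = "<namespace>/<module-name>/<provider>" # placeholder - use the address from the registry page
  version = "x.y.z"                                # placeholder - pin to the latest module version

  # Illustrative inputs only - pass values for the variables the module actually defines
  sddc_manager_fqdn     = "sfo-vcf01.sfo.rainpole.io"
  sddc_manager_username = "administrator@vsphere.local"
  sddc_manager_password = var.sddc_manager_password
  domain_name           = "sfo-w01"
}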
Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First, we initialise the environment, which pulls the required module version:
terraform init
Then create and apply the plan:
terraform plan -out=create-vi-wld
terraform apply create-vi-wld
As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I’ve used this provider in the past to deploy the NSX Manager appliance.
Check out the other posts on Terraform with VMware Cloud Foundation here:
Note that the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo.
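As a minimal sketch (the variable names here are assumptions, since the full variables.tf isn't reproduced in this post), the two files look something like this:
# terraform.tfvars - holds the sensitive values, never committed
vsphere_user     = "administrator@vsphere.local"
vsphere_password = "VMw@re1!"

# .gitignore
*.tfvars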
Now that we have all of our variables defined we can define our main.tf to perform the deployment. As part of this, we first need to gather some data from the target vCenter Server, so we know where to deploy the appliance.
# main.tf
# Data source for vCenter Datacenter
data "vsphere_datacenter" "datacenter" {
name = var.data_center
}
# Data source for vCenter Cluster
data "vsphere_compute_cluster" "cluster" {
name = var.cluster
datacenter_id = data.vsphere_datacenter.datacenter.id
}
# Data source for vCenter Datastore
data "vsphere_datastore" "datastore" {
name = var.datastore
datacenter_id = data.vsphere_datacenter.datacenter.id
}
# Data source for vCenter Portgroup
data "vsphere_network" "mgmt" {
name = var.mgmt_pg
datacenter_id = data.vsphere_datacenter.datacenter.id
}
# Data source for vCenter Resource Pool. In our case we will use the root resource pool
data "vsphere_resource_pool" "pool" {
name = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
datacenter_id = data.vsphere_datacenter.datacenter.id
}
# Data source for ESXi host to deploy to
data "vsphere_host" "host" {
name = var.compute_host
datacenter_id = data.vsphere_datacenter.datacenter.id
}
# Data source for the OVF to read the required OVF Properties
data "vsphere_ovf_vm_template" "ovfLocal" {
name = var.vm_name
resource_pool_id = data.vsphere_resource_pool.pool.id
datastore_id = data.vsphere_datastore.datastore.id
host_system_id = data.vsphere_host.host.id
local_ovf_path = var.local_ovf_path
ovf_network_map = {
"Network 1" = data.vsphere_network.mgmt.id
}
}
# Deployment of VM from Local OVA
resource "vsphere_virtual_machine" "cb01" {
name = var.vm_name
datacenter_id = data.vsphere_datacenter.datacenter.id
datastore_id = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
host_system_id = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
resource_pool_id = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
num_cpus = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
memory = data.vsphere_ovf_vm_template.ovfLocal.memory
guest_id = data.vsphere_ovf_vm_template.ovfLocal.guest_id
scsi_type = data.vsphere_ovf_vm_template.ovfLocal.scsi_type
wait_for_guest_net_timeout = 5
ovf_deploy {
allow_unverified_ssl_cert = true
local_ovf_path = var.local_ovf_path
disk_provisioning = "thin"
ovf_network_map = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
}
vapp {
properties = {
"ip0" = var.ip0,
"netmask0" = var.netmask0,
"gateway" = var.gateway,
"dns" = var.dns,
"domain" = var.domain,
"ntp" = var.ntp,
"searchpath" = var.searchpath,
"ADMIN_USERNAME" = "admin",
"ADMIN_PASSWORD" = var.ADMIN_PASSWORD,
"ROOT_PASSWORD" = var.ROOT_PASSWORD,
"hostname" = var.hostname
}
}
lifecycle {
ignore_changes = [
#vapp # Enable this to ignore all vapp properties if the plan is re-run
vapp[0].properties["ADMIN_PASSWORD"],
vapp[0].properties["ROOT_PASSWORD"],
host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
]
}
}
Now we can run the following to initialise Terraform and the required vSphere provider:
terraform init
Once the provider is initialised, we can create a Terraform plan to ensure our configuration is valid.
terraform plan -out=DeployCB
Now that we have a valid configuration, we can apply our plan to deploy the Cloud Builder appliance.
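terraform apply DeployCB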
While VMware Explore EMEA is in full swing, VMware Cloud Foundation 5.1 went GA! As of yesterday, you can review the 5.1 design guide to see the exciting additions to VMware Cloud Foundation 5.1. Some of the main highlights are listed below.
Support for vSAN ESA: vSAN ESA is an alternative, single-tier architecture designed from the ground up for NVMe-based platforms to deliver higher performance with more predictable I/O latencies, higher space efficiency, per-object based data services, and native, high-performance snapshots.
Non-DHCP option for Tunnel Endpoint (TEP) IP assignment: SDDC Manager now provides the option to select Static or DHCP-based IP assignments to Host TEPs for stretched clusters and L3 aware clusters.
vSphere Distributed Services engine for Ready nodes: AMD-Pensando and NVIDIA BlueField-2 DPUs are now supported. Offloading the Virtual Distributed Switch (VDS) and NSX network and security functions to the hardware provides significant performance improvements for low latency and high bandwidth applications. NSX distributed firewall processing is also offloaded from the server CPUs to the network silicon.
Multi-pNIC/Multi-vSphere Distributed Switch UI enhancements: VCF users can configure complex networking configurations, including more vSphere Distributed Switch and NSX switch-related configurations, through the SDDC Manager UI.
Distributed Virtual Port Group Separation for management domain appliances: Enables traffic isolation between management VMs (such as SDDC Manager, NSX Manager, and vCenter) and ESXi Management VMkernel interfaces.
Support for vSphere Lifecycle Manager images in management domain: VCF users can deploy the management domain using vSphere Lifecycle Manager (vLCM) images during new VCF instance deployment.
Mixed-mode Support for Workload Domains: A VCF instance can exist in a mixed BOM state where the workload domains are on different VCF 5.x versions. Note: The management domain should be on the highest version in the instance.
Asynchronous update of the pre-check files: The upgrade pre-checks can be updated asynchronously with new pre-checks using a pre-check file provided by VMware.
Workload domain NSX integration: Support for multiple NSX-enabled VDSs for Distributed Firewall use cases.
Tier-0/1 optional for VCF Edge cluster: When creating an Edge cluster with the VCF API, the Tier-0 and Tier-1 gateways are now optional.
VCF Edge nodes support static or pooled IP: When creating or expanding an Edge cluster using VCF APIs, Edge node TEP configuration may come from an NSX IP pool or be specified statically as in earlier releases.
Support for mixed license deployment: A combination of keyed and keyless licenses can be used within the same VCF instance.
Integration with Workspace ONE Broker: Provides identity federation and SSO across vCenter, NSX, and SDDC Manager. VCF administrators can add Okta to Workspace ONE Broker as a Day-N operation using the SDDC Manager UI.
VMware vRealize rebranding: VMware recently renamed the vRealize Suite of products to the VMware Aria Suite. See the Aria Naming Updates blog post for more details.
VMware Validated Solutions: All VMware Validated Solutions are updated to support VMware Cloud Foundation 5.1. Visit VMware Validated Solutions for the updated guides.
Following on from my VMware Cloud Foundation Terraform Provider introduction post here, I wanted to start by using it to create a new VCF instance (or perform a VCF bring-up).
As of writing this post I am using version 0.5.0 of the provider.
First off, we need to define some variables to be used in our plan. Here is a copy of the variables.tf I am using. For reference, I am using the default values in the VCF Planning & Preparation Workbook for my configuration. Note "sensitive = true" on the password and license key variables to stop them from showing up on the console and in logs.
variable "cloud_builder_username" {
description = "Username to authenticate to CloudBuilder"
default = "admin"
}
variable "cloud_builder_password" {
description = "Password to authenticate to CloudBuilder"
default = "VMw@re1!"
sensitive = true
}
variable "cloud_builder_host" {
description = "Fully qualified domain name or IP address of the CloudBuilder"
default = "sfo-cb01.sfo.rainpole.io"
}
variable "sddc_manager_root_user_password" {
description = "Root user password for the SDDC Manager VM. Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length"
default = "VMw@re1!"
sensitive = true
}
variable "sddc_manager_secondary_user_password" {
description = "Second user (vcf) password for the SDDC Manager VM. Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length."
default = "VMw@re1!"
sensitive = true
}
variable "vcenter_root_password" {
description = "root password for the vCenter Server Appliance (8-20 characters)"
default = "VMw@re1!"
sensitive = true
}
variable "nsx_manager_admin_password" {
description = "NSX admin password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
default = "VMw@re1!VMw@re1!"
sensitive = true
}
variable "nsx_manager_audit_password" {
description = "NSX audit password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
default = "VMw@re1!VMw@re1!"
sensitive = true
}
variable "nsx_manager_root_password" {
description = " NSX Manager root password. Password should have 1) At least eight characters, 2) At least one lower-case letter, 3) At least one upper-case letter 4) At least one digit 5) At least one special character, 6) At least five different characters , 7) No dictionary words, 6) No palindromes"
default = "VMw@re1!VMw@re1!"
sensitive = true
}
variable "esx_host1_pass" {
description = "Password to authenticate to the ESXi host 1"
default = "VMw@re1!"
sensitive = true
}
variable "esx_host2_pass" {
description = "Password to authenticate to the ESXi host 2"
default = "VMw@re1!"
sensitive = true
}
variable "esx_host3_pass" {
description = "Password to authenticate to the ESXi host 3"
default = "VMw@re1!"
sensitive = true
}
variable "esx_host4_pass" {
description = "Password to authenticate to the ESXi host 4"
default = "VMw@re1!"
sensitive = true
}
variable "nsx_license_key" {
description = "NSX license to be used"
default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
sensitive = true
}
variable "vcenter_license_key" {
description = "vCenter license to be used"
default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
sensitive = true
}
variable "vsan_license_key" {
description = "vSAN license key to be used"
default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
sensitive = true
}
variable "esx_license_key" {
description = "ESXi license key to be used"
default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
sensitive = true
}
Next, we need our main.tf file that defines what we want to do: in this case, perform a VCF bring-up. For now, I'm using a mix of variables from the above variables.tf file and hard-coded values in my main.tf to achieve my goal. I will follow up with some better practices in a later post.
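The full bring-up specification is long, so below is just a trimmed skeleton of how main.tf starts: declaring the provider, pointing it at Cloud Builder, and opening the vcf_instance resource that models the bring-up. Treat the attribute names as a sketch against provider version 0.5.0 and verify them against the provider documentation:
terraform {
  required_providers {
    vcf = {
      source  = "vmware/vcf"
      version = "0.5.0"
    }
  }
}

# The provider authenticates to Cloud Builder to drive the bring-up
provider "vcf" {
  cloud_builder_host     = var.cloud_builder_host
  cloud_builder_username = var.cloud_builder_username
  cloud_builder_password = var.cloud_builder_password
  allow_unverified_tls   = true
}

# The bring-up itself is modelled as a vcf_instance resource (spec truncated)
resource "vcf_instance" "sddc_1" {
  instance_id = "sfo-m01"
  # ... DNS, NTP, vCenter, NSX, and host specs using the variables above ...
}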
HashiCorp Terraform has become an industry-standard, infrastructure-as-code & desired-state configuration tool for managing on-premises and cloud-based entities. If you are not familiar with Terraform, I've covered some early general learnings on Terraform in some posts here & here. The internal engineering team is working on a Terraform provider for VCF, so I decided to give it a spin to review its capabilities & test drive it in the lab.
First off, here are the VCF operations the provider is capable of supporting today:
Deploying a new VCF instance (bring-up)
Commissioning hosts
Creating network pools
Deploying a new VI Workload domain
Creating clusters
Expanding clusters
Adding users
New functionality is being added every week, and as with all new initiatives like this, customer consumption and adoption will drive innovation and progress.
The GitHub repo contains some great example files to get you started. I am going to do a few blog posts on what I've learned so far, but for now, here are the important links you need if you would like to take a look at the provider:
Before you can deploy a vSphere Lifecycle Manager (vLCM) image-based cluster in VMware Cloud Foundation, you must first import an image into the Image Management Inventory in SDDC Manager. You can do this via the SDDC Manager UI for a pre-existing cluster.
Or you can now use PowerVCF to import the image, thanks to the addition of New-VCFPersonality (vLCM images are known as personalities in VCF, hence the name of the cmdlet).
The sequence of events to be able to import an image is as follows:
1. Extract a vLCM image from a host that you wish to use in the workload domain. The host doesn't need to be in the vCenter or SDDC Manager inventory.
2. Create a temporary cluster in vCenter (must be created in a VCF workload domain) and assign the image from the previous step.
3. Import the image from the source cluster into SDDC Manager.
To achieve these steps, we can use PowerCLI and PowerVCF:
# Variables
$sourceHostUrl = "https://sfo01-w01-esx01.sfo.rainpole.io"
$sourceHostBuild = "21495797"
$sourceHostRootPassword = "VMw@re1!"
$vcenterFQDN = "sfo-m01-vc01.sfo.rainpole.io"
$ssoUsername = "administrator@vsphere.local"
$ssoPassword = "VMw@re1!"
$vcenterDC = "sfo-m01-dc01"
$sddcManagerFQDN = "sfo-vcf01.sfo.rainpole.io"
# Retrieve the source host thumbprint
$response = [System.Net.WebRequest]::Create($sourceHostUrl)
$response.GetResponse()
$cert = $response.ServicePoint.Certificate
$sourceHostThumbprint = $cert.GetCertHashString() -replace '(..(?!$))','$1:'
# Connect to vCenter and import the image from the source host to the depot
Connect-VIServer -server $vcenterFQDN -user $ssoUsername -password $ssoPassword
$OfflineHostCredentials = Initialize-SettingsDepotsOfflineHostCredentials -HostName $sourceHostUrl -UserName "root" -Password $sourceHostRootPassword -Port 443 -SslThumbPrint $sourceHostThumbprint
$OfflineConnectionSpec = Initialize-SettingsDepotsOfflineConnectionSpec -AuthType "USERNAME_PASSWORD" -HostCredential $OfflineHostCredentials
Invoke-CreateFromHostDepotsOfflineAsync -SettingsDepotsOfflineConnectionSpec $OfflineConnectionSpec
# Create a temporary cluster and assign the image
$LcmImage = Get-LcmImage -Type BaseImage | where {$_.Version -match $sourceHostBuild}
$clusterID = (New-Cluster -Location $vcenterDC -Name 'vLCM-Cluster' -HAEnabled -DrsEnabled -BaseImage $LcmImage).ExtensionData.MoRef.Value
# Import the image to SDDC Manager
Request-VCFToken -fqdn $sddcManagerFQDN -username $ssoUsername -password $ssoPassword
$vCenterID = (Get-VCFvCenter | where {$_.fqdn -match $vcenterFQDN}).id
New-VCFPersonality -name $sourceHostBuild -vCenterId $vCenterID -clusterId $clusterID
That should import the new image into the SDDC Manager image repo for use when creating a vLCM image-based workload domain.
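To verify, you can list the images now in the SDDC Manager inventory (assuming you are still authenticated from the Request-VCFToken call above; Get-VCFPersonality is the companion cmdlet to New-VCFPersonality):
# List vLCM images (personalities) known to SDDC Manager
Get-VCFPersonality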
Since the introduction of subscription-based licensing for VMware Cloud Foundation (VCF+), there are now two licensing modes in VCF (Perpetual or Subscription). To make it easier to identify the subscription status of the system and each workload domain, we have added support for Get-VCFLicenseMode in the latest release of PowerVCF, 2.3.0.1004.
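Usage follows the familiar PowerVCF pattern: request an API token, then call the cmdlet. A quick sketch (I am assuming no additional parameters are needed for an instance-wide query; check Get-Help Get-VCFLicenseMode for the full syntax):
Request-VCFToken -fqdn "sfo-vcf01.sfo.rainpole.io" -username "administrator@vsphere.local" -password "VMw@re1!"
# Returns the licensing mode (Perpetual or Subscription) for the system and each workload domain
Get-VCFLicenseMode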
I was chatting with my colleague Paudie O'Riordan yesterday about PowerVCF as he was doing some testing internally, and he mentioned that a great addition would be the ability to find and clean up failed tasks in SDDC Manager. Some use cases for this would be cleaning up an environment before handing it off to a customer, or before recording a demo.
Currently there isn't a supported public API to delete a failed task, so you have to run a curl command on SDDC Manager with the task ID. Getting a list of failed tasks and then running a command to delete each one can take time. See Martin Gustafson's post on how to do it manually here.
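For reference, the manual cleanup is this one-liner, run on the SDDC Manager appliance as the vcf user with the failed task ID substituted in; it is the same call the script below automates:
curl -X DELETE 127.0.0.1/tasks/registrations/<task-id>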
I took a look at our existing code for retrieving tasks (and discovered a bug in the logic that is now fixed in PowerVCF 2.1.5!), and we have the ability to specify -status, so requesting a list of tasks with -status "Failed" returns just those. I put the script below together to retrieve a list of failed tasks, loop through them, and delete them. The script requires the following inputs:
SDDC Manager FQDN. This is the target that is queried for failed tasks
SDDC Manager API User. This is the user that is used to query for failed tasks. Must have the SDDC Manager ADMIN role
Password for the above user
Password for the SDDC Manager appliance vcf user. This is used to run the task deletion. This is not tracked in the credentials DB so we need to pass it.
Once the above variables are populated, the script does the following:
Checks for PowerVCF (minimum version 2.1.5) and installs if not present
Requests an API token from SDDC Manager
Queries SDDC Manager for the management domain vCenter Server details
Uses the management domain vCenter Server details to retrieve the SDDC Manager VM name
Queries SDDC Manager for a list of tasks in a failed state
Loops through the list of failed tasks and deletes them from SDDC Manager
Verifies the task is no longer present
Here is the script. It is also published here if you would like to enhance it.
# Script to cleanup failed tasks in SDDC Manager
# Written by Brian O'Connell - Staff Solutions Architect @ VMware
#User Variables
# SDDC Manager FQDN. This is the target that is queried for failed tasks
$sddcManagerFQDN = "lax-vcf01.lax.rainpole.io"
# SDDC Manager API User. This is the user that is used to query for failed tasks. Must have the SDDC Manager ADMIN role
$sddcManagerAPIUser = "administrator@vsphere.local"
$sddcManagerAPIPassword = "VMw@re1!"
# Password for the SDDC Manager appliance vcf user. This is used to run the task deletion
$sddcManagerVCFPassword = "VMw@re1!"
# DO NOT CHANGE ANYTHING BELOW THIS LINE
#########################################
# Set TLS to 1.2 to avoid certificate mismatch errors
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
# Install PowerVCF if not already installed
if (!(Get-InstalledModule -name PowerVCF -MinimumVersion 2.1.5 -ErrorAction SilentlyContinue)) {
Install-Module -Name PowerVCF -MinimumVersion 2.1.5 -Force
}
# Request a VCF Token using PowerVCF
Request-VCFToken -fqdn $sddcManagerFQDN -username $sddcManagerAPIUser -password $sddcManagerAPIPassword
# Disconnect all connected vCenters to ensure only the desired vCenter is available
if ($defaultviservers) {
foreach ($server in $defaultviservers) {
Disconnect-VIServer -Server $server -Confirm:$False
}
}
# Retrieve the Management Domain vCenter Server FQDN
$vcenterFQDN = ((Get-VCFWorkloadDomain | where-object {$_.type -eq "MANAGEMENT"}).vcenters.fqdn)
$vcenterUser = (Get-VCFCredential -resourceType "PSC").username
$vcenterPassword = (Get-VCFCredential -resourceType "PSC").password
# Retrieve SDDC Manager VM Name
if ($vcenterFQDN) {
Write-Output "Getting SDDC Manager Manager VM Name"
Connect-VIServer -server $vcenterFQDN -user $vcenterUser -password $vcenterPassword | Out-Null
$sddcmVMName = ((Get-VM * | Where-Object {$_.Guest.Hostname -eq $sddcManagerFQDN}).Name)
}
# Retrieve a list of failed tasks
$failedTaskIDs = @()
$ids = (Get-VCFTask -status "Failed").id
Foreach ($id in $ids) {
$failedTaskIDs += ,$id
}
# Cleanup the failed tasks
Foreach ($taskID in $failedTaskIDs) {
$scriptCommand = "curl -X DELETE 127.0.0.1/tasks/registrations/$taskID"
Write-Output "Deleting Failed Task ID $taskID"
$output = Invoke-VMScript -ScriptText $scriptCommand -vm $sddcmVMName -GuestUser "vcf" -GuestPassword $sddcManagerVCFPassword
# Verify the task was deleted
Try {
$verifyTaskDeleted = (Get-VCFTask -id $taskID)
if ($verifyTaskDeleted -eq "Task ID Not Found") {
Write-Output "Task ID $taskID Deleted Successfully"
}
}
catch {
Write-Error "Something went wrong. Please check your SDDC Manager state"
}
}
Disconnect-VIServer -server $vcenterFQDN -Confirm:$False