HashiCorp Terraform is not currently available in the Photon OS repository. If you would like to install Terraform on a Photon OS appliance, you can use the script below. Note: the versions of Go and Terraform included here are current at the time of writing. Thanks to my colleague Ryan Johnson, who shared this method with me some time ago for another project.
#!/usr/bin/env bash
set -euo pipefail

# Versions (current at the time of writing)
GO_VERSION="1.21.4"
TERRAFORM_VERSION="1.6.3"

# Map the machine architecture to the naming used by Go and HashiCorp releases
if [[ $(uname -m) == "x86_64" ]]; then
  LINUX_ARCH="amd64"
elif [[ $(uname -m) == "aarch64" ]]; then
  LINUX_ARCH="arm64"
else
  echo "Unsupported architecture: $(uname -m)" >&2
  exit 1
fi

# Working directory
mkdir -p ~/code

# Go
wget -q https://golang.org/dl/go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
tar -C /usr/local -xzf go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version
rm go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
export GOPATH=${HOME}/code/go

# Terraform
wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
rm ./*.zip
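If you want to reuse or test the architecture detection above, it can be factored into a small helper function; `map_arch` is a name I've introduced for illustration, not part of the original script:

```shell
# Hypothetical helper mirroring the script's architecture mapping
map_arch() {
  case "$1" in
    x86_64)  echo "amd64" ;;
    aarch64) echo "arm64" ;;
    *)       echo "Unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Example usage: map the current machine's architecture
map_arch "$(uname -m)"
```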
I have been working a lot with Terraform lately, and in particular with the Terraform provider for VMware Cloud Foundation. As I covered previously, the provider is still in development but is available to test and use against your VMware Cloud Foundation instances.
I spent this week at VMware Explore in Barcelona and have been talking with our customers about their automation journey and what tools they are using for configuration management. Terraform came up in almost every conversation, and the topic of Terraform modules specifically. Terraform modules are essentially reusable sets of standard configuration files that enable consistent, repeatable deployments. In an effort to standardise my VI Workload domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload domains.
The module is available on GitHub here and is also published to the Terraform Registry here. Below is an example of using the module to deploy a VI Workload domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types, validation, and so on, all you need to do is pass in variable values.
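As a rough sketch, calling a module from the registry looks like the following. The source address, version constraint, and input names shown here are illustrative assumptions, not the module's actual interface; check the module's registry documentation for the real inputs:

```hcl
# Illustrative module call - replace source and inputs with the
# values documented for the published module
module "vi_workload_domain" {
  source  = "<namespace>/<module-name>/vcf" # registry address of the module
  version = "~> 1.0"                        # pin a module version

  # Hypothetical inputs - the real variable names are defined by the module
  sddc_manager_fqdn     = var.sddc_manager_fqdn
  sddc_manager_username = var.sddc_manager_username
  sddc_manager_password = var.sddc_manager_password
  domain_name           = "wld01"
}
```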
Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First, initialise the environment, which will pull the required module version:
terraform init
Then create and apply the plan:
terraform plan -out=create-vi-wld
terraform apply create-vi-wld
As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I’ve used this provider in the past to deploy the NSX Manager appliance.
Check out the other posts on Terraform with VMware Cloud Foundation here:
Deploy Cloud Builder with the vSphere Terraform Provider
As before, you first need to define your provider configuration.
Note that the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo.
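For reference, a minimal sketch of the provider configuration and the credential variables might look like this. The variable names (`vsphere_server`, `vsphere_user`, `vsphere_password`) are my own illustrative choices, not necessarily those in the original variables.tf:

```hcl
# provider.tf - vSphere provider configuration (sketch)
provider "vsphere" {
  vsphere_server       = var.vsphere_server
  user                 = var.vsphere_user
  password             = var.vsphere_password
  allow_unverified_ssl = true # lab environments with self-signed certs
}

# variables.tf - credentials with no defaults, supplied via terraform.tfvars
variable "vsphere_server" {
  type = string
}

variable "vsphere_user" {
  type      = string
  sensitive = true
}

variable "vsphere_password" {
  type      = string
  sensitive = true
}
```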
Now that we have all of our variables defined we can define our main.tf to perform the deployment. As part of this, we first need to gather some data from the target vCenter Server, so we know where to deploy the appliance.
# main.tf
# Data source for vCenter Datacenter
data "vsphere_datacenter" "datacenter" {
  name = var.data_center
}

# Data source for vCenter Cluster
data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for vCenter Datastore
data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for vCenter Portgroup
data "vsphere_network" "mgmt" {
  name          = var.mgmt_pg
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for vCenter Resource Pool. In our case we will use the root resource pool
data "vsphere_resource_pool" "pool" {
  name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for ESXi host to deploy to
data "vsphere_host" "host" {
  name          = var.compute_host
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

# Data source for the OVF to read the required OVF properties
data "vsphere_ovf_vm_template" "ovfLocal" {
  name             = var.vm_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  host_system_id   = data.vsphere_host.host.id
  local_ovf_path   = var.local_ovf_path
  ovf_network_map = {
    "Network 1" = data.vsphere_network.mgmt.id
  }
}

# Deployment of VM from local OVA
resource "vsphere_virtual_machine" "cb01" {
  name                       = var.vm_name
  datacenter_id              = data.vsphere_datacenter.datacenter.id
  datastore_id               = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
  host_system_id             = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
  resource_pool_id           = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
  num_cpus                   = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
  num_cores_per_socket       = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
  memory                     = data.vsphere_ovf_vm_template.ovfLocal.memory
  guest_id                   = data.vsphere_ovf_vm_template.ovfLocal.guest_id
  scsi_type                  = data.vsphere_ovf_vm_template.ovfLocal.scsi_type
  wait_for_guest_net_timeout = 5

  ovf_deploy {
    allow_unverified_ssl_cert = true
    local_ovf_path            = var.local_ovf_path
    disk_provisioning         = "thin"
    ovf_network_map           = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
  }

  vapp {
    properties = {
      "ip0"            = var.ip0,
      "netmask0"       = var.netmask0,
      "gateway"        = var.gateway,
      "dns"            = var.dns,
      "domain"         = var.domain,
      "ntp"            = var.ntp,
      "searchpath"     = var.searchpath,
      "ADMIN_USERNAME" = "admin",
      "ADMIN_PASSWORD" = var.ADMIN_PASSWORD,
      "ROOT_PASSWORD"  = var.ROOT_PASSWORD,
      "hostname"       = var.hostname
    }
  }

  lifecycle {
    ignore_changes = [
      # vapp # Enable this to ignore all vapp properties if the plan is re-run
      vapp[0].properties["ADMIN_PASSWORD"],
      vapp[0].properties["ROOT_PASSWORD"],
      host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
    ]
  }
}
Now we can run the following to initialise Terraform and the required vSphere provider:
terraform init
Once the provider is initialised, we can create a Terraform plan to ensure our configuration is valid.
terraform plan -out=DeployCB
Now that we have a valid configuration, we can apply our plan to deploy the Cloud Builder appliance.
terraform apply DeployCB
HashiCorp Terraform has become an industry-standard infrastructure-as-code and desired-state configuration tool for managing on-premises and cloud-based entities. If you are not familiar with Terraform, I've covered some early general learnings in some posts here & here. The internal engineering team is working on a Terraform provider for VCF, so I decided to give it a spin to review its capabilities and test drive it in the lab.
First off, here are the VCF operations the provider supports today:
Deploying a new VCF instance (bring-up)
Commissioning hosts
Creating network pools
Deploying a new VI Workload domain
Creating clusters
Expanding clusters
Adding users
New functionality is being added every week, and as with all new initiatives like this, customer consumption and adoption will drive innovation and progress.
The GitHub repo contains some great example files to get you started. I am going to do a few blog posts on what I've learned so far, but for now here are the important links you need if you would like to take a look at the provider.