Install HashiCorp Terraform on a Photon OS Appliance

HashiCorp Terraform is not currently available in the Photon OS repositories. If you would like to install Terraform on a Photon OS appliance, you can use the script below. Note: the Go and Terraform versions included were current at the time of writing. Thanks to my colleague Ryan Johnson, who shared this method with me some time ago for another project.

#!/usr/bin/env bash
# Assumes wget, tar and unzip are present (on Photon OS: tdnf install -y wget tar unzip)

# Versions (current at the time of writing)
GO_VERSION="1.21.4"
TERRAFORM_VERSION="1.6.3"

# Map the machine architecture to the Go/Terraform download naming
if [[ $(uname -m) == "x86_64" ]]; then
  LINUX_ARCH="amd64"
elif [[ $(uname -m) == "aarch64" ]]; then
  LINUX_ARCH="arm64"
else
  echo "Unsupported architecture: $(uname -m)" >&2
  exit 1
fi

# Working directory for GOPATH
if ! [[ -d ~/code ]]; then
  mkdir ~/code
fi

# Download and install Go to /usr/local
wget -q -O go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz https://golang.org/dl/go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
tar -C /usr/local -xzf go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version
rm go${GO_VERSION}.linux-${LINUX_ARCH}.tar.gz
export GOPATH=${HOME}/code/go

# Download and install Terraform to /usr/local/bin
wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
rm ./*.zip
terraform version
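
Save the script (for example as install-terraform.sh, a name used purely for illustration), make it executable, and run it as root, since it writes to /usr/local:

chmod +x install-terraform.sh
sudo ./install-terraform.sh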

Terraform Module for Deploying VMware Cloud Foundation VI Workload Domains

I have been working a lot with Terraform lately, and in particular with the Terraform Provider for VMware Cloud Foundation. As I covered previously, the provider is still in development but is available to test and use with your VMware Cloud Foundation instances.

I spent this week at VMware Explore in Barcelona talking with our customers about their automation journeys and the tools they use for configuration management. Terraform came up in almost every conversation, and the topic of Terraform modules in particular. Terraform modules are essentially a set of standard configuration files that can be reused for consistent, repeatable deployments. In an effort to standardise my VI Workload domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload domains.
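
If you have not worked with modules before, here is a minimal, hypothetical sketch of the idea: a module is just a directory of configuration files that declares its own input variables, and callers consume it with a module block. The vcf_network_pool resource and all names below are illustrative assumptions, not the actual contents of my module.

# modules/network-pool/variables.tf - the inputs the module accepts
variable "pool_name" {
  type = string
}

# modules/network-pool/main.tf - the resources the module creates
# (vcf_network_pool is used purely for illustration)
resource "vcf_network_pool" "pool" {
  name = var.pool_name
}

# Callers then reuse the module simply by passing variable values
module "np" {
  source    = "./modules/network-pool"
  pool_name = "sfo-w01-np"
}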

The module is available on GitHub here and is also published to the Terraform Registry here. Below is an example of using the module to deploy a VI Workload domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types and so on, all you need to do is pass in variable values.

# main.tf

module "vidomain" {
source= "LifeOfBrianOC/vidomain"
version = "0.1.0"

sddc_manager_fqdn     = "sfo-vcf01.sfo.rainpole.io"
sddc_manager_username = "administrator@vsphere.local"
sddc_manager_password = "VMw@re1!"
allow_unverified_tls  = "true"

network_pool_name                     = "sfo-w01-np"
network_pool_storage_gateway          = "172.16.13.1"
network_pool_storage_netmask          = "255.255.255.0"
network_pool_storage_mtu              = "8900"
network_pool_storage_subnet           = "172.16.13.0"
network_pool_storage_type             = "VSAN"
network_pool_storage_vlan_id          = "1633"
network_pool_storage_ip_pool_start_ip = "172.16.13.101"
network_pool_storage_ip_pool_end_ip   = "172.16.13.108"

network_pool_vmotion_gateway          = "172.16.12.1"
network_pool_vmotion_netmask          = "255.255.255.0"
network_pool_vmotion_mtu              = "8900"
network_pool_vmotion_subnet           = "172.16.12.0"
network_pool_vmotion_vlan_id          = "1632"
network_pool_vmotion_ip_pool_start_ip = "172.16.12.101"
network_pool_vmotion_ip_pool_end_ip   = "172.16.12.108"

esx_host_storage_type = "VSAN"
esx_host1_fqdn        = "sfo01-w01-esx01.sfo.rainpole.io"
esx_host1_username    = "root"
esx_host1_pass        = "VMw@re1!"
esx_host2_fqdn        = "sfo01-w01-esx02.sfo.rainpole.io"
esx_host2_username    = "root"
esx_host2_pass        = "VMw@re1!"
esx_host3_fqdn        = "sfo01-w01-esx03.sfo.rainpole.io"
esx_host3_username    = "root"
esx_host3_pass        = "VMw@re1!"
esx_host4_fqdn        = "sfo01-w01-esx04.sfo.rainpole.io"
esx_host4_username    = "root"
esx_host4_pass        = "VMw@re1!"

vcf_domain_name                    = "sfo-w01"
vcf_domain_vcenter_name            = "sfo-w01-vc01"
vcf_domain_vcenter_datacenter_name = "sfo-w01-dc01"
vcenter_root_password              = "VMw@re1!"
vcenter_vm_size                    = "small"
vcenter_storage_size               = "lstorage"
vcenter_ip_address                 = "172.16.11.130"
vcenter_subnet_mask                = "255.255.255.0"
vcenter_gateway                    = "172.16.11.1"
vcenter_fqdn                       = "sfo-w01-vc01.sfo.rainpole.io"
vsphere_cluster_name               = "sfo-w01-cl01"
vds_name                           = "sfo-w01-cl01-vds01"
vsan_datastore_name                = "sfo-w01-cl01-ds-vsan01"
vsan_failures_to_tolerate          = "1"
esx_vmnic0                         = "vmnic0"
vmnic0_vds_name                    = "sfo-w01-cl01-vds01"
esx_vmnic1                         = "vmnic1"
vmnic1_vds_name                    = "sfo-w01-cl01-vds01"
portgroup_management_name          = "sfo-w01-cl01-vds01-pg-mgmt"
portgroup_vsan_name                = "sfo-w01-cl01-vds01-pg-vsan"
portgroup_vmotion_name             = "sfo-w01-cl01-vds01-pg-vmotion"
esx_license_key                    = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
vsan_license_key                   = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"

nsx_vip_ip                    = "172.16.11.131"
nsx_vip_fqdn                  = "sfo-w01-nsx01.sfo.rainpole.io"
nsx_manager_admin_password    = "VMw@re1!VMw@re1!"
nsx_manager_form_factor       = "small"
nsx_license_key               = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
nsx_manager_node1_name        = "sfo-w01-nsx01a"
nsx_manager_node1_ip_address  = "172.16.11.132"
nsx_manager_node1_fqdn        = "sfo-w01-nsx01a.sfo.rainpole.io"
nsx_manager_node1_subnet_mask = "255.255.255.0"
nsx_manager_node1_gateway     = "172.16.11.1"
nsx_manager_node2_name        = "sfo-w01-nsx01b"
nsx_manager_node2_ip_address  = "172.16.11.133"
nsx_manager_node2_fqdn        = "sfo-w01-nsx01b.sfo.rainpole.io"
nsx_manager_node2_subnet_mask = "255.255.255.0"
nsx_manager_node2_gateway     = "172.16.11.1"
nsx_manager_node3_name        = "sfo-w01-nsx01c"
nsx_manager_node3_ip_address  = "172.16.11.134"
nsx_manager_node3_fqdn        = "sfo-w01-nsx01c.sfo.rainpole.io"
nsx_manager_node3_subnet_mask = "255.255.255.0"
nsx_manager_node3_gateway     = "172.16.11.1"
geneve_vlan_id                = "1634"
}

Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First, we initialise the environment, which will pull the required module version:

terraform init

Then create and apply the plan:

terraform plan -out=create-vi-wld
terraform apply create-vi-wld

Deploy VMware Cloud Foundation Cloud Builder using the vSphere Terraform Provider

As part of my series on deploying and managing VMware Cloud Foundation using Terraform, this post will focus on deploying the VMware Cloud Foundation Cloud Builder appliance using the vSphere Terraform provider. I’ve used this provider in the past to deploy the NSX Manager appliance.

Check out the other posts on Terraform with VMware Cloud Foundation here:

Deploy Cloud Builder with the vSphere Terraform Provider

As before, you first need to define your provider configuration

# providers.tf
 
terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "2.5.1"
    }
  }
}
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

Then we define our variables

# variables.tf
 
# vSphere Infrastructure Details
variable "data_center" { default = "sfo-m01-dc01" }
variable "cluster" { default = "sfo-m01-cl01" }
variable "vds" { default = "sfo-m01-vds01" }
variable "datastore" { default = "vsanDatastore" }
variable "compute_pool" { default = "sfo-m01-cl01" }
variable "compute_host" {default = "sfo01-m01-esx01.sfo.rainpole.io"}
variable "vsphere_server" {default = "sfo-m01-vc01.sfo.rainpole.io"}
 
# vCenter Credential Variables
variable "vsphere_user" {}
variable "vsphere_password" {}
 
# Cloud Builder Deployment
variable "mgmt_pg" { default = "sfo-m01-vds01-pg-mgmt" }
variable "vm_name" { default = "sfo-cb01" }
variable "local_ovf_path" { default = "F:\\binaries\\VMware-Cloud-Builder-4.5.2.0-22223457_OVF10.ova" }
variable "ip0" { default = "172.16.225.66" }
variable "netmask0" { default = "255.255.255.0" }
variable "gateway" { default = "172.16.225.1" }
variable "dns" { default = "172.16.225.4" }
variable "domain" { default = "sfo.rainpole.io" }
variable "ntp" { default = "ntp.sfo.rainpole.io" }
variable "searchpath" { default = "sfo.rainpole.io" }
variable "ADMIN_PASSWORD" { default = "VMw@re1!" }
variable "ROOT_PASSWORD" { default = "VMw@re1!" }
variable "hostname" { default = "sfo-cb01.sfo.rainpole.io" }

Note the vCenter Server credentials in the above variables.tf do not have default values. We will declare these sensitive values in a terraform.tfvars file and add *.tfvars to our .gitignore file so they are not synced to our Git repo (a minimal .gitignore example follows the tfvars file below).

# terraform.tfvars
 
# vSphere Provider Credentials
vsphere_user     = "administrator@vsphere.local"
vsphere_password = "VMw@re1!"
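
For reference, a minimal .gitignore for a Terraform working directory might look like the following; these are commonly used patterns rather than anything specific to this project:

# .gitignore

# Variable definition files that hold sensitive values
*.tfvars
*.tfvars.json

# Local Terraform working directory and state
.terraform/
terraform.tfstate
terraform.tfstate.backup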

Now that we have all of our variables defined, we can define our main.tf to perform the deployment. As part of this, we first need to gather some data from the target vCenter Server so we know where to deploy the appliance.

# main.tf
 
# Data source for vCenter Datacenter
data "vsphere_datacenter" "datacenter" {
  name = var.data_center
}
 
# Data source for vCenter Cluster
data "vsphere_compute_cluster" "cluster" {
  name          = var.cluster
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Datastore
data "vsphere_datastore" "datastore" {
  name          = var.datastore
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Portgroup
data "vsphere_network" "mgmt" {
  name          = var.mgmt_pg
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for vCenter Resource Pool. In our case we will use the root resource pool
data "vsphere_resource_pool" "pool" {
  name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for ESXi host to deploy to
data "vsphere_host" "host" {
  name          = var.compute_host
  datacenter_id = data.vsphere_datacenter.datacenter.id
}
 
# Data source for the OVF to read the required OVF Properties
data "vsphere_ovf_vm_template" "ovfLocal" {
  name             = var.vm_name
  resource_pool_id = data.vsphere_resource_pool.pool.id
  datastore_id     = data.vsphere_datastore.datastore.id
  host_system_id   = data.vsphere_host.host.id
  local_ovf_path   = var.local_ovf_path
  ovf_network_map = {
    "Network 1" = data.vsphere_network.mgmt.id
  }
}
 
# Deployment of VM from Local OVA
resource "vsphere_virtual_machine" "cb01" {
  name                 = var.vm_name
  datacenter_id        = data.vsphere_datacenter.datacenter.id
  datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id
  host_system_id       = data.vsphere_ovf_vm_template.ovfLocal.host_system_id
  resource_pool_id     = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id
  num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus
  num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket
  memory               = data.vsphere_ovf_vm_template.ovfLocal.memory
  guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id
  scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type
 
  wait_for_guest_net_timeout = 5
 
  ovf_deploy {
    allow_unverified_ssl_cert = true
    local_ovf_path            = var.local_ovf_path
    disk_provisioning         = "thin"
    ovf_network_map           = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map
  }
  vapp {
    properties = {
      "ip0"            = var.ip0,
      "netmask0"       = var.netmask0,
      "gateway"        = var.gateway,
      "dns"            = var.dns,
      "domain"         = var.domain,
      "ntp"            = var.ntp,
      "searchpath"     = var.searchpath,
      "ADMIN_USERNAME" = "admin",
      "ADMIN_PASSWORD" = var.ADMIN_PASSWORD,
      "ROOT_PASSWORD"  = var.ROOT_PASSWORD,
      "hostname"       = var.hostname
    }
  }
  lifecycle {
    ignore_changes = [
      #vapp # Enable this to ignore all vapp properties if the plan is re-run
      vapp[0].properties["ADMIN_PASSWORD"],
      vapp[0].properties["ROOT_PASSWORD"],
      host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it
    ]
  }
}

Now we can run the following to initialise Terraform and the required vSphere provider

terraform init 

Once the provider is initialised, we can then create a Terraform plan to ensure our configuration is valid.

terraform plan -out=DeployCB

Now that we have a valid configuration we can apply our plan to deploy the Cloud Builder appliance.

terraform apply DeployCB

VMware Cloud Foundation Terraform Provider: Create a New VCF Instance

Following on from my VMware Cloud Foundation Terraform Provider introduction post here, I wanted to start by using it to create a new VCF instance (or perform a VCF bring-up).

As of writing this post I am using version 0.5.0 of the provider.

First off, we need to define some variables to be used in our plan. Here is a copy of the variables.tf I am using. For reference, I am using the default values in the VCF Planning & Preparation Workbook for my configuration. Note "sensitive = true" on the password and license key variables to stop them from showing up on the console and in logs.

variable "cloud_builder_username" {
  description = "Username to authenticate to CloudBuilder"
  default = "admin"
}

variable "cloud_builder_password" {
  description = "Password to authenticate to CloudBuilder"
  default = "VMw@re1!"
  sensitive = true
}

variable "cloud_builder_host" {
  description = "Fully qualified domain name or IP address of the CloudBuilder"
  default = "sfo-cb01.sfo.rainpole.io"
}

variable "sddc_manager_root_user_password" {
  description = "Root user password for the SDDC Manager VM. Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length"
  default = "VMw@re1!"
  sensitive = true
}

variable "sddc_manager_secondary_user_password" {
  description = "Second user (vcf) password for the SDDC Manager VM.  Password needs to be a strong password with at least one alphabet and one special character and at least 8 characters in length."
  default = "VMw@re1!"
  sensitive = true
}

variable "vcenter_root_password" {
  description = "root password for the vCenter Server Appliance (8-20 characters)"
  default = "VMw@re1!"
  sensitive = true
}

variable "nsx_manager_admin_password" {
  description = "NSX admin password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "nsx_manager_audit_password" {
  description = "NSX audit password. The password must be at least 12 characters long. Must contain at-least 1 uppercase, 1 lowercase, 1 special character and 1 digit. In addition, a character cannot be repeated 3 or more times consecutively."
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "nsx_manager_root_password" {
  description = " NSX Manager root password. Password should have 1) At least eight characters, 2) At least one lower-case letter, 3) At least one upper-case letter 4) At least one digit 5) At least one special character, 6) At least five different characters , 7) No dictionary words, 6) No palindromes"
  default = "VMw@re1!VMw@re1!"
  sensitive = true
}

variable "esx_host1_pass" {
  description = "Password to authenticate to the ESXi host 1"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host2_pass" {
  description = "Password to authenticate to the ESXi host 2"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host3_pass" {
  description = "Password to authenticate to the ESXi host 3"
  default = "VMw@re1!"
  sensitive = true
}

variable "esx_host4_pass" {
  description = "Password to authenticate to the ESXi host 4"
  default = "VMw@re1!"
  sensitive = true
}

variable "nsx_license_key" {
  description = "NSX license to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "vcenter_license_key" {
  description = "vCenter license to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "vsan_license_key" {
  description = "vSAN license key to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

variable "esx_license_key" {
  description = "ESXi license key to be used"
  default = "AAAAA-BBBBB-CCCCC-DDDDD-EEEE"
  sensitive = true
}

Next, we need our main.tf file that contains what we want to do: in this case, perform a VCF bring-up. For now, I'm using a mix of variables from the above variables.tf file and hard-coded values in my main.tf to achieve my goal. I will follow up with some better practices in a later post.

terraform {
  required_providers {
    vcf = {
      source = "vmware/vcf"
    }
  }
}
provider "vcf" {
  cloud_builder_host = var.cloud_builder_host
  cloud_builder_username = var.cloud_builder_username
  cloud_builder_password = var.cloud_builder_password
  allow_unverified_tls = true
}

resource "vcf_instance" "sddc_1" {
  instance_id = "sfo-m01"
  dv_switch_version = "7.0.3"
  skip_esx_thumbprint_validation = true
  management_pool_name = "sfo-m01-np"
  ceip_enabled = false
  esx_license = var.esx_license_key
  task_name = "workflowconfig/workflowspec-ems.json"
  sddc_manager {
    ip_address = "172.16.11.59"
    hostname = "sfo-vcf01"
    root_user_credentials {
      username = "root"
      password = var.sddc_manager_root_user_password
    }
    second_user_credentials {
      username = "vcf"
      password = var.sddc_manager_secondary_user_password
    }
  }
  ntp_servers = [
    "172.16.11.4"
  ]
  dns {
    domain = "sfo.rainpole.io"
    name_server = "172.16.11.4"
    secondary_name_server = "172.16.11.5"
  }
  network {
    subnet = "172.16.11.0/24"
    vlan_id = "1611"
    mtu = "1500"
    network_type = "MANAGEMENT"
    gateway = "172.16.11.1"
  }
  network {
    subnet = "172.16.13.0/24"
    include_ip_address_ranges {
      start_ip_address = "172.16.13.101"
      end_ip_address = "172.16.13.108"
    }
    vlan_id = "1613"
    mtu = "8900"
    network_type = "VSAN"
    gateway = "172.16.13.1"
  }
  network {
    subnet = "172.16.12.0/24"
    include_ip_address_ranges {
      start_ip_address = "172.16.12.101"
      end_ip_address = "172.16.12.104"
    }
    vlan_id = "1612"
    mtu = "8900"
    network_type = "VMOTION"
    gateway = "172.16.12.1"
  }
  nsx {
    nsx_manager_size = "medium"
    nsx_manager {
      hostname = "sfo-m01-nsx01a"
      ip = "172.16.11.72"
    }
    root_nsx_manager_password = var.nsx_manager_root_password
    nsx_admin_password = var.nsx_manager_admin_password
    nsx_audit_password = var.nsx_manager_audit_password
    overlay_transport_zone {
      zone_name = "sfo-m01-overlay-tz"
      network_name = "sfo-m01-overlay"
    }
    vip = "172.16.11.71"
    vip_fqdn = "sfo-m01-nsx01"
    license = var.nsx_license_key
    transport_vlan_id = 1614
  }
  vsan {
    license = var.vsan_license_key
    datastore_name = "sfo-m01-vsan"
  }
  dvs {
    mtu = 8900
    nioc {
      traffic_type = "VSAN"
      value = "HIGH"
    }
    nioc {
      traffic_type = "VMOTION"
      value = "LOW"
    }
    nioc {
      traffic_type = "VDP"
      value = "LOW"
    }
    nioc {
      traffic_type = "VIRTUALMACHINE"
      value = "HIGH"
    }
    nioc {
      traffic_type = "MANAGEMENT"
      value = "NORMAL"
    }
    nioc {
      traffic_type = "NFS"
      value = "LOW"
    }
    nioc {
      traffic_type = "HBR"
      value = "LOW"
    }
    nioc {
      traffic_type = "FAULTTOLERANCE"
      value = "LOW"
    }
    nioc {
      traffic_type = "ISCSI"
      value = "LOW"
    }
    dvs_name = "SDDC-Dswitch-Private"
    vmnics = [
      "vmnic0",
      "vmnic1"
    ]
    networks = [
      "MANAGEMENT",
      "VSAN",
      "VMOTION"
    ]
  }
  cluster {
    cluster_name = "sfo-m01-cl01"
    cluster_evc_mode = ""
    resource_pool {
      name = "Mgmt-ResourcePool"
      type = "management"
    }
    resource_pool {
      name = "Network-ResourcePool"
      type = "network"
    }
    resource_pool {
      name = "Compute-ResourcePool"
      type = "compute"
    }
    resource_pool {
      name = "User-RP"
      type = "compute"
    }
  }
  psc {
    psc_sso_domain = "vsphere.local"
    admin_user_sso_password = "VMw@re1!"
  }
  vcenter {
    vcenter_ip = "172.16.11.70"
    vcenter_hostname = "sfo-m01-vc01"
    license = var.vcenter_license_key
    root_vcenter_password = var.vcenter_root_password
    vm_size = "tiny"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.101"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx01"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.102"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx02"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.103"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx03"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
  host {
    credentials {
      username = "root"
      password = "VMw@re1!"
    }
    ip_address_private {
      subnet = "255.255.255.0"
      cidr = ""
      ip_address = "172.16.11.104"
      gateway = "172.16.11.1"
    }
    hostname = "sfo01-m01-esx04"
    vswitch = "vSwitch0"
    association = "SDDC-Datacenter"
  }
}

Once the above is defined you can run the following to create your Terraform Plan:

terraform init
terraform plan -out=vcf-bringup

Once there are no errors from the above plan command, you can run the following to start the VCF bring-up:

terraform apply .\vcf-bringup

All going well, this should result in a successful VMware Cloud Foundation bring-up.

VMware Cloud Foundation Terraform Provider: Introduction

HashiCorp Terraform has become an industry-standard infrastructure-as-code and desired-state configuration tool for managing on-premises and cloud-based entities. If you are not familiar with Terraform, I've covered some early general learnings on Terraform in posts here & here. The internal engineering team is working on a Terraform provider for VCF, so I decided to give it a spin to review its capabilities and test drive it in the lab.

First off, here are the VCF operations the provider is capable of supporting today:

  • Deploying a new VCF instance (bring-up)
  • Commissioning hosts
  • Creating network pools
  • Deploying a new VI Workload domain
  • Creating clusters
  • Expanding clusters
  • Adding users

New functionality is being added every week, and as with all new initiatives like this, customer consumption and adoption will drive innovation and progress.

The GitHub repo contains some great example files to get you started. I am going to do a few blog posts on what I've learned so far, but for now here are the important links you need if you would like to take a look at the provider.

If you want to get started by using the examples take a look here.

Terraform Learnings: Deploy an OVA Using the vSphere Provider

Once I got my head around the basics of Terraform, I wanted to play with the vSphere provider to see what it was capable of. A basic use case that everyone needs is deploying a VM, so my first use case is to deploy a VM from an OVA. The vSphere provider documentation for deploying an OVA uses William Lam's nested ESXi OVA as an example. This is a great example of how to use the provider, but seeing as I plan to play with the NSX-T provider as well, I decided to use the NSX-T Manager OVA as my source to deploy.

So the first thing to do is set up your provider. Every provider in the Terraform Registry has a Use Provider button on the provider page that pops up a How to use this provider box, showing what you need to put in your required_providers and provider blocks. In my case I will use a providers.tf file, which looks like the example below. Note you can only have one required_providers block in your configuration, but you can have multiple providers: all required providers go in the same required_providers block, and each provider gets its own provider block (see the multi-provider sketch after the example).

# providers.tf

terraform {
  required_providers {
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "2.1.1"
    }
  }
}
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}
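
For illustration, a configuration that used both the vSphere and NSX-T providers might look like the sketch below. The nsxt provider source is real, but the version shown and the nsxt_* variable names are assumptions for the example:

terraform {
  required_providers {
    # All required providers share this one block
    vsphere = {
      source  = "hashicorp/vsphere"
      version = "2.1.1"
    }
    nsxt = {
      source  = "vmware/nsxt"
      version = "3.2.5" # assumed version, check the registry for current releases
    }
  }
}

# Each provider then gets its own provider block
provider "vsphere" {
  user                 = var.vsphere_user
  password             = var.vsphere_password
  vsphere_server       = var.vsphere_server
  allow_unverified_ssl = true
}

provider "nsxt" {
  host     = var.nsxt_host
  username = var.nsxt_user
  password = var.nsxt_password
}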

To authenticate to our chosen provider (in this case vSphere) we need to provide credentials. If you read my initial post on Terraform, you will have seen me mention a terraform.tfvars file, which can be used for sensitive variables. We will declare these as variables later in the variables.tf file, but this is where we assign the values. My terraform.tfvars file looks like this:

# terraform.tfvars

# vSphere Provider Credentials
vsphere_user     = "administrator@vsphere.local"
vsphere_password = "VMw@re1!"

Next we need variables to enable us to deploy our NSX-T Manager appliance, so we create a variables.tf file and populate it with our variables. Note: variables that have a default value are considered optional, and the default value will be used if no value is passed (see the example after the variables.tf below).

# variables.tf 

# vSphere Infrastructure Details 
variable "data_center" { default = "sfo-m01-dc01" } 
variable "cluster" { default = "sfo-m01-cl01" } 
variable "vds" { default = "sfo-m01-vds01" } 
variable "workload_datastore" { default = "vsanDatastore" } 
variable "compute_pool" { default = "sfo-m01-cl01" } 
variable "compute_host" {default = "sfo01-m01-esx01.sfo.rainpole.io"} 
variable "vsphere_server" {default = "sfo-m01-vc01.sfo.rainpole.io"} 

# vCenter Credential Variables 
variable "vsphere_user" {} 
variable "vsphere_password" {} 

# NSX-T Manager Deployment 
variable "mgmt_pg" { default = "sfo-m01-vds01-pg-mgmt" } 
variable "vm_name" { default = "sfo-m01-nsx01a" } 
variable "local_ovf_path" { default = "F:\\OVAs\\nsx-unified-appliance-3.1.3.5.0.19068437.ova" } 
variable "deployment_option" { default = "extra_small" } # valid deployments are: extra_small, small, medium, large 
variable "nsx_role" { default = "NSX Manager" } # valid roles are NSX Manager, NSX Global Manager 
variable "nsx_ip_0" { default = "172.16.225.66" } 
variable "nsx_netmask_0" { default = "255.255.255.0" } 
variable "nsx_gateway_0" { default = "172.16.225.1" } 
variable "nsx_dns1_0" { default = "172.16.225.4" } 
variable "nsx_domain_0" { default = "sfo.rainpole.io" } 
variable "nsx_ntp_0" { default = "ntp.sfo.rainpole.io" } 
variable "nsx_isSSHEnabled" { default = "True" } 
variable "nsx_allowSSHRootLogin" { default = "True" } 
variable "nsx_passwd_0" { default = "VMw@re1!VMw@re1!" } 
variable "nsx_cli_passwd_0" { default = "VMw@re1!VMw@re1!" } 
variable "nsx_cli_audit_passwd_0" { default = "VMw@re1!VMw@re1!" } 
variable "nsx_hostname" { default = "sfo-m01-nsx01a.sfo.rainpole.io" } 

Now that we have our provider and variables in place, we need a plan file to deploy the NSX-T Manager OVA, including the data sources we need to pull information from and the resource we are going to create.

<br />
# main.tf</p>
<p># Data source for vCenter Datacenter<br />
data "vsphere_datacenter" "datacenter" {<br />
  name = var.data_center<br />
}</p>
<p># Data source for vCenter Cluster<br />
data "vsphere_compute_cluster" "cluster" {<br />
  name          = var.cluster<br />
  datacenter_id = data.vsphere_datacenter.datacenter.id<br />
}</p>
<p># Data source for vCenter Datastore<br />
data "vsphere_datastore" "datastore" {<br />
  name          = var.workload_datastore<br />
  datacenter_id = data.vsphere_datacenter.datacenter.id<br />
}</p>
<p># Data source for vCenter Portgroup<br />
data "vsphere_network" "mgmt" {<br />
  name          = var.mgmt_pg<br />
  datacenter_id = data.vsphere_datacenter.datacenter.id<br />
}</p>
<p># Data source for vCenter Resource Pool. In our case we will use the root resource pool<br />
data "vsphere_resource_pool" "pool" {<br />
  name          = format("%s%s", data.vsphere_compute_cluster.cluster.name, "/Resources")<br />
  datacenter_id = data.vsphere_datacenter.datacenter.id<br />
}</p>
<p># Data source for ESXi host to deploy to<br />
data "vsphere_host" "host" {<br />
  name          = var.compute_host<br />
  datacenter_id = data.vsphere_datacenter.datacenter.id<br />
}</p>
<p># Data source for the OVF to read the required OVF Properties<br />
data "vsphere_ovf_vm_template" "ovfLocal" {<br />
  name             = var.vm_name<br />
  resource_pool_id = data.vsphere_resource_pool.pool.id<br />
  datastore_id     = data.vsphere_datastore.datastore.id<br />
  host_system_id   = data.vsphere_host.host.id<br />
  local_ovf_path   = var.local_ovf_path<br />
  ovf_network_map = {<br />
    "Network 1" = data.vsphere_network.mgmt.id<br />
  }<br />
}</p>
<p># Deployment of VM from Local OVA<br />
resource "vsphere_virtual_machine" "nsxt01" {<br />
  name                 = var.vm_name<br />
  datacenter_id        = data.vsphere_datacenter.datacenter.id<br />
  datastore_id         = data.vsphere_ovf_vm_template.ovfLocal.datastore_id<br />
  host_system_id       = data.vsphere_ovf_vm_template.ovfLocal.host_system_id<br />
  resource_pool_id     = data.vsphere_ovf_vm_template.ovfLocal.resource_pool_id<br />
  num_cpus             = data.vsphere_ovf_vm_template.ovfLocal.num_cpus<br />
  num_cores_per_socket = data.vsphere_ovf_vm_template.ovfLocal.num_cores_per_socket<br />
  memory               = data.vsphere_ovf_vm_template.ovfLocal.memory<br />
  guest_id             = data.vsphere_ovf_vm_template.ovfLocal.guest_id<br />
  scsi_type            = data.vsphere_ovf_vm_template.ovfLocal.scsi_type<br />
  dynamic "network_interface" {<br />
    for_each = data.vsphere_ovf_vm_template.ovfLocal.ovf_network_map<br />
    content {<br />
      network_id = network_interface.value<br />
    }<br />
  }</p>
<p>  wait_for_guest_net_timeout = 5</p>
<p>  ovf_deploy {<br />
    allow_unverified_ssl_cert = true<br />
    local_ovf_path            = var.local_ovf_path<br />
    disk_provisioning         = "thin"<br />
    deployment_option         = var.deployment_option</p>
<p>  }<br />
  vapp {<br />
    properties = {<br />
      "nsx_role"               = var.nsx_role,<br />
      "nsx_ip_0"               = var.nsx_ip_0,<br />
      "nsx_netmask_0"          = var.nsx_netmask_0,<br />
      "nsx_gateway_0"          = var.nsx_gateway_0,<br />
      "nsx_dns1_0"             = var.nsx_dns1_0,<br />
      "nsx_domain_0"           = var.nsx_domain_0,<br />
      "nsx_ntp_0"              = var.nsx_ntp_0,<br />
      "nsx_isSSHEnabled"       = var.nsx_isSSHEnabled,<br />
      "nsx_allowSSHRootLogin"  = var.nsx_allowSSHRootLogin,<br />
      "nsx_passwd_0"           = var.nsx_passwd_0,<br />
      "nsx_cli_passwd_0"       = var.nsx_cli_passwd_0,<br />
      "nsx_cli_audit_passwd_0" = var.nsx_cli_audit_passwd_0,<br />
      "nsx_hostname"           = var.nsx_hostname<br />
    }<br />
  }<br />
  lifecycle {<br />
    ignore_changes = [<br />
      #vapp # Enable this to ignore all vapp properties if the plan is re-run<br />
      vapp[0].properties["nsx_role"], # Avoid unwanted changes to specific vApp properties.<br />
      vapp[0].properties["nsx_passwd_0"],<br />
      vapp[0].properties["nsx_cli_passwd_0"],<br />
      vapp[0].properties["nsx_cli_audit_passwd_0"],<br />
      host_system_id # Avoids moving the VM back to the host it was deployed to if DRS has relocated it<br />
    ]<br />
  }<br />
}<br />

Once we have all of the above in place, we can run the following to validate our plan:

terraform plan -out=nsxt01

If your plan is successful, you should see output similar to the below.
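
The exact resource counts depend on your configuration, but a successful plan ends with a summary along these lines:

Plan: 1 to add, 0 to change, 0 to destroy.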

Once your plan is successful, run the command below to apply the plan:

terraform apply nsxt01

If the stars align, your NSX-T Manager appliance should deploy successfully. Once it's deployed, if you were to re-run the plan you should see a message similar to the below.
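
Since nothing has changed since the apply, Terraform should report something like:

No changes. Your infrastructure matches the configuration.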

One of the key pieces here is the lifecycle block in the plan. The lifecycle block lets you call out things that Terraform should ignore when it re-applies a plan: things like tags or other attributes that may be updated by other systems. In our case we want Terraform to ignore the vApp properties, because it would otherwise try to re-apply the password properties every time, which would entail powering down the VM, making the change, and powering the VM back on.

Terraform Learnings: Getting Started

Playing with Terraform has been on my to-do list for a while now (it's a long list 🙂). Over the past couple of weeks I've been spending time in my homelab getting familiar with it, and I figured I'd create a blog series that may help others.

So where do you start? There are lots of resources on the web, from blogs to Pluralsight courses. The Terraform documentation and the provider documentation in the Terraform Registry are also very good and usually have what you need.

For my setup I use Visual Studio Code. I flip between my Mac and a Windows jump VM in my homelab, and VS Code works seamlessly on both. I've installed the following VS Code extension:

Installing Terraform itself is straightforward: follow the steps for your OS to download and install it.
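
For example, on macOS with Homebrew, HashiCorp's documented tap install is:

brew tap hashicorp/tap
brew install hashicorp/tap/terraform
terraform -version

On Windows, downloading the zip and adding terraform.exe to your PATH works just as well.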

Terraform Basic Constructs

Terraform uses the following basic constructs (there are plenty of more advanced constructs, but baby steps!):

  • Providers
    • Plugins to interact with target endpoints
  • Variables
    • User input to create objects
    • There are multiple ways (six, I believe) to provide variables to Terraform (see the summary after this list)
  • Data Sources
    • Sources of information outside of Terraform that provide infrastructure details to interact with resources
  • Resources
    • Infrastructure objects you interact with
  • Configuration files
    • .tf file extension
    • Read alphabetically and actioned when you plan/apply/destroy your config (more on that later)
    • A single main.tf file can contain everything your infrastructure plan requires:
      • Provider
      • Variables
      • Data Sources
      • Resources
    • Recommended to split these out for larger environments
      • providers.tf
        • You must declare required_providers and then a provider block for each provider.
        • You can use alias = “alias_name” if you want to have multiple instances of a provider.
        • In my providers.tf example, the credentials are coming from variables defined in my terraform.tfvars file
  • variables.tf
    • List of variables to be used in the configuration
    • Written in HashiCorp Configuration Language (HCL) (or JSON)
  • Sensitive variables such as credentials or access keys should be stored in Terraform variable definition files (.tfvars) or as environment variables.
    • Use a .gitignore file to ensure your .tfvars files with sensitive information are not committed to your git repo.
  • Data Sources & Resources can be in a single file or split out into logical infrastructure files
    • network.tf
    • deploy_vm.tf
    • etc
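
For reference, here is a summary of those ways to supply a variable value; these are the standard Terraform mechanisms, and the docs cover the exact precedence order:

# 1. A default in the variable declaration (variables.tf)
# 2. An interactive prompt at plan/apply time if no other value is found
# 3. A terraform.tfvars or *.auto.tfvars file, loaded automatically
# 4. An explicit variable file:
terraform plan -var-file="lab.tfvars"
# 5. A command-line flag:
terraform plan -var="vm_name=sfo-m01-nsx01a"
# 6. An environment variable with the TF_VAR_ prefix:
export TF_VAR_vm_name="sfo-m01-nsx01a"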

Terraform Commands

Once you have your configuration defined (and have run terraform init to download the required providers), you first want to validate that it will run:

terraform plan -out=plan-name
# This will evaluate your configuration to ensure it is valid and store the result in a file called "plan-name"
terraform apply plan-name
# This will apply your configuration based on the saved plan above. Note that applying a saved plan does not prompt for confirmation; running terraform apply without a saved plan will prompt, and you can add -auto-approve to skip the confirmation (use with caution)
terraform destroy
# This will destroy the configuration. You will be asked to confirm this action. You can add -auto-approve to skip the confirmation (use with caution)

Hopefully this was helpful. It just scratches the surface of getting started with Terraform; I recommend getting hands-on and reading the documentation as you go. I will continue the series with a post on using the vSphere provider to deploy an OVA. Stay tuned!