Terraform Module for Deploying VMware Cloud Foundation VI Workload Domains

I have been working a lot with Terraform lately, and in particular with the Terraform Provider for VMware Cloud Foundation. As I covered previously, the provider is still in development, but it is available to test and use against your VMware Cloud Foundation instances.

I spent this week at VMware Explore in Barcelona talking with customers about their automation journeys and the tools they use for configuration management. Terraform came up in almost every conversation, and the topic of Terraform modules specifically. A Terraform module is essentially a reusable set of standard configuration files that enables consistent, repeatable deployments. In an effort to standardise my VI Workload Domain deployments, and to learn more about Terraform modules, I have created a Terraform module for VMware Cloud Foundation VI Workload Domains.
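To make the idea concrete, here is a minimal sketch of what a module looks like (the layout and names below are hypothetical, for illustration only, and are not taken from the VI Workload Domain module itself). A module is just a directory of `.tf` files that declares its inputs and resources; a consumer then references it by path or registry address and passes values:

```hcl
# Hypothetical minimal module layout: modules/network-pool/
# ├── main.tf       resource definitions
# ├── variables.tf  input variable declarations
# └── outputs.tf    values exported to the caller

# modules/network-pool/variables.tf — inputs the caller supplies
variable "network_pool_name" {
  type        = string
  description = "Name of the network pool to create"
}

# root main.tf — the caller references the module and passes values
module "network_pool" {
  source            = "./modules/network-pool"
  network_pool_name = "sfo-w01-np"
}
```

The module author owns the variable types, defaults, and resource logic; the consumer only supplies values, which is what makes deployments repeatable.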

The module is available on GitHub here and is also published to the Terraform Registry here. Below is an example of using the module to deploy a VI Workload Domain on a VMware Cloud Foundation 4.5.2 instance. Because the module contains all the logic for variable types and so on, all you need to do is pass in variable values.

# main.tf

module "vidomain" {
  source  = "LifeOfBrianOC/vidomain/vcf"
  version = "0.1.0"

  sddc_manager_fqdn     = "sfo-vcf01.sfo.rainpole.io"
  sddc_manager_username = "administrator@vsphere.local"
  sddc_manager_password = "VMw@re1!"
  allow_unverified_tls  = "true"

  network_pool_name                     = "sfo-w01-np"
  network_pool_storage_gateway          = "172.16.13.1"
  network_pool_storage_netmask          = "255.255.255.0"
  network_pool_storage_mtu              = "8900"
  network_pool_storage_subnet           = "172.16.13.0"
  network_pool_storage_type             = "VSAN"
  network_pool_storage_vlan_id          = "1633"
  network_pool_storage_ip_pool_start_ip = "172.16.13.101"
  network_pool_storage_ip_pool_end_ip   = "172.16.13.108"

  network_pool_vmotion_gateway          = "172.16.12.1"
  network_pool_vmotion_netmask          = "255.255.255.0"
  network_pool_vmotion_mtu              = "8900"
  network_pool_vmotion_subnet           = "172.16.12.0"
  network_pool_vmotion_vlan_id          = "1632"
  network_pool_vmotion_ip_pool_start_ip = "172.16.12.101"
  network_pool_vmotion_ip_pool_end_ip   = "172.16.12.108"

  esx_host_storage_type = "VSAN"
  esx_host1_fqdn        = "sfo01-w01-esx01.sfo.rainpole.io"
  esx_host1_username    = "root"
  esx_host1_pass        = "VMw@re1!"
  esx_host2_fqdn        = "sfo01-w01-esx02.sfo.rainpole.io"
  esx_host2_username    = "root"
  esx_host2_pass        = "VMw@re1!"
  esx_host3_fqdn        = "sfo01-w01-esx03.sfo.rainpole.io"
  esx_host3_username    = "root"
  esx_host3_pass        = "VMw@re1!"
  esx_host4_fqdn        = "sfo01-w01-esx04.sfo.rainpole.io"
  esx_host4_username    = "root"
  esx_host4_pass        = "VMw@re1!"

  vcf_domain_name                    = "sfo-w01"
  vcf_domain_vcenter_name            = "sfo-w01-vc01"
  vcf_domain_vcenter_datacenter_name = "sfo-w01-dc01"
  vcenter_root_password              = "VMw@re1!"
  vcenter_vm_size                    = "small"
  vcenter_storage_size               = "lstorage"
  vcenter_ip_address                 = "172.16.11.130"
  vcenter_subnet_mask                = "255.255.255.0"
  vcenter_gateway                    = "172.16.11.1"
  vcenter_fqdn                       = "sfo-w01-vc01.sfo.rainpole.io"
  vsphere_cluster_name               = "sfo-w01-cl01"
  vds_name                           = "sfo-w01-cl01-vds01"
  vsan_datastore_name                = "sfo-w01-cl01-ds-vsan01"
  vsan_failures_to_tolerate          = "1"
  esx_vmnic0                         = "vmnic0"
  vmnic0_vds_name                    = "sfo-w01-cl01-vds01"
  esx_vmnic1                         = "vmnic1"
  vmnic1_vds_name                    = "sfo-w01-cl01-vds01"
  portgroup_management_name          = "sfo-w01-cl01-vds01-pg-mgmt"
  portgroup_vsan_name                = "sfo-w01-cl01-vds01-pg-vsan"
  portgroup_vmotion_name             = "sfo-w01-cl01-vds01-pg-vmotion"
  esx_license_key                    = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
  vsan_license_key                   = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"

  nsx_vip_ip                    = "172.16.11.131"
  nsx_vip_fqdn                  = "sfo-w01-nsx01.sfo.rainpole.io"
  nsx_manager_admin_password    = "VMw@re1!VMw@re1!"
  nsx_manager_form_factor       = "small"
  nsx_license_key               = "AAAAA-BBBBB-CCCCC-DDDDD-EEEEE"
  nsx_manager_node1_name        = "sfo-w01-nsx01a"
  nsx_manager_node1_ip_address  = "172.16.11.132"
  nsx_manager_node1_fqdn        = "sfo-w01-nsx01a.sfo.rainpole.io"
  nsx_manager_node1_subnet_mask = "255.255.255.0"
  nsx_manager_node1_gateway     = "172.16.11.1"
  nsx_manager_node2_name        = "sfo-w01-nsx01b"
  nsx_manager_node2_ip_address  = "172.16.11.133"
  nsx_manager_node2_fqdn        = "sfo-w01-nsx01b.sfo.rainpole.io"
  nsx_manager_node2_subnet_mask = "255.255.255.0"
  nsx_manager_node2_gateway     = "172.16.11.1"
  nsx_manager_node3_name        = "sfo-w01-nsx01c"
  nsx_manager_node3_ip_address  = "172.16.11.134"
  nsx_manager_node3_fqdn        = "sfo-w01-nsx01c.sfo.rainpole.io"
  nsx_manager_node3_subnet_mask = "255.255.255.0"
  nsx_manager_node3_gateway     = "172.16.11.1"
  geneve_vlan_id                = "1634"
}
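One caveat on the example above: it hardcodes passwords in main.tf for readability. In practice you would likely declare those inputs as sensitive variables in your root configuration and supply the values at plan time instead of committing them. A minimal sketch, reusing the variable name from the example (the exact mechanism you choose is up to you):

```hcl
# Declare the password as a sensitive input instead of a literal
variable "sddc_manager_password" {
  type      = string
  sensitive = true
}

module "vidomain" {
  # ...other arguments as above...
  sddc_manager_password = var.sddc_manager_password
}
```

The value can then be provided via the `TF_VAR_sddc_manager_password` environment variable, a `-var` flag, or a terraform.tfvars file kept out of version control.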

Once you have the above defined, you simply need to run the usual Terraform commands to apply the configuration. First, initialise the environment, which will pull the required module version:

terraform init

Then create and apply the plan:

terraform plan -out=create-vi-wld
terraform apply create-vi-wld
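Once the apply completes (a VI Workload Domain deployment takes a while), you can check what Terraform is now tracking in its state, and later tear the domain down again using the same configuration:

```
# List the resources Terraform now manages in its state
terraform state list

# Remove the workload domain when it is no longer needed
terraform destroy
```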
