Set up the cluster with Consul and Nomad
In this tutorial, you will create the required infrastructure and set up access to the CLI and UI of Consul and Nomad.
Infrastructure overview
The cluster consists of three server nodes, three private client nodes, and one publicly accessible client node. Each node runs the Consul agent and Nomad agent. The agents run in either server or client mode depending on the role of the node.
Nomad server nodes are responsible for accepting jobs, managing client nodes, and scheduling workloads. Consul server nodes are responsible for storing state information such as service addresses, health check results, and other service-specific configurations.
Nomad client nodes are the nodes where Nomad schedules workloads to run. The Nomad client registers with the Nomad servers, communicates health status to the servers, accepts task allocations, and updates the status of allocations. Consul client nodes report node and service health statuses to Consul servers.
Prerequisites
Running this tutorial locally requires the following software and credentials:
- Nomad CLI installed locally
- Consul CLI installed locally
- Packer CLI installed locally
- Terraform CLI installed locally
- AWS account with credentials environment variables set locally
- openssl and hey CLI tools installed locally
Create the cluster
The cluster creation process includes steps to build the machine images with Packer and then deploy the infrastructure with Terraform.
Make sure that you have cloned the tutorial's code repository locally and changed into the directory.
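The exact repository URL comes from this tutorial series; the values below are placeholders:

```shell-session
$ git clone <REPOSITORY_URL>
$ cd <REPOSITORY_DIRECTORY>
```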
Build the machine image
Change into the `aws` directory.
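```shell-session
$ cd aws
```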
Rename the example variables file.
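Assuming the example file uses the common `.example` suffix (check the repository for the exact filename):

```shell-session
$ cp variables.hcl.example variables.hcl
```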
Open `variables.hcl` in your text editor and update the `region` variable to your preferred AWS region. In this example, the region is `us-east-1`. The remaining variables are for Terraform, and you will update them after building the AMI. Save the file.
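The relevant line in `variables.hcl` would look like the following (a sketch, assuming the variable is a plain string assignment):

```hcl
region = "us-east-1"
```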
Initialize Packer to download the required plugins. This command returns no output when it finishes successfully.
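Assuming the Packer template in this directory is named `image.pkr.hcl` (adjust the filename to match the repository):

```shell-session
$ packer init image.pkr.hcl
```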
Build the image and provide the variables file with the `-var-file` flag.
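Using the same assumed template filename as above:

```shell-session
$ packer build -var-file=variables.hcl image.pkr.hcl
```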
The terminal outputs the ID of the new AMI when the build completes.
Deploy the infrastructure
Open `variables.hcl` in your text editor and update the `ami` variable with the AMI ID from the Packer build output. In this example, the value is `ami-0445eeea5e1406960`. Save the file.
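The updated line in `variables.hcl` would look like this, with your own AMI ID substituted:

```hcl
ami = "ami-0445eeea5e1406960"
```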
The `variables.hcl` file includes options for values used to configure Consul.
Consul is configured with TLS encryption and to trust the certificate provided by the Consul servers. The Consul Terraform provider requires the `CONSUL_TLS_SERVER_NAME` environment variable to be set. The Terraform code defaults the datacenter and domain variables in `variables.hcl` to `dc1` and `global`, so `CONSUL_TLS_SERVER_NAME` will be `consul.dc1.global`. You can update these variables with other values. If you do, be sure to also update the `CONSUL_TLS_SERVER_NAME` environment variable to match.
Export the `CONSUL_TLS_SERVER_NAME` environment variable.
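With the default datacenter and domain values:

```shell-session
$ export CONSUL_TLS_SERVER_NAME="consul.dc1.global"
```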
Initialize the Terraform configuration to download the necessary providers and modules.
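```shell-session
$ terraform init
```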
Provision the resources and provide the variables file with the `-var-file` flag. Respond `yes` to the prompt to confirm the operation. Terraform outputs the addresses for the Consul and Nomad UIs when it completes.
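```shell-session
$ terraform apply -var-file=variables.hcl
```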
Set up Consul and Nomad access
Set up access to Consul and Nomad from your local terminal using the output values from Terraform.
The Terraform code generates a `datacenter.env` environment file that contains all the necessary variables to connect to your Consul and Nomad instances using the CLI.
Source the `datacenter.env` file to set the Consul and Nomad environment variables. This command returns no output when it runs successfully.
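```shell-session
$ source datacenter.env
```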
Verify the Consul members using the `consul members` command.
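```shell-session
$ consul members
```

The output lists each server and client node along with its address, status, and agent type.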
Ensure connectivity to the Nomad cluster from your terminal.
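One way to verify connectivity is to list the client nodes with `nomad node status`; the output shows each client node and whether it is ready to accept work:

```shell-session
$ nomad node status
```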
Next steps
In this tutorial, you created a cluster running Consul and Nomad and set up CLI and UI access to Consul and Nomad.
In the next tutorial, you will deploy the initial containerized version of the HashiCups application.