Multi-Cloud Deployment — Using Terraform With GitLab to Deploy to AWS and Azure

Rafael Medeiros
Dec 5, 2021

If you remember from my last article, we created a single-cloud deployment that provisioned infrastructure in Azure.

Today we are going to use the same approach, but this time we will be deploying to AWS as well. How is this going to work? Here's the diagram:

If you want to follow this how-to, you can find the full code in my repo right here:

The Terraform Code

Let's understand how the code is set up.

I've written before about the best practice of splitting your Terraform configuration into multiple *.tf files, which gives you better flexibility and control over your deployment. It also makes it easy to know exactly where to look when you want to find a specific resource.

The aws.tf file contains all the AWS resources, and azure.tf contains all the Azure resources.

One thing to note here is that we now have more than one provider, which is why I've separated them into a file called providers.tf:
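The actual file is in the repo; below is a minimal sketch of what a providers.tf for this setup can look like. The version constraints are illustrative, and both providers are expected to pick up their credentials from the environment variables we will configure in GitLab later, so nothing sensitive is hard-coded:

terraform {
  required_providers {
    # Azure Resource Manager provider
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0" # illustrative version constraint
    }
    # AWS provider
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # illustrative version constraint
    }
  }
}

# Authenticates with ARM_CLIENT_ID, ARM_CLIENT_SECRET,
# ARM_SUBSCRIPTION_ID and ARM_TENANT_ID from the environment
provider "azurerm" {
  features {}
}

# Authenticates with AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
# and AWS_DEFAULT_REGION from the environment
provider "aws" {}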

Here you can find the providers supported by Terraform.

Configuring GitLab to Authenticate to Azure and AWS

Let's configure GitLab to make sure it has the necessary permissions to create the resources, by adding the credentials of an Azure service principal and an AWS IAM user to the GitLab variables.

Creating the Service Principal in Azure

To connect to Azure from a pipeline, it's recommended that you create a service principal, and that's exactly what we will be doing here.

Go to your Azure subscription and find your subscription ID. Open a PowerShell session, make sure you have the Azure CLI installed on your machine, and then run the following commands:

$SubscriptionId = "XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"az loginaz account set --subscription $subscriptionId
az ad sp create-for-rbac --role="Contributor" --scopes="subscriptions/$subscriptionId" --name "id-terraformtest"

This command will create a service principal in your Azure Active Directory with Contributor access to the subscription where you want to deploy the resources, which is more than enough to provision everything successfully.

Also, an output will be generated; make sure you save this information somewhere safe, as you won't be able to see the service principal password again:
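The output looks roughly like the following (values are placeholders, and the exact fields may vary slightly between Azure CLI versions):

{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "id-terraformtest",
  "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}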

Configuring IAM User in AWS

For simplicity's sake, here we will be giving the IAM user full access. Just run the commands below to create the user and assign it admin permissions.

If it's your first time running the AWS command line, you will have to configure your credentials first. This can be done by following this tutorial.
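In short, it comes down to running aws configure with the credentials of a user that is allowed to manage IAM:

aws configure
# You will be prompted for the AWS Access Key ID, AWS Secret Access Key,
# default region name and default output format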

This command will create a user called id-terraformtest:

aws iam create-user --user-name id-terraformtest

To assign it a policy, you have to do the following:

Search for the policy you want to assign to this IAM user; in our case it will be AdministratorAccess:

aws iam list-policies | select-string AdministratorAccess

Copy the Arn of the policy and paste it into the following command:

aws iam attach-user-policy --user-name id-terraformtest --policy-arn "arn:aws:iam::aws:policy/AdministratorAccess"

Creating the access key for the user is out of scope for this how-to, but you can find the instructions here:
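For reference, it boils down to a single call; the response includes the AccessKeyId and SecretAccessKey that we will add to GitLab in the next section:

aws iam create-access-key --user-name id-terraformtest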

Adding Azure and AWS Variables to GitLab

With all the information about both the IAM user and the service principal in hand, it's now time to add it to the GitLab variables.

Here are the variables that you need to add to GitLab:

  • ARM_CLIENT_ID — Service Principal appID;
  • ARM_CLIENT_SECRET — Service Principal Password;
  • ARM_SUBSCRIPTION_ID — Subscription ID;
  • ARM_TENANT_ID — Tenant ID;
  • AWS_ACCESS_KEY_ID — IAM Access key ID;
  • AWS_SECRET_ACCESS_KEY — IAM secret access key;
  • AWS_DEFAULT_REGION — Default region to be used by the AWS provider;

On your project, go to Settings >> CI/CD:

Add each of the variables above with the correct values:

You will end up with the following:

The pipeline will be the same as the one we used in my last post, so the .gitlab-ci.yml file is already included in my repo:
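For reference, a minimal sketch of such a pipeline is shown below. The image, stage names and the manual gate on the apply job are assumptions based on the flow described in this post, and the backend/state configuration is omitted; check the repo for the real file:

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/bin:/usr/bin:/usr/local/bin'

stages:
  - validate
  - plan
  - apply

before_script:
  - terraform init

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan

apply:
  stage: apply
  when: manual        # the apply phase only runs after you approve it
  only:
    - main
  script:
    - terraform apply -input=false plan.tfplan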

Running the Pipeline

Clone the repository to your machine so you can upload your code.

Open a PowerShell session, go to the folder where you want to store the repo, and clone the recently created repo with the following command:

git clone <repo-url>

If you haven't uploaded the code to your GitLab repo yet, you can do so by running the following commands:

git add . #To add all the changes to be committed
git status #To show which files will be uploaded
git commit -m "Added Terraform code" #To commit the changes
git push #To push the changes to the repo

The pipeline will be triggered after any commit made to the "main" branch. You can also run the pipeline manually through the portal:

You can see during the “plan” phase which resources will be created:

EC2 Instance
Azure VM

After checking that everything was planned correctly, don't forget to manually start the apply phase so it begins creating the resources:

After that, we can check on both portals if the resources are there:

AWS:

VPC
EC2 instance

Azure:

VNET and Azure VM
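If you prefer the command line over the portals, the same check can be done from your shell:

# AWS: list the EC2 instances in your default region
aws ec2 describe-instances --query "Reservations[].Instances[].InstanceId"

# Azure: list the VMs in your subscription
az vm list --output table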

That’s it! Your first multi-cloud deployment is complete!

Wrap Up

In this post, you've learned how to perform a multi-cloud deployment. You can change the code to accommodate other cloud providers, or even add more providers to this deployment. That flexibility is why Terraform is so powerful and has been adopted as the default IaC tool by many companies.

I hope you liked it. If you have any questions, let me know in the comments. See you in the next post!
