Mate Terraform and Serverless
Hello everybody !
Today’s article is about mating Terraform and Serverless to get the best of both worlds.
“Why ?” 🤔 you might ask !
Because, IMHO, Terraform is very good at handling the infrastructure part, while Serverless makes it a breeze to manage your Cloud FaaS deployments.
When I say “very good”, I mostly mean “less verbose”, as both of them are able to perform the exact same tasks on their own.
Context
Imagine you would like to describe your core infrastructure with Terraform on AWS, so for example :
- AWS VPC
- AWS Security Groups
- AWS Subnets
- AWS API Gateway definition
- AWS IAM Roles
- etc …
On the other hand, you would like to let Serverless handle packaging your source code, deploying to AWS CloudFormation and creating everything required for your FaaS on your behalf, so for example :
- CloudFormation stack
- AWS S3 bucket to host versions of your deployed code
- AWS Lambda (linked to resources previously setup with Terraform)
- AWS API Gateway endpoint (linked to AWS API Gateway previously created with Terraform)
- etc …
Also, keep in mind that you could make good use of some Serverless Plugins, among others : serverless-prune-plugin, serverless-offline, middy and so on…
Problems
Trouble arises when you discover that :
- Terraform tends to destroy then recreate Resources on each change
- There’s no Terraform Provider for Serverless, as it’s basically pointless since both of them aim at managing the state of your deployments and it could be done entirely in either of them
- You could delegate Serverless actions inside Terraform to some Provisioning tool like Ansible, but you’re reluctant to add one more actor in your whole setup (more tools, more syntax to learn, more complications)
- You’ve already tried defining all the Resources in either Serverless or Terraform configuration, but ended up gnashing your teeth, as each describes some gracefully while being verbose for the rest
Let’s do it
The main point is to be able to :
- define the whole infrastructure in Terraform, and provision FaaS deployments with Serverless
- pass required variables between both : luckily, there are already excellent explanations about this from Yan Cui
- declare the Serverless module explicitly in Terraform, especially the version of the package to be deployed
- provide a way for Terraform to run yarn install microservice@version and yarn serverless deploy … BOTH when the resource is created for the first time AND when the Serverless module version is updated
- provide a way for Terraform to run yarn install microservice@version and yarn serverless remove … ONLY when the resource is deleted
- avoid writing custom scripts as much as possible
As mentioned in this discussion, I first found a hacky way to do it.
But aside from being very ugly, the Terraform CLI already warned that it’s deprecated, so it would probably fail in an upcoming release.
Then I ended up reading a lot of documentation and articles on the internet until I found this article :
One of the useful tips to run Ansible is the terraform taint command. By using the command, we can just run the Ansible portion without touching (creating or destroying) the AWS instance.
Bam ! I thought there might be a way to use it.
Concrete example
Requirements
Here are the versions of the software I had installed at the time of writing this article :
- Terraform v0.12.19 (+ provider.null v2.1.2)
- Serverless framework v1.61.2
- Node.js v10.17.0
- NPM v6.13.4
- Yarn v1.21.1
- GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin19)
Yeah… I bit the forbidden fruit… 😅
I’m actually going to strip most of it in order to focus on how to orchestrate Terraform and Serverless together, which means that all you really need to try it out is Terraform, Node.js, NPM, Yarn and a command-line shell able to run GNU bash. Likewise, I chose the Node.js and AWS ecosystems, but you could swap in your favorite stacks.
Please also note that the following solution is viable in a CI/CD pipeline, since the .serverless folder gets appropriately regenerated and synced whenever Serverless commands are run, as stated in this discussion. The same goes for the .terraform folder and the state files, as you will see below.
Source code
First, let’s define a Terraform Module to represent our Serverless “provisioning of a sort” inside Terraform :
# microservice/module.tf

variable "microservice_version" {
  type = string
}

variable "microservice_active" {
  type    = bool
  default = true
}

resource "null_resource" "microservice" {
  triggers = { version = "${var.microservice_version}" }
  count    = var.microservice_active ? 1 : 0

  provisioner "local-exec" {
    command = "echo \"install ${var.microservice_version} && serverless deploy\""
  }

  provisioner "local-exec" {
    command = "echo \"serverless remove\""
    when    = destroy
  }
}
Please note that in a real implementation you would have to provide additional Terraform Input Variables matching the required Serverless Options for AWS, e.g. : stage, region, etc …
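To make that note concrete, here is a sketch of what the real (non-echo) Creation-Time Provisioner could look like, with a hypothetical stage Input Variable added on top of the ones defined above (the microservice package name is a placeholder) :

```hcl
variable "stage" {
  type    = string
  default = "dev"
}

resource "null_resource" "microservice" {
  # ...same triggers and count as above...

  provisioner "local-exec" {
    command = "yarn install microservice@${var.microservice_version} && yarn serverless deploy --stage ${var.stage}"
  }
}
```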
variable "microservice_version" { ... }
Terraform Input Variable which allows us to define the version of our microservice we would like to install with Yarn and deploy with Serverless.
e.g. : yarn install microservice@version && yarn serverless deploy …
variable "microservice_active" { ... }
Terraform Input Variable which allows us to define a mechanism to remove our microservice.
In fact, if you just remove the module block from your Terraform configuration, it will remove the resource, but it won’t trigger the Terraform Destroy-Time Provisioner. With a Terraform Resource, this can be achieved by setting count = 0, but that is not yet supported for a Terraform Module at the time I write this article.
resource "null_resource" "microservice" { ... }
Define a Terraform Null Resource to allow us to trigger Serverless commands.
More on the implementation below.
triggers = { version = "${var.microservice_version}" }
This is mandatory : if you don’t put it, Terraform will completely ignore changes to your microservice_version and, as a result, won’t trigger a new Serverless deploy. With the trigger in place, though, Terraform will actually destroy the resource (executing the Terraform Destroy-Time Provisioner), then recreate it (executing the Terraform Creation-Time Provisioner again). Of course, this is not what we want : if the resource already exists, we would like to skip Serverless remove and just run Serverless deploy again. There’s a way to bypass this, described below.
count = var.microservice_active ? 1 : 0
This is a trick to be able to run the Terraform Destroy-Time Provisioner when we actually want to remove the resource.
provisioner "local-exec" {
  command = "echo \"install ${var.microservice_version} && serverless deploy\""
}
This is our Terraform Creation-Time Provisioner : it will run whenever we create the resource for the first time and whenever we update the microservice_version, thanks to the tricks above and below (triggers and taint).
Please note that here I won’t actually deploy anything ; I just run an echo as a proof-of-concept.
provisioner "local-exec" {
  command = "echo \"serverless remove\""
  when    = destroy
}
This is our Terraform Destroy-Time Provisioner : it will run whenever we set microservice_active to false, or if you implement some depends_on variable on your module too.
Please note that here I won’t actually remove anything ; I just run an echo as a proof-of-concept.
# main.tf

module "sls_user" {
  source               = "./microservice"
  microservice_version = "1.0.0"
}

module "sls_product" {
  source               = "./microservice"
  microservice_version = "1.0.0"
  # microservice_active = false
}
This is how, in your main Terraform file, you would define microservices to be managed with Serverless.
- source : the path to the Terraform Module we created before, e.g. microservice/module.tf, so the module is located in ./microservice
- microservice_version : the version of the package that you want to deploy
- microservice_active (optional) : whenever you want to see your package removed, just set it to false
// package.json
{
  ...
  "scripts": {
    "tf:init": "terraform init",
    "tf:plan": "terraform plan -out terraform.plan && terraform show -json terraform.plan > terraform.plan.json",
    "tf:taint": "bash taint.sh",
    "tf:apply": "terraform apply --auto-approve",
    "tf:clean": "rm -f terraform.plan terraform.plan.json",
    "deploy": "yarn tf:init && yarn tf:plan && yarn tf:taint && yarn tf:apply"
  }
}
A bunch of useful scripts to be able to run everything in the terminal with just yarn deploy.
"tf:init": "terraform init"
terraform init : required the first time on your computer (or whenever you delete the .terraform folder it creates) to install the required Terraform Plugins and Modules. It will also usually run every time in your CI/CD pipelines.
"tf:plan": "terraform plan -out terraform.plan && terraform show -json terraform.plan > terraform.plan.json"
terraform plan : this is where Terraform reads the configuration files (here main.tf and its dependencies), analyzes the existing managed resources (stored in terraform.tfstate and terraform.tfstate.backup whenever you apply your changes, locally or remotely depending on how you set it up) and deduces the changes to be applied, without applying them yet.
The trick here is to take advantage of the new feature in Terraform v0.12 and later to output the plan to JSON format so that we can introspect the planned changes with another script (more on this below).
terraform plan -out terraform.plan
Writes the result of the terraform plan command to the terraform.plan file.
terraform show -json terraform.plan
Shows the content of the terraform.plan file in plain JSON format.
> terraform.plan.json
Redirects the output of the previous command with > into the terraform.plan.json file.
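For reference, here is a trimmed, illustrative sketch of what one entry of resource_changes looks like in terraform.plan.json (field names follow Terraform’s JSON plan format, the values are made up) :

```json
{
  "resource_changes": [
    {
      "address": "module.sls_user.null_resource.microservice[0]",
      "module_address": "module.sls_user",
      "type": "null_resource",
      "name": "microservice",
      "change": {
        "actions": ["delete", "create"],
        "before": { "triggers": { "version": "1.0.0" } },
        "after": { "triggers": { "version": "1.1.0" } }
      }
    }
  ]
}
```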
This is where the whole trick lies, so pay close attention ! 👀
"tf:taint": "bash taint.sh"
Executes our taint.sh file :
#!/usr/bin/env bash
# taint.sh

for ADDRESS in $(node should-taint.js)
do
  terraform taint $ADDRESS
done
⚠️ Don’t forget to make your bash script executable by setting file permissions with :
chmod +x taint.sh
This bash script executes should-taint.js and, for every address it logs to the console, runs terraform taint on it.
// should-taint.js
const fs = require('fs')
const path = require('path')

const plan = fs.readFileSync(path.join(__dirname, 'terraform.plan.json'), { encoding: 'utf8' })
const modules = fs.readFileSync(path.join(__dirname, '.terraform', 'modules', 'modules.json'), { encoding: 'utf8' })

const is = {}
is.serverless = key => JSON.parse(modules)
  .Modules
  .filter(({ Source }) => Source === './microservice') // define Source according to your module source
  .find(({ Key }) => Key === key)

const taint = JSON.parse(plan)
  .resource_changes
  .filter(({ module_address }) => is.serverless(module_address.split('.')[1])) // filter the serverless modules
  .filter(({ change }) => change.actions.includes('create') && change.actions.includes('delete')) // output of `triggers`
  .map(({ address }) => address)

taint.forEach(address => console.log(address))
So here is how we introspect the Terraform Plan to decide which resources we should run terraform taint on. I chose to do it with Node.js, but it could be achieved with Python or any other programming language.
const plan = fs.readFileSync(path.join(__dirname, 'terraform.plan.json'), { encoding: 'utf8' })
Reads the previously outputted terraform.plan.json, which contains the planned changes.
const modules = fs.readFileSync(path.join(__dirname, '.terraform', 'modules', 'modules.json'), { encoding: 'utf8' })

and

const is = {}
is.serverless = key => JSON.parse(modules)
  .Modules
  .filter(({ Source }) => Source === './microservice') // define Source according to your module source
  .find(({ Key }) => Key === key)

Reads the installed modules from .terraform/modules/modules.json and provides an is.serverless shortcut method to determine whether a resource belongs to our module or not.
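For reference, .terraform/modules/modules.json looks roughly like this in Terraform v0.12 (trimmed and illustrative, generated from the main.tf above) :

```json
{
  "Modules": [
    { "Key": "", "Source": "", "Dir": "." },
    { "Key": "sls_user", "Source": "./microservice", "Dir": "microservice" },
    { "Key": "sls_product", "Source": "./microservice", "Dir": "microservice" }
  ]
}
```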
⚠️ Please note here Source === './microservice' : if your module lies in another location, update it accordingly.
const taint = JSON.parse(plan)
  .resource_changes
  .filter(({ module_address }) => is.serverless(module_address.split('.')[1])) // filter the serverless modules
  .filter(({ change }) => change.actions.includes('create') && change.actions.includes('delete')) // output of `triggers`
  .map(({ address }) => address)
Finally : reads the planned changes, determines whether they are related to our module, checks if they will trigger both create AND delete actions, and returns their addresses (see Terraform Resource Addressing).
A note on Terraform actions :
- only create : the first time the resource will be created
- only delete : whenever the resource will be deleted (including when depends_on is marked for deletion and when count = 0)
- create and delete : when the resource has its attribute(s) changed, it will be deleted and re-created
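The three cases above can be sketched as a tiny classifier over the change.actions array, using the same logic as should-taint.js (the sample resource_changes entries below are made up) :

```javascript
// Classifies a planned change the same way should-taint.js filters them.
const classify = actions => {
  if (actions.includes('create') && actions.includes('delete')) return 'replace' // taint candidate
  if (actions.includes('create')) return 'create'
  if (actions.includes('delete')) return 'delete'
  return 'no-op'
}

// Illustrative resource_changes entries (values are made up) :
const changes = [
  { address: 'module.sls_user.null_resource.microservice[0]', change: { actions: ['create'] } },
  { address: 'module.sls_product.null_resource.microservice[0]', change: { actions: ['delete', 'create'] } },
]

// Only the "replace" changes should be tainted :
const taintCandidates = changes
  .filter(({ change }) => classify(change.actions) === 'replace')
  .map(({ address }) => address)

console.log(taintCandidates) // → [ 'module.sls_product.null_resource.microservice[0]' ]
```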
So, in a few words : here we determine whether any change is going to be made to our module’s resource(s) and taint them, to prevent Terraform from running the Destroy-Time Provisioner while still running the Creation-Time Provisioner.
The remaining commands are just shortcuts for running terraform apply --auto-approve, deleting the temporary files created by the terraform plan output, and chaining everything together.
I hope you enjoyed this article !
Happy coding 💻🙏🏻