tangled developer

The truth is out there. Anybody got the URL?!

Migrating AWS Lambda to Pulumi Project from Serverless

Written by Venky Koneru · 26 Jul 2020 · 4 min read

Introduction

In this article, we will look at how to migrate an AWS Lambda function deployed as a Serverless Framework application into a Pulumi project, without modifying the underlying Lambda code.

This article assumes that you have a basic understanding of the Serverless Framework¹ and Pulumi².

The code snippets and the projects are available on github here

Our existing Serverless Framework implementation

Let us first imagine that we have a serverless application with a single Lambda function (createTodo) deployed to AWS. To make it interesting, we will use the code from the TypeScript todo REST API example in the Serverless examples repository.

The project structure could look something like this:

todos-api
├─ .eslintrc.json
├─ functions
│  └─ create.ts
├─ package.json
├─ package-lock.json
├─ serverless.yml
└─ tsconfig.json

serverless.yml could look as follows:

service: todos-api
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: eu-central-1
  environment:
    PRE_NAME: ${self:service}-${opt:stage, self:provider.stage}
    DYNAMODB_TABLE: ${self:provider.environment.PRE_NAME}
    ROLE_NAME: ${self:provider.environment.PRE_NAME}-executionRole
    CREATE_TODO_FUNCTION_NAME: ${self:provider.environment.PRE_NAME}-createTodo
  role: executionRole
functions:
  createTodo:
    name: ${self:provider.environment.CREATE_TODO_FUNCTION_NAME}
    handler: functions/create.handler
    memorySize: 128
    environment:
      DYNAMODB_TABLE: ${self:provider.environment.DYNAMODB_TABLE}
    events:
      - http:
          path: new
          method: post
resources:
  Resources:
    executionRole:
      Type: AWS::IAM::Role
      Properties:
        RoleName: ${self:provider.environment.ROLE_NAME}
        AssumeRolePolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - lambda.amazonaws.com
              Action: sts:AssumeRole
        Policies:
          - PolicyName: ${self:provider.environment.ROLE_NAME}-policy
            PolicyDocument:
              Version: "2012-10-17"
              Statement:
                - Effect: Allow
                  Action:
                    - logs:CreateLogGroup
                    - logs:CreateLogStream
                    - logs:PutLogEvents
                  Resource: arn:aws:logs:${opt:region, self:provider.region}:#{AWS::AccountId}:log-group:/aws/lambda/${self:provider.environment.CREATE_TODO_FUNCTION_NAME}*
                - Effect: Allow
                  Action:
                    - dynamodb:PutItem
                  Resource: arn:aws:dynamodb:${opt:region, self:provider.region}:#{AWS::AccountId}:table/${self:provider.environment.DYNAMODB_TABLE}
    TodosDynamoDbTable:
      Type: "AWS::DynamoDB::Table"
      DeletionPolicy: Retain
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:provider.environment.DYNAMODB_TABLE}
plugins:
  - serverless-plugin-typescript
  - serverless-pseudo-parameters

And we deploy with the following command

$ cd todos-api
$ serverless deploy

This would create the following AWS resources:

  • CloudFormation template used to provision the stack
  • S3 bucket where zip files of the function are stored
  • Lambda serverless application
  • Lambda function, belonging to the application
  • CloudWatch log group with log streams for each instance of the function
  • DynamoDB table resource
  • Role attached to the function with the following policies:
    • An "assume role" policy permitting the AWS Lambda service to assume this role
    • A policy allowing the function to create its CloudWatch log group and log streams, and to put log events into those streams
    • A policy allowing the DynamoDB PutItem operation on the table
  • REST API Gateway with:
    • the /new POST endpoint integrated with the function
    • a permission to invoke the function

After we implement the Lambda function with Pulumi, we will have pretty much the same set of resources, except for the:

  • CloudFormation template
  • S3 bucket
  • Lambda serverless application

So you can remove them after you've moved your function.

Converting to a Pulumi project

At the end of the conversion, we would have the following structure.

todos-api
├─ .eslintrc.json
├─ functions
│ └─ create.ts
+├─ infrastructure
+│ └─ index.ts
├─ package.json
├─ package-lock.json
+├─ package.sh
+├─ Pulumi.yaml
+├─ Pulumi.dev.yaml
-├─ serverless.yml
└─ tsconfig.json

With the above structure, we separate the business logic from the infrastructure code. This is especially useful as the project grows to include more services and resources.

Initialize pulumi

$ cd todos-api
$ npm i -D -E @pulumi/pulumi @pulumi/aws

The above command adds the necessary Pulumi packages as dev dependencies to the existing package.json file.

After adding the Pulumi dependencies, create a Pulumi config file (Pulumi.yaml) to initialize a new Pulumi project. The same can also be accomplished from a template using the pulumi new command.

Pulumi.yaml file could look as follows:

name: todos-api
runtime: nodejs
description: Todos API
main: infrastructure/

Now, we will create a new stack for the above Pulumi project. Let's name it dev:

$ pulumi stack init dev

The above command creates a new Pulumi stack named dev. It is also recommended to set an AWS region for the stack using the command below.

$ pulumi config set aws:region eu-central-1

This will create a stack file Pulumi.dev.yaml with the following content.

config:
  aws:region: eu-central-1

With this, we are done initializing pulumi project and a dev stack.

Infrastructure code

Before proceeding, let's create an empty directory named infrastructure, along with an index module to hold the infrastructure code.

$ mkdir -p infrastructure && touch infrastructure/index.ts

Also, let's prepare a script (package.sh) to package the node modules and code archive files. The script could look as follows:

#!/bin/bash
set -o errexit # Exit on error

CWD=$(pwd)
BUILD_PATH_LAYERS="$CWD/layers"

# Package the TypeScript code
npm install --prefer-offline
npm run build
echo "zip build directory"
cd build
zip -q -r archive.zip functions/*

# Package node_modules as a Lambda layer. Layers expect dependencies
# under a top-level nodejs/ directory, hence the nodejs/node_modules
# structure created below.
mkdir -p "$BUILD_PATH_LAYERS"
cp "$CWD/package.json" "$BUILD_PATH_LAYERS/package.json"
cd "$BUILD_PATH_LAYERS"
echo "installing production only dependencies"
npm install --production --prefer-offline
echo "zip node_modules directory"
mkdir -p ./nodejs
mv node_modules nodejs/node_modules
zip -q -r archive.zip *
rm -rf nodejs

echo "returning to root directory"
cd "$CWD"
echo "Done."

Additionally, we can add a couple of npm script commands to our package.json file so that we can run the packaging via npm.

"scripts": {
  "build": "tsc",
  "lint": "eslint .",
+ "prepackage": "rm -rf build layers",
+ "package": "./package.sh"
}

Now, let's migrate the resources one by one.
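Each snippet below reuses a small preName helper that derives resource names from the current stack and project, mirroring the PRE_NAME convention in serverless.yml. As a standalone sketch of that convention (with the stack and project passed in explicitly here, whereas the real helper reads them via pulumi.getStack() and pulumi.getProject()):

```typescript
// Standalone sketch of the naming convention used by the preName helper:
// "<stack>-<project>" with an optional "-<suffix>" appended,
// e.g. "dev-todos-api" or "dev-todos-api-createTodo".
function preName(stack: string, project: string, name?: string): string {
  const namePrefix = `${stack}-${project}`
  return name ? `${namePrefix}-${name}` : namePrefix
}

console.log(preName("dev", "todos-api"))               // dev-todos-api
console.log(preName("dev", "todos-api", "createTodo")) // dev-todos-api-createTodo
```

With this one helper, every resource name is unique per stack, so a later staging or prod stack can coexist with dev in the same account.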

Resources: DynamoDB Table

To provision the DynamoDB table,

import * as aws from "@pulumi/aws"
import * as pulumi from "@pulumi/pulumi"

function preName(name?: string) {
  const namePrefix = `${pulumi.getStack()}-${pulumi.getProject()}`
  return name ? `${namePrefix}-${name}` : namePrefix
}

/**
 * Globals
 */
const todosDynamoDbTableName = preName()

/**
 * DynamoDb Table
 */
const todosDynamoDbTable = new aws.dynamodb.Table(todosDynamoDbTableName, {
  name: todosDynamoDbTableName,
  attributes: [
    {
      name: "id",
      type: "S"
    }
  ],
  hashKey: "id",
  billingMode: "PROVISIONED",
  readCapacity: 1,
  writeCapacity: 1,
  tags: {
    Environment: pulumi.getStack()
  }
})

💡 We could also import the existing table instead of creating a new one.
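In recent Pulumi versions, one hypothetical way to adopt the table created by the Serverless stack is the pulumi import command, which takes the resource's type token and its ID (for a DynamoDB table, the ID is the table name; the names below assume the dev stack from earlier):

```shell
# Hypothetical: adopt the existing table into the stack instead of
# replacing it. "aws:dynamodb/table:Table" is the Pulumi type token,
# followed by the resource name and the import ID (the table name).
pulumi import aws:dynamodb/table:Table dev-todos-api dev-todos-api
```

This keeps the data in place, which matters here because the Serverless template marked the table with DeletionPolicy: Retain.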

Resources: IAM Role

To create a role similar to the one from the Serverless setup,

import * as aws from "@pulumi/aws"
import * as pulumi from "@pulumi/pulumi"

function preName(name?: string) {
  const namePrefix = `${pulumi.getStack()}-${pulumi.getProject()}`
  return name ? `${namePrefix}-${name}` : namePrefix
}

/**
 * Globals
 */
const account = pulumi.output(aws.getCallerIdentity({ async: true })).accountId
const executionRoleName = preName("executionRole")
const todosDynamoDbTableName = preName()
const createTodoFunctionName = preName("createTodo")

/**
 * IAM Role
 */
const executionRole = new aws.iam.Role(executionRoleName, {
  name: executionRoleName,
  assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal({
    Service: "lambda.amazonaws.com"
  }),
  tags: {
    Environment: pulumi.getStack()
  }
})

const executionRolePolicyName = `${executionRoleName}-policy`
const rolePolicy = new aws.iam.RolePolicy(executionRolePolicyName, {
  name: executionRolePolicyName,
  role: executionRole,
  policy: {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource: account.apply(
          (accountId) =>
            `arn:aws:logs:${aws.config.region}:${accountId}:log-group:/aws/lambda/${createTodoFunctionName}*`
        )
      },
      {
        Effect: "Allow",
        Action: ["dynamodb:PutItem"],
        Resource: account.apply(
          (accountId) =>
            `arn:aws:dynamodb:${aws.config.region}:${accountId}:table/${todosDynamoDbTableName}`
        )
      }
    ]
  }
})
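The two Resource strings in the policy follow fixed ARN shapes. As a plain TypeScript sketch of those patterns (hypothetical helper names; illustrative region and account values):

```typescript
// Hypothetical helpers mirroring the ARN patterns in the role policy above.
// The trailing "*" on the log-group ARN also covers the log streams created
// for each instance of the function.
function lambdaLogGroupArn(region: string, accountId: string, fnName: string): string {
  return `arn:aws:logs:${region}:${accountId}:log-group:/aws/lambda/${fnName}*`
}

function dynamoTableArn(region: string, accountId: string, tableName: string): string {
  return `arn:aws:dynamodb:${region}:${accountId}:table/${tableName}`
}

console.log(lambdaLogGroupArn("eu-central-1", "123456789012", "dev-todos-api-createTodo"))
// arn:aws:logs:eu-central-1:123456789012:log-group:/aws/lambda/dev-todos-api-createTodo*
```

In the real snippet the account ID is only known asynchronously, which is why the policy builds these strings inside account.apply.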

Resources: Lambda Function

To create the Lambda function resource with a node modules layer,

import * as aws from "@pulumi/aws"
import * as pulumi from "@pulumi/pulumi"
import { join } from "path"

function preName(name?: string) {
  const namePrefix = `${pulumi.getStack()}-${pulumi.getProject()}`
  return name ? `${namePrefix}-${name}` : namePrefix
}

function relativeRootPath(path: string) {
  return join(process.cwd(), "..", path)
}

/**
 * Globals
 */
const createTodoFunctionName = preName("createTodo")

/**
 * Code Archive & Lambda Layer
 */
const code = new pulumi.asset.AssetArchive({
  ".": new pulumi.asset.FileArchive(relativeRootPath("build/archive.zip"))
})

const zipFile = relativeRootPath("layers/archive.zip")
const nodeModuleLambdaLayerName = preName("lambda-layer-nodemodules")
const nodeModuleLambdaLayer = new aws.lambda.LayerVersion(
  nodeModuleLambdaLayerName,
  {
    compatibleRuntimes: [aws.lambda.NodeJS12dXRuntime],
    code: new pulumi.asset.FileArchive(zipFile),
    layerName: nodeModuleLambdaLayerName
  }
)

/**
 * Lambda Function
 */
const createTodoFunction = new aws.lambda.Function(createTodoFunctionName, {
  name: createTodoFunctionName,
  runtime: aws.lambda.NodeJS12dXRuntime,
  handler: "functions/create.handler",
  role: executionRole.arn,
  code,
  layers: [nodeModuleLambdaLayer.arn],
  memorySize: 128,
  environment: {
    variables: {
      DYNAMODB_TABLE: todosDynamoDbTableName
    }
  },
  tags: {
    Environment: pulumi.getStack()
  }
})

Resources: API Gateway

Finally, we are going to hook up the above Lambda function to API Gateway.

import * as aws from "@pulumi/aws"
import * as pulumi from "@pulumi/pulumi"

function preName(name?: string) {
  const namePrefix = `${pulumi.getStack()}-${pulumi.getProject()}`
  return name ? `${namePrefix}-${name}` : namePrefix
}

/**
 * API Gateway
 */
const createTodoApiRest = new aws.apigateway.RestApi(preName("rest"), {
  name: preName("rest")
})

const createTodoApiResource = new aws.apigateway.Resource(preName("resource"), {
  restApi: createTodoApiRest.id,
  parentId: createTodoApiRest.rootResourceId,
  pathPart: "new"
})

const createTodoApiMethod = new aws.apigateway.Method(preName("method"), {
  restApi: createTodoApiRest.id,
  resourceId: createTodoApiResource.id,
  authorization: "NONE",
  httpMethod: "POST"
})

const createTodoApiIntegration = new aws.apigateway.Integration(
  preName("integration-post"),
  {
    restApi: createTodoApiRest.id,
    resourceId: createTodoApiResource.id,
    httpMethod: createTodoApiMethod.httpMethod,
    integrationHttpMethod: "POST",
    type: "AWS_PROXY",
    uri: createTodoFunction.invokeArn
  }
)

const createTodoApiDeployment = new aws.apigateway.Deployment(
  preName("deployment"),
  {
    stageName: pulumi.getStack(),
    restApi: createTodoApiRest.id
  },
  {
    dependsOn: [createTodoApiIntegration]
  }
)

const createTodoApiLambdaPermission = new aws.lambda.Permission(
  `${createTodoFunctionName}-permission`,
  {
    statementId: "AllowAPIGatewayInvoke",
    principal: "apigateway.amazonaws.com",
    action: "lambda:InvokeFunction",
    function: createTodoFunction,
    sourceArn: pulumi
      .output(createTodoApiRest.executionArn)
      .apply((executionArn) => `${executionArn}/*/*`)
  }
)

export const createTodoApiUrl = createTodoApiDeployment.invokeUrl
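The sourceArn in the permission above follows the execute-api ARN shape, where wildcards stand in for the stage and the method/path segments. A plain sketch of that pattern (hypothetical helper name; illustrative execution ARN):

```typescript
// Hypothetical sketch of the execute-api source ARN pattern used in the
// lambda.Permission above. The default "*/*" suffix matches any stage and
// any method/path of this REST API, so any route may invoke the function.
function apiSourceArn(executionArn: string, suffix = "*/*"): string {
  return `${executionArn}/${suffix}`
}

console.log(apiSourceArn("arn:aws:execute-api:eu-central-1:123456789012:abc123"))
// arn:aws:execute-api:eu-central-1:123456789012:abc123/*/*
```

A narrower suffix such as "dev/POST/new" would restrict invocation to the single deployed route, at the cost of updating the permission whenever routes change.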

💡 Also check out Pulumi Crosswalk for AWS, which can create the API Gateway pieces with much less code. I opted for the conventional approach above because I did not want to alter the source code of the Lambda.

Deployment

We are now ready to deploy these resources via pulumi. We are going to run our packaging script first before issuing pulumi up.

$ npm run package
$ pulumi up -y

The output for the above command will look like below:

Updating (dev):
     Type                           Name                                    Status
 +   pulumi:pulumi:Stack            todos-api-dev                           created
 +   ├─ aws:apigateway:RestApi      dev-todos-api-rest                      created
 +   ├─ aws:iam:Role                dev-todos-api-executionRole             created
 +   ├─ aws:dynamodb:Table          dev-todos-api                           created
 +   ├─ aws:lambda:LayerVersion     dev-todos-api-lambda-layer-nodemodules  created
 +   ├─ aws:apigateway:Resource     dev-todos-api-resource                  created
 +   ├─ aws:iam:RolePolicy          dev-todos-api-executionRole-policy      created
 +   ├─ aws:apigateway:Method       dev-todos-api-method                    created
 +   ├─ aws:lambda:Function         dev-todos-api-createTodo                created
 +   ├─ aws:lambda:Permission       dev-todos-api-createTodo-permission     created
 +   ├─ aws:apigateway:Integration  dev-todos-api-integration-post          created
 +   └─ aws:apigateway:Deployment   dev-todos-api-deployment                created

Outputs:
    createTodoApiUrl: "https://xxxxxxxxxx.execute-api.eu-central-1.amazonaws.com/dev"

Resources:
    + 12 created

Duration: 27s

Cleanup

Our Lambda function is now entirely managed by Pulumi, so we can remove the Serverless stack and its resources.

$ serverless remove
$ npm uninstall serverless serverless-plugin-typescript serverless-pseudo-parameters
$ rm serverless.yml

The code snippets and the projects are available on github here

This post is highly inspired by Moving Lambda function from Serverless to Terraform


  1. serverless framework - zero-friction serverless development; easily build apps that auto-scale on low cost, next-gen cloud infrastructure. ↩︎

  2. Pulumi - Modern Infrastructure as Code. By leveraging familiar programming languages for infrastructure as code, Pulumi makes you more productive, and enables sharing and reuse of common patterns. A single delivery workflow across any cloud helps developers and operators work better together. ↩︎