Improved mono-repo for Typescript + Cloud Function development

October 20, 2020

I recently posted about developing Cloud Functions locally with TypeScript. This post is a follow-up where I've tackled a few other issues you might run into when developing multiple functions from a single repo.

For example, as your project grows you won't want every function to ship with every dependency, since most functions only use a few of them. Large dependency lists slow down deployment and can impact cold start times. I've addressed this using Yarn workspaces, following the mono-repo pattern that is common when developing node packages.

This approach means we can develop shared libraries that can be packaged and deployed to a private NPM registry, then included as dependencies on your functions. Whilst developing locally you won't notice much of a difference.

By using this pattern each function can be packaged with its own package.json and deployed separately. This brought up a new problem: my previous solution used a src/index.ts file which imported and exported all functions, and which also acted as the entrypoint for all requests when developing locally.

To solve this, I have introduced Docker and Docker Compose. Now each function runs in a standalone container, and each function container sits behind a single nginx container, which is how you trigger your functions. I find this approach a lot nicer. Introducing more technologies does complicate things a bit, but I believe it creates a more accurate local environment.

Let's get started.

Initial repo structure:

First up, we'll need to setup our initial repo structure like so:

docker/
  functions/
    Dockerfile
    start.sh
src/
  functions/
    http/
      helloWorld/
        index.ts
        index.test.ts
        package.json
  packages/
    lib/
      index.ts
      package.json
.eslintrc.json
.gitignore
docker-compose.yml
package.json
tsconfig.json

To create those files from the command line:

mkdir -p docker/functions
touch docker/functions/{Dockerfile,start.sh}
mkdir -p src/{functions,packages}
mkdir -p src/functions/{event,http}
mkdir -p src/functions/http/helloWorld
touch src/functions/http/helloWorld/{index.ts,index.test.ts,package.json}
mkdir -p src/packages/lib
touch src/packages/lib/{index.ts,package.json}
touch .eslintrc.json docker-compose.yml package.json tsconfig.json

So let's quickly talk about this structure. I've tried to keep it logical, so all functions live at src/functions/{event-type}/{function-name}/index.ts. Supported event-types are http and event; this just determines which function signature is used: (req, res) => {} for http, and (event, context) => {} for event.
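For reference, here's a minimal sketch of what a function under the event type could look like (the logMessage name and file path are purely illustrative):

import type {EventFunction} from '@google-cloud/functions-framework/build/src/functions';

// Hypothetical background function: src/functions/event/logMessage/index.ts
export const logMessage: EventFunction = (event, context) => {
  // event carries the trigger payload, context carries metadata about the event
  console.log('Received event', event, context);
};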

The packages directory is where you can add packages that act as your shared code library. Each directory within here must have its own package.json file with a unique package name. These packages will be scoped, but we'll touch on that more later.

Finally, the docker directory and the Docker-related files at the root all relate to how we'll build and run our containers locally. If you don't currently have Docker installed, now would be a good time to do so: Docker for Mac is recommended if you're on a Mac, Docker for Windows on Windows, and you can just install the docker package from your distribution if you're on Linux. If you're not familiar with Docker, I won't be going into much detail here, so I'm sorry if I gloss over it!

Setting up our package.json files.

Currently we have three package.json files: ./package.json, src/functions/http/helloWorld/package.json and src/packages/lib/package.json. To get these to work with Yarn Workspaces, we'll first need to make sure we're running yarn 2 from within our project root:

yarn set version berry

(berry is the code name for yarn 2 at the time of writing)

Note that yarn 2 now uses a different binary for each project rather than a global yarn. There are a number of differences in how yarn 2 works.
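If you're curious what that command actually does: it downloads a yarn release into .yarn/releases and points a .yarnrc.yml at it. The resulting file will look something like this (the exact release filename depends on the version that was fetched):

yarnPath: ".yarn/releases/yarn-berry.cjs"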

With that done, let's create our root package.json with the following:

{
  "name": "functions",
  "version": "0.0.0",
  "description": "",
  "license": "Apache-2.0",
  "keywords": [],
  "private": true,
  "workspaces": [
    "src/packages/*",
    "src/functions/http/*",
    "src/functions/event/*"
  ],
  "scripts": {
    "dev": "yarn install && docker-compose up",
    "test": "mocha -r ts-node/register ./**/*.test.ts",
    "lint": "eslint '**/*.ts'",
    "compile": "tsc",
    "fix": "eslint --fix '**/*.ts'"
  },
  "devDependencies": {
    "@google-cloud/functions-framework": "^1.7.1",
    "@types/chai": "^4.2.14",
    "@types/eslint": "^7.2.4",
    "@types/express": "^4.17.8",
    "@types/mocha": "^8.0.3",
    "@types/node": "^14.11.2",
    "@types/prettier": "^2.1.5",
    "@typescript-eslint/eslint-plugin": "^4.5.0",
    "@typescript-eslint/parser": "^4.5.0",
    "chai": "^4.2.0",
    "chai-http": "^4.3.0",
    "eslint": "^7.11.0",
    "eslint-config-prettier": "^6.13.0",
    "eslint-plugin-node": "^11.1.0",
    "eslint-plugin-prettier": "^3.1.4",
    "express": "^4.17.1",
    "mocha": "^8.2.0",
    "nodemon": "^2.0.4",
    "prettier": "^2.1.2",
    "ts-node": "^9.0.0",
    "typescript": "^4.0.3"
  }
}

In your src/functions/http/helloWorld/package.json add:

{
  "name": "functions-http-hello-name",
  "version": "0.0.0",
  "description": "",
  "license": "Apache-2.0",
  "keywords": [],
  "main": "build/index.js",
  "rootDir": ".",
  "scripts": {
    "start": "functions-framework",
    "test": "mocha -r ts-node/register ./**/*.test.ts",
    "build": "tsc",
    "tsc:watch": "tsc --watch",
    "prezip": "cp package.json build",
    "zip": "cd build && zip -r ../function.zip ."
  },
  "devDependencies": {
    "@google-cloud/functions-framework": "^1.7.1",
    "@types/chai": "^4.2.14",
    "@types/eslint": "^7.2.4",
    "@types/express": "^4.17.8",
    "@types/mocha": "^8.0.3",
    "chai": "^4.2.0",
    "chai-http": "^4.3.0",
    "eslint": "^7.11.0",
    "express": "^4.17.1",
    "mocha": "^8.2.0",
    "typescript": "^4.0.3"
  }
}

You'll notice I have duplicated a number of the devDependencies here. This is because each workspace must declare all of its dependencies explicitly.
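As an aside, the build, prezip and zip scripts above are what produce a deployable artifact for this function. A rough usage sketch (yarn 2 doesn't run arbitrary pre scripts automatically, so prezip is invoked explicitly, and this assumes zip is available on your PATH):

# Compile to build/, copy the package.json in, then zip it up
yarn workspace functions-http-hello-world run build
yarn workspace functions-http-hello-world run prezip
yarn workspace functions-http-hello-world run zip
# => src/functions/http/helloWorld/function.zip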

Then in our src/packages/lib/package.json:

{
  "name": "@ashsmith-funcs/lib",
  "version": "1.0.0",
  "license": "MIT",
  "main": "build/index.js",
  "files": [
    "build/**/*"
  ],
  "types": "build/index.d.ts",
  "scripts": {
    "build": "tsc",
    "tsc:watch": "tsc --watch"
  },
  "devDependencies": {
    "@types/chai": "^4.2.14",
    "@types/eslint": "^7.2.4",
    "@types/express": "^4.17.8",
    "@types/mocha": "^8.0.3",
    "chai": "^4.2.0",
    "chai-http": "^4.3.0",
    "eslint": "^7.11.0",
    "express": "^4.17.1",
    "mocha": "^8.2.0",
    "typescript": "^4.0.3"
  }
}

With that done we can now run yarn install. You'll notice that there is no longer a node_modules directory. This is expected, and is part of the new Plug'n'Play install strategy yarn 2 uses.
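To sanity-check that all of the workspaces have been picked up, you can ask yarn to list them:

yarn workspaces list
# should print the project root, functions-http-hello-world and @ashsmith-funcs/lib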

Setting up Typescript and linting

Previously I was using gts as a base for TypeScript and linting. However, with yarn 2 this seemed to break, as gts depends on the node_modules directory existing. Instead of working around it, I decided to remove it, so let's set up eslint and typescript from scratch this time (using much the same config as gts for now).

Your .eslintrc.json will look like this:

{
  "extends": [
    "eslint:recommended",
    "plugin:node/recommended",
    "prettier"
  ],
  "plugins": [
    "node",
    "prettier"
  ],
  "rules": {
    "prettier/prettier": "error",
    "block-scoped-var": "error",
    "eqeqeq": "error",
    "no-var": "error",
    "prefer-const": "error",
    "eol-last": "error",
    "prefer-arrow-callback": "error",
    "no-trailing-spaces": "error",
    "quotes": ["warn", "single", { "avoidEscape": true }],
    "no-restricted-properties": [
      "error",
      {
        "object": "describe",
        "property": "only"
      },
      {
        "object": "it",
        "property": "only"
      }
    ],
    "node/no-unpublished-import": ["error", {
      "allowModules": ["@google-cloud/functions-framework"]
    }]
  },
  "overrides": [
    {
      "files": ["**/*.ts", "**/*.tsx"],
      "parser": "@typescript-eslint/parser",
      "extends": [
        "plugin:@typescript-eslint/recommended"
      ],
      "rules": {
        "@typescript-eslint/no-non-null-assertion": "off",
        "@typescript-eslint/no-use-before-define": "off",
        "@typescript-eslint/no-warning-comments": "off",
        "@typescript-eslint/no-empty-function": "off",
        "@typescript-eslint/no-var-requires": "off",
        "@typescript-eslint/explicit-function-return-type": "off",
        "@typescript-eslint/explicit-module-boundary-types": "off",
        "@typescript-eslint/ban-types": "off",
        "@typescript-eslint/camelcase": "off",
        "node/no-missing-import": "off",
        "node/no-empty-function": "off",
        "node/no-unsupported-features/es-syntax": "off",
        "node/no-missing-require": "off",
        "node/shebang": "off",
        "no-dupe-class-members": "off",
        "require-atomic-updates": "off"
      },
      "parserOptions": {
        "ecmaVersion": 2018,
        "sourceType": "module"
      }
    }
  ]
}

I won't deep dive into these rules. The only one I want to note is the node/no-unpublished-import rule, where I have explicitly allowed @google-cloud/functions-framework. This stops eslint complaining when we import its type definitions, since the package only appears in devDependencies.

.prettierrc.json

{
  "bracketSpacing": false,
  "singleQuote": true,
  "trailingComma": "es5",
  "arrowParens": "avoid"
}

Next up, our tsconfig.json

{
  "compilerOptions": {
    "lib": ["es2018", "dom"],
    "allowUnreachableCode": false,
    "allowUnusedLabels": false,
    "declaration": true,
    "forceConsistentCasingInFileNames": true,
    "module": "commonjs",
    "noEmitOnError": true,
    "noFallthroughCasesInSwitch": true,
    "noImplicitReturns": true,
    "pretty": true,
    "sourceMap": true,
    "strict": true,
    "target": "es2018",

    "esModuleInterop": true,
    "baseUrl": "src",
    "paths": {
        "@ashsmith-funcs/*": [
          "packages/*"
        ],
    }
  },
  "exclude": ["node_modules", "./**/*.test.ts"]
}

Now the important part here is the compilerOptions.paths section. Here I have stated that all directories within src/packages will be mapped to @ashsmith-funcs/*, so our packages/lib can be referenced from another package as @ashsmith-funcs/lib. You'll notice that this is the same name given in the corresponding package.json. This ensures TypeScript resolves that directory instead of erroring with an unresolved dependency.
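In other words, with that mapping in place an import like this from any workspace resolves straight to src/packages/lib:

// e.g. inside src/functions/http/helloWorld/index.ts
import {Logger} from '@ashsmith-funcs/lib';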

Setting up our first function

Ok, let's get into it with our first http function. In src/functions/http/helloWorld/index.ts let's add:

import type {HttpFunction} from '@google-cloud/functions-framework/build/src/functions';

export const helloWorld: HttpFunction = (req, res) => {
  const name = req.query.name || 'World';
  res.send(`Hello, ${name}!`);
};

And in our test file, index.test.ts:

/* eslint-disable node/no-extraneous-import */
import chai from 'chai';
import chaiHttp from 'chai-http';
import express from 'express';
import {helloWorld} from './';

chai.use(chaiHttp);
chai.should();

const app = express();
app.all('*', helloWorld);

describe('helloWorld function', () => {
  describe('GET /', () => {
    it('should return Hello, World! when no query is provided', done => {
      chai
        .request(app)
        .get('/')
        .end((err, res) => {
          res.should.have.status(200);
          res.text.should.be.a('string');
          res.text.should.equal('Hello, World!');
          done();
        });
    });
    it('should return Hello, ash! when the query string ?name=ash is added', done => {
      chai
        .request(app)
        .get('/?name=ash')
        .end((err, res) => {
          res.should.have.status(200);
          res.text.should.be.a('string');
          res.text.should.equal('Hello, ash!');
          done();
        });
    });
  });
});

Now run yarn test and make sure it all works!

To test the function locally right now we could also run:

yarn workspace functions-http-hello-world run start --target=helloWorld

This will start the function on localhost:8080 (you'll need to have run the build script first so that build/index.js exists).
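With that running, a quick smoke test from another terminal (assuming the functions-framework default port of 8080):

curl http://localhost:8080/
# Hello, World!
curl "http://localhost:8080/?name=ash"
# Hello, ash!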

This is great, but what about reloading when file changes happen, and what about developing multiple functions?

Let's dive into Docker.

Docker setup.

For this we'll be using docker-compose to configure our different containers. Let's start by creating our docker images.

docker/functions/Dockerfile:

FROM node:12

COPY ./start.sh /usr/bin/start.sh
RUN chmod +x /usr/bin/start.sh

EXPOSE 8080

USER node
WORKDIR /home/node

RUN yarn set version berry

CMD /usr/bin/start.sh

Then our docker/functions/start.sh script will look like this:

#!/bin/bash

npx nodemon \
  --watch src/functions/${EVENT}/${FUNCTION} \
  --exec yarn workspace ${WORKSPACE} run start \
    --signature-type=${EVENT} \
    --source=./build \
    --target=${FUNCTION}

The start.sh script runs as the main command in the container. It uses nodemon to watch for changes to the function's source and reload the functions-framework command when they happen. Note the environment variables used here: I'm following a strict naming convention whereby src/functions/{event}/{functionName}/index.ts must exist, and must export a function called {functionName}.

With this convention we can reuse this docker image for each of our functions, and it helps with deployment processes too.

With our Docker image defined, let's create the docker-compose.yml file:

version: '2'

services:
  function-proxy:
    image: ashsmith/function-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    environment:
      - DEFAULT_HOST=functions.local

  tsc-watch:
    image: ashsmith/functions-framework
    build:
      context: ./docker/functions
    working_dir: /home/node
    volumes:
      - ./:/home/node
    command: ["yarn", "workspaces", "foreach", "-ipa", "run", "tsc:watch"]

  hello-world:
    image: ashsmith/functions-framework
    build:
      context: ./docker/functions
    volumes:
      - ./:/home/node
    environment:
      - FUNCTION=helloWorld
      - EVENT=http
      - WORKSPACE=functions-http-hello-world

This file defines three services (or containers):

  • function-proxy, which is a custom nginx server.
  • tsc-watch, which watches all our TypeScript files and recompiles them when they change (see the note on the workspace-tools plugin below).
  • hello-world, which is our function.
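One thing to be aware of: the tsc-watch service relies on yarn workspaces foreach, which in yarn 2 is provided by the workspace-tools plugin rather than the core binary. If you haven't already imported it, run this from the project root (the plugin is saved under .yarn and should be committed with the rest of the project):

yarn plugin import workspace-tools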

To get this running:

docker-compose up [-d]

If the images don't exist, it will build them and then boot them up.

The -d will run in detached mode, so it runs in the background.

The last thing to do is configure your hosts file so that 127.0.0.1 is mapped to functions.local:

On mac / linux:

echo "127.0.0.1 functions.local" | sudo tee -a /etc/hosts

Then go ahead and open http://functions.local/helloWorld in your browser.

It's worth noting that the ashsmith/function-proxy image uses the jwilder/nginx-proxy image as a base. By default that image exposes each container on its own domain (at least one domain per container, no shared domains), driven by a VIRTUAL_HOST environment variable defined on each container.

I have customised the nginx.tmpl file so that we have a single server directive and a single domain, but it generates unique location directives per container. Instead of using VIRTUAL_HOST I use the FUNCTION environment variable to achieve this.
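To give a rough idea of the result, the generated config ends up looking something like this (a hand-written approximation, not the actual template output):

server {
  listen 80;
  server_name functions.local;

  # one location block per function container, keyed on its FUNCTION env var
  location /helloWorld {
    proxy_pass http://hello-world:8080;
  }
}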

That is how our helloWorld function becomes available at /helloWorld. Nice!
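Adding another function later is just a matter of adding another service to docker-compose.yml that reuses the same image and follows the naming convention. For example, a hypothetical goodbyeWorld http function (its directory and workspace would be created exactly like helloWorld's) would be declared like this under services:

  goodbye-world:
    image: ashsmith/functions-framework
    build:
      context: ./docker/functions
    volumes:
      - ./:/home/node
    environment:
      - FUNCTION=goodbyeWorld
      - EVENT=http
      - WORKSPACE=functions-http-goodbye-world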

Adding a dependency...

Now we have a function working. Let's create some shared code...

src/packages/lib/index.ts

class Logger {
  public static info(...params: any[]) {
    console.info(...params);
  }
  public static warn(...params: any[]) {
    console.warn(...params);
  }
  public static error(...params: any[]) {
    console.error(...params);
  }
}
export { Logger };

Ok, it's kind of useless, but it provides a useful example!

Now let's modify our function to use it:

src/functions/http/helloWorld/index.ts

import type {HttpFunction} from '@google-cloud/functions-framework/build/src/functions';
import {Logger} from '@ashsmith-funcs/lib';

export const helloWorld: HttpFunction = (req, res) => {
  const name = req.query.name || 'World';
  Logger.info('hello world!');
  res.send(`Hello, ${name}!`);
};

Then add it to your function's package.json:

yarn workspace functions-http-hello-world add @ashsmith-funcs/lib

Now reload the browser and hopefully the page loads successfully. You can also check the log output from the docker-compose up command to see whether our Logger.info call made it. If you used the -d parameter, try docker-compose logs -f hello-world to attach and follow the logs for the helloWorld function.

Final thoughts.

This approach makes managing dependencies on a per function basis possible. I highly doubt this is the last iteration of the cloud function project for me.

This might be overkill for smaller projects, but it's worth keeping in mind if you wish to keep cold start times down and keep deployments lean.

Combined with the Terraform deployment process, you'll have a capable environment with best practices at its core.

With this we have also developed some packages that will need to be published to a private NPM registry. I'll cover that step in a future post, as well as how the deployment strategy needs to change (if you've seen my post on using Terraform to deploy cloud functions, that will serve as a reasonable starting point).