Murat Gözel

Independent software developer & designer

This is How I Code & Deploy

February 1, 2024
Software Engineering

In this article, I’ll describe how I implemented continuous integration and deployment within my monorepo setup. This will be mostly conceptual, as I can’t get into every detail of the setup, but I still hope the reader gets a clear overview of how it works in general. You’ll see the following tools and services used throughout the article: pnpm, git, buildkite, husky, docker, vitest, commitlint and more. The code samples are trimmed to keep things simple.

There is one ideal behind all of this: every dev can commit to the production branch, and commits get deployed as long as they pass the release workflow.

This is not a single-product monorepo setup. It includes different kinds of frontend, backend and self-hosted projects, with all dependencies managed by pnpm. Consider a folder structure like the one below:

  • frontend/* where we keep our frontend projects.
  • backend/* for backend projects.
  • self-hosted/* for third-party self-hosted products.
  • shared/* for packages shared between the frontend and backend of a single product.
  • standard/* for packages common to all frontend and backend projects.
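
For pnpm to treat these folders as workspaces, a pnpm-workspace.yaml at the repository root lists them; a minimal one for this layout might look like this:

# pnpm-workspace.yaml
packages:
  - 'frontend/*'
  - 'backend/*'
  - 'self-hosted/*'
  - 'shared/*'
  - 'standard/*'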

Let’s clarify these a bit more.

Standard Packages

Standard packages are to our projects what lodash or underscore is to javascript modules: opinionated, common code.

standard/core: basic validation functions, date manipulation and similar modules that can be used both in browsers and node.

standard/backend: logging, mailers, database connectors or anything that is common for node apps.

standard/frontend: client side logging, browser specific feature modules and anything that can only be used in client side.

standard/svelte: svelte actions, stores, components and everything specific to svelte.

standard/react: react hooks, state, components and everything based on react.

All of these packages have build and publish commands in their package.json, as these commands are used during release.
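
For illustration, the relevant part of such a package.json might look like this (the build tool and the publish helper shown here are assumptions and differ per package):

{
  "name": "@gozeltr/standard-frontend",
  "version": "2.3.0",
  "scripts": {
    "build": "vite build",
    "publish": "node ../../scripts/publish-package.mjs"
  }
}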

Shared Packages

These packages are product-specific. Let’s say you have a product called “foogle” and you have code shared between the frontends and backends of this product:

shared/foogle-schemas: to validate inputs, responses and other kinds of data in foogle frontends and backends.

Like the standard packages, they have build and publish commands in their package.json that are used during release.

Self-hosted Packages

This is where we keep the configuration of the open source products we use. The only requirement of the deploy system is that they run on docker, which most open source projects do.

self-hosted/foogle-directus: to deploy a directus instance for our product foogle.

self-hosted/matomo: to deploy matomo analytics to track our products’ stats.

And so on. These aren’t really npm packages and we don’t build or publish them, but each still has a package.json that indicates it is a self-hosted kind of project.
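
As a sketch, such a package.json can be little more than a marker (the projectType field here is an assumption, not part of any npm standard):

{
  "name": "@gozeltr/matomo",
  "private": true,
  "projectType": "self-hosted"
}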

Backend Packages

Regular node.js apps that have fetch, build and publish commands and are ready to be deployed in docker containers. The fetch command pulls data and env before the build, and the build command just bundles the code. The framework you use doesn’t really matter.

backend/foogle-api: handles requests coming from the frontend.
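
A rough sketch of the scripts in such a package (the helper scripts are assumptions; CONTAINER_CONN_STR and APP_VERSION are the same variables the deploy phase substitutes later on):

{
  "name": "@foogle/api",
  "scripts": {
    "fetch": "node scripts/fetch-env.mjs",
    "build": "node scripts/bundle.mjs",
    "publish": "node scripts/publish-package.mjs",
    "docker:build": "docker build -t $CONTAINER_CONN_STR/foogle-api:$APP_VERSION .",
    "docker:push": "docker push $CONTAINER_CONN_STR/foogle-api:$APP_VERSION"
  }
}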

Frontend Packages

No matter what framework you use, frontend packages should have fetch, build and publish commands: they might fetch some data (and env) before the build, the build puts everything into a dist directory, and the publish command creates a package from all of it.

frontend/foogle-website: a website built in, let’s say, svelte.

frontend/foogle-docs: and this one in react/next.js.

The Way Devs Push Code

Since we want devs to take full responsibility for their code and push to production freely, we have to expect their commits in a certain format. We have a releaser script inside the monorepo to help devs push their code in a formatted, configured way. It asks them which projects they want to push and collects the corresponding commit messages based on the conventional commit format.

Let’s say you changed something in standard/frontend:

# run the releaser script
> pnpm exec releaser

# it will detect changed packages
✔ Analyzing workspace: 19 packages found.
✔ Finding changes: 1 package has been changed. (standard/frontend)
ℹ Staging changes.
# will ask us which changed packages we want to stage for commit
? Do you want to stage all changes? › no, i will pick one by one per project / yes

✔ Staging changes: 1 package has been staged. (standard/frontend)
ℹ Collecting commits.
# time to create commit messages based on conventional commit format
? (@gozeltr/standard-frontend): Choose the type of the change: › - Use arrow-keys. Return to submit.
❯   build - Changes that affect the build system or external dependencies (example scopes: gulp, broccoli, npm)
    ci
    docs
    feat
    fix
    perf
    refactor
    revert
    style
  ↓ env

# enter a commit message
✔ (@gozeltr/standard-frontend): Choose the type of the change: › refactor
? (@gozeltr/standard-frontend): Enter a short description for the change: › 

# breaking change?
? (@gozeltr/standard-frontend): Is this a breaking change?: › no / yes

# we can add more commit messages too
? (@gozeltr/standard-frontend): Do you want to add another change?: › no / yes

# and finally it commits
✔ Collecting commits.
ℹ Finding side effects.
✔ Finding side effects.
ℹ Pushing changes.
✔ Pushing changes.

So in general, with each push, we send a bunch of commit messages, each containing the package name, message and type information:

# pseudo code
git add {packages we selected}
git commit -m "build(@standard/frontend): some update" -m "perf(@foogle/api): another update" # ...
git push

There is one more thing this push script does, and that’s merging with the remote. Just before pushing the code, it runs git fetch --tags && git merge --no-edit origin/main main so that changes made by others get merged before the push.
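
So the full push sequence under the hood looks roughly like this (a sketch; the staged path and commit message are illustrative, error handling omitted):

# stage only the packages the dev selected
git add standard/frontend
# one commit, one -m per changed package
git commit -m "refactor(@gozeltr/standard-frontend): simplify the logging module"
# merge in whatever others pushed in the meantime, then push
git fetch --tags && git merge --no-edit origin/main main
git push origin main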

The Way A Change Gets Deployed

After this push, our repository triggers the release and deploy pipelines. I prefer buildkite, but many CI services would do. The pipeline has two main phases: release and deploy.

In the release phase, the standard and shared packages get built and unit tests run for all packages. After that, we collect the commit messages created since the last “release”, the last release being the most recent commit tagged with something starting with “v”.

import { execSync } from 'node:child_process';

// the last release is the most recent commit tagged with a "v"-prefixed version
const lastVersionTag = execSync(
	`git describe --tags --abbrev=0 --match="v[0-9]*"`
).toString().trim();
const commitMessages = execSync(`git log --format=%B ${lastVersionTag}..HEAD`).toString();

That’s how we read which packages changed and the type of each change.

Finally, we begin to create artifacts. Artifacts are versioned, packaged npm packages published with pnpm publish. I use AWS CodeArtifact and Elastic Container Registry to store these packages and container images. Both frontend and backend packages get published, each versioned based on its commit type. The script also creates a releases section in the root package.json that keeps the changed packages and their new versions. Changed self-hosted packages are placed in there too.
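
A trimmed-down example of what that releases section might look like (package names and versions here are purely illustrative):

{
  "name": "@gozeltr/monorepo",
  "version": "4.12.0",
  "releases": {
    "@gozeltr/standard-frontend": "2.3.1",
    "@foogle/api": "1.8.0",
    "@gozeltr/matomo": "0.0.7"
  }
}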

There are various kinds of errors you might face at this stage. If the script hits one, it removes the published packages from the artifact stores along with any git tags it pushed to the remote.
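
Conceptually, that cleanup could boil down to something like this (a sketch; the domain, repository, package and tag values are placeholders):

# drop the just-published version from CodeArtifact
aws codeartifact delete-package-versions \
    --domain my-domain --repository my-repo \
    --format npm --namespace gozeltr \
    --package standard-frontend --versions 2.3.1
# and remove the tag this failed release pushed to the remote
git push --delete origin v4.12.0
git tag -d v4.12.0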

Finally, it bumps the monorepo version and commits the changes.
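
That last step is plain git again; roughly something like this (a sketch with an illustrative version number, assuming the script has already written the new version into the root package.json):

git add package.json
git commit -m "chore(release): v4.12.0"
# tag the release so the next run can find it via git describe --match="v[0-9]*"
git tag v4.12.0
git push origin main --tags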

We now have our artifacts and release map ready to deploy. The release map is the releases object explained above: it keeps the changed package names and their versions in the root package.json. Let’s move on to the deploy phase.

The deploy phase runs in plain shell. It first reads the release map:

release_map_text=$(pnpm view ${MONOLITH_PKG_NAME}@${version} releases --json)

It extracts the package and version information from the release map and runs the corresponding deploy routine for each package:

while read -r line
do
    # each line looks like: @scope/pkg-name='1.2.3'
    pkg_name=${line%%=*}
    pkg_version_quoted=${line#*=}
    pkg_version=${pkg_version_quoted//\'/}

    if [ "$is_frontend" = true ]; then
        # fetch the published package tarball from aws first
        tar_file=$(fetch_package_tar "$pkg_name" "$pkg_version")
        # then just copy its dist output into the server dir
        rm -rf "${web_dir:?}"/*
        tar -xzf "$tar_file" -C "$web_dir"
        mv "$web_dir"/package/dist/* "$web_dir"
        rm -rf "$web_dir"/package
    fi

    if [ "$is_backend" = true ]; then
        # prepare env
        compose_service_file="docker-compose.${pkg_slug}.yml"

        # download & extract the published package
        tar_file=$(fetch_package_tar "$pkg_name" "$pkg_version")
        rm -rf "${web_dir:?}"/*
        tar -xzf "$tar_file" -C "$web_dir"
        mv "$web_dir"/package/* "$web_dir"/package/.[!.]* "$web_dir"
        rm -rf "$web_dir"/package

        # create the container image
        current_dir=$PWD
        cd "$web_dir"
        pnpm run docker:build
        pnpm run docker:push
        cd "$current_dir"

        # create the compose service file
        envsubst '${CONTAINER_CONN_STR},${APP_VERSION},${APP_PORT},${APP_URL}' < "$web_dir/docker-compose.service.yml" > "$compose_service_file"
        compose_service_files+=" $compose_service_file"
    fi

    if [ "$is_self_hosted" = true ]; then
        compose_service_file="docker-compose.${pkg_slug}.yml"
        # fetch env and create the compose service file
        cp "$project_dir"/docker-compose.service.yml ./"$compose_service_file"
        compose_service_files+=" $compose_service_file"
    fi
done < <(jq -r 'to_entries | map("\(.key)=" + @sh "\(.value|tostring)") | .[]' <<< "$release_map_text")

Frontend deployments are done at this point, but notice the docker compose file concatenation above: if there is more than one backend or self-hosted package to deploy, we gather their compiled docker service files and bring them all up together at the end:

# prepare compose cmd args if there are any docker services need to deploy
compose_cmd_args=""
if [ -n "$compose_service_files" ]; then
    for file in $compose_service_files; do
        compose_cmd_args+=" -f $file"
    done
fi

# put production containers into live
if [ -n "$compose_cmd_args" ]; then
    docker compose -f docker-compose.yml$compose_cmd_args up -d
    docker builder prune -f
fi

That’s all for the deploy.

Auto Commit Sometimes

But there is one more thing. There are cases where we don’t change the codebase but still want to deploy: content changes in the CMS, for example, or environment variable changes. Since the system is driven entirely by commit messages, making a commit for this kind of change seems impossible, or at least doesn’t make sense. In truth, yes, it doesn’t make sense for this setup, but let’s hack the system. We add an auto-commit command to our releaser script; whenever we run it, we specify which package it should run for and a commit message. It generates a timestamped auto-commit file in the project directory so the repo has something to commit.

pnpm exec releaser auto-commit --msg "fix(@foogle/website): env var changed [auto commit]"

This command doesn’t create the commit itself; it instructs buildkite to create the commit and push. The reason we do this on the server is that the deploy trigger is not a developer: it could be a content management system, an environment management system or something else.
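
How it instructs buildkite isn’t important here, but as a rough sketch it could be a build triggered through the buildkite REST API with the commit message passed along (the organization, pipeline and token values below are placeholders):

curl -X POST "https://api.buildkite.com/v2/organizations/my-org/pipelines/monorepo/builds" \
    -H "Authorization: Bearer $BUILDKITE_API_TOKEN" \
    -d '{
        "commit": "HEAD",
        "branch": "main",
        "message": "fix(@foogle/website): env var changed [auto commit]"
    }'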

All Goes Through A Single Pipeline

We mentioned the pipeline has two main phases, release and deploy. There is one more phase that runs the auto commit logic: the commit phase.

- label: 'commit'
  key: 'commit'
  command: .buildkite/commit.sh
  env:
      NODE_ENV: production
      HUSKY: 0
  if: "build.message =~ /auto commit/"

- wait

- label: 'release'
  key: 'release'
  command: .buildkite/release.sh
  env:
      NODE_ENV: production
      HUSKY: 0
  if: "build.message !~ /auto commit/"

- wait

- label: 'deploy'
  key: 'deploy'
  command: .buildkite/deploy.sh
  depends_on: 'release'
  env:
      NODE_ENV: production
  if: "build.message !~ /auto commit/"

This is how the whole setup works, in summary. I know I cut a lot of code from the article; some things are simply too much to explain within this scope, and the parts related to authentication in particular can be cumbersome, since all of these services have to work together reliably. I chose not to use the “relatively ready to use” monorepo solutions, as they aren’t really satisfying in every aspect of continuous integration and deployment. I’ll continue to maintain this setup and make it more reliable as we progress.

All coffee beans reserved. © 2024 Murat Gözel.