Dockerizing the NSR Application

Introduction

Previously, we were using merge fields in a Word doc within a Laserfiche Workflow to generate a PDF. This worked, but it was not very flexible: any change to the PDF layout meant editing the Word doc, re-uploading it to Laserfiche, and re-publishing the Workflow. It also introduced the possibility of human error, since every merge field had to be manually linked to the correct data.

The solution to this problem is to use a browser automation tool, Playwright, to print the form to a PDF with a headless browser. This way, we can design the form with HTML and CSS and populate it with data using Javascript. This is a much more flexible approach: we can change the layout of the form without re-publishing the Workflow, and populating the form with Javascript is far more reliable than hand-wiring merge fields.
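
As a rough sketch of what that looks like with Playwright's Node API (the route, function, and file names here are made up for illustration):

// print-form.js: minimal sketch, assuming the app serves the filled-in form as an HTML page
const { chromium } = require("playwright");

async function printFormToPdf(formUrl, outputPath) {
  // launch headless Chromium (page.pdf is only supported in Chromium)
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // load the HTML/CSS form; client-side Javascript populates it with data
  await page.goto(formUrl, { waitUntil: "networkidle" });

  // print the rendered page to a PDF file
  await page.pdf({ path: outputPath, format: "Letter", printBackground: true });

  await browser.close();
}

// hypothetical form route served by the NSR app
printFormToPdf("http://localhost:3000/forms/123", "nsr-form.pdf");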

Problem

Our NSR app was running on a RedHat Linux box. The catch is that Playwright's bundled browsers are only officially supported on Debian- and Ubuntu-based distributions, not RedHat, so we had to find a way to give the app a supported environment without moving it to a new server. On to Docker.

Docker

Docker is a platform for developing, shipping, and running applications. It allows us to package up an application with all of its dependencies into a standardized unit. That unit runs the same way on any machine with a Docker engine, regardless of which Linux distribution or system libraries the host has installed.

Dockerfile

A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build, users can create an automated build that executes several command-line instructions in succession.

We start with Playwright's base image for Ubuntu 22.04 (jammy) from the Microsoft Container Registry and install the system packages Prisma and Playwright need. The build is then split into stages: a dependencies stage that installs all npm packages (including devDependencies), a production-deps stage that prunes them down to production-only dependencies, a build stage that generates the Prisma client, installs Chromium, and builds the app, and a final stage that copies over only what the running app needs, exposes port 3000, and starts the app.

# syntax = docker/dockerfile:1.2


FROM mcr.microsoft.com/playwright:v1.37.1-jammy as base

ENV NODE_ENV production
# Install dependencies for Prisma
RUN apt-get update && apt-get install -y openssl
# Install dependencies for Playwright
RUN apt-get update && apt-get install -y  libnss3 libatk-bridge2.0-0 libdrm-dev libxkbcommon-dev libgbm-dev libasound-dev libatspi2.0-0 libxshmfence-dev

# setup dev node_modules
FROM base as dependencies

WORKDIR /app

# adds the projects package.json and package-lock.json to container
ADD package*.json ./

# install all dependencies including devDependencies
RUN npm install --include=dev 



# setup production node_modules
FROM base as production-deps

ENV NODE_ENV production

WORKDIR /app

# copy the node_modules from the dependencies stage
COPY --from=dependencies /app/node_modules /app/node_modules
ADD package.json ./

# remove the devDependencies
RUN npm prune --omit=dev

# build the app
FROM base as build

WORKDIR /app

# copy the node_modules from the dependencies stage
COPY --from=dependencies /app/node_modules /app/node_modules

# copy the prisma folder
ADD prisma ./prisma

# generate prisma client and install playwright
RUN npx prisma generate
RUN npx playwright install --with-deps chromium

# copy the rest of the app
ADD . .
# build the app
RUN npm run build


# Build prod with minimal footprint
FROM base

ENV NODE_ENV="production"

WORKDIR /app

# add some necessary files for the app
ADD service-account.json /app/service-account.json
ADD client_secret.json /app/client_secret.json

# copy the node_modules from the production-deps stage
COPY --from=production-deps /app/node_modules /app/node_modules

# copy the generated Prisma client from the build stage
COPY --from=build /app/node_modules/.prisma /app/node_modules/.prisma

# copy the rest of the items from the build stage needed for the app
COPY --from=build /app/build /app/build
COPY --from=build /app/package.json /app/package.json
COPY --from=build /app/public /app/public
COPY --from=build /app/prisma /app/prisma

# expose port 3000
EXPOSE 3000

# run the app
CMD ["npm", "run", "start"]

Docker Build

We can build the Docker image using the following command:

docker build -t nsr .

Docker Run

We can run the Docker image using the following command:

# lots more env variables than this but you get the idea
docker run --env "DATABASE_URL=someurlpath;withUsername;passwordcombo" -p 3000:3000 --name nsr_prod_c1 -d nsr

Docker Create

We can create a container without starting it by using docker create in place of docker run. It takes the same flags, minus -d, since the container is not started yet.

docker create --env "DATABASE_URL=someurlpath;withUsername;passwordcombo" -p 3000:3000 --name nsr_prod_c2 nsr

I'm thinking this may be the way to go for our production environment. We can create a container, and then we can start and stop the container as needed. We can also delete the container and create a new one if we need to update the app.

Docker Stop and Start

So in prod we can swap containers by stopping one and starting the other in a single line:

docker stop nsr_prod_c1 && docker start nsr_prod_c2

If we had the nsr_prod_c1 container running, this command would stop it and start the nsr_prod_c2 container we created above. This is how we update the app: build a new image, create a new container from it, then stop the old container and start the new one, as sketched below.
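
Putting it together, a rough sketch of that rollover (the container names and the abbreviated env variables follow the examples above):

# build an updated image
docker build -t nsr .

# create a fresh container from it (same env variables as the run example)
docker create --env "DATABASE_URL=someurlpath;withUsername;passwordcombo" -p 3000:3000 --name nsr_prod_c2 nsr

# swap: stop the old container, start the new one
docker stop nsr_prod_c1 && docker start nsr_prod_c2

# once the new container looks good, remove the old one
docker rm nsr_prod_c1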

Conclusion

Docker is a great tool for running applications consistently across machines. It allows us to package up an application with all of its dependencies into a standardized unit, so our app gets the Ubuntu environment Playwright supports even though the host runs RedHat. It also gives us containers we can create, start, and stop as needed, which turns updating the app into building a new image, creating a new container, and swapping it in for the old one.