Deploying Static Applications with Docker

When throwing together a basic UI, lately I've been using React.

It's fun for smaller projects, but it's entirely useless for major projects. Because the HTML lives inside the JS (JSX), the artists/designers who write your HTML are pretty much sidelined after the initial design. All subsequent changes are made by engineers, who should never need to know the difference between aqua and cyan and should never care about box dimensions. That's why you hired an HTML artist. UX engineer is an oxymoron.

In a different part of the client-side world is Angular, which forces you to deal with TypeScript. While it's one of the few languages, along with Go and to some extent Python, to get interfaces right, that one good thing isn't enough to make me ever want to go back to dealing with types. Sixteen years of C# is enough, thank you. Types lead to false negatives: you don't care that something is an integer, you care that it's between 2 and 12. Tests always outrank types.

Regardless of the poison you drink, you have to strip something out to make it work on the web. In the case of React, the JSX must be compiled away. In the case of Angular, the TypeScript must be compiled away. In both cases, the concept of components must be flattened. Thus, you always end up with a build process for client-side applications.
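For example, with create-react-app (the project name here is just a placeholder), the entire build process is an npm script that compiles the JSX away and drops plain static files into a build folder:

npm install -g create-react-app
create-react-app example
cd example
npm run build    # emits static HTML/JS/CSS into ./build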

Raw ES5 + flux pattern is raw legit power. No frameworks. Check it out.

Furthermore, there's always more to it than mere files. You always have to think about how those files will get to the end user. Static files contain no inherent execution mechanism; something must serve them. That, of course, is what a web server is for.
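To see that in the most basic way possible (this is just an illustration, not part of the setup below), you can point a throwaway Nginx container at a local build folder; the port and path are placeholders:

docker run --rm -p 8080:80 -v "$PWD/build":/usr/share/nginx/html:ro nginx:alpine

The official nginx image serves /usr/share/nginx/html by default, so your files show up at http://localhost:8080/.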

To summarize: to get your application deployed, you need a way to build it and a way to serve it. How do you get the files? A build process. How do you deliver them? A web server.

There's a single, very simple bullet that covers both: Docker.

Building

Examine the following single Dockerfile for building a React application:

#+ this staging area is thrown out, so no need to optimize too much
FROM node:8.11-alpine as staging

WORKDIR /var/app

RUN npm install -g create-react-app

#+ nginx.conf, base64-encoded (decoded in the Configuration section below)
RUN echo c2VydmVyIHsKICAgIGxpc3RlbiA4MDsKCiAgICBsb2NhdGlvbiAvIHsKICAgICAgICByb290IC92YXIvYXBwOwogICAgICAgIHRyeV9maWxlcyAkdXJpIC9pbmRleC5odG1sOwogICAgfQp9Cg== | base64 -d > /etc/nginx.conf

#+ copy package.json by itself first so "npm install" stays cached unless dependencies change
COPY package.json /var/app

RUN npm install

COPY . /var/app

RUN npm run build

#+ production stage: only the built files and the Nginx config ship in the final image
FROM nginx:1.13.9-alpine

COPY --from=staging /var/app/build /var/app/
COPY --from=staging /etc/nginx.conf /etc/nginx/conf.d/default.conf

STOPSIGNAL SIGTERM

ENTRYPOINT ["nginx", "-g", "daemon off;"]

There are two stages: staging and your application.

The staging area starts with a Node binary, sets up the React environment by installing create-react-app (Facebook is horrible at naming things), then does some magical voodoo (we'll come back to that), then builds the application.
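One assumption worth calling out: COPY . /var/app pulls in your entire build context, so a .dockerignore along these lines keeps your local node_modules, old builds, and git history out of the image (adjust to taste):

node_modules
build
.git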

The second stage starts with an Nginx binary, copies over your built application and the config file, then runs Nginx.

In the end, Docker will create a binary of your application that will run Nginx, which will serve your files.

That's literally everything you need.

You just build and run:

docker build . -t registry.gitlab.com/your_gitlab_name/example:prod-latest
docker run -p 80:80 registry.gitlab.com/your_gitlab_name/example:prod-latest

Your application is working and production-ready.
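Since the tag already points at a registry (GitLab in this example), shipping the binary somewhere else is one more command, assuming you're logged in:

docker login registry.gitlab.com
docker push registry.gitlab.com/your_gitlab_name/example:prod-latest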

Configuration

About that magic voodoo...

When using Docker, you don't always need to mess with files. If you can avoid adding files to your application, you should. Because RUN commands go through a shell, you can use pipes and stdout redirection to do much of this inline.

The staging area contained the following line:

RUN echo c2VydmVyIHsKICAgIGxpc3RlbiA4MDsKCiAgICBsb2NhdGlvbiAvIHsKICAgICAgICByb290IC92YXIvYXBwOwogICAgICAgIHRyeV9maWxlcyAkdXJpIC9pbmRleC5odG1sOwogICAgfQp9Cg== | base64 -d > /etc/nginx.conf

When you run the command without the redirect in a shell, you get the following:

[dbetz@ganymede ~]$ echo c2VydmVyIHsKICAgIGxpc3RlbiA4MDsKCiAgICBsb2NhdGlvbiAvIHsKICAgICAgICByb290IC92YXIvYXBwOwogICAgICAgIHRyeV9maWxlcyAkdXJpIC9pbmRleC5odG1sOwogICAgfQp9Cg== | base64 -d
server {
    listen 80;

    location / {
        root /var/app;
        try_files $uri /index.html;
    }
}

It's the nginx.conf file.
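If you ever want to regenerate that blob after tweaking the config, just run it back through base64. Assuming you saved the decoded version as nginx.conf:

base64 < nginx.conf | tr -d '\n'

Paste the output back into the RUN line and rebuild.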

Now you can see why the second stage (FROM nginx:1.13.9-alpine), the one you're putting in production, is Nginx. This is literally the web server serving up the production-ready files.

Run it on your server and you're done.
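A quick sanity check from the host (the hostname and port are whatever you mapped in docker run):

curl -I http://localhost/

You should get a 200 back from Nginx with the headers for your index.html.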

Security

Nothing is complete without SSL/TLS, but I don't recommend handling that in your Docker binaries.

Your binaries represent the application and only the application. SSL is an infrastructure add-on to your application.

Do your TLS termination on the host machine. This also gives you more flexibility: a single host-level Nginx can listen on all addresses at once and use server_name to match requests, letting a single IP address serve an army of FQDNs.
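As a sketch only (the server name, certificate paths, and published port are assumptions, not part of the setup above), a host-level server block that terminates TLS and hands requests to the container might look like this:

server {
    listen 443 ssl;
    server_name example.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/example.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

Run the container with -p 8080:80 (or whichever port you prefer) and the host Nginx handles the rest.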