Thursday, June 26, 2025

Docker

I've always had a bit of fear when needing to edit anything Docker-related at work, as I'm just not an expert in that space. Recently, though, something clicked for me.

It originally revolved around a Linux command line tool someone at work was talking about that did not work on macOS. The only way to make it work on a Mac was to use docker run.

After chatting with ChatGPT about the topic and "how to think" about Docker, it came up with this summary:

A way to run software inside isolated, reproducible Linux environments — regardless of the host OS.

Thinking of Docker as a way to run Linux command line tools anywhere is simple, and it makes it easy to see why it's so popular.
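
For example, any Linux-only tool is effectively one docker run away, even on a Mac (a generic example, not the specific tool from work):

# drop into a throwaway Ubuntu shell; inside it, uname -a reports a Linux kernel
docker run --rm -it ubuntu:24.04 bash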

I think back to this Guide To Deploying a Web App I wrote about a year ago, where it was tricky to get nginx working right. I had to edit files directly on the server's file system, which is definitely not the right way to do things, and that kind of setup isn't reproducible like it is with Docker. The whole philosophy of Docker is that you can easily tear everything down, build it again, and things just work as they did before.

This inspired me to build a little web app with Docker using Bun, PostgreSQL and nginx.
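
The project ends up looking roughly like this (the app file names, including package.json, are inferred from the snippets below):

demo-bun-docker-app/
├── Dockerfile
├── docker-compose.yml
├── hello-world-app/
│   ├── index.ts
│   ├── db.ts
│   └── package.json
└── nginx/
    └── default.conf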

  1. You start with the Dockerfile, which basically just copies in the files the app needs, installs its dependencies, and runs the code
FROM oven/bun

WORKDIR /app
COPY ./hello-world-app ./
RUN bun install

CMD ["bun", "run", "index.ts"]

It's important to note that a container has no access to the host's file system by default (COPY bakes the files into the image at build time), and this is a good thing for security reasons.
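
If a container does need to see part of the host's file system, you opt in explicitly with a bind mount. As a sketch (run from the project root), this would install the app's dependencies into the host directory through the mount:

docker run --rm -v "$(pwd)/hello-world-app":/app -w /app oven/bun bun install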

  2. Next is a docker-compose.yml file that sets up the application. It pulls in the images the app needs to run, and the directory name demo-bun-docker-app groups the 3 containers together; a Docker GUI like OrbStack shows this grouping when running locally. We can hit localhost:8080 to access the endpoints, and DATABASE_URL is an environment variable the app is able to read
services:
  db:
    image: postgres:15
    restart: always
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: demo
    volumes:
      - db_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  hello-world-app:
    build: .
    restart: always
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://user:pass@db:5432/demo

  nginx:
    image: nginx:alpine
    restart: always
    ports:
      - "8080:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - hello-world-app

volumes:
  db_data:
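
With the three services defined, the whole stack can be torn down and rebuilt with a couple of commands, which is exactly the reproducibility win mentioned above:

docker compose up -d --build   # build the app image and start all 3 containers
docker compose down            # stop and remove the containers
docker compose down -v         # same, but also delete the db_data volume (wipes the database)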
  3. The app code is a rudimentary Bun server
import { client } from './db';

const server = Bun.serve({
  port: 3000,
  async fetch(req) {
    const url = new URL(req.url);
    const nameRes = await client.query('SELECT name FROM people LIMIT 1');
    const name = nameRes.rows[0]?.name ?? 'World';

    return new Response(`<html><body><h1>Hello ${name}</h1></body></html>`, {
      headers: { 'Content-Type': 'text/html' },
    });
  },
});

console.log(`Server running on http://localhost:${server.port}`);
  4. It has a DB client file
import { Client } from 'pg';

export const client = new Client({
  connectionString: process.env.DATABASE_URL,
});

await client.connect();
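
One assumption here: pg needs to be listed in hello-world-app/package.json so the bun install in the Dockerfile picks it up. If it isn't there yet, it can be added with:

bun add pg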
  5. The nginx config lives in an nginx/default.conf file. It's much better for this config to live in the repo and be committed to git rather than only existing on the host machine's file system (like I did here)
server {
    listen 80;

    location / {
        proxy_pass http://hello-world-app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
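
nginx resolves hello-world-app by its Compose service name, so once the stack is up, a request to port 8080 goes through nginx to the Bun server:

curl -i http://localhost:8080/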
  6. To access the DB (and run any SQL) I can run this command
docker exec -it <postgres-container-name> psql -U user -d demo

To get the postgres-container-name I can run docker ps, find the Postgres container, and copy the name from the NAMES column. I can also just connect to the DB locally in a GUI like TablePlus using the DATABASE_URL.
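
For example, the people table the app queries can be created and seeded straight from that command (the table shape here is just assumed from the SELECT in the server code):

docker exec -it <postgres-container-name> psql -U user -d demo \
  -c "CREATE TABLE IF NOT EXISTS people (name TEXT); INSERT INTO people (name) VALUES ('Docker');"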

  7. Now that this is all set up you can develop locally and do your usual development work

  8. Where the real win comes in is deploying this on a cloud compute instance. There are a few steps, but overall it's a relatively simple process:

    1. Buy an instance
    2. SSH into it
    3. Install Docker and Docker Compose
    4. Copy the code across to the instance
    5. Run the app in detached mode with docker compose up -d --build
    6. Update the instance firewall with ufw
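
For step 6, a minimal ufw setup would look something like this (allow SSH before enabling so you don't lock yourself out):

sudo ufw allow 22/tcp     # keep SSH access
sudo ufw allow 8080/tcp   # the port nginx is published on
sudo ufw enable
sudo ufw status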
  9. You should now be able to access the app endpoints on the internet at http://<your-ip>:8080

  10. You would obviously want to use HTTPS, but that would be easily done with Let's Encrypt and a quick tweak to the nginx config
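
I haven't set that up here, but as a rough sketch it would mean getting certificates for a domain from Let's Encrypt (e.g. with certbot), mounting them into the nginx container, and extending default.conf with something like this (example.com is a placeholder):

server {
    listen 443 ssl;
    server_name example.com;

    # paths assume the Let's Encrypt certificates are mounted into the container
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://hello-world-app:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

You'd also publish port 443 in the nginx service's ports in docker-compose.yml and point the domain's DNS at the instance.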