Saturday, December 16, 2023

Guide To Deploying A Web App

Since I started doing web development professionally in 2018, I've tried all sorts of methods to deploy apps. The OG is still Heroku, but I've also switched between Netlify, AWS Lightsail and Deno Deploy.

What I've never done fully end to end is deploying on a cloud computer that I can ssh into. There are heaps of options, but the three I've tested in the past are AWS EC2, Digital Ocean droplets and Vultr cloud compute instances.

For this guide I went with Digital Ocean droplets as I'm the most comfortable using their UI.

To get an outline of what I wanted to achieve, I sent this query to ChatGPT:

i want to deploy node.js code to a digital ocean droplet and run a web app with https, how do i do this

It gave me a solid response and I was underway.

The node code is incredibly simple. Literally just a "Hello World" route with Express. It looks like this:

// index.js
const express = require('express')
const app = express()
const port = 3000

app.get('/', (_req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})
Everything else is stock standard: node_modules and a package.json.

Here are the steps I took to get the job done.

  1. Get a domain name. I used AWS Route 53 (and got hmtestingstuff.com).
  2. Get a Digital Ocean droplet. Once the droplet is running, add its IP address to A records in Route 53, configured as follows:
Record Name | Value
hmtestingstuff.com | <IP Address>
www.hmtestingstuff.com | <IP Address>
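Not in my original notes, but you can confirm the records have propagated with dig (run from your local machine):
dig +short hmtestingstuff.com
# should print the droplet's IP address (the www record can be checked the same way)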
  3. ssh into the droplet. I already had SSH keys set up with the droplet, so I didn't need to enter a password.
  4. Install node via nvm.
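My notes don't have the exact commands, but the standard nvm flow looks like this (check the nvm README for the current version of the install script; v0.39.5 below is just an example):
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
# open a new shell (or source ~/.bashrc) so nvm is on the path, then:
nvm install --lts
node --version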
  5. Pull down the code via git clone. To do this I also needed to set up SSH keys with GitHub (using a key generated on the droplet itself); follow the GitHub guide on how to do this.
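Roughly, per GitHub's guide (the email and repo path here are placeholders):
ssh-keygen -t ed25519 -C "you@example.com"
cat ~/.ssh/id_ed25519.pub
# paste the public key into GitHub -> Settings -> SSH and GPG keys, then:
git clone git@github.com:<user>/<repo>.git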
  6. Install dependencies with npm install, then run the service to test everything works (node index.js or a start script). Run curl http://localhost:3000 in a separate ssh window to confirm we're getting a response. All together:
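npm install
node index.js
# in a second ssh session:
curl http://localhost:3000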
  7. When everything is working, you can then run the node process in the background with nohup so it keeps running after the ssh session ends (note it won't restart after a crash or reboot; a process manager or a systemd unit would cover that, but this is fine for a small project):
nohup node index.js > /dev/null 2>&1 &
  8. Confirm the node process is running:
lsof -i :3000
  9. Install nginx:
sudo apt update
sudo apt install nginx
  10. Create the nginx config file for the site:
sudo nano /etc/nginx/sites-available/hmtestingstuff.com
  11. Copy and paste the below into this file; it acts as a reverse proxy on port 80:
server {
    listen 80;
    server_name hmtestingstuff.com www.hmtestingstuff.com;

    location / {
        # forward all requests to the node app
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # pass websocket upgrade headers through
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        # preserve the original host header
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
  12. Create a symbolic link between sites-available and sites-enabled:
sudo ln -s /etc/nginx/sites-available/hmtestingstuff.com /etc/nginx/sites-enabled
  13. Confirm the nginx syntax is correct:
sudo nginx -t
  14. You may need to stop Apache, since it also binds port 80 and will prevent nginx from starting (this step may not be needed on every droplet):
sudo systemctl stop apache2.service
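If Apache was the culprit, it's probably also worth stopping it from coming back on reboot (assuming the service is called apache2, as it is on Ubuntu):
sudo systemctl disable apache2.service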
  15. Start nginx:
sudo systemctl start nginx
  16. Confirm everything is running correctly by viewing the status of nginx (it should show active, not repeatedly restarting):
systemctl status nginx.service
  17. We can reload nginx after config changes while debugging:
sudo systemctl reload nginx
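Not in my original steps, but when nginx misbehaves the error log is the first place I'd look:
sudo tail -f /var/log/nginx/error.log
# or check the service logs:
sudo journalctl -u nginx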
  18. Open up the firewall with ufw. Ports 80 and 443 need to be open, as does the ssh port 22:
sudo ufw status
# if this says ufw is disabled you'll need to enable it,
# but add the allow rules first so you don't lock yourself out of ssh
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 443
sudo ufw enable
sudo ufw reload
sudo ufw status
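An aside I didn't use: on Ubuntu, nginx and OpenSSH register ufw application profiles, so the same rules can be written as:
sudo ufw allow OpenSSH
sudo ufw allow 'Nginx Full'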
  19. Confirm the site works over http. In my example I could access http://hmtestingstuff.com, though most browsers show a "Not secure" warning at this point.
  20. Now we want to set up https. Follow the certbot guide for nginx and Ubuntu; this is the command I used to get the certificates into the correct file locations:
sudo certbot certonly --nginx
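For reference, on Ubuntu certbot and its nginx plugin can be installed via apt (the certbot guide also covers the snap-based install), and the automatic renewal can be smoke-tested with a dry run:
sudo apt install certbot python3-certbot-nginx
# simulates renewal without touching the real certificates
sudo certbot renew --dry-run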
  21. Update the nginx sites-available config:
sudo nano /etc/nginx/sites-available/hmtestingstuff.com
  22. This is the final version of my config. You can see that port 80 (http) now just redirects to https:
server {
    listen 80;
    server_name hmtestingstuff.com www.hmtestingstuff.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name hmtestingstuff.com www.hmtestingstuff.com;

    ssl_certificate /etc/letsencrypt/live/hmtestingstuff.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/hmtestingstuff.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/hmtestingstuff.com/chain.pem;

    # Security settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384';

    # Enable session resumption to improve SSL/TLS performance
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # Allow a larger request body to be passed
    client_max_body_size 20M;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
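After editing the config, it's worth re-running the syntax check before reloading:
sudo nginx -t && sudo systemctl reload nginx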
  23. Test the site works over https. I can now access my site (https://hmtestingstuff.com) and see the "Hello World!".

I couldn't have put all of this together without the help of ChatGPT. It's an incredible learning tool: you can throw basically any code or config related question at it and, for the most part, it has excellent answers.

To close this out, we also want to be sure we can make updates. This is how that's done.

  1. Work locally and make changes, test and push code to GitHub
  2. ssh into droplet
  3. Kill the process running node on port 3000
kill $(lsof -t -i:3000)
  4. If you go to the site now it will be down. This is where solutions like Docker are much better, though it doesn't matter so much for my own small projects. I assume pushing a new Docker container and then switching traffic from the old container to the new one would avoid the downtime entirely. You're also guaranteed the software is exactly the same; for instance, the node version on my local machine might differ from the node version on my cloud compute instance, which is another big reason to use Docker. (There's a sketch of what the swap could look like after this list.)
  5. cd into the app directory and git pull
  6. Start the app back up:
nohup node index.js > /dev/null 2>&1 &
  7. The site should be back up with the updated changes.
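I haven't actually done the Docker version, so treat this as a sketch of the idea rather than something I've run. Assuming you've written a Dockerfile for the app, the swap could look roughly like this (the image and container names are made up):

docker build -t hello-app:new .
# start the new container on a spare port while the old one keeps serving
docker run -d --name hello-app-new -p 3001:3000 hello-app:new
# edit the nginx config so proxy_pass points at http://127.0.0.1:3001, then:
sudo nginx -t && sudo systemctl reload nginx
# once the new container is serving traffic, remove the old one
docker stop hello-app-old && docker rm hello-app-old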

One of the most important things I wanted to test with all this is how expensive it is to run an app in a droplet. Currently I'm paying around $18 a month to run this blog on AWS Lightsail, which feels a bit high. I'll just have to see what's cheaper.