Upgrading My Services: Embracing Modern Tech for a Smooth Transition

Updating and containerising applications, and setting up continuous deployment. Doing it right the first time to avoid future headaches.

23 March 2024 · #devops #development

A little taste of history

Twenty years ago when I was in primary school, my cousin showed me how to FTP web pages to his web server. It was amazing that I could transfer web pages from my computer to the World Wide Web. Later, I had my very first web hosting with cPanel set up a few days before receiving our SPM results. I had to take the train and walk in Kuala Lumpur with my friend to transfer the cash. We launched our very first eCommerce store, and it was amazing. That was exactly 10 years ago!

As time progressed, I built applications to fill in the gaps and started having fun with shell and Linux. Setting up a web server was quite hard at the beginning, and there wasn't one way to do it. We could SSH to the server, git pull the code, and then run the install. Or, if we were on shared hosting, we could rsync the code.

When cloud services started to kick in, we saw new opportunities for deployments. New jargon popped up: DevOps, SRE, and many more. Life is a bit nicer these days; if you need a new instance, you just ask for it. And of course, new serverless services come out to wrap around open source and charge you hefty fees if you're not careful enough.

My current setup

I started hosting my services on AWS back in 2017. Later, in 2021, I moved my stuff across to DigitalOcean, as their pricing is more transparent. However, I didn't appreciate them raising the price after a few months of my using their services. Nonetheless, it was okay. Just a standard Nginx setup, nothing too complex. I've hosted 12 applications there since, enduring a few DDoS attacks here and there, but ultimately surviving.

server-setup-old.jpeg

Motivation

I first thought about moving the services in May last year, mainly because the operating system was still on Ubuntu 20.04 LTS. However, it's been quite a challenge, as migrating 12 applications is a pain. Things got a bit more complicated because some of the applications run on different programming languages and versions. It was either setting up a pipeline and containerising them, or being stuck in maintenance hell.

There are four main reasons I need to move them over:

  • Security
  • Application isolation
  • Automated deployment
  • Consolidation

New design

The design I'm aiming for has a reverse proxy serving multiple Docker containers, so every application is isolated in its own environment. I understand the trade-off of this design: it will consume more memory, since each application now ships its own userland (containers still share the host kernel, so it's lighter than full VMs). Additionally, I plan to use GitHub Actions workflows to build the images and deploy them to the instance, automating deployment effortlessly.
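The layout above could be sketched with Docker Compose. This is a hedged illustration, not the actual configuration; the service names, image references, and network name are placeholders:

```yaml
# Hypothetical sketch: one Caddy reverse proxy in front of
# isolated application containers on a shared Docker network.
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    networks: [web]

  app1:                                 # placeholder for one application
    image: ghcr.io/example/app1:latest  # hypothetical image reference
    expose: ["8080"]
    networks: [web]

networks:
  web:
```

Only Caddy publishes ports to the host; the applications merely `expose` theirs on the internal network, which is what gives each one its isolated environment.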

server-setup-new.jpeg

Journey

Upgrading code

  • The first step is to upgrade the legacy codebase to the latest version. Some of the applications were written many years ago and have numerous reported dependency vulnerabilities. It's crucial to run the latest versions for security.

carazu security after upgrade

105 security vulnerabilities closed for an 8-year-old application!

  • Most of the application volumes, like uploaded content, are stored in S3, but some are not. I had to update the code to offload those objects to S3 instead of storing them locally. This will ease the process of server migration in the future.
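For the existing local files, a one-off copy to the bucket is usually all that's needed before switching the code over. A hedged example with the AWS CLI; the bucket name and paths are placeholders:

```shell
# Hypothetical one-off migration of local uploads to S3.
# Re-runs are cheap: sync only transfers new or changed files.
aws s3 sync /var/www/app/storage/uploads \
  s3://my-app-uploads/uploads \
  --exclude "*.tmp"
```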

Containerising the applications

Next, the applications needed to be containerised. I'm using dunglas/frankenphp as the base image for the PHP applications and a custom-written image for the Node.js app behind this site. These containers sit behind a Caddy reverse proxy, exposing their services on their respective ports. There are also applications that run without a web server, triggered by cron, which were containerised too.
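For the PHP images, a minimal Dockerfile on the frankenphp base might look like this. The extension list and paths are assumptions for illustration, not the actual build:

```dockerfile
# Hedged sketch of a PHP application image.
FROM dunglas/frankenphp

# install-php-extensions ships with the frankenphp image;
# extensions below are illustrative.
RUN install-php-extensions pdo_mysql gd intl opcache

COPY . /app
WORKDIR /app
```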

Application declaration and CI/CD

A GitHub repo contains the declaration of the application configuration. This repo maintains the list of applications and how they should behave. I got the idea from Terraform: instead of declaring infrastructure items, it declares application structure. If I want to add or remove an application in the future, I just need to commit a new configuration.
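Such a declaration could be as simple as a YAML file per environment. This is a hypothetical schema, not the real one; the names, domains, and ports are placeholders (carazu is one of the applications mentioned above):

```yaml
# Hypothetical application declaration: one entry per app,
# enough for the deploy workflow to wire up proxy and container.
applications:
  - name: carazu
    image: ghcr.io/example/carazu:latest
    domain: carazu.example.com
    port: 8080
  - name: blog
    image: ghcr.io/example/blog:latest
    domain: blog.example.com
    port: 3000
```

Adding an application then really is just a commit: append an entry, push, and let the pipeline materialise it.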

The CI/CD was also configured. I set up workflows using GitHub Actions, triggering builds when code is pushed to the master branch or manually. Two main workflows were created:

  1. Server setup - Runs on a manual trigger when I create a new instance: configuring firewalls, installing fail2ban, setting up Docker, and configuring users and permissions.
  2. Deploy - Builds the configuration, pushes it to the server, and reloads the server configuration.
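The Deploy workflow could be sketched like this. It is a hedged outline: the action versions, secret names, image reference, and remote script are all assumptions, not the actual pipeline:

```yaml
# Hypothetical Deploy workflow: build on push to master (or manually),
# then reload the application on the server over SSH.
name: Deploy
on:
  push:
    branches: [master]
  workflow_dispatch:

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and push image
        run: |
          docker build -t ghcr.io/example/app:latest .
          docker push ghcr.io/example/app:latest

      - name: Reload on server
        uses: appleboy/ssh-action@v1.0.0   # third-party SSH action; version assumed
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USER }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            docker compose pull && docker compose up -d
```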

For each application, secrets are stored in GitHub Actions secrets, and whenever new code is pushed to master, a Docker image is built and pushed to the server, swapping the old container for the new one. It's like swapping a disk!
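The "disk swap" on the server side boils down to a few Docker commands. A hedged sketch; the container name, image, and network are placeholders:

```shell
# Hypothetical container swap: fetch the new image, replace the
# running container, then discard the superseded image.
docker pull ghcr.io/example/app:latest
docker stop app && docker rm app
docker run -d --name app --network web ghcr.io/example/app:latest
docker image prune -f
```

There is a brief gap between stop and run with this naive approach; Compose or a rolling strategy can shrink it, but for small personal services the simple swap is usually enough.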

Heavy lifting done, it's time to enjoy the result

It may sound simple, but it's actually quite challenging, especially when trying new technology like Caddy. GitHub Actions was pretty straightforward, and I enjoyed using it.

It took well over 5 days to get this working. 15 instances were created and destroyed over the span of testing and configuring. 120 minutes of GitHub build time was consumed to get to the final bits.

instances-created

Instances created on DigitalOcean for testing the GitHub Workflows (not including Oracle Cloud instances)

Notes for you

  1. If you have set up UFW correctly and keep getting connection refused, you might also need to open the port at the provider level (e.g. the cloud firewall attached to your instance).
  2. Building ARM images through QEMU emulation works, but it takes ages. Just forget it.
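For the first note, it helps to rule out the host firewall before blaming the provider. A hedged example of the UFW side; the ports are the common web defaults, adjust for your services:

```shell
# Allow HTTP/HTTPS through UFW, then verify the rules took effect.
# If connections are still refused after this, check the
# provider-level firewall (e.g. a cloud firewall on the instance).
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status verbose
```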