Brimble: Deployment Made Easy 🚀

How did we do it?

Introduction

We’ve all been there: spinning our wheels, staring at “file upload completed” messages, wondering why deploying static sites feels like pushing boulders uphill. Cloud storage is handy, sure, but is that all there is? What if your web application dreams deserve more than just a glorified bucket?

Traditional web hosting platforms lack many of the capabilities modern web applications need, which makes them difficult to deploy and harder to manage, especially given the complexity of modern web frameworks and tools.

Basic Flow

Our technology essentially revolves around a Command Line Interface (CLI) tool. It comes into the picture whenever you deploy your web application directly or push your project to your Git provider. On each push, the CLI tool is responsible for building and compiling all the project assets and deploying them to the Brimble server.

Your deployment starts with your code. Brimble supports more than 15 frameworks, each detected through regular-expression patterns, and the build command attached to the detected framework is applied automatically unless you decide to change it. The Brimble Git integration is designed to automatically monitor your commits and trigger a fresh deployment when necessary.
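
To make that concrete, here is a minimal sketch of what framework detection via regular expressions can look like, assuming detection is keyed off the project’s package.json. The frameworks, patterns and build commands below are illustrative, not Brimble’s actual rules.

import { readFileSync } from "fs";

interface FrameworkRule {
  name: string;
  pattern: RegExp;      // matched against the raw package.json text
  buildCommand: string; // default build command, overridable by the user
}

const rules: FrameworkRule[] = [
  { name: "Next.js", pattern: /"next"\s*:/, buildCommand: "next build" },
  { name: "Nuxt", pattern: /"nuxt"\s*:/, buildCommand: "nuxt build" },
  { name: "Vite", pattern: /"vite"\s*:/, buildCommand: "vite build" },
];

function detectFramework(projectDir: string): FrameworkRule | undefined {
  const pkg = readFileSync(`${projectDir}/package.json`, "utf8");
  return rules.find((rule) => rule.pattern.test(pkg));
}

const detected = detectFramework(".");
console.log(detected ? `${detected.name} -> ${detected.buildCommand}` : "framework not recognized");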

With each deploy, Brimble proceeds through the following commands for your service:

Deploying a Next.js project on the Brimble dashboard

Build process

Initiating a deployment in Brimble first involves verifying the user’s request. This critical step establishes both the request’s legitimacy and the user’s authorization level, protecting against unauthorized access and keeping the system’s integrity intact. The Brimble Build API then packages the project into a container using an internally maintained build image, which is tasked with containerizing the project with Docker.
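
As an illustration, here is a minimal sketch of the kind of check that could gate a deployment request, assuming token-based authentication. The verifyToken helper and the User shape are hypothetical, not Brimble’s actual Auth API.

interface User {
  id: string;
  plan: "free" | "pro" | "team";
}

// Hypothetical stand-in for a real token verification call.
async function verifyToken(token: string): Promise<User | null> {
  return token.length > 0 ? { id: "user-123", plan: "free" } : null;
}

async function authorizeDeploy(token: string, projectOwnerId: string): Promise<User> {
  const user = await verifyToken(token);
  if (!user) throw new Error("Unauthorized: invalid or expired token");
  if (user.id !== projectOwnerId) throw new Error("Forbidden: not the project owner");
  return user; // the user's plan later determines the container specifications
}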

This build image comes pre-packaged with the tools each frontend project needs to run seamlessly. It is built on a fundamental node-ubuntu base image and ships with the Brimble CLI, among other key components. When a project is created, the user’s plan defines the specifications attached to the project container on each deployment. Below is a snippet that shows the specifications linked to each user plan.

plans: {
  team: {
    replicas: 6,
    memory: 4,
    cpu: 4
  },
  pro: {
    replicas: 4,
    memory: 4,
    cpu: 4
  },
  free: {
    replicas: 2,
    memory: 2,
    cpu: 2
  }
}

The build container generates the build output using the provided specifications while being evaluated concurrently by health checks. This dual process verifies the container’s health before moving on to the next phase.
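
For illustration, here is a minimal sketch of such a health check loop, assuming the container exposes an HTTP health endpoint. The port mirrors the Caddy health-check settings shown later; everything else is illustrative.

async function waitUntilHealthy(host: string, timeoutMs = 60_000): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(`http://${host}:4545/`); // same port the Caddy health checks use
      if (res.status >= 200 && res.status < 300) return true; // container is ready
    } catch {
      // container not accepting connections yet; keep polling
    }
    await new Promise((resolve) => setTimeout(resolve, 5_000)); // retry every 5 seconds
  }
  return false; // caller should fail the deployment and keep the previous version live
}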

Once the build image has been successfully packaged, deployment on a cluster follows. Notably, these build images are ‘distroless’, which keeps them lightweight and minimally susceptible to security threats. Deployment is handled by Docker Swarm, the orchestration tool responsible for distributing the container across the worker nodes that run the project, while manager nodes handle the orchestration. How the container is distributed depends on the number of replicas defined by the user plan, and health checks track the status and health of the deployment throughout. The entire process follows a blue-green deployment strategy.

During this phase, environment variables are injected and image optimization is performed. Once the application is verified to be running correctly, we push the image to our private Docker registry. A RabbitMQ event is then dispatched to the domain mapping service to carry out domain mapping for the deployed projects. Automatic SSL certificates are provisioned via Caddy, and the certificates and other necessary configuration files are stored in a cloud bucket that is not publicly accessible. Once the necessary resources have been provisioned, the deployment record in the database is updated.
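
To give a rough idea of how a plan’s specifications could translate into a Swarm service, here is a minimal sketch that shells out to docker service create with the plan’s replicas and resource limits. The image name, service naming scheme, memory unit and blue-green switch are assumptions for illustration, not our exact tooling.

import { execFileSync } from "child_process";

interface PlanSpec {
  replicas: number;
  memory: number; // assumed to be in gigabytes for this sketch
  cpu: number;
}

function deployService(project: string, image: string, plan: PlanSpec, color: "blue" | "green") {
  execFileSync("docker", [
    "service", "create",
    "--name", `${project}-${color}`,     // blue-green: the new color runs alongside the old one
    "--replicas", String(plan.replicas), // spread across worker nodes by the Swarm scheduler
    "--limit-memory", `${plan.memory}g`,
    "--limit-cpu", String(plan.cpu),
    image,                               // e.g. an image pushed to the private registry
  ], { stdio: "inherit" });
}

// Once health checks pass on the new color, traffic is switched over and the old color is removed.
deployService("my-app", "registry.example.com/my-app:latest", { replicas: 2, memory: 2, cpu: 2 }, "green");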

The diagram below provides a clear architecture of the deployment phase:

Request Phase

When a domain name of a live web application is typed into a web browser, a DNS lookup swings into action to pinpoint the IP address of the host server. Cloudflare comes into play here, functioning as a middleman in the domain name resolution process. Besides its role as a Content Delivery Network (CDN), it also doubles as a cache platform for the web app.

The traffic travels via Cloudflare, which increases security by examining and filtering unwanted and potentially harmful users or bots before they reach your load balancer. This load balancer, strategically spread out across various zones for maximum availability, uses a round-robin algorithm to distribute traffic evenly across the zones.

Cloudflare remains actively involved in caching applications hosted on the platform, improving their speed and reducing latency for follow-up requests. The load balancer, meanwhile, uses the least-connections algorithm, directing incoming traffic to any of the healthy containers housed in the Swarm cluster. This cluster is where the projects are deployed, and it scales dynamically to handle diverse traffic loads efficiently.

One more detail: before requests reach their final destination, they pass through our reverse proxy built with Caddy. Here, using the user’s data, we check whether the web application should be served or disabled, for example when the deployment has been disabled or payments are overdue.
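
Here is a minimal sketch of that gate, run before a request is proxied. The Deployment shape and the reasons are illustrative rather than our actual data model.

interface Deployment {
  disabled: boolean;
  paymentOverdue: boolean;
}

function shouldServe(deployment: Deployment): { serve: boolean; reason?: string } {
  if (deployment.disabled) return { serve: false, reason: "deployment disabled" };
  if (deployment.paymentOverdue) return { serve: false, reason: "payment overdue" };
  return { serve: true }; // proxy the request to a healthy container
}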

In peak traffic circumstances, Brimble’s proprietary in-house software tool triggers automatic scaling of the containers based on certain criteria. This ensures the cluster can comfortably absorb the surge in demand. This well-coordinated combination of DNS resolution, caching and CDN services via Cloudflare, region-based load balancing, and dynamic scaling of containers is what makes web hosting on the Brimble platform both efficient and robust.
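
Brimble’s scaler is proprietary, but the idea can be sketched roughly as follows, assuming CPU utilization as the trigger metric and docker service scale as the scaling mechanism. The threshold and scaling step are purely illustrative.

import { execFileSync } from "child_process";

// Placeholder metric source; a real scaler would read this from the cluster.
function currentCpuUtilization(service: string): number {
  return Math.random(); // 0..1
}

function autoscale(service: string, currentReplicas: number, maxReplicas: number): number {
  const cpu = currentCpuUtilization(service);
  if (cpu > 0.8 && currentReplicas < maxReplicas) {
    const next = currentReplicas + 1;
    execFileSync("docker", ["service", "scale", `${service}=${next}`], { stdio: "inherit" });
    return next;
  }
  return currentReplicas; // no scaling needed this cycle
}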

Here is an example of simple load balancing with Caddy across the worker nodes that serve the deployed projects, followed by a sample diagram of the request phase system.

*.brimble.app {
  reverse_proxy {
    to <ip-1> <ip-2> <ip-3>

    lb_policy least_conn
    lb_try_duration 30s
    lb_try_interval 5s

    health_port 4545
    health_status 2xx
    health_timeout 10s
  }
  header Server "Brimble"
}

A brief high-level diagram of the request phase system

Conclusion

There is a lot more to how we made this happen, and we are still improving the current system. Here’s a brief glimpse into our core services, along with the technologies that power them.

  1. Auth API: This component is the guardian of user management and authentication within Brimble.

  2. Core API: This essential service oversees the projects, teams, and deployments within our system.

  3. Runner: Responsible for the execution of project deployments on our servers.

  4. Payment API: This service handles cards, as well as user and team subscriptions and payments.

  5. DNS: This service is in charge of domain mapping after deployments and manages DNS records for those domains.

  6. IAC: Leveraging Pulumi — an infrastructure-as-code system, this service spins up new servers and resources for deployed web applications.

Technologies:

TypeScript, Python, Golang, PHP, Caddy, Linode, Google Cloud Platform, BIND9, Pulumi, Docker, Docker Swarm, Consul, Redis, RabbitMQ, Infisical, Mergent.
