
by Simon Chiu

Building Self-Hosting Rails Applications: Design Decisions & Why

Broadcast is a self-hosted email marketing platform released in 2024. In the year since its release, a lot of people have asked me to write a blog post detailing the technical lessons learned in building it.

From the start, one of the challenges has been ensuring that the application is easy for end users to install and maintain.

Deploying Ruby on Rails applications has come a long way with the introduction of Kamal and services like Hatchbox and Fly.io. With Broadcast, however, I can’t assume that users are even familiar with Rails (and its conventions), much less with Rails deployment.

Table of Contents

  • Distributing as Docker Images
  • The Goal: One Database, Zero External Services
  • The Trigger File Pattern
  • The Single-Row Installation Model
  • Host System Monitoring
  • Version Checking and the Update Flow
  • SSL and HTTP/2 with Thruster
  • Multi-Domain Support
  • The Tradeoffs
  • Conclusion

Distributing as Docker Images

Broadcast isn’t distributed as source code. Customers don’t clone a repo, run bundle install, and configure a Ruby environment. They pull a Docker image.

This decision shaped everything else:

  • No Ruby version conflicts — The image includes the exact Ruby version, gems, and system dependencies. It works the same on every server.
  • Simplified installation — A single docker compose up starts the entire stack. No build step on the customer’s machine.
  • Controlled updates — Each release is a tagged image. Upgrading means pulling a new tag. Rolling back means switching to an older tag.

Docker Compose handles container orchestration. The stack includes the Rails app, a background job worker, and PostgreSQL—all on a private network within a single host. Containers communicate via service names (postgres, app, jobs), and only the app container exposes ports to the outside world.

# docker-compose.yml
services:
  app:
    image: registry.example.com/broadcast:1.22.0
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      postgres:
        condition: service_healthy

  jobs:
    image: registry.example.com/broadcast:1.22.0
    command: bin/jobs
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:17
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]

The Dockerfile uses Rails 8’s default multi-stage build:

# Build stage: install gems and precompile assets
FROM ruby:3.4-slim AS build
# ... install dependencies, bundle install, asset precompile

# Runtime stage: slim production image
FROM ruby:3.4-slim
COPY --from=build /rails /rails
CMD ["thrust", "bin/rails", "server"]

The result is a ~500MB image that includes everything needed to run: Ruby, gems, compiled assets, and Thruster for SSL termination.

Customers authenticate with a private registry using credentials tied to their license. Valid license, valid credentials, access to pull images.

The Goal: One Database, Zero External Services

From day one, I knew minimizing dependencies was critical. Every additional service in the stack is something that can break on a customer’s server—and something I’d have to write documentation for, debug remotely, and answer support tickets about.

The timing was fortunate. I started building Broadcast right as Rails introduced SolidQueue, SolidCable, and SolidCache—all backed by PostgreSQL instead of Redis. For a self-hosted application, this was a no-brainer. Instead of asking customers to run and maintain a Redis instance alongside PostgreSQL, I could collapse everything into a single database dependency.

Database-backed queues have a reputation for not scaling, but my customers aren’t running millions of jobs per minute. They’re sending email campaigns to their subscriber lists. The bottleneck is their SMTP provider’s rate limit, not my job queue.

# config/database.yml
production:
  primary:
    database: broadcast_primary_production
  queue:
    database: broadcast_queue_production
    migrations_paths: db/queue_migrate
  cable:
    database: broadcast_cable_production
    migrations_paths: db/cable_migrate

Three logical databases, one PostgreSQL instance. All three are persisted to disk via Docker volumes, but the queue and cable databases contain only transient operational data—no customer data lives there. If something goes wrong with background jobs, I can tell customers to clear the queue database and restart. The primary database with their subscribers, campaigns, and settings remains untouched.
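
For reference, the glue that points background jobs at that separate queue database is the stock Rails 8 setup. In the production environment config it looks roughly like this (Solid Cable is wired to the cable database the same way, via config/cable.yml):

# config/environments/production.rb
# Active Job runs through Solid Queue, which keeps its tables in the "queue" database
config.active_job.queue_adapter = :solid_queue
config.solid_queue.connects_to = { database: { writing: :queue } }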

The Trigger File Pattern

I wanted customers to manage everything from the UI. Check for updates, click “Upgrade,” and watch it happen. Trigger a backup before making changes. Restart services if something feels stuck. No SSH required. No terminal commands to memorize.

But here’s the problem: the Rails application runs inside a Docker container, and containers can’t shell out to the host system. You can’t just call system("docker pull ...") from Ruby—the container has no access to the Docker daemon on the host. I needed a way for the application to communicate intent to the host system.

The naive approach of shelling out to system commands wouldn’t work anyway. Rails can’t restart itself mid-request. And if an upgrade failed halfway through, you’d have a zombie process and an angry customer.

The solution: trigger files.

If you’ve deployed Rails before, you’ve probably used touch tmp/restart.txt to restart Passenger or Puma. Same idea—a file signals intent, and a separate process acts on it. I just extended the pattern to handle upgrades, backups, and domain changes.

# app/models/installation.rb
def upgrade_now(version)
  trigger_path = Rails.root.join('triggers/upgrade.txt')
  File.write(trigger_path, version)
end

Rails writes a file. That’s it. A cron job running every minute outside the container checks for that file:

# scripts/trigger.sh
if [ -f "$TRIGGERS_DIR/upgrade.txt" ]; then
    VERSION=$(cat "$TRIGGERS_DIR/upgrade.txt")
    ./broadcast.sh upgrade "$VERSION"
    rm "$TRIGGERS_DIR/upgrade.txt"
fi

The shell script stops the containers, pulls the new image, starts everything back up. Rails doesn’t need to know how Docker works. The shell script doesn’t need to know how Rails works. Clean separation.

I use this pattern for three operations:

  • upgrade.txt — contains target version, triggers upgrade
  • backup-db.txt — existence triggers database backup (sketched just below)
  • domains.txt — contains new domains, triggers SSL cert regeneration
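
The upgrade trigger carries a payload (the target version); the backup trigger doesn’t need one, so merely creating the file is enough. Here is a sketch of what that looks like on the Rails side, where backup_now is a hypothetical name rather than Broadcast’s actual method:

# app/models/installation.rb
def backup_now
  # The file's contents don't matter; the host-side script only checks for its existence.
  File.write(Rails.root.join('triggers/backup-db.txt'), '')
end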

The Rails UI can show a maintenance page by checking for these files:

# app/controllers/application_controller.rb
before_action :check_system_availability

def check_system_availability
  @system_unavailable = File.exist?(Rails.root.join('triggers/upgrade.txt')) ||
                        File.exist?(Rails.root.join('triggers/domains.txt'))
end

It’s simple, obvious, and leverages two of the most battle-tested tools in Unix: cron and bash. No message queues. No custom daemons. Just files, scripts, and decades of reliability.

The Single-Row Installation Model

Most multi-tenant Rails apps use a current_tenant pattern. Broadcast is different—each installation is single-tenant. One customer, one server, one database. But I still needed somewhere to store installation-specific configuration.

# app/models/installation.rb
class Installation < ApplicationRecord
  def self.instance
    first || create!
  end

  # System info for dashboard
  def gather_system_info
    monitor_file = Rails.root.join('monitor/system.json')
    return {} unless File.exist?(monitor_file)
    JSON.parse(File.read(monitor_file))
  end
end

The installations table has exactly one row. It stores the license key, hosted domain, optional S3 credentials, and acts as the central configuration object.

# Used throughout the app
Installation.instance.hosted_domain  # => "mail.customer.com"
Installation.instance.license_key    # => "abc123..."

This pattern replaced what would have been a dozen environment variables. The values are editable through the UI, persisted in the database, and don’t require a container restart to change.

Host System Monitoring

Customers want to see host-level metrics: CPU load, memory usage, disk space. Is my server running out of resources?

Adding an external monitoring service would mean another dependency, another account to configure, another thing that can break. I wanted to provide basic server visibility without requiring customers to set up anything beyond the application itself.

The problem: Rails runs inside a Docker container and can’t directly access host system metrics. The container sees its own cgroup limits, not the actual server resources.

The solution mirrors the trigger file pattern—use the filesystem as a bridge. A shell script runs on the host every minute via cron:

# scripts/monitor.sh
function monitor() {
    cpu_cores=$(nproc)
    cpu_load=$(uptime | awk -F'average:' '{print $2}' | awk -F',' '{print $1}' | tr -d ' ')

    mem_total=$(free -b | awk '/Mem:/ {print $2}')
    mem_used=$(free -b | awk '/Mem:/ {print $3}')
    mem_free_percent=$(echo "scale=2; ($mem_total - $mem_used) / $mem_total * 100" | bc)

    disk_total=$(df -B1 / | awk 'NR==2 {print $2}')
    disk_used=$(df -B1 / | awk 'NR==2 {print $3}')
    disk_free_percent=$(df -h / | awk 'NR==2 {print $5}' | tr -d '%')
    disk_free_percent=$(echo "100 - $disk_free_percent" | bc)

    current_version=$(cat /opt/broadcast/.current_version 2>/dev/null || echo "unknown")

    cat <<EOF > /opt/broadcast/app/monitor/system.json
{
    "cpu_cores": $cpu_cores,
    "cpu_load": $cpu_load,
    "memory_total": $mem_total,
    "memory_free_percent": $mem_free_percent,
    "disk_space_total": $disk_total,
    "disk_space_free_percent": $disk_free_percent,
    "current_version": "$current_version"
}
EOF
}

The /opt/broadcast/app/monitor directory is mounted into the container. Rails reads the JSON:

# app/models/installation.rb
def gather_system_info
  monitor_file = Rails.root.join('monitor/system.json')
  return {} unless File.exist?(monitor_file)
  JSON.parse(File.read(monitor_file))
end

The host writes, the container reads. Same pattern as trigger files, but in reverse—information flows from host to container instead of container to host.
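
One thing a setup like this has to tolerate is the cron job silently dying, at which point the JSON stops updating. A possible guard, purely a sketch rather than Broadcast’s actual code, is to treat an old snapshot as missing instead of showing stale numbers (5.minutes.ago comes from ActiveSupport, which is available inside the Rails app):

# app/models/installation.rb
def system_info_fresh?
  monitor_file = Rails.root.join('monitor/system.json')
  # If the host-side cron stops writing the file, prefer "no data" over stale metrics.
  File.exist?(monitor_file) && File.mtime(monitor_file) > 5.minutes.ago
end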

Version Checking and the Update Flow

The application checks a central server for available updates. The request includes the license key for validation, and the response returns a list of available versions:

# Fetch available releases from the update server
def available_updates
  response = fetch_releases_from_server
  return [] unless response.success?

  response.releases.select do |release|
    Gem::Version.new(release.version) > Gem::Version.new(current_version)
  end
end
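
fetch_releases_from_server is left abstract above. A minimal sketch of what it could look like with Net::HTTP, where the endpoint URL, the license_key parameter, and the JSON response shape are all assumptions for illustration:

# Hypothetical update-server client; the endpoint and response format are assumed.
require 'net/http'
require 'json'

ReleaseCheck = Struct.new(:success, :releases) do
  def success? = success
end

Release = Struct.new(:version, :release_notes, :includes_migrations, keyword_init: true)

def fetch_releases_from_server
  uri = URI('https://updates.example.com/releases')
  uri.query = URI.encode_www_form(license_key: Installation.instance.license_key)

  response = Net::HTTP.get_response(uri)
  return ReleaseCheck.new(false, []) unless response.is_a?(Net::HTTPSuccess)

  releases = JSON.parse(response.body).map do |attrs|
    Release.new(version: attrs['version'],
                release_notes: attrs['release_notes'],
                includes_migrations: attrs['includes_migrations'])
  end
  ReleaseCheck.new(true, releases)
end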

Each release in the response includes metadata: version number, release notes, and whether it includes database migrations. This lets the UI warn customers before upgrades that involve schema changes.

The current version is stored in a constant:

# config/initializers/version.rb
APP_VERSION = '1.22.0'

The Docker image is tagged with the same version string. When a customer triggers an upgrade via the UI, the trigger file contains the target version, and the shell script pulls the corresponding image tag.

SSL and HTTP/2 with Thruster

Kamal is excellent for teams familiar with Docker and server provisioning, but my customers aren’t DevOps engineers. Asking them to learn Kamal’s configuration, set up Traefik, and manage SSL certificates wasn’t realistic.

Thruster solves this. Basecamp built it for their own self-hosted applications (like Campfire)—zero-configuration SSL and HTTP/2 in front of Puma.

# Dockerfile
RUN gem install thruster

# The entrypoint wraps Puma with Thruster
CMD ["thrust", "bin/rails", "server"]

That’s it. Thruster automatically:

  • Provisions Let’s Encrypt certificates for your domain
  • Handles certificate renewal
  • Terminates SSL and proxies to Puma
  • Serves HTTP/2 for better performance
  • Redirects HTTP to HTTPS

No nginx configuration. No Traefik. No certbot cron jobs. The domain is configured via a single environment variable:

TLS_DOMAIN=mail.customer.com

For customers with multiple domains, it’s comma-separated:

TLS_DOMAIN=mail.customer.com,newsletter.customer.com

Customers point their DNS at the server, set the domain in the setup wizard, and SSL works. Zero support tickets about certificate issues.

Multi-Domain Support

Broadcast supports multiple broadcast channels, and each channel can have its own domain. A customer might run newsletter.acme.com for their main list and updates.acme.com for product announcements—each needing valid SSL certificates.

The domain is stored per channel:

# app/models/broadcast_channel.rb
class BroadcastChannel < ApplicationRecord
  # domain :string - optional override for this channel
end

When a user adds or changes a channel’s domain in the UI, they click “Reload App.” This triggers the same file-based pattern used for upgrades:

# app/models/installation.rb
def reload_app
  domains = BroadcastChannel.where.not(domain: nil).pluck(:domain)
  File.write(Rails.root.join('triggers/domains.txt'), domains.join("\n"))
end

The cron job detects domains.txt and updates the environment:

# scripts/trigger.sh
if [ -f "$TRIGGERS_DIR/domains.txt" ]; then
    # Read primary domain
    primary=$(cat /opt/broadcast/.domain)

    # Read additional domains, convert newlines to commas
    others=$(cat "$TRIGGERS_DIR/domains.txt" | tr '\n' ',' | sed 's/,$//')

    # Update TLS_DOMAIN with all domains
    sed -i "s/^TLS_DOMAIN=.*/TLS_DOMAIN=$primary,$others/" /opt/broadcast/app/.env

    rm "$TRIGGERS_DIR/domains.txt"
    systemctl restart broadcast
fi

After restart, Thruster reads the updated TLS_DOMAIN and provisions certificates for all domains automatically.

The Rails app shows a maintenance page while this happens:

# app/controllers/application_controller.rb
before_action :check_system_availability

def check_system_availability
  if File.exist?(Rails.root.join('triggers/domains.txt'))
    render 'shared/unavailable', status: :service_unavailable
  end
end

The entire flow—user adds domain, clicks reload, system restarts with new SSL certs—takes about 30 seconds. No manual certificate management required.

The Tradeoffs

  1. No horizontal scaling — One server, one installation. Customers who outgrow this need to upgrade their server, not add more.

  2. Database-backed jobs are slower — SolidQueue polls the database. There’s latency that Sidekiq with Redis doesn’t have. For email sending with rate limits, this doesn’t matter.

  3. File-based triggers are fragile — If the cron job dies, triggers stop working. I mitigate this with systemd ensuring cron runs, but it’s still a single point of failure.

  4. Backups are basic — pg_dump to a tarball. No incremental backups, no point-in-time recovery. Good enough for most customers, not good enough for enterprises.

Conclusion

Customers are sending millions of emails each week using Broadcast. Most support questions are about initial installation, not ongoing infrastructure. The architecture described here has proven robust across various installations over the past 12 months.

The debugging surface is small: one Rails app, one PostgreSQL database, a few shell scripts. When something does break, there aren’t many places to look.


Broadcast is a self-hosted email marketing platform. Check it out at sendbroadcast.net.


References

  • Broadcast — Self-hosted email marketing platform
  • Thruster — HTTP/2 proxy with automatic SSL for Rails
  • Kamal — Deploy web apps anywhere
  • SolidQueue — Database-backed Active Job backend
  • SolidCable — Database-backed Action Cable adapter
  • SolidCache — Database-backed Active Support cache store
  • Propshaft — Asset pipeline for Rails
  • Rails 8 Release Notes — “No PaaS Required”
  • jemalloc — Memory allocator for reduced Ruby memory usage
  • PostgreSQL — The database that powers everything