Replacing SaaS with self-hosted tools
Self-hosting is having a moment, and most of the moment is bad advice. This is the version that is actually true: which swaps are worth making, which are not, and how to keep the resulting infrastructure from quietly turning you into an unpaid sysadmin.
When self-hosting is the right move
Three signals; any one of them is enough, and any two together make it a no-brainer.
- Per-user pricing has crossed your headcount. Most SaaS pricing is benign at five seats and absurd at fifty.
- The data is sensitive enough that you would rather not hand it to a vendor. Customer data, financial records, anything covered by a regulator.
- The vendor's roadmap and your needs have visibly diverged. Open source equivalents tend to be more configurable; self-hosting puts you in control of the upgrade cadence.
Self-hosting is not free. It usually costs less than enterprise SaaS, but it costs in different ways.
When it really, really is not
Where most "self-host everything" content is wrong: most teams should not self-host their email, their auth provider, or their primary database in production unless they have an actual on-call rotation. The blast radius of a 4am email outage is everyone in the company missing meetings. Some things are worth paying for.
My short list of things to keep paying for:
- Outbound transactional email (Postmark, Resend, SES — they handle deliverability that you absolutely cannot replicate at small scale).
- Identity and SSO (Auth0, Clerk, WorkOS — auth is where you do not want to be clever).
- Production Postgres for revenue-critical apps (Supabase, Neon, RDS — the recovery story matters more than you think).
- DNS and certificates (Cloudflare. End of list.)
The baseline you need first
Before adopting any self-hosted thing, the boring infrastructure must already exist. Without these, every new tool is a future outage waiting for a name.
- A real VPS provider with snapshots. Hetzner, DigitalOcean, Vultr. Not your laptop, not a Raspberry Pi in a cupboard.
- A reverse proxy with automatic TLS. Caddy or Traefik. Solved problem; do not solve it again.
- An offsite backup destination. S3, Backblaze B2, Wasabi. With a tested restore.
- A single, simple Docker Compose convention for every service. Same shape, same Caddy config block, same env file.
```bash
# Standard layout I use across services
/opt/<service-name>/
  docker-compose.yml
  Caddyfile          # symlinked into a global Caddy config
  .env               # 0600 perms, never in git
  data/              # bind-mounted, in the backup set
  backup.sh          # nightly cron, writes to s3://backups/<service>
```
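Because every service has the same shape, bringing a new one online can be scripted. This is a sketch, not a turnkey tool: the service name and the `ROOT` override are placeholders (a real host would use `/opt` directly, and the Caddyfile symlink step is left to your global Caddy setup).

```shell
#!/usr/bin/env bash
# Scaffold one service into the standard layout.
# ROOT defaults to a local demo directory so this can be dry-run safely;
# on a real host you would run it with ROOT=/opt.
set -euo pipefail
ROOT=${ROOT:-./demo-opt}
SVC=${1:-outline}                                # service name is illustrative

mkdir -p "$ROOT/$SVC/data"                       # bind-mounted, in the backup set
touch "$ROOT/$SVC/docker-compose.yml" "$ROOT/$SVC/Caddyfile"
touch "$ROOT/$SVC/backup.sh" && chmod +x "$ROOT/$SVC/backup.sh"
install -m 0600 /dev/null "$ROOT/$SVC/.env"      # secrets live here, never in git
echo "$ROOT/$SVC scaffolded"
```

The point is not the script itself but that a new service costs one command, which keeps the convention from eroding under deadline pressure.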
The high-value swaps
The replacements below are the ones I actually run, ordered by ROI. None of them are "perfect feature parity"; all of them are "good enough that the savings outweigh the gaps".
Zapier or Make → n8n
Almost always worth it once you spend over £100/month. n8n self-hosted on a £6 VPS happily runs hundreds of workflows. See the dedicated playbook for the full setup and the migration playbook for the staged plan.
Notion → AppFlowy or Outline
Notion at scale is wonderful and expensive. Outline is the closest functional replacement for company wikis (Postgres backend, real Markdown, sane permissions). AppFlowy is the closest to Notion's UI. Neither is a one-for-one swap; both pay for themselves at thirty seats.
Calendly → Cal.com
Cal.com self-hosted is genuinely excellent. Same scheduling experience, fully white-labelled, no per-host pricing. The only friction is Google Calendar OAuth setup; do it once, never think about it again.
Sentry (cloud) → Sentry (self-hosted)
The cloud product is great; the self-hosted version is the same code. Worth the swap once your error volumes push you past £100/month, and the data sensitivity case is strong (errors leak more than you think).
Loom → tella.tv or local screen recording
Honestly, the right replacement for Loom is "use OBS or QuickTime and upload to your own bucket". The self-hosted async-video products are not yet good enough.
Mailchimp → Listmonk
Listmonk is one of the best pieces of self-hosted software in existence. It is a Go binary, it backs onto Postgres, it handles transactional and broadcast email cleanly, and it pairs well with Postmark or SES as the SMTP relay. Worth doing on day one of any newsletter you actually own.
Heroku-style PaaS → Coolify or Dokku
For internal apps, dashboards and small services, a self-hosted PaaS removes the friction that pushes teams to expensive managed platforms. Coolify is the modern choice; Dokku is the boring stable one.
Owning your data
The thing self-hosting actually buys you is data ownership. That only matters if you can get the data out of the system in a useful form. For each tool, write a 30-line script that exports everything to a tarball and runs nightly. The day you decide to migrate or shut something down, you will be very grateful.
```bash
#!/usr/bin/env bash
# /opt/cal/export.sh — runs nightly via cron
set -euo pipefail

TS=$(date -u +%Y%m%d)
DEST=s3://my-backups/cal-com/$TS

cd /opt/cal
docker compose exec -T postgres pg_dump -U cal cal | gzip > /tmp/cal-$TS.sql.gz
aws s3 cp /tmp/cal-$TS.sql.gz "$DEST/"
rm /tmp/cal-$TS.sql.gz
```
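Wiring a script like that into cron is one line. The schedule and log path below are illustrative, not prescriptive:

```cron
# crontab -e on the host — run the export nightly at 02:15 UTC
15 2 * * * /opt/cal/export.sh >> /var/log/cal-export.log 2>&1
```

Redirecting output to a log file matters more than it looks: a cron job that fails silently is how export gaps go unnoticed for months.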
Updates as a habit
The single biggest difference between teams that succeed at self-hosting and teams that do not is whether they update on a cadence. The successful teams update every service monthly, on the second Tuesday, regardless of whether there is a known reason to. The unsuccessful teams update only when something breaks, by which time three versions have shipped and the upgrade is now genuinely scary.
Pin every image to a specific tag. Read the changelog. Run `docker compose pull && docker compose up -d`. Watch the logs for ten minutes. Move on. If you cannot honestly commit to ten minutes a month per service, you cannot self-host that service.
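If every service follows the `/opt/<service>` convention, the monthly pass is a short loop. A sketch under that assumption — it mechanises the pull-and-restart step only; reading the changelog and watching the logs stay manual:

```shell
#!/usr/bin/env bash
# Monthly update pass over every service following the /opt/<service> layout.
set -euo pipefail

update_all() {                    # usage: update_all /opt
  local root=$1 dir
  for dir in "$root"/*/; do
    [ -f "$dir/docker-compose.yml" ] || continue   # skip non-service directories
    echo "== $dir =="
    # Bump the pinned tags and read the changelogs first, then:
    (cd "$dir" && docker compose pull && docker compose up -d)
  done
}

update_all "${ROOT:-/opt}"
```

Deliberately boring: no parallelism, no auto-rollback. Services update one at a time so that when one misbehaves, you know which one.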
Monitoring without paying for monitoring
You do not need Datadog. For a small set of self-hosted services you need three things: an uptime check, a logs view, and an alert when CPU or disk go off the rails. The minimal version:
- Uptime Kuma — uptime checks with notifications, free, runs in a single container.
- Dozzle — live web UI for Docker logs across the host. Read-only, takes 30 seconds to deploy.
- Netdata or node_exporter + Grafana Cloud's free tier — host metrics with alerting.
```yaml
# A boring monitoring stack on the same host
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes: ['./uptime-kuma:/app/data']
  dozzle:
    image: amir20/dozzle:latest
    restart: unless-stopped
    volumes: ['/var/run/docker.sock:/var/run/docker.sock:ro']
    environment:
      DOZZLE_AUTH_PROVIDER: simple
      DOZZLE_USERNAME: sarma
      DOZZLE_PASSWORD: ${DOZZLE_PASSWORD}
```
Pitfalls
A 1 GB VPS will run anything in a benchmark and almost nothing in production. The cost difference between a £4 and a £12 instance is the difference between something that survives an upgrade and something that does not.
A backup script that has never been restored from is a guess. Quarterly: spin up a throwaway VPS, restore the latest backup, log in, confirm it works. Calendar it.
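The full drill needs the throwaway VPS, but a cheap pre-flight catches the most embarrassing failure mode first: an archive that is truncated or corrupt before you ever try to restore it. A sketch — the helper name and demo paths are mine, not from any tool:

```shell
#!/usr/bin/env bash
# Pre-flight for the restore drill: confirm the newest archive is even readable.
set -euo pipefail

newest_readable() {               # usage: newest_readable <dir-of-.sql.gz-backups>
  local latest
  latest=$(ls -1 "$1"/*.sql.gz | sort | tail -n 1)
  gzip -t "$latest"               # corrupt or truncated archives fail here
  echo "$latest"
}

# Demo against a throwaway archive; in the real drill, point this
# at the directory you downloaded the latest backup into.
mkdir -p /tmp/demo-backups
printf 'SELECT 1;\n' | gzip > /tmp/demo-backups/20250101.sql.gz
newest_readable /tmp/demo-backups
```

Passing `gzip -t` is necessary, not sufficient: the quarterly restore-and-log-in drill is still the only real proof.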
If you are the only person who can log into the VPS, your team is one bus accident away from a permanent outage. Document the layout, share the credentials in a real password manager, and have one teammate do the next upgrade with you watching.
Stacking every service on one host is fine until one of them OOMs and takes the others with it. Group services by criticality; keep the things you cannot afford to lose on a host of their own.
Wrap-up
Self-hosting works when it is treated as an engineering investment with a maintenance budget. It fails when it is treated as a hobby that will somehow run itself. The list of swaps above will, for most teams over twenty people, save five figures a year and improve the data ownership story. The price is roughly a half-day a month of careful, calendared maintenance.
That trade is, on the whole, a very good one. Just do not pretend the price is zero.
Want this done for you?
If you would rather skip the yak shave and have someone who has done this fifty times set it up properly, that is what I do for a living.
Start a project