How StaffPortal works

Eight modules, one Next.js app, one Supabase project, RLS on every row. The whole HR platform in plain English with diagrams, code, and the trade-offs that shaped it.

TL;DR

One app, one database,
RLS on every row.

StaffPortal is a Next.js 16 application backed by a single Supabase project. Eight modules — attendance, leave, expenses, kiosk, visitors, announcements, analytics, notifications — share auth, RLS, and notification primitives.

Every table has organisation_id and user_id. Every PostgreSQL policy scopes by both. Authorisation is defence in depth: the route handler checks, the database refuses to leak even if the route forgets.

Receipt OCR reuses the embedded Receipt Scanner pipeline. Kiosk is a PWA with an IndexedDB offline queue. Notifications fan out to email, Slack, and in-app channels. Payroll exports are CSV in Xero/QuickBooks/Sage formats.

A worked example. An employee opens /expenses/new and uploads a Tesco receipt:

Step 1 · Auth      Supabase JWT → auth.uid() + organisation_id
Step 2 · Upload    POST /api/expenses/scan (multipart)
Step 3 · Resize    sharp.rotate().resize(1568).jpeg(85)
Step 4 · Vision    anthropic claude-3-5-sonnet → JSON
Step 5 · Validate  Zod Receipt schema
Step 6 · Insert    expenses table with RLS check
                   ✓ user_id == auth.uid()
                   ✓ organisation_id matches JWT
Step 7 · Notify    manager email + Slack
Step 8 · Reply     200 with the typed expense object

~2 seconds end to end. The manager has the email before the employee closes the tab.

Core data flow

Two paths in. One source of truth.

Web for office staff, kiosk PWA for on-site staff. Both end up in the same Postgres rows.

┌───────────── WEB (employees, managers) ─────────────────────┐
│  React Server Components                                    │
│   │ supabase.auth.getUser()  → JWT                          │
│   │ requireAuth(role: 'member' | 'manager' | 'admin')       │
│   ▼                                                         │
│  Server actions / API routes                                │
│   │ INSERT / UPDATE with auth.uid() + jwt.organisation_id   │
│   ▼                                                         │
│  PostgreSQL (RLS enforced)                                  │
└─────────────────────────────────────────────────────────────┘

┌───────────── KIOSK (PWA on tablet) ─────────────────────────┐
│  Touch-first UI, PIN auth                                   │
│   │ navigator.onLine?                                       │
│   │   yes → POST /api/kiosk/sign-in                         │
│   │   no  → idb.put('queued_signins', evt)                  │
│   ▼                                                         │
│  Service worker (sync on online event)                      │
│   │ flushQueue() → POST → idb.delete on 200                 │
│   ▼                                                         │
│  Same /api/kiosk/sign-in route                              │
│  Same clock_events table                                    │
└─────────────────────────────────────────────────────────────┘

           ┌────────────────────────────────────────┐
           │   shared lib                           │
           │   notify · audit · approvals · scanner │
           │   used by every module                 │
           └────────────────────────────────────────┘
Subsystems

Each module, deep-dived

Authentication & multi-tenancy

Why it exists

Every request must answer two questions: who is the user, and which organisation are they acting in? Both questions are answered before any business logic runs.

How it actually works

Supabase Auth issues a JWT with user_id (auth.uid()) and a custom organisation_id claim populated on sign-in. The JWT lives in an HttpOnly cookie. Server components and route handlers read it via the @supabase/ssr helper. Every PostgreSQL policy reads both claims to scope rows.
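The role gate inside requireAuth can be sketched as a simple rank comparison. This is an illustrative sketch, not the actual helper: the role names come from the diagram above, but the ordering logic and `hasRole` name are assumptions.

```typescript
// Sketch of the role check a requireAuth() helper might perform.
// Role names are from the doc; the ranking is an assumption.
type Role = 'member' | 'manager' | 'admin';

const rank: Record<Role, number> = { member: 0, manager: 1, admin: 2 };

// A user satisfies a required role if their own role ranks at least as high.
export function hasRole(userRole: Role, required: Role): boolean {
  return rank[userRole] >= rank[required];
}
```

A manager passes a 'member' gate; a member fails a 'manager' gate. The actual tenant scoping still happens in PostgreSQL, regardless of what this check returns.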

Attendance & timesheets

Why it exists

Hours worked are the bedrock of payroll. The system must capture them accurately on the web, on a tablet kiosk, and via manager bulk edits — and produce a single approved timesheet at the end of the period.

How it actually works

clock_events records every clock-in and clock-out as a discrete row with timestamp, source (web/kiosk/manual), photo URL when available, and geofence flag. A nightly job aggregates events into timesheet_lines with break deduction and overtime calculation. Managers approve at the timesheet level; the underlying events remain immutable.
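The pairing step of that nightly job can be sketched as a pure function: sort the day's events, match each clock-in with the next clock-out, sum the minutes, deduct the unpaid break. The field names and the 30-minute break default are illustrative assumptions, not the real schema or policy.

```typescript
// Sketch of the nightly aggregation: pair clock-in/out events into
// worked minutes and deduct an unpaid break. Shapes are assumptions.
interface ClockEvent {
  kind: 'in' | 'out';
  at: number; // epoch milliseconds
}

export function workedMinutes(events: ClockEvent[], breakMinutes = 30): number {
  const sorted = [...events].sort((a, b) => a.at - b.at);
  let total = 0;
  let openIn: number | null = null;
  for (const e of sorted) {
    if (e.kind === 'in') {
      openIn = e.at;
    } else if (openIn !== null) {
      total += (e.at - openIn) / 60_000;
      openIn = null; // an 'out' with no matching 'in' is ignored
    }
  }
  return Math.max(0, total - breakMinutes);
}
```

An 8.5-hour shift (510 minutes on the clock) yields 480 payable minutes after the 30-minute deduction.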

Leave management

Why it exists

Per-employee allowances must accrue continuously, decrement on approval, and be auditable retroactively. Conflict detection prevents two key team members booking the same week off.

How it actually works

leave_balances is a SQL view computed from leave_allowances and approved leave_requests. Submission triggers a notification to the line manager. Approval mutates the request status and fires a calendar event to the team channel. The team_calendar view aggregates all approved leave for visualisation.
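The computation the leave_balances view performs can be expressed in TypeScript for clarity: allowance minus the sum of approved days, with pending and rejected requests ignored. The shapes here are illustrative, not the real view definition.

```typescript
// Sketch of the live balance the leave_balances view computes.
// In production this is SQL over leave_allowances and leave_requests.
interface LeaveRequest {
  userId: string;
  days: number;
  status: 'pending' | 'approved' | 'rejected';
}

export function remainingDays(
  allowance: number,
  requests: LeaveRequest[],
  userId: string
): number {
  const used = requests
    .filter((r) => r.userId === userId && r.status === 'approved')
    .reduce((sum, r) => sum + r.days, 0);
  return allowance - used;
}
```

Because the balance is always derived from the request rows, a retroactive approval changes the answer immediately — there is no stored snapshot to drift.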

Expense capture (Receipt Scanner embedded)

Why it exists

Receipts are the single biggest source of HR data-entry friction. The fastest path is photograph, OCR, validate, submit — under ten seconds.

How it actually works

POST /api/expenses/scan accepts a multipart image. The image passes through sharp.rotate().resize(1568).jpeg(85), then to Anthropic Claude vision with the Receipt Scanner system prompt. Output is Zod-validated and inserted as a pending expense with full raw JSON in the raw column. A manager approval notification fires.
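The validation step can be approximated in plain TypeScript. The real pipeline uses a Zod Receipt schema; the field names below (merchant, total, currency, date) are assumptions for illustration, and a null result is what would drive the low-confidence retry UI.

```typescript
// Plain-TypeScript approximation of the Zod validation step.
// Field names are illustrative assumptions, not the real schema.
export interface Receipt {
  merchant: string;
  total: number;
  currency: string;
  date: string; // ISO yyyy-mm-dd
}

export function parseReceipt(raw: unknown): Receipt | null {
  if (typeof raw !== 'object' || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (
    typeof r.merchant !== 'string' ||
    typeof r.total !== 'number' || r.total < 0 ||
    typeof r.currency !== 'string' ||
    typeof r.date !== 'string' || !/^\d{4}-\d{2}-\d{2}$/.test(r.date)
  ) {
    return null; // model output failed validation → surface low-confidence UI
  }
  return { merchant: r.merchant, total: r.total, currency: r.currency, date: r.date };
}
```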

Kiosk Progressive Web App

Why it exists

On-site staff need to clock in from a wall-mounted tablet, often on flaky Wi-Fi. The kiosk has to work offline and sync on reconnect.

How it actually works

A separate /kiosk route renders a touch-first React UI. A service worker caches the shell. IndexedDB stores queued sign-in events as { user_id, timestamp, photo_blob, geo? }. When the browser comes back online (the window's online event fires and navigator.onLine returns true), a flush handler POSTs each queued event to /api/kiosk/sign-in and removes it from the queue on a 200 response.
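The delete-on-200 flush semantics can be sketched with the network call injected, so the queue behaviour is testable without IndexedDB. In the app the queue would live in IndexedDB and post() would hit /api/kiosk/sign-in; the shapes and names here are assumptions.

```typescript
// Sketch of the offline flush: retry each queued event, drop it only
// on a 200 response, keep everything else for the next flush.
interface QueuedSignIn {
  userId: string;
  timestamp: number;
}

export async function flushQueue(
  queue: QueuedSignIn[],
  post: (evt: QueuedSignIn) => Promise<number> // resolves to an HTTP status
): Promise<QueuedSignIn[]> {
  const remaining: QueuedSignIn[] = [];
  for (const evt of queue) {
    const status = await post(evt);
    if (status !== 200) remaining.push(evt); // keep for the next flush
  }
  return remaining;
}
```

Keeping failed events in the queue makes the flush idempotent from the kiosk's point of view: a flaky reconnect simply means another flush later.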

Visitor management

Why it exists

Compliance, security, and host-notification needs converge in visitor sign-in. The system tracks pre-registration, photo capture, NDA acknowledgement, and host alerts in one flow.

How it actually works

Pre-register a visit via /visitors/new. On arrival, the visitor signs in via the kiosk by name lookup, captures a photo, acknowledges the host's NDA if applicable, and triggers an email + Slack alert to the host. visitor_visits records the full lifecycle. Watchlist matches block sign-in and notify security.

Notifications

Why it exists

Approval workflows, leave decisions, expense rejections, announcements, and visitor alerts all require timely delivery. Three channels — email, Slack, in-app — with per-user preferences.

How it actually works

lib/notify.ts exposes notify(userId, eventType, payload). Reads notify_preferences for the user, fans out to enabled channels. Email via Resend with templated HTML. Slack via incoming webhook with formatted blocks. In-app via realtime broadcast to the user's subscribed channel. Every send is logged to notify_log for audit.
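The fan-out decision inside notify() reduces to one pure step: read the user's preferences, keep the enabled channels. A minimal sketch, with the preference shape assumed rather than taken from the real notify_preferences table:

```typescript
// Sketch of the channel selection inside notify().
// The preference shape is an illustrative assumption.
type Channel = 'email' | 'slack' | 'in_app';

interface NotifyPreferences {
  email: boolean;
  slack: boolean;
  in_app: boolean;
}

export function enabledChannels(prefs: NotifyPreferences): Channel[] {
  return (['email', 'slack', 'in_app'] as Channel[]).filter((c) => prefs[c]);
}
```

Each channel in the returned list then gets its own send (Resend, Slack webhook, realtime broadcast) and its own notify_log row.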

Analytics & exports

Why it exists

Managers need real-time visibility on attendance trends, leave balances, and expense patterns. Payroll teams need a clean monthly export.

How it actually works

Materialised views over the live tables compute attendance heatmaps, leave-by-team breakdowns, and expense totals by category. /analytics renders these via server components with chart components from recharts. Monthly close exports CSV in Xero/QuickBooks/Sage-compatible formats with the correct GL codes.
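The export step is plain CSV assembly with correct field escaping. The column names and layout below are illustrative assumptions — the real Xero/QuickBooks/Sage formats each have their own columns and GL-code conventions.

```typescript
// Sketch of a CSV row builder for the monthly payroll export.
// Columns are illustrative, not any accounting package's real format.
interface ExpenseRow {
  employee: string;
  glCode: string;
  amount: number;
}

function csvField(value: string): string {
  // Quote fields containing commas, quotes, or newlines; double embedded quotes.
  return /[",\n]/.test(value) ? `"${value.replace(/"/g, '""')}"` : value;
}

export function toCsv(rows: ExpenseRow[]): string {
  const header = 'Employee,GLCode,Amount';
  const lines = rows.map((r) =>
    [csvField(r.employee), csvField(r.glCode), r.amount.toFixed(2)].join(',')
  );
  return [header, ...lines].join('\n');
}
```

The escaping matters: "Smith, Jo" must arrive as one field, not two, or the import silently shifts every column after it.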

Technology choices

Why this, not that

Next.js 16 App Router

Why we use it

Server components keep authorisation and DB queries server-side. Server actions cut form-submission boilerplate. File-based routes map to module structure.

Why not the alternative

Pages Router — older paradigm, more client-side state, less natural fit for streaming and partial pre-rendering.

Supabase

Why we use it

Postgres + Auth + RLS + Storage + Realtime in one service. Reproducing this in-house is 3,000+ lines and a permanent operational burden.

Why not the alternative

Custom Postgres + Lucia/NextAuth + S3 + custom WebSocket layer — works, but you maintain four systems instead of one.

Row-Level Security on every table

Why we use it

Multi-tenancy at the application layer is one bug away from data leakage. RLS makes Postgres refuse cross-tenant reads regardless of route logic.

Why not the alternative

Application-layer scoping only — works until it does not. Authorisation should be defence in depth.

PWA for kiosk

Why we use it

One codebase. No app store. Updates ship instantly. IndexedDB offline queue handles real site Wi-Fi conditions reliably.

Why not the alternative

Native iOS/Android — multiplies maintenance, app store gatekeeping for what is fundamentally a PIN-and-photo screen.

Embedded Receipt Scanner

Why we use it

Reuses a published, hardened OCR pipeline. Same prompt, same Zod schema, same accuracy.

Why not the alternative

Reimplement OCR per product — duplicates work, drift between expense scanning here and the standalone product, no shared improvements.

Resend for transactional email

Why we use it

Cleanest API, simplest verification, generous free tier, SDK-level templated HTML.

Why not the alternative

SendGrid — heavier API, opaque pricing. SES — IAM and sandbox approval drag for what should be one env var.

Generic approvals table

Why we use it

leave, expense, timesheet, visitor approvals share 90% of their logic. One table with target_type halves the route code.

Why not the alternative

Per-entity approval tables — duplicates schema, route code, and notification logic five times.
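The shared shape can be sketched as a discriminated record plus one transition function that serves every module. Names here are assumptions; the point is that the decide logic is written once, not per entity.

```typescript
// Sketch of the generic approvals shape: one record type with a
// target_type discriminator, one transition function for all modules.
type TargetType = 'leave' | 'expense' | 'timesheet' | 'visitor';

interface Approval {
  id: string;
  targetType: TargetType;
  targetId: string;
  status: 'pending' | 'approved' | 'rejected';
}

// One status transition covers every approvable entity.
export function decide(a: Approval, approve: boolean): Approval {
  if (a.status !== 'pending') {
    throw new Error('approval already decided'); // decisions are immutable
  }
  return { ...a, status: approve ? 'approved' : 'rejected' };
}
```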

Performance & observability

What you can measure

~£26 · monthly cost for 50 staff
Vercel + Supabase Pro + Resend + vision OCR

180ms · kiosk sign-in p50
Online path; the offline path is instant

100% · RLS coverage
Every table, every policy, every row

Failure modes you should expect

Kiosk loses internet
Cause: Wi-Fi drop or network change
Fix: IndexedDB queue holds sign-ins; flushes when the browser comes back online (navigator.onLine)

Receipt scan returns nulls
Cause: Blurry or partial photo
Fix: UI surfaces a "low-confidence" warning; the user retakes the photo or corrects fields manually

Manager forgets to approve timesheet
Cause: Pay period closes with pending approvals
Fix: A daily reminder cron job; the analytics dashboard surfaces the pending count

Multi-tenant data leak attempted
Cause: A bug in route logic forgets the organisation scope
Fix: The RLS policy refuses the query at the PostgreSQL layer and returns zero rows regardless of the route bug

Resend domain unverified
Cause: New deployment without DNS records
Fix: FROM_EMAIL falls back to Resend's shared resend.dev domain; verify when convenient

Leave allowance drift
Cause: Stored snapshot diverges after retroactive approval
Fix: leave_balances is a SQL view, not a stored value; always live, never drifts
Future direction

What’s next

Direct Xero / QuickBooks integrations

OAuth, scope mapping, push approved expenses and timesheets without CSV round-trip.

Performance reviews module

360-feedback flows, goal tracking, calibration sessions. Distinct domain, similar primitives.

Native mobile apps

iOS / Android wrappers around the PWA with push notifications and biometric auth.

Internationalisation

i18n routing, multilingual UI, locale-aware date and currency formatting.

Audit log viewer

Every state change is already in audit_log. Surface it in admin UI with filters and export.

Multi-org admin console

For agencies running StaffPortal on behalf of multiple clients — cross-tenant admin views.

Ready to deploy it?

Clone, set the env vars, run pnpm db:migrate, and invite the team. Twenty minutes from zero to a working HR platform.