Tracking every sample,
from buy to factory floor.
A fashion retail client with multiple buying teams had no reliable way to know where any given sample was at any given time. Designers logged into one tool, sales into a spreadsheet, factory staff updated nothing, and QC kept a private notebook. I designed and built an enterprise-grade sample lifecycle tracker — twelve user roles, real-time updates across the team, automated overdue monitoring, and a full spend analytics dashboard.
Problem
Samples are the lifeblood of a buying team. Each one is a small bet — a few hundred quid spent on a piece of fabric or a finished garment that gets bought, photographed, costed, sent to a factory, possibly returned, possibly cut for an own-label production. Multiply that by hundreds of samples a season, across multiple buyers, and the financial exposure becomes meaningful very quickly.
Before this platform, the client’s sample tracking lived in three places: a buyers’ spreadsheet that only some buyers updated, a logistics chat group on WhatsApp, and a stock of physical samples sitting on shelves with handwritten tags. When a director asked “where is sample 4421?”, somebody had to physically look on a shelf or send three messages.
The downstream consequences were everywhere. Samples got lost. Factory deadlines slipped because nobody noticed an overdue return. Spend by category was a mystery until the season ended. Two buyers occasionally ordered near-duplicates because they could not see each other’s samples in flight. There was no audit trail when something went wrong.
- Samples logged in three different places, with no shared truth
- No real-time visibility across design, sales, QC, factory, logistics
- Overdue samples discovered weeks after the fact
- No category-level spend analytics until end-of-season
- No audit history when a sample was lost or mis-routed
We were spending six figures a season on samples and could not tell you, on any given Tuesday, where most of them were.
Approach
I started by mapping the actual sample lifecycle on a whiteboard with the client’s ops director. We ended up with eight canonical states: requested, ordered, received, with-design, with-sales, at-factory, returned, and cut-for-production. Every sample lives in exactly one state, every transition is logged, and every transition has a responsible role.
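The eight states can be sketched as a small transition map. A minimal sketch follows; only the state names come from the source, so the specific allowed edges here are assumptions for illustration:

```typescript
// The eight canonical sample states from the lifecycle diagram.
type SampleState =
  | "requested" | "ordered" | "received" | "with-design"
  | "with-sales" | "at-factory" | "returned" | "cut-for-production";

// Assumed transition edges — the source fixes the states, not the edges.
const transitions: Record<SampleState, SampleState[]> = {
  requested: ["ordered"],
  ordered: ["received"],
  received: ["with-design"],
  "with-design": ["with-sales"],
  "with-sales": ["at-factory"],
  "at-factory": ["returned", "cut-for-production"],
  returned: ["with-design", "cut-for-production"],
  "cut-for-production": [], // terminal state
};

// Guard every state change against the map; illegal moves are rejected.
function canTransition(from: SampleState, to: SampleState): boolean {
  return transitions[from].includes(to);
}
```

Encoding the map as data rather than scattered `if` checks means the server action that performs a transition, the audit logger, and the UI's "next action" buttons can all read from the same source of truth.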
From there, I drew the role matrix. Twelve roles in total — buyer, senior buyer, designer, sales lead, sales executive, QC, factory liaison, logistics, finance, ops director, MD, system admin — each with different read and write permissions on different fields. The role matrix went into Postgres row-level security policies, not into application code, which is a decision I would make again every time.
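A policy in that style might look like the sketch below. This is illustrative only: the `samples` and `user_roles` tables, column names, and the role string are assumptions, not the client's actual schema.

```sql
-- Illustrative only: table, column, and role names are assumptions.
-- A factory liaison may read samples only while they are at the factory.
ALTER TABLE samples ENABLE ROW LEVEL SECURITY;

CREATE POLICY factory_liaison_read ON samples
  FOR SELECT
  USING (
    state = 'at-factory'
    AND EXISTS (
      SELECT 1 FROM user_roles
      WHERE user_roles.user_id = auth.uid()   -- Supabase's authenticated user id
        AND user_roles.role = 'factory_liaison'
    )
  );
```

Because the database enforces this, every client — the Next.js app, an ad-hoc query, a future mobile app — gets the same answer about who can see what.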
The other early decision was to make the platform realtime by default. Supabase’s realtime channels meant that when a sales lead updated a sample’s status, the buyer saw it live. No refresh, no polling, no “I just updated it, can you check?” over Slack. Realtime is one of those features people undervalue until they have it, after which they refuse to go back.
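On the client, that pattern boils down to subscribing to row changes and applying them to local state. The pure update step is sketched below; the channel wiring in the comment uses Supabase's `postgres_changes` subscription, and the table and field names are assumptions:

```typescript
// Minimal shape of a sample row as the client sees it (fields assumed).
interface Sample {
  id: number;
  state: string;
  updated_at: string;
}

// Apply a realtime row update immutably, so UI frameworks can diff cheaply.
function applyUpdate(samples: Map<number, Sample>, row: Sample): Map<number, Sample> {
  const next = new Map(samples);
  next.set(row.id, row);
  return next;
}

// Assumed wiring (not run here):
// supabase
//   .channel("samples")
//   .on("postgres_changes",
//       { event: "UPDATE", schema: "public", table: "samples" },
//       (payload) => { samples = applyUpdate(samples, payload.new as Sample); })
//   .subscribe();
```

With RLS in place, each role's subscription only ever receives rows that role is allowed to see, so the realtime layer needs no filtering logic of its own.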
- Eight canonical sample states, with explicit transitions and ownership
- Twelve user roles, enforced as Postgres RLS policies, not app-layer code
- Realtime sync across all clients via Supabase channels
- Server actions for every state transition, with full audit logging
- Recharts dashboards driven by live Postgres views
Solution
The platform launched as a single Next.js app with a sample-centric data model and a role-aware UI. The home dashboard adapts to who is logged in: a buyer sees their open samples and overdue returns. A factory liaison sees only samples currently at-factory. A director sees the lot — filtered, sortable, exportable.
Every sample has its own page with the full timeline, photographs, costing, supplier details, and a comment thread. Comments are realtime, searchable, and tied to the sample forever. When QC rejects a sample, the comment lives next to the sample for years, which has already prevented at least one repeat order from a known bad supplier.
The overdue monitoring runs as a Vercel cron job. Every morning, the system flags any sample that has been in a state too long — “at-factory for 14 days”, “awaiting-return for 21 days” — and emails the responsible role with a clean, actionable list. No nag emails to people who do not need to act. No long digests. Just the samples each person needs to chase, today.
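The heart of that cron job is a small pure function: given the current samples and today's date, group the overdue ones by the role responsible for chasing them. A sketch, assuming hypothetical field names and using the two thresholds mentioned above:

```typescript
// Minimal shape of what the cron job queries (fields assumed).
interface SampleRow {
  id: number;
  state: string;
  enteredStateAt: Date; // when the sample entered its current state
  role: string;         // role responsible for the next action
}

// Per-state age limits in days; the two below come from the text,
// the idea being that each monitored state gets its own threshold.
const thresholdDays: Record<string, number> = {
  "at-factory": 14,
  "awaiting-return": 21,
};

// Group overdue samples by responsible role, so each role gets
// exactly one short, actionable email.
function overdueByRole(rows: SampleRow[], now: Date): Map<string, SampleRow[]> {
  const out = new Map<string, SampleRow[]>();
  for (const row of rows) {
    const limit = thresholdDays[row.state];
    if (limit === undefined) continue; // state is not monitored
    const ageDays = (now.getTime() - row.enteredStateAt.getTime()) / 86_400_000;
    if (ageDays > limit) {
      const list = out.get(row.role) ?? [];
      list.push(row);
      out.set(row.role, list);
    }
  }
  return out;
}
```

Keeping this logic pure makes the alerting trivially testable, with the Vercel cron handler reduced to fetch, call, and send.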
The analytics layer turned out to be a sleeper hit. The directors got a dashboard showing spend by category, by supplier, by buyer, by season, with month-over-month and year-over-year comparisons. It is built on top of materialised Postgres views, refreshed nightly. They use it for budget reviews, supplier negotiations, and team performance conversations. Before the platform, this analysis took a week of someone’s spreadsheet wrangling. Now it is a tab.
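In outline, that layer is one materialised view per dashboard, refreshed on a nightly schedule. The sketch below is illustrative only — the table, columns, and view name are assumptions:

```sql
-- Illustrative only: table and column names are assumptions.
CREATE MATERIALIZED VIEW spend_by_category AS
SELECT season,
       category,
       supplier,
       buyer_id,
       date_trunc('month', ordered_at) AS month,
       sum(cost) AS total_spend,
       count(*)  AS sample_count
FROM samples
GROUP BY 1, 2, 3, 4, 5;

-- Run nightly from a scheduled job. Using CONCURRENTLY avoids blocking
-- readers, at the cost of requiring a unique index on the view.
REFRESH MATERIALIZED VIEW spend_by_category;
```

The dashboard charts then read from the view with plain indexed selects, so no analytics query ever touches the hot transactional tables.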
- Sample-centric data model with eight states and full audit log
- Realtime sync across all twelve roles, no polling
- Per-role dashboards filtered to what each role actually needs to see
- Automated daily overdue alerts, scoped per role
- Live spend analytics dashboards, drillable by buyer, supplier, category, season
- Welcome email automation for new staff, with role-aware onboarding
The factory team complained for the first week, then stopped complaining the second week, then started using it to ping us when we were the bottleneck. That was when I knew.
What actually changed.
Live in production. Every sample for the last several seasons has gone through the platform. The directors run their weekly ops meeting from the dashboard.
The most measurable result is also the easiest to overlook: samples stopped getting lost. The combination of a single source of truth, mandatory state transitions, and overdue alerts means every sample is accounted for, every day. The client has not had a “we cannot find that sample” incident since launch.
On the strategic side, the spend analytics dashboard has reshaped how the buying team operates. They can see, in real time, which suppliers are over-indexed, which categories are over-budget, and which buyers are running hot. The conversation moved from "how much did we spend last season?" to "what should we change this week?"
And the cultural shift mattered. Before, people would defend their data. Designers had their spreadsheet, sales had theirs, factory had theirs. Now everyone uses the same tool, and arguments are about decisions instead of about whose numbers are right. That alone has been worth the build.
Tools, picked deliberately.
What I would tell someone building this from scratch.
Model the lifecycle before the UI.
I spent two days on a whiteboard with the ops director defining the eight sample states. Every screen, every permission, every report came directly from that diagram. If I had started with screens, I would still be rebuilding them.
Realtime is non-negotiable for shared workflows.
Polling is a user experience tax that compounds across teams. Once everyone sees updates instantly, collaboration patterns change. People stop using Slack as a backchannel for “did you see my update?”
RLS policies are documentation.
A future developer can read the row-level security policies and understand exactly who can do what. That is far better than chasing permission checks across thirty route handlers.
Materialised views are an underrated tool.
Heavy analytics queries killed the experience in the first week. Moving the dashboard onto nightly-refreshed materialised views made every chart instant and took the analytics load off the live transactional database entirely.
Scope alerts to the smallest useful audience.
Overdue alerts go only to the role that can act on them. No CC, no escalation chain, no wider broadcast. Targeted alerts get acted on; broadcast alerts get muted.
Need something like this built?
I take one client at a time. If your problem is real and your timeline is honest, let’s talk.
Go deeper.
The engineering record
Architecture, RLS policy design, the eight-state machine, materialised views, and the implementation timeline.
Inside the platform
Data flow, realtime channels, per-role dashboards, the overdue cron, and the analytics layer explained step by step.