Luke Sturgeon Info

Drift Quotient

An AI platform that detects contradictions between what organisations claim about their culture and what independent evidence actually shows

Problem

An investor doing due diligence on a potential acquisition. A senior hire weighing up an offer. A board member reviewing ESG commitments. All of them face the same problem: the organisation’s public-facing narrative is compelling, but they have no fast way to stress-test it against independent evidence.

Traditional due diligence is slow — five to ten hours of manual research across Glassdoor, Companies House, press coverage, and regulatory filings. It’s subjective, it misses sources, and it treats different pieces of evidence as separate data points rather than looking for the pattern of contradiction across them.

Website dashboard screenshot: a grid of white cards, each showing a company name and linking to an analysis report page.

From framework to product

EthosSignal approached me with a proprietary organisational analysis framework documented in Notion — a scoring methodology they called the Drift Quotient. My task was to transform it into a production platform in six weeks: something that could gather evidence automatically, apply the framework consistently, and produce reports a non-technical audience could act on.

The challenge wasn’t just engineering speed. It was preserving the analytical rigour of a human researcher while making it repeatable and defensible — every score needed to be traceable back to cited evidence, not just an AI opinion.

Website screenshot showing a report summary and status page; a six-row table shows the results and status of individual analyses.

Interaction design

The analysis runs across five organisational dimensions in parallel. Watching five progress indicators sit at zero while nothing apparently happens would feel broken, so I built a real-time status system with elapsed timers per dimension — the user could see each analysis actively running and know exactly where time was going. This was especially important for edge cases where one dimension took longer than others.
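The pattern described above can be sketched as follows. This is a minimal illustration, not the platform's actual code: the dimension names, status fields, and the `asyncio` approach are all assumptions, and the sleep stands in for the real model call.

```python
import asyncio
import time

# Illustrative placeholders; the actual dimension names come from
# EthosSignal's proprietary framework.
DIMENSIONS = ["culture", "leadership", "transparency", "esg", "stability"]

async def analyse(dimension: str, status: dict) -> None:
    """One dimensional analysis, recording state and elapsed time live."""
    start = time.monotonic()
    status[dimension] = {"state": "running", "elapsed": 0.0}
    await asyncio.sleep(0.05)  # stands in for the real LLM + web-search call
    status[dimension] = {"state": "done", "elapsed": time.monotonic() - start}

async def run_all() -> dict:
    # All five analyses start together; a UI can poll `status` while they run
    # to render a per-dimension elapsed timer instead of five frozen bars.
    status: dict = {}
    await asyncio.gather(*(analyse(d, status) for d in DIMENSIONS))
    return status

status = asyncio.run(run_all())
for dim, s in status.items():
    print(dim, s["state"], f"{s['elapsed']:.2f}s")
```

In a real deployment the shared status would live somewhere the frontend can read it (a database row or websocket channel), but the shape of the state machine is the same.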

The report itself needed to communicate two things at once: a decisive score and the evidence behind it. I used a colour-coded drift scale — green to red — to give a fast gestalt read, then structured each dimension’s section so the cited evidence sat immediately below the score rather than in a footnote. The goal was to make it impossible to look at a score without also seeing why it was given.
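One way to make that pairing structural rather than editorial is to model a score so it cannot exist without its citations. The field names and schema below are hypothetical, a sketch of the idea rather than the platform's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    url: str     # where the claim was found
    quote: str   # the specific passage the score rests on

@dataclass
class DimensionScore:
    dimension: str
    drift: str   # "low" | "medium" | "high" — the green-to-red scale
    evidence: list[Evidence] = field(default_factory=list)

    def is_defensible(self) -> bool:
        """A score with no cited evidence is incomplete by design."""
        return len(self.evidence) > 0

score = DimensionScore(
    dimension="culture",
    drift="high",
    evidence=[Evidence(url="https://example.com/review", quote="...")],
)
print(score.is_defensible())
```

Rendering the report directly from a structure like this means the template literally cannot show a score without iterating over the evidence beneath it.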

Users could suggest their own URLs as evidence sources — a feature that shifted the dynamic from passive consumer to active collaborator. It also meant the analysis could reach sources the AI wouldn’t discover through web search alone, which mattered for private documents or industry-specific sources the client knew existed.

Website screenshot of an organisational drift report; a bar visualises the result as high, medium, or low drift, and text boxes contain sentences and paragraphs explaining why the score was given.

Technical architecture

The core constraint was cost: a platform charging per analysis needed per-analysis costs to be low enough that the margin held at scale. I chose GPT-5-mini with web search because it handles multi-source synthesis well at a fraction of the cost of larger models. The parallel execution architecture — running all five dimensional analyses simultaneously — wasn’t just about speed; it also kept API costs predictable by eliminating retry cascades.

Prompt templates were engineered with signal detection frameworks for absolutist language, tone mismatches, and direct contradictions — the specific patterns EthosSignal’s methodology identified as markers of organisational drift. Each template was tuned against real examples from their framework before going into production.
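One of those signal classes — absolutist language — lends itself to a simple illustration. The patterns below are invented examples; the real markers belong to EthosSignal's proprietary framework and were tuned against their material.

```python
import re

# Illustrative absolutist markers only; the production list is proprietary.
ABSOLUTIST = re.compile(
    r"\b(always|never|everyone|all|none|guarantee[ds]?)\b", re.IGNORECASE
)

def flag_absolutist(text: str) -> list[str]:
    """Return absolutist terms found in a claim — one drift signal among several."""
    return [m.group(0).lower() for m in ABSOLUTIST.finditer(text)]

claim = "We always put employees first and never compromise on safety."
print(flag_absolutist(claim))  # -> ['always', 'never']
```

In practice signals like this would feed the prompt as examples of what to look for, rather than replace the model's judgement; a regex alone cannot catch tone mismatches or cross-source contradictions.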

Client

What stood out most was Luke’s ability to instantly grasp the big-picture vision and translate it into a functional AI system that actually worked, not just a concept, but a living product. He guided us through each stage of development, breaking complex technical challenges into clear, iterative sprints that gave us room to pause, refine, and recalibrate.

What made the process remarkable was how seamlessly he blended technical innovation with creative problem-solving. Each prototype evolved from a simple feature build into a design conversation with Luke that helped us clarify who we were building for and why.

Luke’s approach balanced rigour with curiosity. He made the journey from idea to working system feel structured yet exploratory, ensuring that technology served the user experience, not the other way around. We didn’t just walk away with a functional MVP; we walked away with a sharper, validated vision of what we’re creating and how it should feel to the people who use it.

Result

A production-ready platform delivered on schedule, with complete authentication flows, organisation management, a distraction-free analysis dashboard, and shareable public report pages. Each complete analysis costs approximately $0.05–0.15 — an order of magnitude lower than competitors — thanks to model selection, prompt optimisation, and parallel execution working together.

50–100x faster than manual research (6 minutes vs. 5–10 hours). 3–5x broader evidence coverage (30–50+ sources vs. 10–15 manual finds). A standardised scoring methodology that’s repeatable, auditable, and defensible to clients who need to show their working.