Co-founder · Tech Lead · 2024–Present

AI legal case-briefing platform serving 1,000+ users across 5 Commonwealth markets.

  • Users: 1,000+
  • Markets: 5
  • Cases indexed: 300K+
  • MVP to prod: 2 days

Problem

Lawyers across Commonwealth jurisdictions research case law through opaque, time-consuming workflows. Off-the-shelf LLM products output generic summaries that ignore jurisdiction-specific reasoning and citation conventions: exactly the parts a practicing lawyer has to trust.

What I built

  • AI case-structuring pipeline: unstructured input (facts, issue, jurisdiction) → structured case brief with PDF export. Shipped idea-to-prod in 2 days to validate the core value prop before investing in data infrastructure.
  • Multi-jurisdiction indexing across AU / UK / CA / NZ / SG, scaling to 300K+ real cases with jurisdiction-aware retrieval.
  • Consumer surfaces built on top of the case index: lawyer-matching and judge-profile pages turning opaque legal data into navigable web experiences.
  • Designed the team stack (TypeScript + Next.js + GraphQL) and CI/CD for 10 engineers; every major feature ships via feature-flag staged rollouts.
  • Authored code-review, testing, and release hygiene standards while staying hands-on shipping features day-to-day.
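
The structuring step in the first bullet can be sketched as a typed transform — a minimal sketch with hypothetical names; the real pipeline prompts an LLM with jurisdiction-specific instructions and renders a PDF, both elided here:

```typescript
// Jurisdictions from the case index (AU / UK / CA / NZ / SG).
type Jurisdiction = "AU" | "UK" | "CA" | "NZ" | "SG";

// Unstructured input: facts, issue, jurisdiction.
interface CaseInput {
  facts: string;
  issue: string;
  jurisdiction: Jurisdiction;
}

// Structured case brief. In production, holding and citations come from
// the model; the citation strings follow jurisdiction-specific conventions.
interface CaseBrief {
  facts: string;
  issue: string;
  jurisdiction: Jurisdiction;
  holding: string;
  citations: string[];
}

// Hypothetical structuring step: shapes the record that the real pipeline
// would fill via an LLM call, then hand off to PDF export.
function structureBrief(
  input: CaseInput,
  holding: string,
  citations: string[]
): CaseBrief {
  return { ...input, holding, citations };
}
```

The point of the type boundary is that every downstream surface (PDF export, web views) consumes `CaseBrief`, never raw model output.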

Stack notes

Next.js App Router + GraphQL lets us ship consumer surfaces and internal tooling on the same stack. Feature flags let us roll AI features out market-by-market while keeping lawyer trust intact.
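
The market-by-market rollout check reduces to something like the following sketch — flag names and the inline store are hypothetical; real rollout state would live in a flag service, not a literal:

```typescript
// Markets mirror the indexed jurisdictions.
type Market = "AU" | "UK" | "CA" | "NZ" | "SG";

// Hypothetical flag store: each AI feature lists the markets it is live in.
// Staged rollout = growing this list one market at a time.
const rollout: Record<string, Market[]> = {
  "ai-brief-v2": ["AU", "NZ"], // live in two markets, dark everywhere else
};

// Gate an AI feature for a given user's market.
function isEnabled(flag: string, market: Market): boolean {
  return (rollout[flag] ?? []).includes(market);
}
```

A check like `isEnabled("ai-brief-v2", user.market)` at the surface layer is what lets a cautious segment stay on the old behavior while another market validates the new one.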

Next.js · TypeScript · GraphQL · AI · PostgreSQL

What I learned

  • Validate AI value with a 2-day MVP before scaling data infrastructure — most "AI platform" roadmaps get the order backwards.
  • Feature flags are not infra polish; in AI products they are the trust mechanism that lets you ship to cautious user segments.
  • Domain-specific data structuring beats generic LLM summaries for legal workflows — practitioners care about citation patterns as much as accuracy.