
CASE STUDY -> SYSTEM DESIGN

AI-Powered Contract Onboarding

Humans in the Loop -> Agents in the Flow.

Legal teams don’t lack data; they struggle to trust it.

When contracts are onboarded at scale, small errors compound into broken relationships, unclear ownership, and unreliable systems.
 

This project explores how AI can assist onboarding, not by replacing humans but by working alongside their judgment.

We don’t start from clean systems. We inherit chaos.

Data ingestion from legacy contracts isn't a clean extraction. It's a messy reconstruction of history. Before the redesign, every new batch of contracts arrived as a fragmented puzzle of mismatched entities and broken hierarchies.

48%

Mismatched entities

34%

Duplicates

56%

Hierarchy issues

THE BOTTLENECK

Contract onboarding became a bottleneck

Manual validation slows legal teams
Legal Engineers were spending 60% of their billable hours manually cross-referencing CSV exports against PDF scans to verify basic details.

Relationship mapping is error-prone
Understanding which Master Services Agreement (MSA) linked to which Statement of Work (SOW) was a cognitive nightmare.

Work spills into spreadsheets
Because the internal tools were brittle, teams reverted to external spreadsheets, creating a dangerous "shadow" data layer.

The cost of poor onboarding isn’t immediate. It compounds.

Onboarding delayed by weeks
Increased operational cost
High risk of incorrect data
Skilled teams doing repetitive work

So we introduced AI into the process

"But trust broke."

16%

Wrong supplier mapping

5%

Hallucination rate

6%

Broken relationships

AI wasn’t the problem.
Ambiguity was.

AI handled structured data well. But contract onboarding isn’t just extraction; it’s interpretation. Deciding which entity is correct, which relationship applies, and what should be trusted: these are not deterministic problems.

Instead of asking AI to do everything, we separated responsibilities.

Humans handle ambiguity. AI handles certainty. The goal wasn’t automation. It was alignment.

JUDGMENT

EXECUTION

HOW HUMANS DISAMBIGUATE

01 Identify

"Who is this?" — Resolving legal entity names from fuzzy strings into a single source of truth.

02 Validate

"What’s connected to it?" — Mapping parent-child hierarchies between agreements automatically.

03 Verify

"Can I trust this?" — Presenting AI evidence to humans for final high-confidence approval.

Agents surface ambiguity instead of hiding it

Low-confidence extractions are flagged with distinct visual markers, prompting human audit.

Reasoning is one click away

Every data point links directly back to the original PDF clause for immediate verification.

Confidence comes from traceability

Data lineage is preserved at every step, creating a permanent audit trail.
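One way to make that traceability concrete is to attach a lineage record to every extracted field. The sketch below is an assumption about shape, not the actual schema: the field names, clause reference format, and actor labels are illustrative only.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class LineageEntry:
    actor: str       # e.g. "extraction-agent" or a reviewer's name (hypothetical labels)
    action: str      # e.g. "extracted", "human-approved", "corrected"
    timestamp: str   # UTC ISO-8601 string

@dataclass
class TracedField:
    """An extracted value that always points back to its source clause."""
    name: str            # e.g. "counterparty"
    value: str
    source_doc: str      # original PDF the value was read from
    source_clause: str   # clause reference for one-click verification
    confidence: float
    history: list = field(default_factory=list)

    def record(self, actor: str, action: str) -> None:
        """Append an audit entry; existing entries are never overwritten."""
        self.history.append(LineageEntry(
            actor=actor,
            action=action,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

# Usage: an agent extracts a value, then a human approves it.
counterparty = TracedField(
    name="counterparty", value="Acme Corporation",
    source_doc="msa_2021_scan.pdf", source_clause="Section 1.2, Parties",
    confidence=0.97,
)
counterparty.record("extraction-agent", "extracted")
counterparty.record("j.doe", "human-approved")
```

Because every field carries its own history, "can I trust this?" reduces to reading the trail rather than re-deriving the answer.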

INTERFACE DESIGN

Humans decide. Agents execute.

Judgment

Is this relationship legal under current policy?

Execution

Find all 1200 instances and update the missing counterparty.

Human judgment became system intelligence.
