
AI Studio

Enhancing AI Performance

AI Studio Screens

About the project

AI Studio is a self-serve cognitive tool that empowers domain experts, such as legal and compliance teams, to influence and customize AI performance and training. AI Studio puts the power of AI in the hands of the users who know their business best. In this case, it's lawyers.

The Ecosystem and Personas

Currently, three personas work separately.

The three AI Studio personas

The flow below explains how these personas influence each other. Working in silos has increased the time needed to deploy and enhance AI performance.

Ecosystem of AI Studio

Pain Point for Data Preparation

We evaluated the data currently provided to understand its error patterns and output quality.


We also connected with experts to understand the ecosystem in which they prepare golden data, and gathered qualitative pain points.

Complex Instructions (Lack of Annotation Guidelines)

Annotation guidelines were missing, leaving complex instructions open to interpretation.

Severity: Low

Fatigue and Time Constraints

"Do not expect great data if you are boring the shit out of people."

Severity: High

Technical Limitations of Annotation Tools

Tools were either missing or offered a poor user experience.

Severity: High

Complex Inputs/Questions

Required inputs or questions were confusing and lengthy.

Severity: Low

Overwhelming Tag Lists

Sometimes a tag has multiple meanings and can carry multiple labels.

Severity: High

Lack of Knowledge

Humans are limited in the information they can access.

Severity: High

Biased Perspective

Human perception is skewed by past events.

Severity: High

Insufficient Supervision and Feedback

No standard S.O.P. exists, and there are no guardrails or feedback mechanisms.

Severity: High

Multiple Tasks at the Same Time

Open-ended or complex questions were often annotated incorrectly.

Severity: Low

AI Studio has three broad components:

  • Golden Data Preparation

  • Data Explorer

  • Evaluation Studio

AI Studio landing page

Data Preparation Prototype Flow

Once we understood the intricacies, we proposed an intuitive approach to data preparation.

Data is prepared

The model is updated with the new data

A new model/pipeline is released for evaluation

Evaluation Process

After the new model pipeline version is released, legal domain experts evaluate AI pipeline performance against a golden dataset, adjusting configurations to optimize results and conducting experiments to refine accuracy.
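
To make the evaluation step concrete, here is a minimal sketch of how a pipeline version could be scored against a golden dataset. The names here (GoldenExample, evaluate, the per-label recall breakdown) are illustrative assumptions, not AI Studio's actual code:

```python
# Minimal sketch: scoring a pipeline version against expert-labelled golden data.
# All names are hypothetical; AI Studio's real pipeline API is not shown here.
from dataclasses import dataclass


@dataclass
class GoldenExample:
    clause_text: str
    expert_label: str  # label assigned by a legal domain expert


def evaluate(pipeline, golden_set: list[GoldenExample]) -> dict[str, float]:
    """Compare a pipeline's predictions with the golden labels and return scores."""
    correct = 0
    per_label_hits: dict[str, int] = {}
    per_label_total: dict[str, int] = {}
    for example in golden_set:
        predicted = pipeline(example.clause_text)
        per_label_total[example.expert_label] = per_label_total.get(example.expert_label, 0) + 1
        if predicted == example.expert_label:
            correct += 1
            per_label_hits[example.expert_label] = per_label_hits.get(example.expert_label, 0) + 1
    scores = {"accuracy": correct / len(golden_set)}
    # A per-label recall breakdown helps experts spot which tags the model struggles with.
    for label, total in per_label_total.items():
        scores[f"recall/{label}"] = per_label_hits.get(label, 0) / total
    return scores
```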

User Journey of evaluating the new model/pipeline

Below is the user journey for testing the performance of any AI pipeline.

AI Studio User Journey


Evaluation Studio Components and Design Details

The result page design

This page design was crucial to the user experience: it brings clarity to evaluation and cuts down the effort it requires, reducing errors and improving AI performance.


Key Decisions

  • Storing a version of each edit so users can revert to better options

  • Showing impacts on parameter scores to give control and direction

  • Offering the flexibility to change datasets to give freedom and control


The result page was improved

from this version

Old Design v1

to this enhanced version

Smarter, Simpler, Sharper – Here’s What Changed

  • User-Friendly Language and Labels

  • Improved Feedback Loop with Experiment History

  • Legal-User-Centric Design Approach

  • Consistent and Comparable Score Visualization

  • Guided Workflow for Continuous Improvement

  • Clear Categorization of Performance Metrics

  • Visual Trendlines for Score Progression

  • Actionable Recommendations for Next Steps

  • Reduced Cognitive Load with Progressive Disclosure

  • Inline Help for Better Score Interpretation

Edit/Adjust Configurations

Lawyers edit/adjust the config to see if the scores have improved.

They iterate to achieve the best configuration, and sometimes revert to older configuration versions.


Every version is automatically saved in the history log.
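
As a rough illustration of how such an auto-saved history could work, here is a sketch in Python. ConfigHistory and its fields are hypothetical names, not the product's real implementation:

```python
# Illustrative sketch of an auto-saved configuration history with revert.
# Class and field names are hypothetical, not AI Studio's actual code.
import copy
from datetime import datetime, timezone


class ConfigHistory:
    def __init__(self, initial_config: dict):
        self._versions = []        # every saved version, oldest first
        self.save(initial_config)  # version 1 is the starting configuration

    def save(self, config: dict) -> int:
        """Called automatically on every edit; returns the new version number."""
        self._versions.append({
            "version": len(self._versions) + 1,
            "saved_at": datetime.now(timezone.utc),
            "config": copy.deepcopy(config),  # snapshot, so later edits can't mutate it
        })
        return self._versions[-1]["version"]

    def revert(self, version: int) -> dict:
        """Return a copy of an older configuration so the user can restore it."""
        for entry in self._versions:
            if entry["version"] == version:
                return copy.deepcopy(entry["config"])
        raise KeyError(f"No version {version} in history")


# Usage: each edit saves a new version; a lawyer can revert to any of them.
history = ConfigHistory({"confidence_threshold": 0.7})
history.save({"confidence_threshold": 0.8})
restored = history.revert(1)  # back to the first configuration
```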


Selecting Testing Parameters

These parameters determine which results are populated for AI performance.
 

Lawyers are new to the AI training domain. To help them decide, I added extra layers of information upfront, explaining what each parameter signifies and which indicators it affects.
 

If they still need help, they can open a more detailed explanatory view.
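
As an illustration of those extra layers of information, here is a sketch of how each testing parameter could carry its own inline hint plus a longer explanation for the detailed view. The parameter names and wording are assumed examples, not AI Studio's actual metrics:

```python
# Sketch: testing-parameter definitions that carry their own inline help.
# Parameter names and descriptions are illustrative, not the real metrics.
from dataclasses import dataclass


@dataclass(frozen=True)
class TestingParameter:
    key: str
    label: str    # short, user-friendly name shown in the selector
    hint: str     # one-line info shown upfront next to the parameter
    details: str  # longer text for the more detailed explanatory view


PARAMETERS = [
    TestingParameter(
        key="precision",
        label="Precision",
        hint="Of the clauses the AI tagged, how many were correct.",
        details="High precision means few false alarms: when the model "
                "applies a tag, a lawyer can usually trust it.",
    ),
    TestingParameter(
        key="recall",
        label="Recall",
        hint="Of the clauses that should be tagged, how many the AI found.",
        details="High recall means few misses: relevant clauses rarely "
                "slip through without a tag.",
    ),
]
```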

Dataset Selection

The user can choose the dataset for the next run either from a recent run or by selecting the data manually.
