Evidence-first contract evaluation using AI

P.A.R.T.S is an AI-powered contract evaluation tool developed through a government SBIR award to help PCNs assess contracts against mission-critical standards. The challenge was enabling accurate, defensible evaluations while accommodating complex regulations and limited user tolerance for cognitive overhead.

I owned the UX strategy and high-fidelity design, balancing compliance constraints, immature AI workflows, and minimal access to end users. The final solution integrated task-based evaluation with a conversational AI interface, providing both structured scoring and contextual analysis. The initial contract work was approved for development within the SBIR program.
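
To make the "structured scoring and contextual analysis" pairing concrete, here is a minimal sketch of how an evidence-first evaluation record could be shaped. This is TypeScript, and every type and field name is an illustrative assumption, not the actual system's data model:

    // Illustrative sketch only: assumed types, not the real P.A.R.T.S schema.

    interface EvidenceCitation {
      documentId: string; // source contract document
      excerpt: string;    // the clause or passage cited by the AI
      location: string;   // e.g. "Section 4.2, p. 12"
    }

    interface StandardScore {
      standardId: string;           // the compliance or quality standard applied
      score: "pass" | "fail" | "needs-review";
      rationale: string;            // the AI's contextual analysis
      evidence: EvidenceCitation[]; // each score traces back to contract text
    }

    interface ContractEvaluation {
      contractId: string;
      evaluatedAt: Date;
      scores: StandardScore[]; // one entry per standard evaluated
      reviewedBy?: string;      // a Quality Engineer signs off before it is final
    }

The design intent this sketch reflects is that no score stands alone: every judgment carries the evidence a Quality Engineer would need to defend it in an audit.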

Project overview: Context, problem, and constraints

Product and client

A web-based contract evaluation platform built for a government agency (DCMA) under an SBIR award, designed to support automated analysis of high-volume supply chain contracts against multiple compliance and quality standards.

Core problem

Quality Engineers responsible for mission-critical government contracts lack a unified, efficient way to evaluate daily contract activity against multiple compliance and quality standards. As a result, contract assessments are slow, inconsistent, and error-prone, increasing operational risk and reducing mission readiness.

Root causes

  • Contract data and evaluation tools are fragmented across multiple systems

  • Evaluations against multiple standards must be performed manually and repeatedly

  • There is no consistent, centralized view of daily contract workload or evaluation status

Who is impacted

  • Quality Engineers (QEs): Spend excessive time gathering information, duplicating effort, and performing manual analysis instead of applying judgment

  • Quality Managers (QMs): Lack real-time visibility into contract quality, prioritization, and risk across the day’s workload

  • Government Programs: Face delayed decisions, inconsistent evaluations, and increased compliance risk

Role and scope

Senior UI/UX Designer

Key constraints

  • Analyzing sensitive government data through selective disclosure methods while meeting federal security, access control, and auditability requirements

  • Contract volume, complexity, and multi-standard evaluation requirements exceeded what could reliably be performed through manual review alone

Why this matters

Without a reliable and timely way to evaluate contracts daily, quality oversight becomes reactive instead of proactive, undermining accountability, decision speed, and mission outcomes.

Intended outcome

Enable government quality teams to analyze mission-critical contracts against multiple standards in a single, secure system, improving evaluation speed, consistency, and oversight.

What we knew vs. what we assumed

Known inputs

Assumptions and hypotheses

Design strategy

Solution overview

Key flows and screens

1. Task list and board flows


2. AI-assisted contract evaluation


3. Messaging, chat, and communication


Impact and business value

What I'd validate next

Reflection

Working without extensive upfront research forced me to design with explicit assumptions and clear mitigation strategies, which became the project's strength. If I could revisit one decision, I'd document why each assumption was reasonable more rigorously during design, rather than only in retrospect for this case study. The insight I'm carrying forward: constraint-driven design isn't a compromise; it's a forcing function for clarity. Making assumptions visible early creates better stakeholder conversations than presenting "validated" solutions that rest on thin research.
