Aya Healthcare
The hardest part of introducing AI to Nova wasn't the technology. It was designing an experience complex enough to earn clinical trust, consistent enough to work across every module, and scalable enough to outlast the first use case.
Client Project
Product Design
Lead Designer
Enterprise Internal Platform
Feb 2026 - Present

Problem
A legacy platform. A new AI capability. No obvious home for it.
Nova is Aya Healthcare's internal enterprise platform - used daily by recruiters, account managers, and clinical staff to evaluate traveling clinicians against complex healthcare job requirements. As the volume of clinical evaluations grew, leadership saw an opportunity for AI to meaningfully accelerate that process.
But the design question wasn't what the AI should surface. It was where it should live, how users across different roles and pages would access it, and how to build it in a way that could scale well beyond this first use case.

The foundation: Match Breakdown
At the core of every clinical evaluation is the Match Breakdown - a component that already lived across four Nova modules (Clinical, Job Submittal, Account Manager Live List, and Job Details). It compares a candidate's certifications, skills, work experience, and other requirements against what a job actually demands, surfacing where a candidate meets the bar, where they fall short, and where manual review is still needed.
This is exactly the content the AI agent would analyze. The AI doesn't replace the Match Breakdown - it reads it, interprets it, and surfaces a verdict, flagged risks, and suggested next steps on top of it. Which meant the AI experience couldn't live in isolation. It had to be present wherever Match Breakdown was - across all four modules - and it had to feel just as consistent.
The complication
Match Breakdown already behaved differently depending on where you were: a persistent side panel on the Clinical page, a modal triggered by a match percentage chip on Recruiting and AM pages. Any AI integration had to reckon with this split - and resolve it, not compound it.
High stakes, low tolerance for error
Clinical placements have real consequences. An AI layer users didn't trust - or that felt inconsistent - would simply be ignored. Trust had to be designed in, not assumed.
Designed to scale from day one
Clinical evaluation was use case one. Nova's AI ambitions extended further. The architecture had to support future agents without requiring a new design pattern each time.
Solution
Architecture first. Interface second.
Before any UI exploration, I established a set of design principles to govern how AI should behave in this context - especially given the clinical stakes involved. The solution that followed wasn't just a new panel; it was a reusable AI component architecture for the Nova platform.
AI as assistant, not authority
Every AI output is a starting point for human judgment - never a conclusion. Insights are labeled, caveated, and positioned as inputs to the evaluator's decision, not replacements for it.
Transparency is non-negotiable
Every AI-generated element carries an explicit badge, a source disclaimer, and a note that the AI may make mistakes. At clinical stakes, trust is the foundation - not a nice-to-have.
Progressive disclosure over information density
A verdict and top-line insights surface first. Detailed breakdowns live in tabbed sub-sections. Evaluators control how deep they go - the AI doesn't front-load everything it knows.
Design for the platform, not just the feature
The Nova AI Panel was designed as a reusable component from day one - so future AI use cases can plug in without requiring a new surface each time.

Phase 1
Read-Only AI Panel
AI verdict, insights, and suggested actions across all four modules. Session caching and manual refresh. Page-aware feedback logic - enabled where submittals exist, handled via an alternate mechanism where they don't. Clinical evaluation as the initial use case.
Phase 2
Interactive Feedback Loop
Row-level thumbs up/down active across relevant pages. Feedback history surfaced within the evaluation panel. The model improvement loop closes for the first time - evaluator judgment feeds back into AI quality.
Phase 3
Platform-Wide AI Agent (Planned)
Nova AI Panel extended to non-clinical use cases. Inline micro-signals on list views for at-a-glance AI signaling. Conversational query capability for deeper candidate analysis. The architecture scales beyond its origin.
Early Ideation
In the early stages of the project, I used AI to rapidly explore how enterprise products surface AI agents - generating a range of entry point patterns and interaction models to pressure-test assumptions before committing to a direction in Figma. It helped me move through a wider solution space faster, so the concepts I brought into team conversations were more considered and better differentiated.
Design Iteration & Comparison
Throughout the design process I used AI to generate and compare layout variations of the panel - testing different information hierarchies, content prioritization, and component structures side by side. What would have taken multiple rounds of Figma iteration was compressed into focused working sessions, freeing me to spend more time on the harder judgment calls: what the AI should surface, in what order, and why a clinician would trust it.
Research Partner
I used AI as a live research partner to explore best practices for designing AI agent experiences - querying patterns around trust, transparency, and human-in-the-loop interaction in enterprise contexts. It helped me quickly identify the principles most relevant to a high-stakes clinical environment, turning what could have been days of independent research into a focused conversation that directly shaped the work.
~6 min → 1–3 min
Average clinical review time before and after AI-assisted evaluation - a projected 50–83% reduction per submittal review.
0 → 1
The number of centralized evaluation surfaces in Nova. Reviews previously happened over email with no system, no queue, and no consistency.
Consistent across 4 modules
The same AI experience designed to work across every Nova page where clinical evaluation happens - reducing judgment variance across reviewers.
Platform Architecture
A reusable Nova AI Panel component that any future AI agent can adopt - so the platform's AI investment compounds over time rather than starting over with each new use case.
Team Alignment
A research-backed proposal with specific responses to every technical and product constraint on the table - shifting the team from competing preferences to shared criteria and a clear path forward.
Evaluator Experience
A clinical evaluation workflow that previously happened over email, with no system and no consistency, is now a structured, AI-assisted experience - giving reviewers a faster, more confident path to a decision.
01
Domain shapes design - borrow carefully
Prescriptive AI in a clinical context demands a different standard than consumer summarization tools. I made a deliberate choice not to import patterns from lower-stakes products. The domain set the bar, and the design had to meet it - transparency, attribution, and human control weren't optional layers.
02
Architecture is a design decision
Where AI lives in a product shapes trust, discoverability, and every future AI feature that follows. Treating the entry point and component structure as seriously as the visual design changed the quality of the outcome - and gave the team a foundation they could actually build on.
