Aya Healthcare

The hardest part of introducing AI to Nova wasn't the technology. It was designing an experience complex enough to earn clinical trust, consistent enough to work across every module, and scalable enough to outlast the first use case.

Client Project

Product Design

Lead Designer

Enterprise Internal Platform

Feb 2026 - Present

Problem

A legacy platform. A new AI capability. No obvious home for it.

Nova is Aya Healthcare's internal enterprise platform - used daily by recruiters, account managers, and clinical staff to evaluate traveling clinicians against complex healthcare job requirements. As the volume of clinical evaluations grew, leadership saw an opportunity for AI to meaningfully accelerate that process.

But the design question wasn't what the AI should surface. It was where it should live, how users across different roles and pages would access it, and how to build it in a way that could scale well beyond this first use case.

The foundation: Match Breakdown

At the core of every clinical evaluation is the Match Breakdown - a component that already lived across four Nova modules (Clinical, Job Submittal, Account Manager Live List, and Job Details). It compares a candidate's certifications, skills, work experience, and other requirements against what a job actually demands, surfacing where a candidate meets the bar, where they fall short, and where manual review is still needed.

This is exactly the content the AI agent would analyze. The AI doesn't replace the Match Breakdown - it reads it, interprets it, and surfaces a verdict, flagged risks, and suggested next steps on top of it. Which meant the AI experience couldn't live in isolation. It had to be present wherever Match Breakdown was - across all four modules - and it had to feel just as consistent.

The complication

Match Breakdown already behaved differently depending on where you were: a persistent side panel on the Clinical page, a modal triggered by a match percentage chip on Recruiting and AM pages. Any AI integration had to reckon with this split - and resolve it, not compound it.

High stakes, low tolerance for error

Clinical placements have real consequences. An AI layer users didn't trust - or that felt inconsistent - would simply be ignored. Trust had to be designed in, not assumed.

Designed to scale from day one

Clinical evaluation was use case one. Nova's AI ambitions extended further. The architecture had to support future agents without requiring a new design pattern each time.

Solution

Architecture first. Interface second.

Before any UI exploration, I established a set of design principles to govern how AI should behave in this context - especially given the clinical stakes involved. The solution that followed wasn't just a new panel; it was a reusable AI component architecture for the Nova platform.

AI as assistant, not authority

Every AI output is a starting point for human judgment - never a conclusion. Insights are labeled, caveated, and positioned as inputs to the evaluator's decision, not replacements for it.

Transparency is non-negotiable

Every AI-generated element carries an explicit badge, a source disclaimer, and a note that the AI may make mistakes. At clinical stakes, trust is the foundation - not a nice-to-have.

Progressive disclosure over information density

A verdict and top-line insights surface first. Detailed breakdowns live in tabbed sub-sections. Evaluators control how deep they go - the AI doesn't front-load everything it knows.

Design for the platform, not just the feature

The Nova AI Panel was designed as a reusable component from day one - so future AI use cases can plug in without requiring a new surface each time.

The Nova AI Panel

A dedicated, consistent home for AI - across every module.

Rather than embedding AI content inside an existing component, the solution gives the AI agent its own first-class surface: a contextual side sheet that opens consistently across all four Nova pages, with the same entry point, the same interaction pattern, and the same visual language regardless of which module the user is in.

The panel leads with a plain-language verdict - a clear AI recommendation before any detail. Below it, AI Insights surface the most important flags and patterns, visually distinct so users always know what's AI-generated. Suggested Actions translate those insights into concrete next steps. And for users who want to go deeper, tabbed Evaluation Details break down the full picture by Certs & Licenses, Skills, Work Experience, and Unit Information - without front-loading everything at once.


Engineering Alignment

Solving state persistence before it became a blocker.

The engineering team's core concern: closing and reopening the AI panel could trigger a re-run of the analysis mid-session, potentially returning different results and eroding user trust. Rather than waiting for this to surface as a blocker in review, I arrived with a specific technical proposal.

Session-level caching keyed by [candidateId + jobId] keeps AI results stable for the duration of a session. A manual refresh control in the panel header gives users who want fresh analysis an explicit way to get it. Predictable, transparent, user-controlled - and it shifted the design review conversation from "is this feasible?" to "here's how we build it."


Planning

How we phased the build.

Phase 1

Read-Only AI Panel

AI verdict, insights, and suggested actions across all four modules. Session caching and manual refresh. Page-aware feedback logic - enabled where submittals exist, handled via an alternate mechanism where they don't. Clinical evaluation as the initial use case.

Phase 2

Interactive Feedback Loop

Row-level thumbs up/down active across relevant pages. Feedback history surfaced within the evaluation panel. The model improvement loop closes for the first time - evaluator judgment feeds back into AI quality.

Phase 3

Platform-Wide AI Agent (Planned)

Nova AI Panel extended to non-clinical use cases. Inline micro-signals on list views for at-a-glance AI signaling. Conversational query capability for deeper candidate analysis. The architecture scales beyond its origin.

How I used AI in my process

AI wasn't a shortcut in this process - it was a thinking partner. I used it in three specific ways that meaningfully shaped the direction of the work.

Early Ideation

In the early stages of the project, I used AI to rapidly explore how enterprise products surface AI agents - generating a range of entry point patterns and interaction models to pressure-test assumptions before committing to a direction in Figma. It helped me move through a wider solution space faster, so the concepts I brought into team conversations were more considered and better differentiated.

Design Iteration & Comparison

Throughout the design process I used AI to generate and compare layout variations of the panel - testing different information hierarchies, content prioritization, and component structures side by side. What would have taken multiple rounds of Figma iteration was compressed into focused working sessions, freeing me to spend more time on the harder judgment calls: what the AI should surface, in what order, and why a clinician would trust it.

Research Partner

I used AI as a live research partner to explore best practices for designing AI agent experiences - querying patterns around trust, transparency, and human-in-the-loop interaction in enterprise contexts. It helped me quickly identify the principles most relevant to a high-stakes clinical environment, turning what could have been days of independent research into a focused conversation that directly shaped the work.

Metrics

By the numbers

The case for this project was clear from the start - manual reviews were slow, inconsistent, and happening over email with no system to support them.

~6 min → 1–3 min

Average clinical review time before and after AI-assisted evaluation - a projected 50–83% reduction per submittal review.

0 → 1

The number of centralized evaluation surfaces in Nova. Reviews previously happened over email with no system, no queue, and no consistency.

Consistent across 4 modules

The same AI experience designed to work across every Nova page where clinical evaluation happens - eliminating judgment variance across reviewers.

Outcomes

An AI architecture Nova can build on.

The work produced more than a feature design - it established a scalable AI component, a set of interaction principles for high-stakes AI contexts, and a team dynamic that made future design conversations faster and more collaborative.

Platform Architecture

A reusable Nova AI Panel component that any future AI agent can adopt - so the platform's AI investment compounds over time rather than starting over with each new use case.

Team Alignment

A research-backed proposal with specific responses to every technical and product constraint on the table - shifting the team from competing preferences to shared criteria and a clear path forward.

Evaluator Experience

A clinical evaluation workflow that previously happened over email, with no system and no consistency, now has a structured AI-assisted experience - giving reviewers a faster, more confident path to a decision.

Reflection

What this project taught me.

01

Domain shapes design - borrow carefully

Prescriptive AI in a clinical context demands a different standard than consumer summarization tools. I made a deliberate choice not to import patterns from lower-stakes products. The domain set the bar, and the design had to meet it - transparency, attribution, and human control weren't optional layers.

02

Architecture is a design decision

Where AI lives in a product shapes trust, discoverability, and every future AI feature that follows. Treating the entry point and component structure as seriously as the visual design changed the quality of the outcome - and gave the team a foundation they could actually build on.

Ready to build something together?

Let's connect!


Ashley Carmen Uy • Lead Product Designer