Aura
Know your health. Own your care.
Designing an AI-native conversational assistant inside a patient portal - so anyone can understand their records, make sense of a visit, and feel in control of their own care.
Passion Project
AI Native
Lead Designer
Web · iOS
April 2026

Introducing Aura
Aura is a fictional patient health portal built on a single belief: your health information belongs to you - and you should be able to understand it. Unlike existing portals built for clinical documentation workflows, Aura is designed from the patient's point of view - centered on comprehension, not compliance.
This case study focuses on the design of Aura's core AI feature: a conversational health assistant embedded directly in the portal, allowing patients to ask plain-language questions about their records, understand their visit summaries, and get the context they need - without a medical degree.
Problem
The moment everything falls apart
A patient leaves their doctor's appointment. They log into their portal to review what was discussed. They find a PDF discharge summary written in clinical shorthand, three lab results with values they can't interpret, and a medication update with no context. They close the tab.
This is the defining failure of every patient portal in use today. They were built to give physicians documentation infrastructure - patients were an afterthought. The result is a system where access to information exists, but comprehension does not.
Up to 80%
of medical information patients receive during a visit is forgotten immediately after leaving
50%
of what is retained is remembered incorrectly - wrong doses, wrong follow-up timelines
9 in 10
adults have difficulty using everyday health information when presented in clinical language
The Core Tension
Physicians now have AI tools that document visits for them in seconds. Patients still have nothing that helps them understand what was documented. The information gap is widening, not closing.
The Competitive Landscape
Tools like Abridge, DeepScribe, and Nuance DAX are solving documentation for physicians - and doing it well. Oracle Health recently announced AI in its patient portal for record Q&A. But no product has yet designed this experience from the patient's perspective first - with the interaction design, trust signals, and plain-language guardrails that a non-clinical user actually needs.
The physician-side ambient AI problem is largely solved. The patient-side comprehension problem hasn't even been seriously designed yet. That's where Aura sits.
Research
What patients actually experience
Research spanned four methods:
Contextual inquiry with 6 participants navigating their real portals (MyChart, athenahealth, a regional hospital system) while thinking aloud.
Intercept interviews with 4 people recruited from doctor visits, centered on one question: "After your last visit, what did you wish you could ask someone?"
Competitive UX audit of MyChart, athenahealth, Oracle Health, and Apple Health Records.
Secondary research from AHRQ, NIH, and JAMA on health literacy rates, information retention, and patient trust in AI-generated health information.
Key Insights
They don't know what they don't know
Patients can't query what confuses them - they lack the vocabulary. The AI needs to surface what matters without waiting to be asked.
AI trust is fragile
"Is this from my chart or did it make this up?" Source transparency isn't a feature - it's the foundation.
Post-visit = peak anxiety
The 24–48 hours after a visit is when questions peak and answers are hardest to find. The highest-value moment for AI.
Patients want to know what to do
Understanding what happened isn't enough. "What should I watch for?" "When do I call?" These are the real questions.
Language is the barrier, not tech
Portals aren't hard to use - the content inside requires clinical literacy most patients don't have.
Strategy
What Aura's AI must never do
The strategic framing for this feature starts not with what to build, but with what to protect against. AI in healthcare carries specific failure modes that erode trust catastrophically - and a lead designer's job is to name them before touching any interface.
Design Principles
Show your sources, always
Every AI response is anchored to a specific document, lab result, or visit note from the patient's own chart. No answer exists without a citation. Patients can always see exactly where the information came from.
Never diagnose, never prescribe
Aura explains and contextualizes - it never interprets symptoms into diagnoses or suggests medication changes. The guardrail is hard and always visible. "This is what your chart says. Your doctor is the right person to interpret it."
Proactive, not passive
Don't wait for patients to ask. Surface what's new, what changed, what needs attention - especially in the 48 hours after a visit. The assistant should feel like it's looking out for you, not waiting to be interrogated.
Meeting patients where they are
Not everyone has a clinical background, and they shouldn't need one. Aura translates medical language into plain terms, so every patient - regardless of education or experience - feels informed, not overwhelmed.
Human escalation is always one tap away
The AI is not a replacement for a provider. Every interaction has a visible path to message your care team, find urgent care, or call the office. Aura augments the relationship with your doctor - it doesn't substitute for it.
Feature Scope
Conversational chart Q&A
Ask anything about your records in plain language. "What did my doctor say about my blood pressure?" "When was my last tetanus shot?"
Post-visit AI summary
After each visit, Aura surfaces a plain-language recap: what was discussed, what changed, what to do before the next appointment.
Lab result plain-language explainer
When a new lab result arrives, Aura explains what it measures, what your result means in context, and flags if anything is outside your personal baseline.
Proactive visit prep
Before an upcoming appointment, Aura surfaces questions you might want to ask based on recent results and open items from your last visit.

Key Design Decisions
Source citations inline
Every AI response includes a subtle but always-present link to the source document in the chart. Tap it and you go directly to the relevant note or result. Non-negotiable for trust.
Suggested questions to reduce blank-page anxiety
The empty state of the chat surfaces 3–4 contextually relevant questions based on recent activity - "You have a new lab result. Want me to explain it?" Eliminates the "I don't know what to ask" barrier.
Post-visit summary as a push moment
Rather than burying the summary in a menu, Aura sends a notification 2 hours after a visit: "Your visit summary is ready - 3 things to know from today." Opens directly into a scannable, plain-language digest.
Guardrail language that doesn't feel like a legal disclaimer
The hardest copy problem in the project: testing multiple versions of how Aura communicates its limits - warmly, not defensively.

Research Partner
Exploring best practices
I used AI as a live research partner to explore UX best practices for designing AI agent experiences - querying patterns around trust, transparency, progressive disclosure, and human-in-the-loop interaction in healthcare contexts. It helped me build a principled foundation before touching any interface.
Early Ideation
Pressure-testing the concept
In the early stages, I used AI to rapidly explore how existing products surface health information to patients - generating a wide range of interaction models and entry point patterns to pressure-test assumptions before committing to a direction. It compressed weeks of exploratory thinking into focused working sessions.
Design Iteration
Refining the conversation
The hardest design problem was the copy - how does Aura communicate its limits warmly, not defensively? I used AI to generate and compare dozens of variations of guardrail language, empty states, and response tone until the voice felt human enough to trust. That's the kind of iteration that's nearly impossible to do manually at speed.
Projected Clinical Impact
Research shows 40-80% of visit information is forgotten immediately. A post-visit AI summary delivered within hours of the visit directly addresses this - giving patients a reliable record they can return to, share with family, and act on.
Equity Impact
Aura's plain-language design has the greatest impact for patients who've historically been most underserved - older adults, non-native English speakers, and those navigating complex health journeys without clinical backgrounds. Clarity isn't a feature. It's equity.
01
What Worked
Starting from the patient's anxiety - not the portal's information architecture - kept every design decision grounded. The "what do I do now?" framing consistently surfaced better solutions than "how do we display this data?"
02
What's Next
The caregiver use case: a family member accessing Aura on behalf of an aging parent with proxy consent. The same AI layer, but with a fundamentally different permission model and emotional context.
Key Learnings
Designing for trust is designing for language.
The visual design of the AI was the easy part. The hardest, highest-impact work was getting the copy right - every word Aura says either builds or erodes confidence.
Proactive beats reactive - always.
Waiting for patients to ask questions assumes they know what to ask. Designing Aura to surface the right thing at the right moment was the decision that made the biggest difference in testing.
Guardrails are a design problem, not a legal one.
Every AI health product needs limits - but how those limits are communicated determines whether patients feel protected or dismissed. The boundary is necessary. The tone is a choice.
