Redesigning an AI medical assistant to keep patients talking
Previsit.ai is an AI-powered medical assistant that gathers patient information before doctor appointments. Patients answer questions through a chatbot, so doctors walk in prepared. The problem: nearly half of the patients were quitting before finishing, and doctors were entering appointments with incomplete data.
57%
→
77%
Completion rate
3
→
4
Answer quality (out of 5)
77
Questions answered by completers
My role
Sole product designer at Previsit.ai, responsible for all design decisions across the product. For this project: conversation analysis, patient interviews, flow redesign, UI, copy, and interaction design.
User Experience
User Interface
Research
Conversation design
Copywriting
Before - 57% completion
Generic greeting. All questions at once. No progress. No time estimate.
After - 77% completion
Doctor-linked intro. Time estimate. One question at a time. Progress bar.
Patients didn't drop off because the chatbot asked too much. They dropped off because they couldn't see the end.
Analysing 30 conversations and interviewing 6 patients revealed a clear threshold: around 6 questions was the sweet spot before patients felt the chat was taking too long.
Constraint
This became the constraint: 6 core questions from the doctor, with AI handling up to 2 targeted follow-ups when answers are too vague. But this created a new problem - how do you show progress when the total number of questions isn't fixed?
The system still allowed doctors to add as many questions as they wanted, but it recommended 6.
THREE PROBLEMS, ONE CONNECTED SYSTEM
A time estimate sets expectations. A progress bar maintains momentum. AI follow-ups happen invisibly within that framework.
01
"About 3 minutes"
Patients wouldn't commit without knowing the time investment. A time estimate in the intro sets the expectation before the first question.
02
Progress bar, not question count
A fixed count like "3 of 8" breaks if AI adds follow-ups - suddenly it's "3 of 9" and trust is gone. A progress bar always moves forward, regardless of how many questions are asked.
03
AI follow-ups within the system
"I smoke" is useless without frequency. The AI asks targeted clarifications when medically needed - but the bar keeps progressing. The patient never sees the goalposts move.
In medical settings, a vague answer can be useless. "I take medication" means nothing without the dosage. The AI recognises incomplete answers and asks one targeted follow-up - but only when medically relevant. The patient sees the bar move forward, not a counter changing.
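To make the mechanic concrete, here is a minimal sketch (not the production logic; the function and parameter names are illustrative) of one way a progress value can keep moving forward even when follow-ups are inserted: each core question owns an equal slice of the bar, and an answered follow-up advances the bar partway into the current slice instead of changing the denominator.

```python
def estimate_progress(core_answered, core_total, followups_answered, followups_max=2):
    """Monotonic progress in [0, 1] for a flow with a variable question count.

    core_answered:      doctor-defined questions answered so far
    core_total:         total doctor-defined questions (e.g. the recommended 6)
    followups_answered: AI follow-ups answered since the last core question
    followups_max:      cap on follow-ups per core question (2 in this design)
    """
    # Each core question owns an equal slice of the bar.
    base = core_answered / core_total
    # A follow-up moves the bar part of the way into the current slice,
    # so the bar advances without ever revising the total.
    partial = min(followups_answered, followups_max) / (followups_max + 1) / core_total
    return min(base + partial, 1.0)
```

The key property is that answering a follow-up shows visible movement, yet never pushes the bar past where finishing the core question itself would land it - the goalposts never appear to move.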
57%
→