

previsit.ai
Problem
Patients frequently abandoned the Previsit.ai chatbot before completing pre-appointment questions, leading to incomplete medical data for doctors.
Executive Summary
Role
As a Product Designer, I analysed conversations, ran interviews, identified key pain points, and redesigned the chatbot flow to improve trust and completion rates.
Impact at a glance
Increased conversation completion from 57% (17/30) → 77% (23/30)
Ensured patients who completed answered 100% of the required questions
Improved perceived clarity and patience with chatbot interactions
Reduced drop-offs by addressing uncertainty, overload, and rushed pacing
Context & Problem
Previsit.ai is an AI-powered medical assistant used to gather patient information before doctor appointments.
Analysis showed:
13 out of 30 users quit before finishing conversations
Many provided incomplete answers, which limited the doctor’s preparation
Interviews revealed patients were frustrated by too many questions, lack of clarity on time, and rushed pacing
Research & Insights
Research protocol
Participants: 6 patients from 2 GP practices
Methods: Conversation analysis (30 transcripts), 1:1 interviews
Focus: Why patients abandoned chats
Key Insights:
Uncertainty about time — patients didn’t know how long it would take
Overload — too many questions at once
Rushed feeling — lack of pacing cues made the interaction stressful
Chat analysis


Process & Design Decisions
Conversation Flow Redesign
Added a loading bar/progress indicator so patients knew how far they were in the flow
Limited questions per step to one at a time for better focus
Introduced typing animation to simulate natural pauses and reduce stress
Reduced the total number of questions by merging and prioritising key ones (a minimal sketch of this flow follows the list)
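To make the redesigned flow concrete, here is a minimal sketch of how the one-question-per-step pacing, progress indicator, and typing pause could fit together. It is a sketch under assumptions: the step definitions, delay value, and helper names (showProgress, showTypingIndicator, askQuestion) are illustrative, not the production implementation.

```typescript
// Illustrative sketch of the redesigned flow: one question per step, a visible
// progress indicator, and a typing pause before each question. Step content,
// delay values, and function names are assumptions, not Previsit.ai's code.

interface Step {
  id: string;
  question: string; // exactly one question per step
}

// Reduced, prioritised question set (example wording is hypothetical).
const steps: Step[] = [
  { id: "reason", question: "What brings you in to see the doctor?" },
  { id: "duration", question: "How long have you had these symptoms?" },
  { id: "medication", question: "Are you currently taking any medication?" },
];

const TYPING_DELAY_MS = 1200; // simulated "assistant is typing" pause

const wait = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Stand-ins for the real chat UI; here they simply log to the console.
function showProgress(answered: number, total: number): void {
  const percent = Math.round((answered / total) * 100);
  console.log(`Progress: ${answered}/${total} questions (${percent}%)`);
}

function showTypingIndicator(): void {
  console.log("assistant is typing…");
}

function askQuestion(question: string): void {
  console.log(`assistant: ${question}`);
}

// Drives the conversation: progress update, typing pause, then one question.
async function runConversation(): Promise<void> {
  for (let i = 0; i < steps.length; i++) {
    showProgress(i, steps.length);
    showTypingIndicator();
    await wait(TYPING_DELAY_MS);
    askQuestion(steps[i].question);
    // The real product would wait here for the patient's answer.
  }
  showProgress(steps.length, steps.length);
  console.log("assistant: That's everything. Thank you!");
}

runConversation();
```

The point of the sketch is the ordering: the patient always sees where they are before the next question arrives, and the pause keeps the pacing from feeling rushed.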
Tone & Context
Reframed intro message: “I’m the medical assistant to Dr Smith…” → increased trust by making it feel personal and doctor-linked
Collaboration & Constraints
Worked with developers to improve the chatbot’s underlying prompt (a hedged prompt sketch follows below)
Balanced a shorter, friendlier UX against collecting the necessary medical data
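As an illustration of that collaboration, the sketch below shows one way the UX decisions could be encoded in the system prompt the developers maintain. The wording, structure, and the doctorName placeholder are assumptions for illustration; this is not the actual Previsit.ai prompt.

```typescript
// Hypothetical sketch of how the design decisions (one question at a time,
// empathetic tone, doctor-linked framing, visible pacing) could be expressed
// as prompt constraints. This is not the actual Previsit.ai prompt.

const doctorName = "Dr Smith"; // placeholder name

const systemPrompt = [
  `You are the medical assistant to ${doctorName}, gathering information before the patient's appointment.`,
  "Ask exactly one question per message and wait for the patient's reply before continuing.",
  "Acknowledge each answer briefly and with empathy (for example, \"I understand\") before moving on.",
  "Stick to the prioritised question list; do not add extra questions.",
  "Let the patient know how many questions remain so they can gauge the time needed.",
].join("\n");

console.log(systemPrompt);
```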
Old designs



Testing & Iterations
Round 1 — Prototype Testing
Feedback: Users appreciated one-at-a-time questions, but still felt impatient.
Change: Introduced typing animation to create natural pacing.
New designs

Round 2 — Refined Flow
Feedback: Users said the chatbot felt “friendlier” and “less stressful”.
Change: Adjusted microcopy for empathy (“I understand” vs. neutral responses).
Results & Impact
Metric | Before | After
Completion rate | 17/30 (57%) | 23/30 (77%)
Answer quality | 3.1/5 | 4.1/5
User sentiment | “Too long, rushed” | “Clear, friendly, easy”
Reflection & Learnings
Progress indicators reduce anxiety — medical users need transparency on time.
Pacing matters — slowing down with microcopy and animations can improve trust.
Constraints shaped design — balancing fewer questions with doctors’ needs was critical.
Takeaways
Previsit taught me that in sensitive, high-stakes contexts like healthcare, clarity and empathy are as important as efficiency. By redesigning the chatbot flow, I helped reduce drop-offs, improve data quality for doctors, and create a friendlier patient experience.
