From Answers to Guidance: Enhancing Health Conversations with Wayfinding AI Based on Gemini

Zakariae BEN ALLAL · Created on Tue Sep 30 2025
Visualization of a wayfinding AI with a dual-panel chat interface guiding health discussions

Introduction

Have you ever posed a health-related question to a chatbot, only to receive a lengthy, generic reply that didn’t quite address your specific concerns? If that sounds familiar, you’re not alone. Finding clear, tailored health information online can be challenging. Recent research from Google proposes a new approach: rather than jumping straight to an answer, an AI should start by asking insightful clarifying questions and then adapt its guidance as the conversation progresses. The researchers call this concept “wayfinding” AI, and their prototype, built on Gemini, explores this very idea.

In this article, we’ll delve into what the research team uncovered, the significance of their design, the changes observed in conversations, and how these insights can enhance everyday health interactions with AI. Our aim is to keep these findings accessible for everyone, while maintaining the rigor of the research.

Important Note: This research focuses on an experimental system for information and navigation—not on medical diagnosis or treatment. Always consult a qualified clinician for medical advice.

Why Online Health Searches Miss the Mark

Many individuals struggle to identify which details are medically relevant, often leading them to either overshare or undershare their concerns. This results in broad, impersonal, or irrelevant search outcomes and chatbot responses. Researchers at Google interviewed participants about their experiences, discovering that many had difficulty articulating their health concerns in a way that AI could effectively understand. Most preferred AI agents that initially asked questions to clarify goals and context before providing a complete answer. This method felt more personal and trustworthy.

The Core Idea: Context-Seeking Wayfinding

The research team frames their approach as wayfinding—navigating someone through a complicated topic by actively seeking context. Their prototype is grounded in three key design principles:

  • Proactive Conversational Guidance: At each interaction, the AI asks up to three targeted questions to minimize ambiguity and help users articulate a more complete health narrative.
  • Best-Effort Answers at Every Turn: The system shares the best available information based on the ongoing dialogue, while emphasizing that responses can evolve with new information.
  • Transparent Reasoning: The AI explains how each new detail refines the answer, allowing users to understand the logic behind the responses.

These principles transform the role of AI from simply providing answers to acting as a collaborative guide.
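The paper does not publish an implementation, but the three principles above map naturally onto a structured per-turn output. The sketch below is illustrative only; the class and field names are assumptions, not part of the research system.

```python
from dataclasses import dataclass

@dataclass
class WayfindingTurn:
    """Hypothetical shape of one agent turn, reflecting the three principles:
    a best-effort answer, proactive questions, and transparent reasoning."""
    best_info_so_far: str            # best-effort answer at every turn
    clarifying_questions: list[str]  # proactive guidance, capped at three
    reasoning_note: str              # why the guidance changed this turn

    def __post_init__(self):
        # Enforce the "up to three targeted questions" principle.
        if len(self.clarifying_questions) > 3:
            raise ValueError("A turn should ask at most three clarifying questions")

# Example turn for a dizziness query (content is illustrative):
turn = WayfindingTurn(
    best_info_so_far="Common causes of intermittent dizziness include inner-ear issues...",
    clarifying_questions=[
        "How long does each episode last?",
        "Have you had any recent infections or injuries?",
    ],
    reasoning_note="The answer will narrow once episode duration is known.",
)
```

Keeping the questions and the reasoning note as separate fields also makes the two-column layout discussed below straightforward to render.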

Why the Interface Matters as Much as the Model

A subtle yet significant design choice is the two-column layout: the chat appears on the left, while clarifying questions and the evolving “best information so far” reside on the right. This layout keeps questions visible and actionable rather than buried in long paragraphs. It also reflects how clinicians alternate between asking targeted questions and sharing explanations. Such minor interface decisions can determine whether users engage fully or miss critical prompts.

How the Team Tested the Idea

To evaluate whether wayfinding enhances health conversations, the researchers conducted a randomized user study with 130 adults in the United States who had real health inquiries. Each participant interacted with both the new wayfinding agent and a baseline chatbot powered by Gemini 2.5 Flash. After each session, participants rated their experience on criteria such as helpfulness, relevance, contextual tailoring, ease of use, and efficiency. This within-subject design let the team compare how the same person perceived each agent on the same topic.

What Changed in Conversations

Participants expressed a preference for the wayfinding agent across several key metrics, including helpfulness, relevance, goal understanding, and contextual tailoring. Conversations also grew more interactive; when exploring possible causes of symptoms, discussions averaged 4.96 turns with the wayfinding agent versus 3.29 turns with the baseline. In essence, incorporating clarifying questions not only maintained the pace of the dialogue but also fostered richer, more focused exchanges that yielded information closely aligned with users’ needs.

Why Context-Seeking Works for Health Topics

  • It Mirrors Clinical Reasoning: Clinicians begin with questions to refine diagnoses before providing explanations. Wayfinding adopts this method for consumer AI tools.
  • It Reduces Guesswork: Users often lack knowledge about medically relevant details. Structured follow-ups extract important context without requiring medical expertise.
  • It Builds Trust Through Transparency: Demonstrating how new information alters responses fosters a sense of understanding and accountability in the AI.

A Closer Look at the Design Playbook

For those developing health-centric or other high-stakes assistants, the study suggests several practical patterns:

  1. Ask Fewer, Better Questions
     • Limit clarifying questions to a small, consistent number, such as three.
     • Ensure each question is specific, answerable, and clearly worded.
     • Avoid double-barreled questions requiring multiple responses.
     • Use short, scannable prompts for quick user replies.

  2. Provide Iterative Value
     • Deliver a best-effort answer after each interaction, so users gain information promptly.
     • Explicitly mention how follow-up answers can enhance guidance.
     • Summarize the best information currently available in a stable panel that updates as context evolves.

  3. Show Your Work
     • Briefly explain the reasoning behind any changes in responses.
     • Connect each refinement to a specific user detail so the logic is easy to follow.
     • Keep explanations concise to avoid overwhelming users.

  4. Separate Questions from Answers
     • Position clarifying questions prominently to encourage responses.
     • Keep the conversational flow distinct from the evolving explanations.
     • Use headings, bullet points, and short paragraphs to prevent cognitive overload.
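The question-quality rules in the playbook can be checked mechanically. Here is a minimal lint sketch; the heuristics and thresholds (word cap, the " and " check for double-barreled wording) are my own illustrative assumptions, not criteria from the study.

```python
def lint_clarifying_questions(questions, max_questions=3, max_words=20):
    """Heuristic checks inspired by the playbook: cap the question count,
    flag likely double-barreled questions, and keep prompts scannable."""
    issues = []
    if len(questions) > max_questions:
        issues.append(f"too many questions ({len(questions)} > {max_questions})")
    for q in questions:
        # Two question marks, or an " and " joining clauses, often signal
        # a double-barreled question that asks for multiple answers at once.
        if q.count("?") > 1 or " and " in q.lower():
            issues.append(f"possibly double-barreled: {q!r}")
        if len(q.split()) > max_words:
            issues.append(f"not scannable (over {max_words} words): {q!r}")
    return issues

# A clean, single-purpose question passes; a compound one is flagged.
ok = lint_clarifying_questions(["How long does each episode last?"])
flagged = lint_clarifying_questions(["How often and how long does it last?"])
```

A check like this could run on model output before rendering, falling back to a regeneration request when issues are found.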

How This Relates to the Broader Health AI Landscape

The wayfinding initiative aligns with other research examining more capable, conversational health agents. For instance, Google has also explored AMIE, a research tool designed for multimodal diagnostic dialogue, capable of reasoning over text and medical visuals using Gemini. Though distinct from the wayfinding prototype, AMIE underscores the transition toward conversational systems that gather context and reason more like medical professionals.

At the consumer level, recent announcements indicate a movement toward more personalized health coaching experiences powered by AI. For example, the Fitbit app is launching an AI-driven coach capable of interactive conversations regarding fitness data and goals. While these features differ from health information wayfinding, they reflect a broader trend toward proactive, context-aware guidance. As always, proper evaluation, expert oversight, and robust privacy protections are crucial when AI intersects with health.

Key Findings at a Glance

  • Participants favored the wayfinding agent over a baseline Gemini 2.5 Flash chatbot in terms of helpfulness, relevance, goal understanding, and tailoring.
  • Conversations became more interactive, especially for questions about symptom causes, with average turns rising from 3.29 to 4.96.
  • Proactive, well-targeted clarifying questions alongside a two-column UI made it simpler for individuals to provide timely context.

What This Means for Teams Building Health Assistants

  • Shift from an answer-centric to a question-guided design. View each exchange as a step toward better answers, rather than final solutions.
  • Ensure context capture is effortless. Favor simple, yes-or-no prompts and brief free-text responses over lengthy forms.
  • Keep responses visible and dynamic. Present a single, continually updating summary of the best available information.
  • Balance empathy with efficiency. Short, direct prompts often feel more respectful than prolonged, generic replies.
  • Implement safety measures. Establish guardrails for emergencies, risky behaviors, and misinformation, and escalate to human care as necessary.

Limitations and Open Questions

  • Research Prototype, Not Clinical Tool: The findings stem from user studies in non-clinical environments. Results do not ensure clinical accuracy or safety.
  • Population and Topics: The randomized study included 130 US-based adults with self-identified health questions. Different populations or languages may yield varying outcomes.
  • Measurement Scope: While user preference and conversation length are significant, future research should also address clinical appropriateness, safety, and long-term outcomes.
  • Personalization vs. Privacy: Systems that retain user details or connect to personal data must prioritize strong controls and clear consent. Industry updates indicate that personalization features are expanding in consumer AI, raising essential privacy concerns.

How to Apply Wayfinding Patterns in Your Product

Here’s a practical blueprint you can adapt for your specific domain:

  1. Clarify User Intent Early
     • Start by asking about goals: relief, understanding a diagnosis, discussing concerns with a clinician, or preparing for an appointment.
     • Offer quick-pick intents, such as learning options, understanding risks, or preparing questions for the doctor.

  2. Gather Context in Small Steps
     • Limit questions to three per turn, and ensure each is specific.
     • Use structured inputs when applicable, like age ranges, timelines, or checkboxes for common concerns.
     • For health queries, acknowledge uncertainty and encourage professional care when necessary.

  3. Provide Iterative Answers
     • After each interaction, present the best information available, including relevant next steps and how to improve the response.
     • Highlight how new information modifies the guidance.

  4. Design the UI to Prevent Missed Questions
     • Keep clarifying questions consistently visible and distinct from answer sections.
     • Use short headings and bullet points to avoid overwhelming users with text.

  5. Evaluate with Real Users and Real Questions
     • Conduct within-subject studies, letting each person compare your agent against a baseline on the same issue.
     • Measure perceptions such as helpfulness and tailoring, along with behavioral indicators like conversation turn counts and follow-through.
     • Collect qualitative feedback to refine question quality and tone.
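A within-subject evaluation like the one described above can be summarized with a few simple aggregates. The session data below is made up for illustration; only the published 4.96-vs-3.29 average-turn comparison comes from the article itself.

```python
from statistics import mean

# Toy within-subject data: each participant tries both agents on the
# same question. All numbers here are fabricated for demonstration.
sessions = [
    {"participant": 1, "wayfinding_turns": 5, "baseline_turns": 3, "preferred": "wayfinding"},
    {"participant": 2, "wayfinding_turns": 4, "baseline_turns": 4, "preferred": "wayfinding"},
    {"participant": 3, "wayfinding_turns": 6, "baseline_turns": 3, "preferred": "baseline"},
]

# Behavioral indicator: average conversation length per agent.
avg_wayfinding = mean(s["wayfinding_turns"] for s in sessions)
avg_baseline = mean(s["baseline_turns"] for s in sessions)

# Perception indicator: share of participants preferring the wayfinding agent.
pref_rate = sum(s["preferred"] == "wayfinding" for s in sessions) / len(sessions)
```

Because each participant sees both agents, per-person differences can be compared directly, which is what makes the within-subject design statistically efficient.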

A Short Example: Wayfinding in Action

User: I have on-and-off dizziness and ringing in my ears. What could it be?

Agent responds with:
– Best Information So Far: A concise summary of common causes linked to dizziness and tinnitus, along with criteria for when to seek urgent care.
– Three Clarifying Questions: How long does each episode last? Have you experienced any recent infections, pressure changes, or injuries? Are you experiencing any hearing loss or nausea?
– Transparent Reasoning: A note indicating that answers will evolve as the user provides more details.

As the user replies, the agent updates its summary and explains the changes. This straightforward pattern consistently guides the conversation toward a more precise, individualized response without crossing into diagnostic territory.
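The update loop in this example can be sketched in a few lines. Note that `generate_summary` is a placeholder standing in for a model call (for instance, to Gemini); it is not a real API, and the function names are my own.

```python
def generate_summary(context):
    """Placeholder for a model call: a real system would prompt an LLM
    with the accumulated context and return updated best-effort guidance."""
    return "Best information so far, given: " + "; ".join(context)

def wayfinding_step(context, user_reply):
    """One turn of the loop: fold the reply into the context, refresh the
    summary, and record a transparent note tying the change to the detail."""
    context = context + [user_reply]
    summary = generate_summary(context)
    reasoning = f"Updated because you mentioned: {user_reply!r}"
    return context, summary, reasoning

# Starting context from the dizziness example above:
context = ["on-and-off dizziness", "ringing in ears"]
context, summary, why = wayfinding_step(context, "episodes last about a minute")
```

The key design point is that each refinement carries its own reasoning note, so the user always sees which detail changed the guidance.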

Responsible Use and Safety

Health issues are high-stakes. Even with improved conversations, AI guidance might remain incomplete, outdated, or incorrect. Researchers emphasize that this is a research prototype for information seeking rather than a clinical tool. Systems should incorporate safety measures for emergencies, disclaimers regarding limitations, and clear pathways to human assistance. Evaluations should assess not just user preferences but also appropriateness, bias, and safety in real-world contexts.

Where This Could Go Next

  • Multimodal Context: Combining text with images or documents could further enhance context capture, as suggested by other research initiatives.
  • Memory and Personalization: With appropriate consent and privacy safeguards, remembering user preferences or past context could reduce repetitiveness and enhance continuity. Industry trends indicate a movement in this direction, although strong safeguards are essential.
  • Beyond Health: Wayfinding patterns may assist in navigating taxes, legal documents, benefits, education – any field where asking the right questions is as critical as providing the right answers.

Conclusion

The most effective health agents won’t just talk at us; they will engage in meaningful dialogues. Google Research’s wayfinding prototype illustrates that by asking insightful questions, updating answers in light of new context, and maintaining transparent logic, health conversations can become more relevant and trustworthy. This framework is a practical blueprint for product teams to implement today, as the field continues to evaluate safety and real-world impacts.

FAQs

Q1: Is the wayfinding agent a medical device or a substitute for a doctor?
A1: No. It is an early research prototype designed to help users find relevant health information; it is not a diagnostic tool and not a replacement for professional medical advice.

Q2: What models power the agent?
A2: The prototype is built on Google’s Gemini models. The user study compared the wayfinding agent to a baseline chatbot powered by Gemini 2.5 Flash.

Q3: Why limit to three clarifying questions per turn?
A3: Research indicates users engage more when questions are direct and visible. A predictable, small number offers depth without complicating the interaction.

Q4: Did users actually prefer this approach?
A4: Yes. Participants in a randomized study with 130 US adults preferred the wayfinding agent for its helpfulness, relevance, goal understanding, and contextual tailoring. Conversations were also longer and more context-rich.

Q5: How does this differ from other health AI projects at Google?
A5: This project focuses on helping laypeople seek information through better dialogue. Other initiatives like AMIE examine multimodal diagnostic dialogue within clinical contexts. While both emphasize enhanced context gathering and reasoning, they cater to different scenarios.

Disclosure and Care Guidance

This article summarizes research findings for educational purposes only. It does not provide medical advice. If you have health concerns, please consult a qualified healthcare professional.
