You found information online about your condition. Maybe it was in a patient support group where someone mentioned a treatment option your doctor has not discussed. Maybe an AI health tool surfaced a comorbidity connection you were not aware of. Maybe a blog post linked to a peer-reviewed study that contradicts something you were told.
Now what?
This is the moment where the value of online health resources is either realized or wasted. The information itself is only useful if it enters your clinical relationship — and how you bring it up matters.
Why Patients Hesitate
Research consistently shows that patients underreport their use of online health resources to clinicians. A 2022 survey in the Journal of Medical Internet Research found that only 35% of patients who used online health information discussed it with their healthcare provider, even though most found the information helpful.
Patients give several reasons for not mentioning online health resources to their doctors:
- Fear of being dismissed. "My doctor will think I'm a difficult patient" is the most commonly cited concern.
- Time constraints. Average primary care appointments are 15-18 minutes. Patients feel they cannot afford to spend time on information their doctor may not value.
- Perceived hierarchy. Many patients still operate under a model where the doctor is the expert and the patient is the passive recipient — bringing outside information feels like overstepping.
- Uncertainty about quality. Patients are sometimes unsure whether the information they found is reliable, and they do not want to embarrass themselves by bringing up something inaccurate.
- AI-specific stigma. Patients who used AI health tools may worry that doctors will view this as irresponsible or naïve.
What Doctors Actually Think
The physician perspective is more nuanced — and more positive — than most patients assume.
A 2023 survey published in The BMJ found that 78% of physicians said they welcomed patients who brought health information to appointments, provided the information came from identifiable sources. Physicians distinguished between:
- Patients who bring questions (welcomed by nearly all physicians)
- Patients who bring information from identifiable sources (welcomed by most)
- Patients who bring demands based on unverified claims (viewed less favorably)
Most physicians — especially those under 50 — have adapted to the reality that patients arrive informed. They prefer informed patients over uninformed patients. They just want the conversation to be collaborative rather than adversarial.
How to Bring Information From Support Groups Into Appointments
Step 1: Prepare Before the Appointment
- Write down your questions. Specific questions get specific answers. "Is this treatment an option for me?" is more productive than "What about this treatment?"
- Note the source. Whether the information came from a support group, an AI tool, or a blog post, noting the source helps your doctor assess the quality.
- Prioritize. You probably have 5-10 minutes for this discussion. Choose the 1-2 most important items.
- Frame as a question, not a conclusion. "I found this — what do you think?" invites collaboration. "I've decided to try this" closes it.
Step 2: Use Clear Language
Some phrases that work well in clinical conversations:
- "In my support group, another patient mentioned [treatment/approach]. Is that something that might apply to my situation?"
- "I used a health information tool that showed my condition is connected to [comorbidity]. I'd like to understand if that's relevant for my care."
- "I read a study about [intervention] for [condition]. Can you help me understand if the findings apply to me?"
- "Someone in my peer community had a different experience with [medication]. Should I be concerned about my current regimen?"
Step 3: Listen to the Response
Your doctor may:
- Agree that the information is relevant and adjust your care accordingly
- Explain why it does not apply to your specific situation (different severity, different type, contraindication)
- Need time to review the study or information before responding — which is a responsible reaction, not a dismissive one
- Disagree based on evidence you have not seen or clinical experience with the approach
Specific Scenarios
"I learned about a treatment option that was not discussed with me"
This is common. Treatment guidelines are complex, and physicians make judgment calls about which options to present based on their assessment of your case. Sometimes a treatment was not discussed because it does not fit your specific situation. Sometimes it was an oversight or a limitation of appointment time. Asking about it gives your doctor the opportunity to explain their reasoning or reconsider.
"AI said I might have a comorbidity that was not mentioned"
Tools like PatientSupport.AI, which uses the Harvard PrimeKG knowledge graph, can surface disease-disease relationships that may not have come up in clinical conversations. If an AI tool suggests a comorbidity connection, bring it up — but be clear that it came from an AI tool and you want your doctor's clinical assessment.
For context on how knowledge-graph-grounded AI works and why it is more reliable than generic chatbots (but still not infallible), see: How Knowledge Graphs Make Health AI More Accurate.
"Something I read online contradicts what you told me"
This requires particular sensitivity. Lead with curiosity, not accusation:
- "I found information that seems different from what you recommended. Can you help me understand the discrepancy?"
- "I want to make sure I'm understanding correctly — is there a reason [approach] wouldn't work for my case specifically?"
"I want to join a support group — will that help?"
The answer, based on extensive peer-reviewed evidence, is almost certainly yes — as a complement to clinical care. Most physicians support patient involvement in support groups. If your doctor discourages it without a specific clinical reason, that itself is worth discussing.
AI Health Tools: What to Tell Your Doctor
If you have used an AI health tool like PatientSupport.AI, you might say:
- "I used an AI tool that's grounded in a Harvard medical knowledge base to learn about my condition. Here's what it surfaced — I'd like your input on whether this is relevant."
- "I know AI tools can make errors, which is why I'm bringing this to you rather than acting on it independently."
Red Flags: When Online Information Should Concern You
Not all online health information deserves a place in your appointment. Be cautious about information that:
- Claims to cure your condition when established medicine says it cannot be cured
- Recommends stopping prescribed medication without medical supervision
- Comes from anonymous sources with no citations or credentials
- Is selling a product — particularly supplements or devices that "doctors don't want you to know about"
- Uses emotional manipulation — fear, urgency, conspiracy — rather than evidence
The Bigger Picture: Shared Decision-Making
The best clinical relationships operate on a model called shared decision-making: the physician brings clinical expertise and knowledge of treatment options; the patient brings knowledge of their own values, preferences, and life context; together, they arrive at decisions that are both medically sound and personally meaningful.
Patient support groups and grounded AI health tools contribute to this model by helping patients arrive at appointments better informed and better prepared to participate in their care decisions. The goal is not to replace the doctor — it is to be a better partner in your own care.
PatientSupport.AI provides health information grounded in the PrimeKG knowledge graph. It is free to use without an account. It is not a diagnostic or prescriptive tool and does not replace your healthcare provider's guidance. Always discuss treatment decisions with your medical team.