Most pulmonologists don’t think of “patient engagement” as practice management.

Until you’re in clinic trying to explain why a patient’s COPD keeps flaring… and you realize the biggest failure wasn’t your inhaler choice.

It was everything that happened between visits.

OpenAI is launching ChatGPT Health: a dedicated ChatGPT experience where patients can connect their medical records and wellness apps, then ask questions grounded in their own data.

For pulmonary and critical care practices, this is a preview of a new “engagement layer” sitting between patients and the system.

The real problem: the space between encounters is where pulmonary patients decompensate

Pulm and ICU care is chronic disease management plus high-stakes transitions.

In outpatient practice, outcomes depend on boring things happening reliably:

  • Controller meds getting taken

  • Follow-up tests happening (spirometry, imaging, sleep studies)

  • Side effects getting caught early

  • Patients knowing when to escalate before they show up in the ED

In critical care, outcomes depend on transitions and comprehension:

  • Patients and families understanding what happened in the ICU

  • Post-ICU medication changes not getting reversed accidentally

  • Follow-up getting scheduled (post-ICU clinic, pulm follow-up, sleep follow-up)

We lose people in the dead zone: portal messages, discharge instructions, and lab results that no one can interpret.

What OpenAI is building (nuts and bolts)

ChatGPT Health is a separate “Health” space inside ChatGPT that can be grounded in connected data, including:

  • Medical records (labs, visit summaries, clinical history)

  • Apple Health (activity, sleep, movement)

  • Wellness apps such as MyFitnessPal, Weight Watchers, Function, and Peloton

OpenAI says conversations in Health are not used to train its foundation models, and Health runs with additional privacy protections and isolation.

In plain English: they’re trying to make “What does this lab mean?” a ChatGPT question instead of a portal question.

Why pulm/crit practices should care

If a patient starts using ChatGPT Health, the practice may see:

  • Fewer “I didn’t understand my labs” calls

  • More prepared clinic visits

  • Better adherence (especially in COPD/asthma) because patients finally understand the why

  • More questions outside clinic visits (which could be good care or a workload bomb)

But in pulmonary medicine, more engagement is not automatically better engagement.

Outpatient pulmonology: the upside and the failure modes

Where it could help

  • COPD and asthma education at scale: helping patients understand controller vs rescue meds, exacerbation red flags, and why adherence matters

  • Trend interpretation: symptoms + wearable sleep/activity data + oximetry (if connected) could help patients recognize deterioration earlier (see the toy sketch after this list)

  • Pre-visit preparation: sharper questions and clearer goals for a visit
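
To illustrate the kind of trend logic this implies (nothing OpenAI has published; a toy sketch with made-up window sizes and a made-up 3-point threshold), here's how a sustained drop in nightly oximetry against a patient's own baseline might be flagged:

```python
# Toy illustration of baseline-vs-recent trend detection on nightly SpO2.
# The window sizes and drop threshold are arbitrary assumptions, not
# clinically validated cutoffs.

from statistics import mean

def spo2_deterioration(nightly_spo2: list[float],
                       baseline_days: int = 14,
                       recent_days: int = 3,
                       drop_threshold: float = 3.0) -> bool:
    """Flag when the recent average sits well below the patient's own baseline."""
    if len(nightly_spo2) < baseline_days + recent_days:
        return False  # not enough history to establish a baseline
    baseline = mean(nightly_spo2[-(baseline_days + recent_days):-recent_days])
    recent = mean(nightly_spo2[-recent_days:])
    return baseline - recent >= drop_threshold

# A stable two weeks around 95%, then three nights around 91% -> flagged.
readings = [95.0] * 14 + [91.0, 91.5, 90.5]
print(spo2_deterioration(readings))  # True
```

The value isn't the arithmetic; it's that a personal baseline turns a raw number into a signal a patient can act on earlier.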

Where it could backfire

  • Miscalibrated escalation: patients may under-escalate because an answer sounded reassuring, or over-escalate and flood clinics with low-yield messages

  • Over-indexing on numbers: wearable trends without context can create anxiety and unnecessary utilization

  • Medication nuance: pulmonary meds look simple until they aren’t (ICS risk, steroid bursts, drug interactions, tachycardia, adherence masking worsening disease)

The key risk is “better info” turning into more noise unless the practice builds a way to channel it.

Critical care: engagement is mostly about transitions, not wellness

ICU patient engagement is about making sure the handoff from ICU → floor → home isn’t a slow-motion disaster.

Where it could help

  • Summarizing ICU course for families: translating ICU jargon into understandable language

  • Post-discharge clarity: what changed, what to watch for, when to follow up

  • Medication reconciliation support: helping patients understand why meds were started or stopped

Where it could backfire

  • False confidence: a clean summary can hide uncertainty and nuance

  • Accountability gaps: when something goes wrong, who is responsible for what the patient was told?

The incentives and privacy question (the part we shouldn’t ignore)

If this becomes a widely used engagement layer, we need to ask the uncomfortable question:

Who pays for it, and who benefits financially?

It’s not hard to imagine downstream models where:

  • Health systems pay to be the “recommended” destination for certain conditions in a geography.

  • Pharma advertises directly to patients based on what they ask.

This is the internet’s default playbook.

What pulm/crit practices should do now (practical takeaways)

If ChatGPT Health (or similar tools) spreads, the winning practices will be the ones that build guardrails so engagement stays tethered to clinical care.

  1. Decide what you will and won’t respond to

    • Set expectations for portal use when patients are using external AI tools.

  2. Create “re-engagement triggers”

    • For asthma/COPD: steroid bursts, rescue overuse, missed follow-ups, repeated symptom escalation (a minimal code sketch follows this list).

  3. Standardize education packets that patients can verify against

    • If patients ask AI, at least have a practice baseline for meds, devices, and red flags.

  4. Plan for the new workflow reality

    • More informed patients can improve care.

    • But it can also increase message volume unless you design for it.
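
To make trigger #2 concrete, here is a minimal sketch of what rule-based re-engagement triggers could look like in code, assuming a practice can pull refill, visit, and portal-message counts into one place. Every field name and threshold below is an illustrative placeholder, not a validated protocol:

```python
# Minimal sketch: rule-based re-engagement triggers for a COPD/asthma panel.
# Field names and thresholds are illustrative assumptions, not a validated
# protocol; each practice would substitute its own data feeds and cutoffs.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PanelPatient:
    name: str
    steroid_bursts_90d: int        # oral steroid courses, last 90 days
    rescue_fills_30d: int          # rescue inhaler fills, last 30 days
    last_followup: date | None     # most recent completed pulm visit
    escalation_msgs_30d: int       # symptom-escalation portal messages

def reengagement_flags(p: PanelPatient, today: date) -> list[str]:
    """Return the trigger rules this patient currently meets."""
    flags = []
    if p.steroid_bursts_90d >= 2:
        flags.append("repeated steroid bursts")
    if p.rescue_fills_30d >= 2:
        flags.append("rescue inhaler overuse")
    if p.last_followup is None or today - p.last_followup > timedelta(days=180):
        flags.append("missed/overdue follow-up")
    if p.escalation_msgs_30d >= 3:
        flags.append("repeated symptom escalation")
    return flags

# Example: two steroid bursts plus an overdue visit -> two outreach triggers.
patient = PanelPatient("J.D.", steroid_bursts_90d=2, rescue_fills_30d=1,
                       last_followup=date(2025, 1, 10), escalation_msgs_30d=0)
print(reengagement_flags(patient, today=date(2025, 9, 1)))
```

The code itself is trivial; the work is deciding which triggers warrant a call versus a message, and who owns the outreach.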

In summary

ChatGPT Health is OpenAI’s bet that patient engagement will be won by the conversational layer, grounded in personal data.

For pulmonologists and intensivists, the upside is real: fewer confused patients and better between-visit continuity.

But the risk is also real: noise, trust drift, and incentives that may not align with patient care.

This is practice management now—whether we like it or not.

