Bad Health Information

A new test of OpenAI's ChatGPT Health tool revealed an integration flaw that produces inconsistent health grades.  The report highlights the risks of using AI to analyze wearable data without medical context, oversight, or clearly stated limits.

OpenAI recently announced its new tool, ChatGPT Health, and a newly discovered integration flaw has already raised serious concerns about it.  Recent testing exposes the limitations of AI in medicine and renews debate over how artificial intelligence should handle sensitive health information.[1]

ChatGPT Health integration flaw shows the limitations of AI in medicine - Earlier this month, OpenAI announced that it had begun testing the ChatGPT Health feature.  It is a separate section within the AI-powered chatbot, designed specifically for handling personal data and medical reports.  The service can connect to Apple Health, MyFitnessPal, Peloton, and other fitness apps, with the goal of giving more personal answers based on activity, sleep, and heart data.  OpenAI has emphasized that the tool is optional and meant to inform, not to diagnose or replace doctors.

To see its real-world performance, a reporter from the Washington Post tested the feature using long-term data from an Apple Watch.  When the reporter asked the AI to rate heart health, it gave a failing grade, flagging a high risk of heart disease, an alarming result.  The tester later spoke with a real doctor, who flatly rejected the score and said the actual risk of heart disease was very low.

AI is still not reliable enough to interpret a medical record on its own - The test shows how an integration error can arise when a system turns raw data into grades without complete medical context.  Cardiologist Eric Topol of the Scripps Research Institute called the assessment baseless, warning that people might panic over poor scores or feel falsely reassured by good ones.  The chatbot also gave different grades when the same question was asked again using the same medical record.
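
To make that failure mode concrete, here is a minimal Python sketch, hypothetical and not OpenAI's actual pipeline, of what grading raw wearable readings against a single fixed cutoff looks like when age, sex, fitness level, and other clinical context are missing.  Every threshold, value, and function name below is an illustrative assumption.

```python
# Hypothetical illustration only -- NOT ChatGPT Health's actual logic.
# Grading raw wearable data against one fixed, context-free threshold
# can turn perfectly ordinary readings into an alarming "failing" score.

from statistics import mean

def naive_heart_grade(resting_hr_samples: list[float]) -> str:
    """Grade resting heart rate with a single, context-free cutoff."""
    avg = mean(resting_hr_samples)
    # A fixed cutoff ignores age, sex, fitness, and medications --
    # exactly the missing medical context described above.
    return "F" if avg > 62 else "A"  # cutoff is an arbitrary assumption

def contextual_note(avg_hr: float, age: int, athlete: bool) -> str:
    """With context, the same number reads very differently."""
    if athlete and avg_hr < 75:
        return "Typical for a trained athlete; no concern indicated."
    return f"Average resting HR {avg_hr:.0f} bpm; interpret with a clinician."

samples = [64, 66, 63, 65, 67]  # ordinary resting heart rates, in bpm
print(naive_heart_grade(samples))             # -> "F" (alarming, but baseless)
print(contextual_note(mean(samples), 45, False))
```

The point of the sketch is that the data itself is unremarkable; only the context-free grading rule makes it look dangerous.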

When asked repeatedly, the chatbot even forgot basic details such as age and gender, despite those details being present in the conversation.  No matter how good AI tools become, consulting a real doctor for diagnosis and treatment remains the sensible and reliable option.
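
One way to surface the inconsistency the reporter observed is a simple repeat-query harness: send the identical prompt several times and compare the grades that come back.  The sketch below uses the real openai Python client, but the model name, prompt, and grade-extraction regex are assumptions for illustration, not the Washington Post's methodology.

```python
# Sketch of a repeat-query consistency check (assumed model and prompt;
# requires OPENAI_API_KEY in the environment).
import re
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Patient: 45-year-old male. Average resting heart rate 65 bpm over "
    "90 days. Rate cardiovascular health with a single letter grade A-F."
)

def ask_for_grade() -> str | None:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice for this sketch
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = resp.choices[0].message.content or ""
    match = re.search(r"\b([A-F])\b", text)  # crude grade extraction
    return match.group(1) if match else None

grades = [ask_for_grade() for _ in range(5)]
print(grades)
# If the grades differ across identical runs, the tool is not giving a
# stable assessment of the same record -- the behavior reported above.
```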

This article is shared at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com    

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://register.gotowebinar.com/register/5207428251321676122

[1] https://www.androidheadlines.com/2026/01/chatgpt-health-integration-flaw-highlights-limits-of-ai-medical-insight.html
