........................................................................
ChatGPT Health incorrectly triages more than half of medical emergencies and frequently fails to detect suicidal ideation, a study has found.
OpenAI launched ChatGPT Health in January 2026, allowing users in the US to connect their medical records to receive health advice.
It is used for health advice by around 40 million US adults each day, according to figures from OpenAI.
But an independent safety evaluation, published in Nature Medicine on 23 February, found that the AI tool under-triaged 52% of ‘gold-standard emergencies’.
This included directing patients with diabetic ketoacidosis and impending respiratory failure to 24–48-hour evaluation rather than the emergency department.
https://www.digitalhealth.net/2026/02/c … ergencies/
Severity: 95
........................................................................
Probably due to inexperienced staff making low-quality queries.
If you rely on diagnosis by foreign doctors with false credentials... Did they allow for regional 'recovery' statistics? Or some med school dropout, or a Bahamas med school, GI Bill doctor?
........................................................................