OpenAI’s ChatGPT is in no danger of replacing family pediatricians, as it failed to accurately diagnose most hypothetical pediatric cases put to it. A study published in JAMA Pediatrics on Jan. 2 by researchers from Cohen Children’s Medical Center in New York found that the bot had an 83 percent error rate across the cases tested.

The study tested ChatGPT on 100 pediatric case challenges originally published as diagnostic exercises for physicians. The bot gave outright incorrect diagnoses in 72 of the 100 cases and, in another 11, answers too broad to be considered entirely correct, which together account for the 83 percent error rate.

The researchers attribute this failure to the AI’s inability to recognize links between medical conditions and a patient’s external or pre-existing circumstances, a skill crucial to diagnosis in a clinical setting. For instance, ChatGPT failed to connect “neuropsychiatric conditions” such as autism to common cases of vitamin deficiency and other conditions caused by restrictive diets.

The study concludes that ChatGPT requires ongoing training and the involvement of medical professionals to ensure it learns from vetted medical literature and expertise, rather than relying solely on internet-sourced information, which is often rife with misinformation.

Although AI-based chatbots have shown promise in certain areas, their limitations are evident, especially in specialized medical settings. Despite their potential for administrative and communicative tasks, such as explaining diagnoses and drafting patient-facing text, they have yet to match the full breadth of human clinical expertise.
