Press release

Trust Your Doctor: Study Shows Human Medical Professionals Are More Reliable than Artificial Intelligence Tools

Ann Arbor | 2 April 2024

New research in the American Journal of Preventive Medicine puts the accuracy of advice given by large language models to the test

When looking for medical information, people can use web search engines or large language models (LLMs) like ChatGPT-4 or Google Bard. However, these artificial intelligence (AI) tools have their limitations and can sometimes generate incorrect advice or instructions. A new study in the American Journal of Preventive Medicine, published by Elsevier, assesses the accuracy and reliability of AI-generated advice against established medical standards and finds that LLMs are not trustworthy enough to replace human medical professionals just yet.

Andrei Brateanu, MD, Department of Internal Medicine, Cleveland Clinic Foundation, says, "Web search engines can provide access to reputable sources of information, offering accurate details on a variety of topics such as preventive measures and general medical questions. Similarly, LLMs can offer medical information that may look very accurate and convincing, when in fact it may be occasionally inaccurate. Therefore, we thought it would be important to compare the answers from LLMs with data obtained from recognized medical organizations. This comparison helps validate the reliability of the medical information by cross-referencing it with trusted healthcare data."

In the study, 56 questions were posed to ChatGPT-4 and Bard, and their responses were evaluated for accuracy by two physicians, with a third resolving any disagreements. Final assessments found 28.6% of ChatGPT-4's answers accurate, 28.6% inaccurate, and 42.8% partially accurate but incomplete. Bard performed better, with 53.6% of answers accurate, 17.8% inaccurate, and 28.6% partially accurate.
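The reported percentages can be translated back into whole-answer counts out of the 56 questions posed to each model. The short sketch below does this check; the assumption that each percentage rounds to a whole number of answers is ours, not the study's.

```python
# Convert the reported accuracy percentages into answer counts
# out of the 56 questions, and confirm the categories cover all answers.
N_QUESTIONS = 56

reported = {
    "ChatGPT-4": {"accurate": 28.6, "inaccurate": 28.6, "partially accurate": 42.8},
    "Bard":      {"accurate": 53.6, "inaccurate": 17.8, "partially accurate": 28.6},
}

for model, pcts in reported.items():
    # Round each percentage to the nearest whole answer (our assumption).
    counts = {category: round(pct / 100 * N_QUESTIONS) for category, pct in pcts.items()}
    assert sum(counts.values()) == N_QUESTIONS  # categories account for all 56 answers
    print(model, counts)
```

Under that assumption, the figures work out to 16 accurate, 16 inaccurate, and 24 partially accurate answers for ChatGPT-4, and 30, 10, and 16 respectively for Bard.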

Caption: Artificial intelligence (AI) tools are not reliable enough to be substituted for medical professionals in providing accurate medical information, research in the American Journal of Preventive Medicine shows. Final assessments found 28.6% of ChatGPT-4's answers accurate, 28.6% inaccurate, and 42.8% partially accurate but incomplete. Bard performed better, with 53.6% of answers accurate, 17.8% inaccurate, and 28.6% partially accurate (Credit: American Journal of Preventive Medicine).

Dr. Brateanu explains, "All LLMs, including ChatGPT-4 and Bard, operate using complex mathematical algorithms. The fact that both models produced responses with inaccuracies or omitted crucial information highlights the ongoing challenge of developing AI tools that can provide dependable medical advice. This might come as a surprise, considering the advanced technology behind these models and their anticipated role in healthcare environments."

This research underscores the importance of being cautious and critical of medical information obtained from AI sources, reinforcing the need to consult healthcare professionals for accurate medical advice. For healthcare professionals, it points to the potential and limitations of using AI as a supplementary tool in providing patient care and emphasizes the ongoing need for oversight and verification of AI-generated information.

Dr. Brateanu concludes, "AI tools should not be seen as substitutes for medical professionals. Instead, they can be considered as additional resources that, when combined with human expertise, can enhance the overall quality of information provided. As we incorporate AI technology into healthcare, it's crucial to ensure that the essence of healthcare continues to be fundamentally human."

Notes for editors

The article is "Accuracy of Online Artificial Intelligence Models in Primary Care Settings," by Joseph Kassab, MD, MS, Abdel Hadi el Hajjar, MD, Richard M. Wardrop III, MD, PhD, and Andrei Brateanu, MD. It appears online in advance of the American Journal of Preventive Medicine, volume 66, issue 6 (June 2024), published by Elsevier.

The article is openly available for 30 days.

Full text of this article is also available to credentialed journalists upon request; contact Jillian B. Morgan at +1 734 936 1590 or [email protected]. Journalists wishing to interview the authors should contact Katie Ely, Cleveland Clinic Corporate Communications, at +1 216 906 5597 or [email protected].

About the American Journal of Preventive Medicine

The American Journal of Preventive Medicine is the official journal of the American College of Preventive Medicine and the Association for Prevention Teaching and Research. It publishes articles in the areas of prevention research, teaching, practice and policy. Original research is published on interventions aimed at the prevention of chronic and acute disease and the promotion of individual and community health. The journal features papers that address the primary and secondary prevention of important clinical, behavioral and public health issues such as injury and violence, infectious disease, women's health, smoking, sedentary behaviors and physical activity, nutrition, diabetes, obesity, and alcohol and drug abuse. Papers also address educational initiatives aimed at improving the ability of health professionals to provide effective clinical prevention and public health services. The journal also publishes official policy statements from the two co-sponsoring organizations, health services research pertinent to prevention and public health, review articles, media reviews, and editorials. www.ajpmonline.org

About Elsevier

As a global leader in scientific information and analytics, Elsevier helps researchers and healthcare professionals advance science and improve health outcomes for the benefit of society. We do this by facilitating insights and critical decision-making with innovative solutions based on trusted, evidence-based content and advanced AI-enabled digital technologies.

We have supported the work of our research and healthcare communities for more than 140 years. Our 9,500 employees around the world, including 2,500 technologists, are dedicated to supporting researchers, librarians, academic leaders, funders, governments, R&D-intensive companies, doctors, nurses, future healthcare professionals and educators in their critical work. Our 2,900 scientific journals and iconic reference books include the foremost titles in their fields, including Cell Press, The Lancet and Gray’s Anatomy.

Together with the Elsevier Foundation, we work in partnership with the communities we serve to advance inclusion and diversity in science, research and healthcare in developing countries and around the world.

Elsevier is part of RELX, a global provider of information-based analytics and decision tools for professional and business customers. For more information on our work, digital solutions and content, visit



Jillian B. Morgan

MPH, Managing Editor AJPM

+1 734 936 1590

E-mail Jillian B. Morgan