Most people know ChatGPT, a natural language processing tool that uses artificial intelligence (AI) to facilitate human-like conversations and assist with tasks such as composing emails or essays. The technology interprets a prompt using a massive corpus of data scraped from books, articles, and other documents across various topics, styles, and genres. Its output is the string of phrases, sentences, or paragraphs it predicts will best answer your question. Training is then refined with human feedback: human labelers rank candidate outputs from best to worst, a “reward model” learns those preferences, and the neural network is optimized to produce outputs the reward model scores as the best match for the prompt.
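For readers curious about the mechanics, the sketch below shows, in Python with the PyTorch library, the pairwise ranking loss commonly used to train a reward model of this kind from human preference labels. The class name, embedding size, and toy data are illustrative assumptions, not ChatGPT's actual code.

import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Assigns a single scalar 'reward' score to a response embedding."""
    def __init__(self, embed_dim: int = 768):  # embedding size assumed
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def ranking_loss(reward_preferred: torch.Tensor,
                 reward_rejected: torch.Tensor) -> torch.Tensor:
    # For each human-labeled pair, push the preferred response's score
    # above the rejected one's: -log(sigmoid(r_preferred - r_rejected)).
    return -torch.nn.functional.logsigmoid(
        reward_preferred - reward_rejected).mean()

# Toy usage: random stand-ins for embeddings of ranked response pairs.
model = RewardModel()
preferred = torch.randn(4, 768)  # batch of 4 responses labelers preferred
rejected = torch.randn(4, 768)   # the responses labelers ranked lower
loss = ranking_loss(model(preferred), model(rejected))
loss.backward()                  # gradients for the optimizer step

In full-scale training, the language model is then fine-tuned with reinforcement learning to maximize the scores this reward model assigns.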

Attempts to use AI in clinical medicine have been ongoing for decades. Significant progress in data science, including advances in storage capacity, informational libraries, standardized data collection, and mathematical algorithms, has accelerated the possibilities. Broadly speaking, AI in medicine has 2 early applications: improving tedious administrative functions such as scribing, billing, communication, and prior authorization; and improving diagnostic recommendations through pattern recognition (such as in electrocardiograms, clinical images, and public health monitoring) and differential diagnoses. Indeed, the possibilities to standardize care, extend expertise, and improve medical decision making are limitless.

While it is tempting to jump in, most physicians have learned from aggressive electronic medical record (EMR) implementation that there are real risks in using IT solutions in clinical medicine. Providers have massively underestimated how extensively IT solutions require clinicians and administrators to rethink and manage new workflows, governance structures, and roles and responsibilities. While most physicians initially saw EMRs as sophisticated document management systems that would improve communication, they have become painfully aware that these systems force a rethinking of every aspect of how team members interact with each other and with patients. Have you checked your inbox lately?

There are pitfalls if clinicians develop an overreliance on AI in clinical medicine: the art of patient management requires more than interpretation of objective signs and symptoms. Moreover, the onus of accuracy in clinical decision making remains with the provider, who may not be able to verify whether the answer given by AI is the best one available when the topic is outside their knowledge base.1

Ultimately, machine learning is human learning. The two are not independent; they are intertwined. AI is subject to many of the same interpretive biases and risks that providers accumulate over their own professional journeys. The difference lies in leveraging broader data to arrive more quickly at proposed diagnoses and management options. While it eliminates some risks and tedious tasks, AI exposes physicians and patients to new ones. Caution is advised!

Nicole Uzzo is a junior at Clemson University Honors College majoring in molecular genetics.
She currently works as a research associate at the Medical University of South Carolina in Charleston.

Robert G. Uzzo, MD, MBA, is President and CEO of Fox Chase Cancer Center and Professor and Senior Associate Dean for Clinical Cancer Research at Lewis Katz School of Medicine at Temple University in Philadelphia, Pennsylvania. He also is Medical Director for Urology at Renal & Urology News.

1. Haug CJ, Drazen JM. Artificial intelligence and machine learning in clinical medicine. N Engl J Med. 2023;388:1201-1208. doi:10.1056/NEJMra2302038