By now, most of us know that ChatGPT can write essays, debug code, compose songs, do research, and crack a joke. But assist doctors and nurses in the hospital, potentially even in operating and emergency rooms (OR and ER)?
That’s no joke.
GenAI passes medical college test
OK, so you need an advanced form of generative AI to even consider playing doctor, and today that's GPT-4, available from OpenAI.
But even its predecessors, GPT-3 and GPT-3.5, did well enough to pass the US Medical College Admission Test (MCAT).
A study found they performed at or above the median of 276,779 student test takers and demonstrated both a high level of agreement with the official answer key and insight in their explanations.
Based on these promising results, the study anticipates that such models could provide free or low-cost access to personalized, insightful explanations of MCAT competency questions, help pre-medical students with targeted preparation, and even be used by test-makers to generate additional questions. It's already hard to get admitted to medical school, and thanks to GenAI it could get harder still if the technology starts to replace students in the classroom.
Beyond passing medical exams, is GPT-4 capable of saving human lives or treating emergency room patients?
Microsoft vice president of research Peter Lee, journalist Carey Goldberg, and Harvard computer scientist and doctor Isaac Kohane teamed up to put this theory to the test with GPT-4 and assess its medical capabilities. Their findings will be published in a book entitled “The AI Revolution in Medicine,” available as an e-book as early as April 15.
They say this new GenAI is more advanced than the previous chatbot, ChatGPT, and capable of generating information that could be immediately helpful in emergency rooms, both saving time and potentially saving lives.
“We need to start understanding and discussing AI’s potential for good and ill now,” the book authors urge.
Asked for advice in an emergency where typical medical interventions are not yielding satisfactory results, and given critical details and descriptions of the patient and symptoms, the bot can respond with a coherent explanation of why the patient might be reacting this way, point to relevant recent research, and suggest an alternative treatment or medicine.
The bot can also fill out the patient’s prescription or automate the paperwork required for insurance purposes.
But one thing is certain in medicine: mistakes can be lethal. And GPT-4 can still make blunders, errors, and inaccuracies, which necessitate human supervision and intervention no matter how believable an AI diagnosis sounds. The book’s authors describe GenAI as “both smarter and dumber than any person you’ve ever met.”
“It is still just a computer system,” the authors conclude, “fundamentally no better than a web search engine or a textbook,” although some doctors believe GPT-4 could produce a correct diagnosis at close to the level of a third- or fourth-year medical student.
Today, ChatGPT works from a raw, unvetted information base, but once fed textbooks written by humans as its main source of facts, the system could provide safer and more reliable answers.
ChatGPT could also automate clinical documentation, analyze medical research, and provide medical education and chatbot-based applications to improve patient engagement.
It could also assist doctors with research, tracking countless variables about a person’s health and comparing them with similar patients across millions of other variables, tirelessly processing features of every patient treated globally. But this cannot happen before the bot is first fed millions of patient data sets that include those many features and their outcomes.
Is ChatGPT too limited today to be the future of medicine? Yes. Promising? For sure.