Dinesh Bojja: AI in Healthcare

The release of ChatGPT in November 2022 set off the biggest and fastest boom in artificial intelligence (AI) technology to date. Within a week, 1 million people had signed up to use the technology; by January 2023, ChatGPT boasted over 100 million users, a feat that took giants like TikTok, Instagram, and Google far longer to achieve (9 months, 2.5 years, and 5 years, respectively) (1). With the AI buzz only growing, everyone from start-ups to researchers to individuals is looking for their own way into the sphere. The Wall Street Journal reports that over $2.6 billion was invested in AI startups last year, a trend likely to continue, as AI is considered one of the most secure investments in modern technological innovation (2).

But one industry has yet to feel the true impact of AI integration: healthcare. And for good reason: while other industries mainly deal with finance, production, and operations, the healthcare industry deals with lives. An algorithmic error elsewhere may cost a month or two of productivity, but a misdiagnosis by AI imaging software could be lethal. Still, the accuracy, speed, and efficiency that AI offers make it a potential game-changer. Even though AI integration in healthcare has been slower than in other technology-focused industries, due in part to a lack of technical expertise in the healthcare space, the boom is projected to start this year, with the market expected to reach $187 billion by 2030 (3).

As we enter a new age of healthcare, it’s important to recognize both the good and the bad of AI technology, and to take the necessary measures to prevent harm from affecting consumers. Do the benefits outweigh the harms? Is the boom just a temporary hype, or is it truly a revolution?


Augmented Medicine: Fast, Smart, and Accurate

In March 2023, Google’s healthcare-focused AI shocked the medical world. The model, Med-PaLM 2 (short for Pathways Language Model), scored 85% on a U.S. Medical Licensing Examination practice test, a level of performance considered “expert” among practicing physicians. It answered both multiple-choice and extended-response questions, and it evaluated its own responses and scores to improve itself (4). OpenAI’s new GPT-4 model (the upgrade to ChatGPT) can even diagnose a patient and provide specific treatment directions given their symptoms, history, and complications (5).


Using AI in this way, in conjunction with a physician’s daily tasks, is known as augmented medicine. Just as augmented reality uses technology to supplement our vision of the world around us, augmented medicine uses artificial intelligence to better inform and support medical professionals in their work. By allowing for immediate access to healthcare records, a more complete understanding of a patient’s condition, and assistance with diagnosis, AI can ease physicians’ workloads and help them keep up with their responsibilities (6).


The first step of augmented medicine is letting AI handle the menial tasks of today’s clinics, hospitals, and doctors’ offices. Much of a doctor’s time is spent in front of a screen, analyzing charts and data and inputting information into electronic health records: mindless tasks that keep them from interacting with and helping more patients on a day-to-day basis. By handing basic analysis and information retrieval to an AI model, augmented medicine frees doctors to spend more time in personal interactions with patients, arguably the more important part of patient care (7).


But AI can contribute more than data analytics; the technology is already making its way into the mainstream through many different avenues. One AI model performed ultrasounds with 93% accuracy in its image captures and a capture time roughly 7.5 minutes faster than normal (8). In another study, scientists tested a deep-learning algorithm for detecting tuberculosis whose accuracy and specificity were comparable to those of trained radiologists (9). Google is even collaborating with the Mayo Clinic to train AI to visually separate cancerous tissue from healthy tissue for more effective radiotherapy (10). In the near future, it could be possible to use augmented medicine to fully diagnose patients and provide treatment options, freeing up doctors’ time for patient interaction and reducing the chance of misdiagnosis.

While already astonishing, AI has taken things one step further, making the leap from digital medical assistance to physical care. The Smart Tissue Autonomous Robot, or STAR, performed a complicated surgical procedure, without any human intervention, more accurately than today’s physicians can. The surgery, which required the robot to reconnect two ends of an intestine, is considered one of the most challenging parts of gastrointestinal surgery because of the precision and accuracy required for a successful connection; while physicians may suffer a minor tremor or an incorrect stitching pattern due to natural human error, robots have no such hindrance, making them more effective at these types of surgeries (11). Integrated with technologies like the da Vinci Surgical System, a robot that refines surgeons’ movements to be more precise and accurate, AI can work hand-in-hand with humans, making surgery safer, faster, and more effective moving forward.


Maximizing Innovation: For Better or For Worse?

The benefits of AI in healthcare extend beyond just augmented medicine.

With the help of 3D modeling and the predictive capabilities of AI, scientists have used artificial intelligence to create more effective drugs for clinical therapies. Using clinical, chemical, and animal data, these algorithms can identify potential drug targets, the molecular structures most conducive to the desired function, and even methods for synthesizing the drug itself. Instead of spending years in the brainstorming phase trying to work out which drugs may or may not work, some companies have started using human data to narrow down their drug candidates, making development more efficient and more likely to yield drugs that pass clinical trials (12).
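To give a sense of what such triage can look like, here is a minimal, hypothetical sketch: entirely synthetic data and made-up molecular descriptors, not any company’s actual pipeline. A model is trained on compounds with known outcomes, then ranks untested molecules so chemists start from the most promising structures.

```python
# Toy sketch (synthetic data, hypothetical descriptors) of predictive
# triage in drug discovery: learn from compounds with known outcomes,
# then rank an untested library by predicted activity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Pretend descriptors for 500 known compounds (e.g., weight, logP, polar area).
X_known = rng.normal(size=(500, 8))
# Invented "active / inactive" labels driven by two of the descriptors.
y_known = (X_known[:, 0] + 0.5 * X_known[:, 3] + rng.normal(0, 0.5, 500)) > 0

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_known, y_known)

# Score a library of 10,000 untested candidates and surface the top 10.
X_candidates = rng.normal(size=(10_000, 8))
scores = model.predict_proba(X_candidates)[:, 1]
top10 = np.argsort(scores)[::-1][:10]
print("candidate indices to test first:", top10)
```

The point of the sketch is the workflow, not the numbers: the model never replaces the lab, it just decides which experiments happen first.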

With access to information across the entire internet and training on mountains of data, medical AI has both the information and the basic predictive modeling necessary to understand the current healthcare landscape. But these strides in information gathering and interpretation come with downsides; Microsoft’s BioGPT system, for example, has often made up answers and sources, sometimes even spouting fallacious claims like “vaccines lead to autism,” a dangerous tendency that could accelerate the spread of misinformation (14).

Moving forward, it’s important to recognize that blindly trusting and expanding AI in the name of innovation and efficiency risks disaster. While AI has shown it can take on a multitude of tasks in the medical industry, deferring to these systems without proper foresight could cause more problems than it solves, especially where humans already do the task well. AI chatbots, for example, make it far easier to spread legitimate-looking misinformation, since it often takes a medical expert to recognize that something is off about the model’s response. For now, information from medical chatbots can be verified by professionals, but if the healthcare industry begins allowing patients to talk directly to a chatbot for a diagnosis, there will be no way to regulate this misinformation. Medical misinformation is already a major problem that cost countless lives during the COVID-19 pandemic; letting it spread unchecked would only exacerbate the issue.

The risks are not limited to misinformation; when people put full faith in a computer model, there is a risk of misdiagnosis that direct patient care could avoid. While this may seem like a distant future, AI-powered diagnostic tools are already making their way into the mainstream. Healthily’s Symptom Checker, for example, is designed to give a patient a preliminary diagnosis through an AI questionnaire. Over 5.5 million people have already used the service; if that growth continues, it becomes more important than ever that these AI systems be regulated to prevent incorrect or misinformed diagnoses (13).


Even if misinformation were controlled, AI cannot fully replace the benefits of direct patient care in today’s society. Computers only have as much information as we give them. We can plug in numbers, descriptions, and statistics, but there is no way to mimic subjective human perception. Doctors notice the most minute details, picking up on facial expressions, cues, and slang. Patients sometimes fail to describe things that trained medical professionals can still pick up on, a nuance in the case that a computer may miss and that could lead to an incorrect diagnosis. These issues may be fixed in the future, and computers may one day match the specificity of human judgment, but for the foreseeable future, doctors will still need to confirm or support the diagnosis.


Medical Racism: When AI Takes Differential Analysis Too Far

Despite the progress that has been made in AI, recent tests with artificial intelligence chatbots have revealed a dangerous trend. ChatGPT users found that the model is capable of spouting blatantly racist or sexist ideas. The system itself does not “think” these things are true; ChatGPT and similar models run on predictive modeling and simply reproduce some of the information they are trained on, some of which inevitably has subtle (or sometimes obvious) racist or sexist undertones (14). In predictive modeling, the AI does not make a conscious decision about what to do in a scenario; rather, it looks at existing data, finds a trend, and follows that pattern.

In one famous example, Amazon used a predictive model to screen new recruits; the model, seeing that most employees at Amazon were men at the time, was far less likely to recommend a woman than a man with the exact same résumé. By virtue of the data fed into it, the AI system was “sexist,” leading Amazon to scrap the model entirely (15). Similar stories can be found in the healthcare industry, sometimes with even more sinister backstories. AI predictive modeling has been used to cut people off from healthcare at the times that best suit insurance companies, stripping the people who most need care of their support. By running predictive analyses on patient data, these models produce specific estimated timelines for treatment and discharge, giving hospitals and insurers plausible evidence to support discharging a patient early and saving the firms money, all while removing patients from the care they need (16).
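A minimal sketch of that mechanism, using entirely synthetic data and a generic off-the-shelf classifier (this illustrates the failure mode; it is not a reconstruction of Amazon’s actual system):

```python
# Toy illustration of how a predictive model reproduces bias present in
# its training labels rather than "deciding" anything.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Features: a "qualification" score and a gender flag (1 = woman, 0 = man).
qualification = rng.normal(0, 1, n)
is_woman = rng.integers(0, 2, n)

# Historical hiring labels: driven by qualification, but with a penalty
# applied to women -- the kind of skew embedded in past decisions.
hired = (qualification - 0.8 * is_woman + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([qualification, is_woman])
model = LogisticRegression().fit(X, hired)

# Two identical resumes that differ only in the gender flag:
same_resume = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_resume)[:, 1])
# The model assigns the woman a lower hiring probability. It has simply
# learned the pattern in its data, not made a conscious decision.
```

Nothing in the code mentions discrimination; the bias arrives entirely through the labels the model is asked to imitate.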

Broadly speaking, this issue of racial discrimination in healthcare is known as medical racism. Medical racism is already a major problem in the United States, with many institutions failing to treat minority groups as well as they should. AI could exacerbate the issue through this same predictive modeling, making it even harder for healthcare to become truly equitable across racial and ethnic boundaries.

Medical information fed into an AI model tends to carry structural racism embedded in the data. One study showed that in a dataset used to train an algorithm, black patients had to be significantly more ill than white patients to be considered for care, leading the model to assume black patients did not need care when they actually did. This was a reflection of systemic racism in healthcare: because of centuries of economic and social discrimination, black patients historically had less to spend on their healthcare, meaning they received hospital care only when their conditions were far worse (17). No matter our intentions, if the data fed into an AI model is discriminatory against certain groups, that will be reflected in the standard of care each individual receives. Before we build stronger AI models, we first have to build stronger data to train them on: information that is trustworthy and reliable for all medical intents and purposes.
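The same pattern can be shown with a few lines of toy code, where every number is invented: if past spending stands in for medical need, a group that historically spent less must be far sicker to cross the same bar.

```python
# Toy sketch (fully synthetic numbers) of proxy-label bias: measuring
# "medical need" by past healthcare *spending* penalizes any group that
# historically spent less for the same level of illness.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

illness = rng.normal(0, 1, n)                  # true medical need
group_b = rng.integers(0, 2, n).astype(bool)   # historically under-served group

# Same illness, lower spending for group B (less access, less money for care).
spending = illness - 1.0 * group_b + rng.normal(0, 0.3, n)

# A care program (or a model trained on spending) flags the top 10% of spenders.
flagged = spending > np.quantile(spending, 0.9)

print("mean illness of flagged, group A:", illness[flagged & ~group_b].mean())
print("mean illness of flagged, group B:", illness[flagged & group_b].mean())
# Group B patients must be roughly one standard deviation sicker to be
# flagged for the same care -- the bias lives in the label, not the code.
```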

The Future of Healthcare AI

As with any revolutionary technology, there exists the potential for AI to be used for good and for ill. But is general approval of AI in healthcare actually high?

The Pew Research Center published survey results suggesting that Americans are ambivalent about the true benefits of artificial intelligence in healthcare. Sixty percent of respondents overall said they would not be comfortable with their healthcare provider using AI, and no matter how the sample was stratified (by age, ethnicity, education, income, or familiarity with AI), at least half of each group reported that discomfort (18).

Furthermore, 75% of individuals, from those who know a little about artificial intelligence to those who know a lot, believe that AI integration into healthcare is moving too fast for its risks to be understood. In contrast, only 38% of Americans overall think that AI in healthcare would lead to better outcomes (18).

Yet public opinion is not uniformly bleak: when it comes to cancer screening, a resounding 65% of respondents said they would want AI models to be used in their own skin cancer screenings (18). Although AI has a long way to go before it wins the public’s trust, it has proven its merit in some corners of the healthcare sphere. With careful deliberation at every step forward, it’s possible the public will become more receptive to AI integration in medicine.

Artificial intelligence surely has the potential to be a powerful positive force for the healthcare system; its usefulness in streamlining innovation and hospital management while helping individual patients cannot be denied. But the same tool can be wielded as a weapon, whether as an opportunity for hospitals and insurance companies to profit off the needy or as a source of medical misinformation and discriminatory sentiment. Is it worth the risk?

The healthcare world is undergoing a technological revolution, and there’s no stopping it. All that is in our hands now is whether it uplifts or impedes medicine as we know it.

Dinesh Bojja is a first-year at Yale University in Morse College.


Citations

1. https://www.zdnet.com/article/chatgpt-just-became-the-fastest-growing-app-of-all-time/

2. https://www.wsj.com/articles/ai-silicon-valley-crypto-boom-blockchain-artificial-intelligence-59622e9c

3. https://venturebeat.com/ai/can-healthcare-show-the-way-forward-for-scaling-ai/

4. https://www.medpagetoday.com/special-reports/exclusives/103522

5. https://www.nytimes.com/2023/03/14/technology/openai-new-gpt4.html

6. https://www.unite.ai/ai-in-medicine-must-prioritize-the-other-a-augmentation/

7. https://www.armoneyandpolitics.com/algorithmic-medicine-brings-ai-tools-to-the-bedside/

8. https://www.digitalhealth.net/2022/02/research-reveals-ai-ultrasounds-gives-time-saving-benefits/

9. https://pubs.rsna.org/doi/10.1148/radiol.212213

10. https://sites.google.com/pressatgoogle.com/thecheckup2023-presssite/blog-posts/health-ai

11. https://hub.jhu.edu/2022/01/26/star-robot-performs-intestinal-surgery/

12. https://www.politico.com/newsletters/future-pulse/2023/03/13/your-new-medicine-brought-to-you-by-ai-00086702

13. https://www.livehealthily.com/symptom-checker

14. https://futurism.com/neoscope/microsoft-ai-biogpt-inaccurate

15. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

16. https://www.statnews.com/2023/03/13/medicare-advantage-plans-denial-artificial-intelligence/

17. https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism

18. https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/
