Tiki Kazeem: AI Integration in Medical Science: Transformative Benefits and Ethical Implications

Resource availability is often the limiting factor in modern medicine. That resource may be scientific knowledge or laboratory operating costs. For Paul, an 82-year-old Austrian blood cancer patient, the limiting factor was time. With the standard cancer drugs failing one by one, and with nothing to lose, Paul’s doctors enrolled him in a clinical trial that used robotic automation and a field of artificial intelligence called computer vision to match people to cancer drugs based on individual biological differences (1). The idea was similar to a traditional doctor’s approach: machine-learning models, trained to identify minuscule changes at the cellular level, tested different drugs to find out which were effective. But instead of months of chemotherapy on an already frail body, the computer system could do this all at once, requiring only a small tissue sample from Paul. Remarkably, one of the drugs identified by this process worked; two years later, Paul was in complete remission. According to existing clinical knowledge, that drug had not been shown to be effective against his type of cancer, yet AI was able to predict something that doctors may never have known. In Paul’s case, AI saved a life by dramatically reducing the time and energy spent on drug testing. As AI grows in prevalence, it is essential to understand its prospective role in healthcare research and outcomes. It has the potential to expedite processes such as treatment selection and drug development by providing information that humans alone could not obtain. At the same time, there are critical ethical considerations to weigh when determining how best to implement AI in medical science.

AI prediction of 3-D protein structures from amino acid sequences

DNA is the blueprint of life. It provides instructions for human development, survival, and reproduction. Sequences of DNA form units called genes, and the sum of all the DNA inside a person is called the human genome. In 1990, an ambitious group of researchers, hoping to better understand this blueprint, launched the Human Genome Project (HGP) (2). The project aimed to sequence the entire genome, identifying the DNA nucleotides that make up all human genes. Completed in April 2003, the HGP stands as one of humanity’s greatest scientific feats, requiring thirteen years of work by more than 2,000 researchers from across the globe. In addition to fostering international scientific collaboration, it provided foundational information for the current state of human biology and medicine. Genes are frequently reused to code for hundreds of thousands of different proteins that perform a variety of essential functions within cells. As a result, the human proteome (the set of all human proteins) is even larger and more complex than the genome. Proteomics, the study of these proteins, is a field that has recently benefited profoundly from AI technology.

In 2021, the AlphaFold Protein Structure Database was released as a collaboration between the European Bioinformatics Institute and the artificial intelligence company DeepMind (3). This technology can predict a protein’s 3-D structure from its amino acid sequence. The significance is extensive, as proteins are essential for nearly every cellular process, serving as enzymes that drive the chemical reactions that allow us to digest food, produce energy, and regulate hormones. A protein’s shape determines its function because it binds to biological molecules that fit into it, much as a key fits into a lock. Understanding protein structure therefore contributes to a better understanding of human biology and disease.

The Critical Assessment of Structure Prediction (CASP) competition is a biennial international experiment that challenges researchers to predict protein structures from amino acid sequences alone (4). The basis of this competition is Anfinsen’s dogma: the hypothesis that a protein’s 3-D structure is determined solely by its amino acid sequence. Amino acids are the carbon-, hydrogen-, oxygen-, and nitrogen-containing molecules that serve as the building blocks of proteins (4). Putting this hypothesis into practice, however, has been difficult. Theoretically, a single protein can fold into roughly 10^300 different configurations, making the prediction of protein folding a historically arduous task (5). Earlier methods of determining protein structure, such as X-ray crystallography and nuclear magnetic resonance, are tedious, expensive, and far from comprehensive (5). AlphaFold has completely revolutionized this task, proving more successful than every other protein-structure prediction method ever employed. AlphaFold uses deep learning, a machine-learning approach, to predict 3-D structures from homologous protein information and multiple sequence alignments (4). Before AlphaFold, the 3-D structures of only about 17% of human proteins were known (5). Now that figure is closer to 98.5%, with 58% of the predictions rated highly or very highly accurate. AlphaFold represents a milestone in AI’s contribution to science. John Moult, the founder of CASP, declared, “This is the first time a serious scientific problem has been solved by AI.” In 2022, DeepMind released predicted structures for nearly 200 million proteins, essentially every protein known to science (6). European Bioinformatics Institute director Ewan Birney described this as “one of the most important datasets since the mapping of the Human Genome” (5). The significance of this feat is heightened by the fact that the knowledge was made freely and publicly available, completely accessible in service of the common good.
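
To give a concrete sense of how accessible these predictions are, the sketch below shows one way a researcher might fetch a predicted structure from the AlphaFold Protein Structure Database in Python. It is a minimal illustration, not an official client: the REST endpoint, the "pdbUrl" field, and the example UniProt identifier are assumptions based on the database's documented conventions and may change between releases.

# Minimal sketch: fetching an AlphaFold-predicted structure for one protein.
# Assumes the public AlphaFold DB REST endpoint and its response fields
# (e.g., "pdbUrl"); both may differ or change between database releases.
import requests

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha, used purely as an example

# Query the database's prediction endpoint for metadata about this protein.
meta = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}", timeout=30
)
meta.raise_for_status()
entry = meta.json()[0]  # the API returns a list of model entries

# Download the predicted 3-D coordinates in PDB format.
pdb = requests.get(entry["pdbUrl"], timeout=30)
pdb.raise_for_status()
with open(f"AF-{UNIPROT_ID}.pdb", "w") as fh:
    fh.write(pdb.text)

print(f"Saved predicted structure for {UNIPROT_ID} "
      f"({len(pdb.text.splitlines())} PDB lines)")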

Unfortunately, every stride toward innovation is accompanied by limitations. For example, AlphaFold’s usefulness for drug development is limited because it predicts the structures of individual proteins in isolation, whereas broader protein-drug interactions are key to pharmaceuticals (5). AlphaFold also produces low-confidence predictions for some particularly complex proteins: those that are “intrinsically disordered or unstructured in isolation” (7). Even so, AlphaFold can still contribute to structure-based drug design, especially in cases where little was previously known (8). It is already accelerating drug research and development, being used to improve malaria vaccines and cancer drugs and to fight antibacterial resistance (5). It has also been used to predict hundreds of millions of viral, bacterial, and microbial protein structures. Using AI to advance scientific knowledge in this way suggests that the technology can be applied further, to problems that humans alone cannot solve.
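
As a rough illustration of how those confidence levels can be inspected, the sketch below scans a downloaded AlphaFold PDB file and flags low-confidence residues. It assumes the common convention that AlphaFold stores its per-residue confidence score (pLDDT, on a 0-100 scale) in the PDB B-factor column; the cutoff of 70 and the file path are placeholders.

# Minimal sketch: flagging low-confidence residues in an AlphaFold prediction.
# Assumes pLDDT (0-100) is stored in the B-factor column of the PDB file,
# as AlphaFold DB conventionally does; verify against the current file format.

def low_confidence_residues(pdb_path, cutoff=70.0):
    """Return residue numbers whose predicted confidence (pLDDT) falls below cutoff."""
    flagged = set()
    with open(pdb_path) as fh:
        for line in fh:
            if not line.startswith("ATOM"):
                continue
            residue_number = int(line[22:26])   # PDB fixed-width residue number field
            plddt = float(line[60:66])          # B-factor column holds pLDDT here
            if plddt < cutoff:
                flagged.add(residue_number)
    return sorted(flagged)

# Example usage with the file downloaded in the previous sketch (hypothetical path):
# print(low_confidence_residues("AF-P69905.pdb"))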

Use of AI for drug discovery

Unfortunately, AlphaFold cannot come close to replacing the entire drug discovery process. Once the initial phase of target discovery is complete, long and expensive laboratory work and human clinical trials must follow (5). Even so, the predictive power of AI should not be underestimated. Exscientia, the company that designed the machine-learning matchmaking described in the introduction, also aims to develop new drugs with AI assistance (1). The pipeline from drug discovery to clinical trials typically takes a decade and costs billions of dollars. AI streamlines this pathway: machine-learning models, paired with computational techniques like molecular modeling, predict how a drug might behave in the body. This reduces the quantity of lab work required by letting researchers focus on the molecules with the best chances of success. AI can also read and sort through biomedical data far more efficiently than a human. One danger in AI-aided drug development, however, is overstating AI’s capabilities. In the foreseeable future, there is no way to cut physical lab testing or human clinical trials out of the drug development process entirely. Keeping that in mind, researchers can still use AI assistance to accelerate the path to clinical trials.
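
To make that idea concrete, the sketch below shows, in broad strokes, how a machine-learning model could rank candidate molecules by predicted likelihood of activity so that laboratory work focuses on the most promising ones. It is a toy illustration rather than any company's actual pipeline: the SMILES strings, activity labels, and model choice are placeholders, and it assumes the open-source RDKit and scikit-learn libraries.

# Toy sketch of AI-assisted candidate ranking (not any company's actual pipeline).
# Assumes RDKit and scikit-learn are installed; the SMILES strings and activity
# labels below are illustrative placeholders, not real screening data.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles):
    """Encode a molecule as a fixed-length bit vector (Morgan fingerprint)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=1024)
    return np.array(list(fp))

# Placeholder training set: known molecules with measured activity (1) or not (0).
train_smiles = ["CCO", "CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1", "CCN(CC)CC"]
train_labels = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit([fingerprint(s) for s in train_smiles], train_labels)

# Rank new candidate molecules by predicted probability of activity,
# so laboratory testing can start with the most promising ones.
candidates = ["CC(C)Cc1ccc(cc1)C(C)C(=O)O", "OCCN", "Clc1ccccc1"]
scores = model.predict_proba([fingerprint(s) for s in candidates])[:, 1]
for smiles, score in sorted(zip(candidates, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {smiles}")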

Potential ethical concerns of AI as medical technology

The most natural argument against AI integration in medical science comes from the perspective of ethical concern. How do we maintain privacy and patient confidentiality while attempting to collate information for a common benefit? Can we trust artificial intelligence to be unbiased and non-discriminatory in its predictions? These are rational concerns that may arise as research shifts toward greater dependence on computerized systems.

Privacy and data protection form one major area of ethical concern with AI. For AI to make accurate scientific predictions, it needs access to databases that are as comprehensive as possible. The mission of medical AI therefore pushes toward data sharing rather than patient confidentiality. There is an argument that this serves the greater good by producing much-needed innovation (9). Yet that does not negate individual patient rights, and without strict regulation of how health information is exchanged, those rights are likely to be violated.

Additionally, one of AI’s biggest ethical pitfalls is a potential lack of accountability. There is ongoing debate about whether AI even fits into current legal categories, or whether new ones need to be created. The “intelligence” aspect of artificial intelligence creates a liability gap, because “computing approaches can hide the thinking behind the output of an Artificial Intelligent System” (10). In essence, AI is capable of enacting discriminatory healthcare practices, highlighting the danger of relying on it too heavily. In the past, a biased algorithm that used race to estimate kidney function reported black patients as having much better kidney function than white patients with the same underlying measurements, resulting in delayed organ transplants and worse health outcomes for those black patients (11). At Duke University, researchers found that doctors using decision-making algorithms took longer to order blood tests for Hispanic children who would eventually be diagnosed with sepsis than for white children (12). In 2019, Science published a study revealing that, under a clinical algorithm widely used by hospitals, black patients had to be deemed much sicker than white patients to receive the same treatment. These examples only scratch the surface, as the story of AI discrimination in healthcare continues to play out over and over. Ultimately, AI runs on data, so if the data are biased, AI decision-making will be too. This is dangerous because systemic racism or biased data collection practices can find their way into a purportedly objective space, all while shifting the burden of blame onto an artificially intelligent system. For this reason, AI algorithms need to operate with transparency whenever possible. That includes taking extra precautions to ensure AI meets standards of informed patient consent, and making sure algorithms operate with a reasonable level of objectivity. Intentional action toward non-discrimination in computer algorithms is therefore vital to alleviating the unfortunate but justifiable concern of AI bias.
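
One concrete form that intentional action can take is a routine audit of a model’s error rates across demographic groups before deployment. The sketch below is a generic illustration of such a check using fabricated records and a hypothetical group label; it is not drawn from any of the studies cited above.

# Generic sketch of a per-group error audit for a clinical prediction model.
# The records below are fabricated placeholders; "group" is a hypothetical field.
from collections import defaultdict

# Each record: (demographic group, true outcome, model's prediction); 1 = needs care.
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]

# False-negative rate per group: how often patients who truly needed care
# were missed by the model. Large gaps between groups signal potential bias.
missed = defaultdict(int)
needed = defaultdict(int)
for group, truth, prediction in records:
    if truth == 1:
        needed[group] += 1
        if prediction == 0:
            missed[group] += 1

for group in sorted(needed):
    rate = missed[group] / needed[group]
    print(f"Group {group}: false-negative rate = {rate:.0%}")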

As meritorious as the ethical concerns about AI integration in medicine may be, they do not negate its groundbreaking and innovative implications. Regulations concerning privacy and non-discrimination must be addressed, likely through collaboration between research scientists, AI experts, and lawmakers. But ultimately, AI can perform feats of speed and efficiency that human efforts alone cannot dream of achieving. To forgo AI in healthcare would be to suppress an opportunity to change lives. AlphaFold’s prediction of 3-D protein structures revolutionized proteomics in a remarkably short amount of time, and AI has the potential to rapidly advance personalized medicine and accelerate drug development. Innovations like these allow researchers to develop better diagnostic tools, treatments, and prevention methods that reduce disease and create a healthier population. The rapid progress already made with AI suggests there is tangible value in continued investment in AI for scientific research. Future research that expands the scope of AI’s applications would only extend its transformative impact on the medical field.

Tiki Kazeem is a First Year at Yale University in Saybrook College

Citations: 

1) Douglas Heaven, W. (2023, February 15). AI is dreaming up drugs that no one has ever seen. Now we’ve got to see if they work. MIT Technology Review. https://www.technologyreview.com/2023/02/15/1067904/ai-automation-drug-development/#:~:text=The%20vision%20is%20to%20use,need%20for%20painstaking%20lab%20work.

2) The Human Genome Project. (n.d.). Genome.Gov. Retrieved March 1, 2024, from https://www.genome.gov/human-genome-project

3) AlphaFold Protein Structure Database. (n.d.-a). AlphaFold Protein Structure Database. Retrieved March 2, 2024, from https://alphafold.ebi.ac.uk/

4) Yang, Z., Zeng, X., Zhao, Y., & Chen, R. (2023). AlphaFold2 and its applications in the fields of biology and medicine. Signal Transduction and Targeted Therapy, 1. https://doi.org/10.1038/s41392-023-01381-z

5) Toews, R. (2021, October 3). AlphaFold Is The Most Important Achievement In AI—Ever. Forbes. https://www.forbes.com/sites/robtoews/2021/10/03/alphafold-is-the-most-important-achievement-in-ai-ever/?sh=44857ba26e0a

6) Geddes, L. (2022, July 28). DeepMind uncovers structure of 200m proteins in scientific leap forward. The Guardian. https://www.theguardian.com/technology/2022/jul/28/deepmind-uncovers-structure-of-200m-proteins-in-scientific-leap-forward

7) AlphaFold Protein Structure Database. (n.d.-b). AlphaFold Protein Structure Database. Retrieved March 2, 2024, from https://alphafold.ebi.ac.uk/faq#faq-6

8) Ren, F., Ding, X., Zheng, M., Korzinkin, M., Cai, X., Zhu, W., Mantsyzov, A., Aliper, A., Aladinskiy, V., Cao, Z., Kong, S., Long, X., Man Liu, B. H., Liu, Y., Naumov, V., Shneyderman, A., Ozerov, I. V., Wang, J., Pun, F. W., … Zhavoronkov, A. (2023). AlphaFold accelerates artificial intelligence powered drug discovery: efficient discovery of a novel CDK20 small molecule inhibitor. Chemical Science, 6, 1443–1452. https://doi.org/10.1039/d2sc05709c

9) Hill, T. (2023, October 31). Data Privacy and Protection in AI for Precision Medicine. REPROCELL Global. https://www.reprocell.com/blog/data-privacy-and-protection-in-ai-for-precision-medicine

10) Naik, N., Hameed, B. M. Z., Shetty, D. K., Swain, D., Shah, M., Paul, R., Aggarwal, K., Ibrahim, S., Patil, V., Smriti, K., Shetty, S., Rai, B. P., Chlosta, P., & Somani, B. K. (2022). Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Frontiers in Surgery. https://doi.org/10.3389/fsurg.2022.862322

11) Chin, M. H., Afsar-Manesh, N., Bierman, A. S., Chang, C., Colón-Rodríguez, C. J., Dullabh, P., Duran, D. G., Fair, M., Hernandez-Boussard, T., Hightower, M., Jain, A., Jordan, W. B., Konya, S., Moore, R. H., Moore, T. T., Rodriguez, R., Shaheen, G., Snyder, L. P., Srinivasan, M., … Ohno-Machado, L. (2023). Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA Network Open, 12, e2345050. https://doi.org/10.1001/jamanetworkopen.2023.45050

12) The dangers of algorithm bias. (n.d.). MOBE. Retrieved March 3, 2024, from https://www.mobeforlife.com/the-dangers-of-algorithm-bias
