Bioethics: Individual Paper


The Ethical Dilemma of Using Artificial Intelligence In Medicine

With the evolution of modern technology, the emergence of artificial intelligence in medical care is inevitable. Although this technological advancement brings innovation in diagnosis and treatment, it also raises several ethical concerns about using data-driven machinery for patient care. The article "Artificial Intelligence in Medicine: Today and Tomorrow" from Frontiers in Medicine takes a deeper dive into the benefits and detrimental aspects of artificial intelligence in medicine (Briganti, 2020). Although artificial intelligence (AI) offers a faster approach to detecting disease, with greater accuracy than the human eye, it poses a strong threat to medical ethics if used improperly. Using the principles of autonomy, beneficence, non-maleficence, and confidentiality, this paper explores why artificial intelligence may be implemented to facilitate medical evaluations but should not replace providers in making medical decisions.

AI has been generating enthusiasm in the medical community because it embodies the 4P model of medicine (Predictive, Preventative, Personalized, Participatory), promising greater beneficence in patient care. AI has the potential to screen with greater accuracy and to predict future diagnoses at an earlier stage. As a result, AI affords providers more time to take the preventative measures necessary in patient management. AI can also create a more personalized and participatory experience for patients by delivering convenient, fast medical care.

One such example is Kardia. FDA-approved in 2014, Kardia was one of the first applications of AI in medicine: a smartphone application that detects early onset of atrial fibrillation. Patients purchase a mobile sensor and pay for a monthly service subscription, and can then make regular EKG recordings that providers can easily review on their end (Pearson, 2018). Patients experience greater autonomy because they control how frequently they undergo a medical evaluation. They can conveniently obtain EKG recordings without going to a clinic; they simply log in to their phone and make a recording at any time and in any place. Beneficence is also clearly demonstrated by the Kardia app, as patients receive more frequent EKG readings. Early detection of atrial fibrillation becomes more likely, and patients can take the necessary steps to prevent heart failure.

Unfortunately, autonomy may also be jeopardized by an overreliance on medical technology to make final medical judgments. In 2018, IDx-DR became the first FDA-authorized AI technology able to make a diagnosis without human interpretation. IDx-DR analyzes images from a retinal camera to diagnose diabetic retinopathy. Although IDx-DR demonstrates 87% sensitivity and 90% specificity, it runs the risk of missing diagnoses and generating unnecessary referrals through false-positive readings (Savoy, 2020). AI is ultimately limited by its computerized algorithm and by the competency of its user. AI works by building on data and patterns; if the wrong pattern is learned, an inappropriate diagnosis is made. For example, if IDx-DR learns that a specific image of the eye indicates diabetic retinopathy, it develops the pattern that all such images point to diabetic retinopathy. Detrimental outcomes may follow if medicine were to depend only on AI decisions without accounting for exceptions and a holistic view of bodily functions. For this reason, AI must be used with caution, as its beneficence may be compromised by improper use and programming.

The "Artificial Intelligence in Medicine: Today and Tomorrow" article also reports that AI was found to replicate racial, gender, and socioeconomic biases during trials (Briganti, 2020). If a sample population presents with any skew, AI builds on that data and amplifies the bias (Rigby, 2019). This is especially problematic because the goal of medicine is to treat patients as individuals, with minimal partiality. AI pushes the ethical boundaries of non-maleficence because it is a mechanical system without the logical capability to remove prejudice. It may develop the wrong algorithm and reach the wrong medical conclusions for a target population. AI does not embody the humanistic experience, empathy, or reasoning needed to ethically render definitive judgments on the course of a human life. For this reason, it would be dangerous to sacrifice provider autonomy by relying solely on AI to determine the final outcomes of patient management.

Another shortcoming of AI is that it may breach patient confidentiality. Kardia highlights this risk because it is linked to a third-party software system. Patient information can easily be hacked or accessed outside the realm of health care (Gattadahalli, 2020). As smartphones increasingly rely on facial recognition to unlock their functions, the security of patient history stored on them is especially fragile. The world of health care is already struggling to maintain patient confidentiality amid the use of smartphones and telemedicine. AI increases security risks further by involving developers outside the medical field in the gathering of patient information.

In conclusion, AI is a promising development in the medical community that may save countless lives. However, in its quest to continuously improve modern medicine, the medical community should guard against an overreliance on AI to make decisions. Such reliance would dehumanize the art of medicine, pose risks to patient and provider autonomy in decision-making, and jeopardize patient confidentiality. Ultimately, each patient requires a unique approach based on personal experiences and preferences that cannot simply be determined by a computerized algorithm.

References

  1. Briganti G, Le Moine O. Artificial Intelligence in Medicine: Today and Tomorrow. Frontiers in Medicine. https://www.frontiersin.org/articles/10.3389/fmed.2020.00027/full. Published January 17, 2020. Accessed June 14, 2021.
  2. Gattadahalli S. Health care needs ethics-based governance of artificial intelligence. STAT News. https://www.statnews.com/2020/11/03/artificial-intelligence-health-care-ten-steps-to-ethics-based-governance/. Published November 4, 2020. Accessed June 14, 2021.
  3. Pearson A. Skeptical Cardiologist: Do Mobile Heart-Monitoring Devices Work? MedPage Today. https://www.medpagetoday.com/cardiology/arrhythmias/71622. Published March 8, 2018. Accessed June 14, 2021.
  4. Rigby MJ. Ethical Dimensions of Using Artificial Intelligence in Health Care. AMA Journal of Ethics. https://journalofethics.ama-assn.org/article/ethical-dimensions-using-artificial-intelligence-health-care/2019-02. Published February 1, 2019. Accessed June 14, 2021.
  5. Savoy M. IDx-DR for Diabetic Retinopathy Screening. American Family Physician. https://www.aafp.org/afp/2020/0301/p307.html. Published March 1, 2020. Accessed June 14, 2021.