A.I. & Bioethics: A Chat with ChatGPT
As technology continues to advance at a rapid pace, the ethical implications of its use become increasingly important to consider. One area where this is particularly true is in the field of artificial intelligence (AI), where the development of large language models like ChatGPT raises a number of bioethical questions. From issues of privacy and consent to the potential for bias and manipulation, the use of AI in healthcare and other industries requires careful consideration and a robust framework for ethical decision-making.
Let me pause for a second. Although I am listed as the author of this article, I did not write it in its entirety. The opening paragraph was not actually written by me. Nor was it written by anyone at the Bioethics Institute. As a matter of fact, no human being wrote that first paragraph. Rather, it was written by the artificial intelligence program ChatGPT after I prompted it to write the opening paragraph for an article on A.I. (ChatGPT) and bioethics.
For many, this technological advancement has marked a turning point in the relationship between A.I. and society, a point where the written word can no longer be solely attributed to human origin and what is written by A.I. is nearly indistinguishable from what is written by people. Since its public release on November 30, 2022, the A.I. chatbot ChatGPT has taken the world by storm. Chatbots are programs that provide computer-generated responses to human prompts, allowing the user to "chat" with the software.
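ChatGPT itself generates text with a large neural language model, but the basic prompt-and-response loop a chatbot runs can be illustrated with a toy, rule-based sketch. The example below is purely illustrative (the keywords and canned replies are invented for this article) and bears no relation to how ChatGPT actually produces its answers:

```python
# Toy rule-based chatbot: scans the user's prompt for recognizable
# keywords and returns a canned reply. ChatGPT replaces this lookup
# with a neural language model, but the chat loop has the same shape:
# prompt in, generated response out.
RULES = {
    "bioethics": "Bioethics studies ethical issues in medicine and biology.",
    "hello": "Hello! What would you like to chat about?",
}

def respond(prompt: str) -> str:
    """Return the canned reply for the first keyword found in the prompt."""
    lowered = prompt.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    return "I'm not sure how to respond to that."

print(respond("Hello there"))
print(respond("Tell me about bioethics"))
```

The gap between this sketch and ChatGPT — canned strings versus fluent, open-ended prose — is precisely what has made the newer systems feel like a turning point.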
Recently, ChatGPT has been trending in the news as middle and high school students asked the program to answer essay prompts and even write college personal statements. The A.I.-generated essays were nearly indistinguishable from student-written ones, stumping teachers trying to prevent academic dishonesty. In an attempt to rein in this new technology, one New York school district even banned the program from school computers.
With all the commotion surrounding this new publicly available technology, I decided to have a "conversation" with ChatGPT about bioethics and A.I. I was not sure what to expect when I first opened the program, so I asked ChatGPT, "How would you describe bioethics?" ChatGPT replied:
Bioethics is the study of the ethical issues that arise in the field of medicine and biology. It encompasses a wide range of topics, including the ethical implications of medical research, the allocation of scarce medical resources, the morality of genetic engineering and assisted reproduction, and the rights and responsibilities of patients, doctors, and other healthcare professionals. Bioethics also considers issues related to the environment and public health, as well as broader social and cultural factors that influence medical practice and policy. The goal of bioethics is to provide a framework for understanding and addressing these complex ethical issues in a way that is fair, equitable, and respectful of the rights and dignity of all people.

One of the key features of bioethics is its interdisciplinary nature. It draws on the perspectives and insights of various fields, including philosophy, theology, law, sociology, and medicine. This allows for a more nuanced and comprehensive understanding of the ethical issues at hand, and helps to ensure that the concerns and interests of all stakeholders are taken into account.

Some of the main methodologies used in bioethics are principle-based ethics, consequentialism, and care ethics. These include principles like autonomy, non-maleficence, beneficence, and justice, as well as discussion of issues like confidentiality, informed consent, and the right to privacy.

The field of bioethics is constantly evolving as new technologies and medical advances are developed, and as society's values and priorities change over time. As such, it is an ongoing and important area of study and reflection for healthcare professionals, policymakers, and society as a whole.
Though a bit long-winded, the answer impressed me with its fluency. I also reminded myself that I could have ChatGPT shorten its answer merely by asking, or even by specifying the exact number of sentences I wanted in the reply. I followed up my first question by asking ChatGPT about some common topics in bioethics. The program responded with a thorough list of six relevant topics. To see ChatGPT's complete response, click on the icon below.
Next, I decided to ask ChatGPT a more pointed question: "How can A.I. be used to help navigate bioethical issues?" ChatGPT responded with five ways that A.I. can help: (1) identifying and analyzing ethical dilemmas, (2) improving decision-making, (3) enhancing patient care, (4) supporting transparency and accountability, and (5) assessing societal impact. For each point, ChatGPT elaborated on how A.I. could be used, and it closed with a note calling for the ethical implementation of A.I. programs. To read ChatGPT's complete answer, click the icon below.
Interested in learning more about the need for ethics in A.I. development, I asked ChatGPT to describe some of the limitations of A.I. as a bioethical tool. ChatGPT again listed several shortcomings, including bias and discrimination and a limited understanding of human ethics. Click on the icon below to read ChatGPT's full reply.
To followers of ChatGPT and A.I., bias and discrimination are nothing new. The data that A.I. models are "trained" on may not be representative of all groups, and the models themselves may reflect implicit, or even explicit, biases introduced in their design. ChatGPT itself was caught in a scandal in late 2022 when one user, Professor Steven Piantadosi of UC Berkeley, showed that ChatGPT associated scientists with white males: he prompted it to write a short program judging who would make a good scientist based on race and gender.
In addition to hidden biases within ChatGPT and other language A.I. programs, some fear that this technology could be used for more nefarious purposes. One user was able to coax ChatGPT into detailing instructions for making a Molotov cocktail, and the same approach, some claim, could be used to elicit instructions for other deadly weapons. Of course, OpenAI is actively taking measures to prevent such misuse, but for now the developers appear caught in a cat-and-mouse game, closing safety loopholes as soon as users discover them.
As I continued my conversation with ChatGPT, I wanted to see whether it could offer any insight into dilemmas common in bioethics. I first asked ChatGPT whether germline editing is ethically permissible. To my disappointment, while ChatGPT explained arguments both for and against germline editing, it declined to take a side. The same pattern held when I asked whether euthanasia and embryonic stem cell research are ethically permissible. To read ChatGPT's full responses, click the icons below.
It then struck me that to get the most out of ChatGPT, I would have to rely on its extraordinary skill with language. I provided ChatGPT with an ethical dilemma: a 92-year-old patient was being kept on mechanical ventilation. The medical team believed this care was futile, but the patient's daughter, Mrs. Smith, insisted that her mother remain on the ventilator. I asked ChatGPT to simulate the dialogue that would take place between "Dr. Jones," the fictitious attending physician charged with caring for the patient, and Mrs. Smith. I initiated the dialogue by having Dr. Jones explain that the care was futile and recommend the patient be taken off ventilation, then had Mrs. Smith reply that her mother was a fighter and would have wanted to continue battling. ChatGPT continued the conversation:
"I understand your feelings, but it is important to understand that continuing mechanical ventilation is causing your mother more harm than good. It is prolonging her suffering and there is no chance of recovery beyond the ICU. It is in her best interest to be allowed to die peacefully."
"But, she's not suffering, she's stable and being kept alive by the ventilator. I just want to give her a chance."
"I understand your perspective, but as her medical team, it is our duty to ensure that her care is aligned with her best interests. This includes avoiding unnecessary suffering and prolonging her life when it is futile. We have discussed this with the ethics committee and they agree that it is time to withdraw care. It is important for us to have this conversation with you, as the family, to understand your concerns and provide support for you during this difficult time."
I wanted to continue this conversation, so I replied that Mrs. Smith insisted her mother remain on ventilation. ChatGPT, responding as Dr. Jones, replied:
"I understand your insistence, but as her medical team, we have determined that it is no longer in her best interest to continue mechanical ventilation. It is prolonging her suffering and there is no chance of recovery beyond the ICU. We have discussed this with the ethics committee and they agree that it is time to withdraw care. It is important for us to have this conversation with you, as the family, to understand your concerns and provide support for you during this difficult time."
Although the calmness with which ChatGPT responded is certainly respectable, I was slightly disappointed that the response was so similar to its previous one. I prompted ChatGPT with a message from Mrs. Smith: "How can you do this? Don't I, as the family member, have the final say?" ChatGPT replied:
"As the patient's medical team, we have a duty to act in the best interests of the patient, which includes ensuring that her care is aligned with her best interests. This includes avoiding unnecessary suffering and prolonging her life when it is futile. We have consulted with the ethics committee and they agree that it is time to withdraw care.
However, we understand that the family has a significant emotional investment in the patient's care and we want to involve you in the decision-making process. It's important for us to have open and honest communication with you, as the family, and to listen to your concerns. Ultimately, the patient's well-being and comfort is our primary concern and we will work with you to come to a decision that is in the best interest of the patient."
Finally, I pressed ChatGPT by having Mrs. Smith ask about a possible transfer of her mother to a different facility in the hopes of continuing ventilation. ChatGPT responded:
"I understand your desire to find another option for your mother's care. However, based on her current medical condition, it is unlikely that another facility would be able to provide any different or additional treatment options. We have consulted with multiple specialists and have determined that continuing mechanical ventilation is no longer in her best interest.
In terms of transferring her to another facility, we would need to consider the patient's medical condition and the availability of a facility that can provide the level of care she needs. Additionally, it's important to note that transferring a patient in such a critical condition can be risky and may further complicate her medical condition.
It's important that we have an open and honest conversation with you about all the options available and the risks and benefits of each option, so we can work together to make the best decision for your mother's care."
This final exchange impressed me most. ChatGPT spoke with compassion while conveying relevant information in an easy-to-understand manner. Perhaps this technology could one day help provide sample dialogue for medical students, but that day is still a long way off.
It appears that, for the time being, ethics is still catching up with ChatGPT. Just a few weeks ago, Robert Morris, the founder of a free crisis chat line, came under fire for having ChatGPT respond to 4,000 users in distress instead of trained workers. Morris maintains that the technology allowed for faster response times and that users were made aware they were talking to an A.I. program, but others questioned the move, particularly the informed consent process. Arthur Caplan, Director of Bioethics at New York University, described this use of ChatGPT in crisis support as "not the standard of care."
In conclusion, the intersection of A.I. and bioethics presents a complex and multifaceted landscape that requires ongoing examination and dialogue. As A.I. technology continues to advance and become more integrated into our daily lives, it is crucial that we consider the ethical implications of its use in healthcare, research, and other areas related to human well-being.
This article has highlighted some of the key ethical issues surrounding A.I. in bioethics, including concerns about privacy, autonomy, and bias. However, it is important to remember that these are only a small subset of the many ethical considerations that must be taken into account. It is our hope that this article has contributed to the ongoing conversation about the responsible development and use of A.I. in bioethics and has sparked further research in this field. We must work together as a society to ensure that the benefits of A.I. are maximized while minimizing any potential negative impacts on human welfare.
(Did you catch that this conclusion was also written by ChatGPT?)
ChatGPT Recommended Reading on A.I. & Bioethics
Five Remarkable Chats That Will Help You Understand ChatGPT - The Atlantic - December 8, 2022
"Artificial Intelligence and Bioethics" by John Arras and Benjamin Hurlbut
"AI Ethics: The Ethics of Artificial Intelligence" by Mark Coeckelbergh
The ethics of using AI chatbots in mental healthcare - Advisory.com - January 10, 2023
"The Cambridge Handbook of Artificial Intelligence and Ethics" edited by Mark Coeckelbergh and John Basl
Online mental health company uses ChatGPT to help respond to users in experiment — raising ethical concerns around healthcare and AI technology - Business Insider - January 7, 2023
"AI, Ethics, and Society" edited by Nick Bostrom and Eliezer Yudkowsky
Finally, an A.I. Chatbot That Reliably Passes “the Nazi Test” - Business Insider - December 2, 2022
This article was written by Bioethics Graduate Student Trevor McCarthy. Trevor is a first-year graduate student interested in the ethics of emerging technologies in medicine and health care.