Assignment:
In this week's readings, the authors address the question of whom to hold responsible when an AI system used in a healthcare setting generates a wrong prediction, leading to someone either not receiving treatment that they need or receiving unnecessary treatment.
In the Lang et al. paper, the authors argue that using Black Box Healthcare AI creates four different types of "responsibility gaps". Moral responsibility is typically viewed as having at least two components: an epistemic condition and a control condition.
The epistemic condition means that, in order to be responsible, the agent has to have the relevant knowledge. For example, suppose I make cookies that have vegetable oil as an ingredient and offer one to a friend, not knowing that the oil contains corn and that my friend is allergic to corn. As a result, my friend has an allergic reaction. Because I did not know about their allergy or that my cookies contained the allergen, I would not be held morally responsible.
The control condition means that, in order to be held responsible, the agent has to have the relevant ability and control. For example, there are certain decisions I cannot make at my workplace because I lack the authority. As a philosophy professor, I cannot, for example, decide which chemistry courses will be offered next semester. Therefore, I cannot be held morally responsible if the chemistry department doesn't offer a course that students need.
The authors assume that AI agents cannot themselves be held morally responsible for any harm they cause because they lack one or both of these requirements. However, because these systems are inscrutable to humans, the humans involved also fail to meet the epistemic condition and therefore cannot be held morally responsible either. This situation is what the authors call a "responsibility gap".
In both papers, the authors discuss a number of different paths to addressing the problem of responsibility gaps.
Option One: One response is to argue that, in fact, no responsibility gaps exist. Instead, responsibility can be traced back to some appropriate agent. Alternatively, deniers might argue that no responsibility gap exists because no wrongdoing occurred - if using AI is the standard of care, then doctors are not wrong for using it, even when it causes harm.
Option Two: The second response is that if AI causes a responsibility gap, then we should not use it in medical contexts or should use it only as a supplementary tool.
Option Three: The third response, suggested by Verdicchio and Perin, is to address the gap by maintaining "meaningful human control" through implementing a "principle of confidence" (p. 16). According to this principle, people engaged in an activity can act in the confidence that all other participants will act in accordance with their own duties of care. If there is good reason to think that a participant will NOT act in accordance with their duties, then others will replace their confidence with a duty to act.
Option Four: The fourth response, suggested by Lang et al., is "responsibilization", wherein individuals take on responsibility that is not technically theirs to take. For example, in the radiologist case with which they open their paper, the authors suggest that the radiologist ought to acknowledge that they relied on a technology that caused the patient harm and accept responsibility for that, despite failing to meet the epistemic condition for responsibility.
(1) One preliminary question we might ask is: Why do we need a concept of responsibility at all? What is the purpose of holding someone responsible?
(2) Given your thoughts on why the concept of responsibility is important in the first place, which of the above options do you think best addresses the problems that using medical Black Box AI creates? Why do you think that option best addresses the issue?