Finally, assuming the individual model of responsibility, once we have blamed the computer there is no need to investigate other, human factors. A fourth barrier to responsibility is the current practice of extending the privileges and rights of ownership of software and computer systems without also demanding that owners accept responsibility for their products (Nissenbaum). As Johnson and Mulvey note, users expect owners and creators to accept responsibility for disastrous consequences; this makes sense given their experience with other products and services, some of which are governed by strict liability (i.e., liability without fault).

Rather than accepting full or partial responsibility, owners and creators shirk responsibility, for example by disclaiming liability. Many of these barriers share an underlying problem: the poor articulation and vague understanding of the relevant norms, a concern raised explicitly by Johnson and Mulvey. Without a clear understanding of what each party in the creation, implementation, and use of a system is responsible for doing, we are poorly placed to assess fault, which contributes to the problem of many hands. Without a clear understanding of what we can reasonably expect from the system, we are vulnerable to disappointed expectations and the temptation to blame the computer that has disappointed us.

Finally, without a clear understanding of when creators and owners are to be held liable, users and the community at large are vulnerable to bearing the brunt of any harms that result. The final barrier to responsibility is the assumption that technology is ethically neutral. In contrast with blaming the computer, this assumption prevents us from considering the impact that technological choices have on our actions.

Ladd suggests that this is due in part to the transparency of computer systems: when we don't notice them, we fail to consider how the technology affects our actions. In other words, the assumption that technology is ethically neutral is a barrier to responsibility because it obscures our responsibility for the choice to use technology, as well as for the choice of which technology to use.

Ultimately, however, this assumption of ethical neutrality is false. Further, the analytic distinction between means (i.e., the technology we use) and ends (i.e., what we use it for) is difficult to maintain in practice. Unfortunately, several features of the field of computing reinforce the assumption of ethical neutrality. For example, Gotterbarn notes that computing has matured in theoretical fields such as mathematics rather than practical fields such as engineering and applied science; as such, problems and solutions are articulated and developed in a context in which their impact on humanity is less visible than it should be, leading to a myopic problem-solving style that attends to neither the context of the problem nor the consequences of the solution.

Three main areas of recommendation are prevalent. First, we must ensure that our understanding of and assumptions about responsibility are appropriate for the task at hand, namely using our practice of responsibility to improve both practice and technology. Second, we should redesign computer systems to reveal that they are not responsible.

Third, we should clearly articulate the norms most relevant to the creation, implementation, and use of computer systems. Gotterbarn's and Ladd's advocacy of positive rather than negative responsibility serves the same end. The temptation to blame the computer can be addressed by redesigning the computer to make its lack of capacity-responsibility more visible (Friedman and Millett; Friedman and Kahn)—for example, by redesigning computer systems so as to minimize, if not eliminate, the felt presence of the computational system.
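To make this design strategy concrete, here is a minimal sketch of two ways a system might word the same decision. The scenario, function names, and message wording are hypothetical illustrations of the general point, not examples drawn from Friedman and her co-authors.

```python
def anthropomorphic_message(amount: float) -> str:
    # Phrasing like this invites the user to treat the system as a
    # responsible agent ("I", "decided", "sorry") -- the felt presence
    # that the redesign proposals try to minimize.
    return f"I'm sorry, I have decided to deny your loan of ${amount:,.2f}."


def transparent_message(amount: float, rule: str, owner: str) -> str:
    # Phrasing like this keeps the governing rule and the responsible
    # humans visible, so blame is not deflected onto "the computer".
    return (
        f"Your loan request of ${amount:,.2f} was declined under rule {rule}, "
        f"a policy set and reviewed by {owner}. "
        "You may request human review of this decision."
    )


if __name__ == "__main__":
    print(anthropomorphic_message(12000))
    print(transparent_message(12000, rule="LTV-3", owner="the credit policy team"))
```

The second wording keeps the rule and the humans behind it in view, so users have somewhere other than "the computer" to direct complaints and appeals.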

Two other strategies are worth noting. First, we might reconsider the wisdom of using the computer system in the first place (Ladd; Moor; Kuflik). In short, those overseeing the operation of the computer in question must accept their responsibility for the decision to use the computer system, and might exercise that responsibility by revoking the decision. Although it is taken for granted that computers must be able to match or exceed the accuracy, efficiency, and reliability of the humans whose tasks they now perform, there has so far been little effort to match or exceed the responsibility of those humans.

Thus, a second strategy is to build responsible computers (Ronald and Sipper). Toward that end, Thompson recognizes that training, certification, and assessment are used when preparing humans to take on responsibility, and advocates that similar measures be taken with respect to computer systems. Whether computers could be responsible is discussed in section 3.
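As a rough sketch of what assessment and certification might look like for a software component, consider the following. The test suite, accuracy threshold, and the idea of recording a named human certifier are assumptions of mine for illustration, not part of Thompson's proposal.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Certification:
    role: str
    passed: int
    total: int
    certifier: str  # the human who remains accountable for the sign-off


def assess(candidate: Callable[[float], bool],
           cases: List[Tuple[float, bool]],
           role: str,
           certifier: str,
           required_accuracy: float = 0.99) -> Certification:
    # Run the candidate against the assessment suite, analogous to the
    # examinations a human would face before taking on the role.
    passed = sum(1 for x, expected in cases if candidate(x) == expected)
    if passed / len(cases) < required_accuracy:
        raise RuntimeError(f"{role}: failed assessment ({passed}/{len(cases)})")
    return Certification(role=role, passed=passed, total=len(cases),
                         certifier=certifier)


# Usage: certify a trivial over-limit alarm before it replaces a human monitor.
cases = [(x / 10, x / 10 > 5.0) for x in range(101)]
cert = assess(lambda t: t > 5.0, cases,
              role="over-limit alarm", certifier="Q. A. Lead")
print(cert)
```

Recording the certifier keeps a human in the chain of accountability: the component is assessed, but a person signs off.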

Recognizing that an awareness of norms is central to our practice of responsibility, and that it plays a key role in the process of professionalization, Johnson and Mulvey argue that it is crucial to clearly articulate norms.

Establish norms regarding the relationship between designer (or creator) and client. Further clarifying what each can expect from the other is an important step toward professionalization.

Establish norms regarding collaboration with affected parties. Appealing to Niebuhr's responsibility ethic, Dillard and Yuthas point out that responsible behavior involves identifying and working with the affected members of the community, taking into account that community's history and future, and being prepared to account for one's actions. Rather than describing particular behaviors, such norms characterize a responsible ongoing relationship with the affected community.

Establish norms regarding the production and use of computer systems.

Establish norms of behavior for the various roles involved in creating, implementing, and using computer systems. A survey of these recommendations is taken up in section 2.

Establish norms regarding (possibly strict) liability.

Two authors in particular—Murray and Cass—have attempted to articulate clear norms for the various roles involved in the creation of computer systems. Murray's norms focus on role responsibility, while Cass explicitly includes moral responsibilities as well.

Both provide useful guidelines for improving practice through increased awareness of responsibilities, as well as standards by which to assess fault. Murray is primarily focused on role responsibility rather than moral responsibility. Responding to the challenges posed by introducing an expert system (ES) into contexts where it will be consulted by non-experts, Cass redefines the various roles involved in the design, creation, and use of the expert system to include explicit discussion of when an agent filling that role can be held morally responsible for a bad outcome.

Since all those involved in the design, creation, and use of the expert system satisfy the causal criteria, Cass's analysis largely focuses on identifying the knowledge that each person is expected to have and to share with others, and identifying any potentially coercive circumstances that might undermine the voluntary nature of their actions.

Some particularly relevant responsibilities include the following. Since knowledge is relevant to assessing responsibility, the responsibility of the user will depend on whether the user is a domain expert or a domain novice. Two factors can undermine a domain expert's responsibility: first, coercive policies that demand that the advice of the ES be followed regardless of the user's assessment (thus undermining the user's ability to act voluntarily); second, lack of relevant environmental or contextual information (thus undermining the user's ability to act knowingly).

In contrast, the domain novice is ignorant of the domain of expertise; as such, the novice is responsible for compensating for this ignorance by using the help and explanation features of the expert system. Cass reminds the reader that each person involved in the design, creation, and use of the ES is responsible for having and sharing the relevant knowledge from their own domain of expertise, and for learning enough about other domains of expertise to reasonably ensure that cross-domain communication is accurate. Even so, the best efforts cannot guarantee that all assumptions made in this process are correct; in short, we cannot guarantee that there is no involuntary ignorance.
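The following sketch shows, under assumptions of my own, the kind of bookkeeping this analysis suggests: advice that carries its assumptions, an explanation facility that a novice is obliged to consult, and a record of the facts (expertise, coercion, consultation) that a later assessment of fault would need. None of the class names, rules, or the dosage scenario comes from Cass; they are illustrative only.

```python
from dataclasses import dataclass


@dataclass
class Advice:
    recommendation: str
    assumptions: list   # domain assumptions the user is expected to check
    explanation: str    # reasoning trace, available to every user


@dataclass
class ConsultationRecord:
    user_is_domain_expert: bool
    policy_coerces_acceptance: bool   # undermines voluntariness if True
    explanation_consulted: bool = False


def consult(advice: Advice, record: ConsultationRecord) -> str:
    # A domain novice is responsible for compensating for ignorance by
    # using the explanation facility before acting on the advice.
    if not record.user_is_domain_expert and not record.explanation_consulted:
        raise RuntimeError("Novice users must review the explanation first.")
    # The record preserves the facts (expertise, coercion, consultation)
    # that a later assessment of fault would need.
    return advice.recommendation


advice = Advice("Reduce dosage by 10%",
                assumptions=["patient weight is current",
                             "no interacting prescriptions"],
                explanation="Rule D-17: weight-adjusted dosage table.")
record = ConsultationRecord(user_is_domain_expert=False,
                            policy_coerces_acceptance=False)
record.explanation_consulted = True   # the novice reads the reasoning trace
print(consult(advice, record))
```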

As anticipated above in the discussion of barriers to responsibility (section 2), although computer systems may clearly be causally responsible for the injuries and deaths that result from their flawed operation, it is not so clear that they can be held morally responsible for those injuries or deaths. As discussed above (section 2), computers appear to lack the capacity-responsibility that such attributions presuppose. Nevertheless, the possibility that computers can be morally responsible has been explored by several authors. Most notably, Dennett's account of intentionality in terms of an intentional stance licenses attributions of responsibility to computers.

In addition, Dennett argues that intentionality in general—and higher-order intentionality in particular—is required for moral responsibility. These are characteristics that 2001: A Space Odyssey's HAL computer is portrayed as having, and which Dennett suggests real-life robots such as Rodney Brooks's Cog might someday possess. Finally, by identifying several potentially exculpating factors—insanity, brainwashing (or, more appropriately, programming), and duress (including either self-defense or loyalty to a goal)—he implicitly suggests that HAL is a legitimate candidate for moral responsibility in general precisely because these sorts of excusing or exempting factors can seriously be applied to HAL.

Bechtel appeals to a modified version of Dennett's concept of intentional systems to support his claim that intentionality is possible for a computer and, therefore, that computers could be responsible for their intentional decisions. Looking to the future, when a computer could pass as a human in conversation (i.e., pass the Turing test), we will need to attend not merely to whether computers can be held responsible, but also to how we can make them responsible. In contrast, Friedman and Kahn argue that computers cannot be moral agents because, according to Searle's Chinese room argument, they lack the intentionality that is necessary for responsibility.

Finally, concerned with cases of properly functioning and well-designed computers that nevertheless make errors, Snapper argues that, despite worries about the control the programmer has over the program—and therefore over the computer's output—computers are capable of deliberate choice. However, appealing to an Aristotelian analysis of moral responsibility, he further argues that they are incapable of the appropriate mental attitude toward their choices, and so cannot be held morally responsible for them.

Kuflik identifies six senses of responsibility: (1) Causal Responsibility, (2) Functional Role Responsibility, (3) Moral Accountability, (4) an honorific sense of responsibility, (5) Role Responsibility, and (6) Oversight Responsibility. Making use of these six senses, he asks:

How much responsibility (in either sense 2 or sense 5) could responsible (sense 3) human beings responsibly (sense 4) allocate to a computer, without at the same time reserving to themselves oversight-responsibility (sense 6)?

Despite some minor differences, Bechtel, Ladd, Moor, Nissenbaum, and Kuflik all agree that responsible humans cannot responsibly allocate all responsibility to computers. Ladd argues that computer control of machines or systems is sufficiently similar to human control for computers to be given control in some situations; indeed, in certain situations computers are better suited to control than humans are.
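A short sketch can make this allocation concrete: the computer takes on functional-role responsibility (sense 2) for routine control, while humans retain oversight responsibility (sense 6), including the power to veto any single decision or to revoke the delegation entirely, as the revocation strategy above suggests. The controller logic, thresholds, and scenario are hypothetical illustrations of mine, not drawn from Kuflik or Ladd.

```python
class OverseenController:
    """Computer holds the functional role; a human retains oversight."""

    def __init__(self, controller, overseer_veto=None):
        self.controller = controller  # computer: functional-role responsibility
        self.veto = overseer_veto     # human: oversight responsibility
        self.delegated = True

    def revoke(self):
        # The humans who decided to deploy the system remain responsible
        # for that decision and may exercise it by withdrawing control.
        self.delegated = False

    def act(self, reading: float) -> str:
        if not self.delegated:
            return "manual control: defer to human operator"
        decision = self.controller(reading)
        # Oversight: a human-supplied check may veto any single decision.
        if self.veto and not self.veto(reading, decision):
            return "overridden: escalated to human operator"
        return decision


plant = OverseenController(
    controller=lambda t: "open valve" if t > 90.0 else "hold",
    overseer_veto=lambda t, d: not (d == "open valve" and t > 120.0),
)
print(plant.act(95.0))    # within the delegated envelope: computer decides
print(plant.act(130.0))   # out of envelope: oversight veto escalates to a human
plant.revoke()
print(plant.act(95.0))    # delegation revoked by the responsible humans
```

The point of the wrapper is Kuflik's: however much routine control is delegated, the oversight hook and the power of revocation are exactly what responsible humans cannot responsibly give away.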