Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).
The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel. It traces the evolution of thinking about human ethical responsibilities toward non-human things, and asks to what extent intelligent, autonomous machines can be considered to hold legitimate moral responsibilities, and what legitimate claims to moral consideration they can make.
For example, Asimov's First Law might forbid a robot from functioning as a surgeon, since surgery causes physical harm to a human; yet Asimov's stories eventually included robot surgeons ("The Bicentennial Man" being a notable example). Once robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the necessity of inflicting a lesser harm in order to prevent a greater one.
James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. Drawing on his research in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor classifies machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents.
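As a rough illustration only (this encoding is not from Moor's own work), the taxonomy can be written down as a simple enumeration; the one-line glosses are paraphrases:

```python
from enum import Enum

class EthicalAgent(Enum):
    """Moor's four kinds of ethical agents (paraphrased glosses)."""
    # A machine whose operation has ethical consequences,
    # whether or not its designers intended any.
    ETHICAL_IMPACT = "ethical impact agent"
    # A machine constrained by its design to behave safely or
    # ethically, without representing ethics itself.
    IMPLICIT_ETHICAL = "implicit ethical agent"
    # A machine that represents ethical categories and reasons
    # over them when choosing actions.
    EXPLICIT_ETHICAL = "explicit ethical agent"
    # An agent with consciousness, intentionality, and free will
    # that can make and justify ethical judgments; whether a
    # machine can be one is contested.
    FULL_ETHICAL = "full ethical agent"
```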
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. [15] Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software. [16] Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers ...
The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science [1] that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, [2] and free will.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. [1]
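The Laws form a strict precedence hierarchy: each Law yields only to the ones above it. A minimal sketch of that cascade in Python, where the Action flags (injures_human, disobeys_order, and so on) are hypothetical placeholders for judgments far beyond what any real robot can make:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    # Hypothetical flags standing in for perceptual and moral
    # judgments; named here for illustration only.
    name: str
    injures_human: bool = False
    permits_harm_by_inaction: bool = False
    disobeys_order: bool = False
    endangers_robot: bool = False

def choose_action(candidates: list[Action]) -> Optional[Action]:
    """Pick an action under the Three Laws' strict precedence."""
    # First Law: discard anything that injures a human or,
    # through inaction, allows a human to come to harm.
    safe = [a for a in candidates
            if not (a.injures_human or a.permits_harm_by_inaction)]
    # Second Law: prefer obedient actions, but only among safe ones;
    # if every safe action disobeys, the First Law still wins.
    obedient = [a for a in safe if not a.disobeys_order] or safe
    # Third Law: prefer self-preservation, but never at the cost
    # of the two higher Laws.
    best = [a for a in obedient if not a.endangers_robot] or obedient
    return best[0] if best else None
```

For instance, if every order-compliant candidate would endanger a bystander, the `or safe` fallback keeps the robot within the First Law and it disobeys the order instead.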
The Machine Question: Critical Perspectives on AI, Robots and Ethics (2012) [5] examines whether, and to what extent, intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The book won "best book of the year" from the National ...