Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. [14] Robot ethics intersects with the ethics of AI. Robots are physical machines, whereas AI can be software only. [15] Not all robots function through AI systems, and not all AI systems are robots.
Lyuben Dilov's 1974 novel Icarus's Way (a.k.a. The Trip of Icarus) introduced a Fourth Law of robotics: "A robot must establish its identity as a robot in all cases." Dilov justifies the fourth safeguard this way: "The last Law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as ..."
The robots in Asimov's stories, being Asenion robots, are incapable of knowingly violating the Three Laws, but, in principle, a robot in science fiction or in the real world could be non-Asenion. "Asenion" is a misspelling of the name Asimov made by an editor of the magazine Planet Stories. [27]
The about:robots page in Firefox states the First Law of Robotics: "Robots may not injure a human being or, through inaction, allow a human being to come to harm." In the game Zero Escape: Virtue's Last Reward, a certain character presumed dead is found to be a robot who was ordered to act as close to a real human being as possible.
James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. Drawing on his extensive research in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor classifies machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents.
The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel. It discusses the evolution of the theory of human ethical responsibilities toward non-human things, to what extent intelligent, autonomous machines can be considered to have legitimate moral responsibilities, and what legitimate claims to moral consideration they can hold.
Alan Winfield CEng (born 1956) is a British engineer and educator. [1] He is Professor of Robot Ethics at UWE Bristol, [2] Honorary Professor at the University of York, [3] and Associate Fellow in the Cambridge Centre for the Future of Intelligence. [4]