Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots. [14] Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software. [15] Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers ...
James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. A researcher in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor classifies machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents.
Given this definition of emotion, Hans Moravec believes that "robots in general will be quite emotional about being nice people". [74] Fear is a source of urgency. Empathy is a necessary component of good human-computer interaction. He says robots "will try to please you in an apparently selfless manner because it will get a thrill out of this ...
Robopsychology is the study of the personalities and behavior of intelligent machines. The term was coined by Isaac Asimov in the short stories collected in I, Robot, which featured robopsychologist Dr. Susan Calvin and whose plots largely revolved around the protagonist solving problems connected with intelligent robot behavior.
The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel that discusses the evolution of theories of human ethical responsibility toward non-human things, the extent to which intelligent, autonomous machines can be considered to have legitimate moral responsibilities, and the legitimate claims to moral consideration they can hold.
A broader definition of artificial empathy is "the ability of nonhuman models to predict a person's internal state (e.g., cognitive, affective, physical) given the signals (s)he emits (e.g., facial expression, voice, gesture) or to predict a person's reaction (including, but not limited to internal states) when he or she is exposed to a given ...
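As a minimal sketch of that definition, the toy predictor below maps a few observed signals (facial expression, voice pitch, gesture) to a coarse internal-state label. The class, field names, and rules are illustrative assumptions, not drawn from any of the systems or works cited above.

```python
# Toy illustration of the artificial-empathy definition: a nonhuman model that
# predicts a person's internal state from the signals they emit.
# All names, categories, and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class ObservedSignals:
    facial_expression: str  # e.g. "smile", "frown", "neutral"
    voice_pitch: float      # normalized 0.0-1.0; higher suggests more arousal
    gesture: str            # e.g. "open", "closed", "none"


def predict_internal_state(signals: ObservedSignals) -> str:
    """Rule-based guess at a person's affective state from emitted signals."""
    if signals.facial_expression == "frown" or signals.gesture == "closed":
        return "distressed"
    if signals.facial_expression == "smile" and signals.voice_pitch < 0.7:
        return "content"
    if signals.voice_pitch >= 0.7:
        return "excited"
    return "neutral"


if __name__ == "__main__":
    # Example: a smiling person with calm voice and open gesture -> "content"
    print(predict_internal_state(ObservedSignals("smile", 0.4, "open")))
```

A real artificial-empathy system would replace the hand-written rules with a learned model over richer signal features, but the input/output shape is the same as in the definition: emitted signals in, predicted internal state out.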
Seth Baum argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities and so can be constrained by extrinsic measures and motivated by intrinsic measures. Intrinsic motivations can be strengthened when messages resonate with ...