Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with designing or ensuring moral behavior in man-made machines that use artificial intelligence, known as artificial intelligent agents. [1]
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use, and treat robots. [15] Robot ethics intersects with the ethics of AI. Robots are physical machines, whereas AI can exist purely as software. [16] Not all robots function through AI systems, and not all AI systems are robots.
Intelligent machines have the potential to use that intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. [305] The field, also called computational morality, [305] was founded at an AAAI symposium in 2005. [306]
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).
A public administration approach sees a relationship between AI law and regulation, the ethics of AI, and 'AI society', defined as workforce substitution and transformation, social acceptance and trust in AI, and the transformation of human-to-machine interaction. [32]
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases).
The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel. It discusses the evolution of the theory of human ethical responsibilities toward non-human things, to what extent intelligent, autonomous machines can be considered to have legitimate moral responsibilities, and what legitimate claims to moral consideration they can hold.
The second discussion concerns efforts to construct machines with ethically significant behaviors (see machine ethics). Finally, there is debate about whether robots should be constructed as moral agents. Research has shown that humans do perceive robots as having varying degrees of moral agency.