Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behavior in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. [1]
The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into ...
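The contrast between the two approaches can be sketched in a toy Python example. Everything here is a hypothetical illustration for exposition only: the rule set, the observed human judgments, and the function names are invented, and real machine-ethics systems are far more involved than a single permit/deny check.

```python
# Toy sketch (hypothetical): top-down vs. bottom-up screening of one action.

# Top-down: ethical principles are programmed in explicitly as rules.
FORBIDDEN = {"deceive", "harm"}  # invented example principles

def top_down_permits(action: str) -> bool:
    """Permit an action unless it violates a hard-coded principle."""
    return action not in FORBIDDEN

# Bottom-up: a policy is inferred from observed human judgments,
# with no formal rules supplied in advance.
observed = [("help", True), ("harm", False), ("share", True), ("deceive", False)]

def bottom_up_permits(action: str) -> bool:
    """Mirror an observed human judgment if one exists; for unseen
    actions, fall back to the majority of observed judgments."""
    for seen_action, approved in observed:
        if seen_action == action:
            return approved
    approvals = sum(approved for _, approved in observed)
    return approvals >= len(observed) / 2

print(top_down_permits("harm"))    # False: blocked by an explicit rule
print(bottom_up_permits("share"))  # True: humans were observed approving it
```

The point of the sketch is only the source of the decision: in the top-down function the principle exists before any behavior is observed, while in the bottom-up function the policy is entirely a summary of observed behavior.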
One possible explanation of the paradox, offered by Moravec, is based on evolution. All human skills are implemented biologically, using machinery designed by the process of natural selection. In the course of their evolution, natural selection has tended to preserve design improvements and optimizations.
The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." [7] Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).
Descriptive evolutionary ethics consists of biological approaches to morality based on the alleged role of evolution in shaping human psychology and behavior. Such approaches may be based in scientific fields such as evolutionary psychology, sociobiology, or ethology, and seek to explain certain human moral behaviors, capacities, and ...
Igor Aleksander suggested 12 principles for artificial consciousness: [34] the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of artificial consciousness (AC) is ...
Our moral behavior, while more complex than the social behavior of other animals, is similar in that it represents our attempt to manage well in the existing social ecology. ... from the perspective of neuroscience and brain evolution, the routine rejection of scientific approaches to moral behavior based on Hume's warning against deriving ...