The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel. It traces how the theory of human ethical responsibility toward non-human things has evolved, and asks to what extent intelligent, autonomous machines can be considered to hold legitimate moral responsibilities and legitimate claims to moral consideration.
The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, in which machines learn ethical decisions by observing human behavior, without formal rules or moral philosophies, and the top-down approach, in which specific ethical principles are programmed into the machine. A toy contrast between the two is sketched below.
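To make the contrast concrete, here is a minimal, illustrative-only Python sketch. It is not taken from any cited work: the rule set, the observed choices, and all names are invented placeholders. The top-down agent checks actions against hand-coded prohibitions; the bottom-up agent has no rules and simply scores actions by how often humans were observed choosing them.

```python
# Top-down: explicit, programmed ethical rules.
FORBIDDEN = {"deceive", "harm"}  # hypothetical rule set

def top_down_permitted(action: str) -> bool:
    """Permit an action only if no programmed rule forbids it."""
    return action not in FORBIDDEN

# Bottom-up: no rules; tally observed human choices and prefer frequent ones.
observed_human_choices = ["assist", "assist", "warn", "assist", "warn"]

def bottom_up_score(action: str) -> float:
    """Score an action by its relative frequency in observed human behavior."""
    return observed_human_choices.count(action) / len(observed_human_choices)

if __name__ == "__main__":
    for action in ("assist", "deceive"):
        print(action, top_down_permitted(action), bottom_up_score(action))
```

The design difference shows up directly: the top-down agent can reject an action it has never seen, while the bottom-up agent can only generalize from whatever behavior it has observed.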
Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with designing or ensuring moral behavior in man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. [1]
The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." [8] Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
Robot ethics, sometimes known as "roboethics", concerns ethical problems that arise with robots: whether robots pose a threat to humans in the short or long run, whether some uses of robots are problematic (such as in healthcare, or as 'killer robots' in war), and how robots should be designed so that they act 'ethically' (this last concern is also called machine ethics).
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI, What Computers Can't Do (1972; 1979; 1992), and Mind over Machine, he presented a pessimistic assessment of AI's progress and a critique of the philosophical foundations of the field.
Igor Aleksander suggested 12 principles for artificial consciousness: [34] the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of artificial consciousness (AC) is to define whether and how these aspects of consciousness can be synthesized in an engineered artifact such as a digital computer.
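Aleksander's first principle, "the brain is a state machine", admits a very small sketch. The one below is invented for illustration, not taken from Aleksander's model: the states, percepts, and transitions are hypothetical, and only the bare idea, that the next internal state is a function of the current state and a perceptual input, is what the principle asserts.

```python
# Hypothetical transition table: (current state, percept) -> next state.
TRANSITIONS = {
    ("unconscious", "stimulus"): "conscious",
    ("conscious", "silence"): "unconscious",
    ("conscious", "stimulus"): "conscious",
}

def step(state: str, percept: str) -> str:
    """Return the next state; stay in the current state if undefined."""
    return TRANSITIONS.get((state, percept), state)

state = "unconscious"
for percept in ["stimulus", "stimulus", "silence"]:
    state = step(state, percept)
    print(percept, "->", state)
```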
The appeal to an objective but contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered.
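As a minimal sketch of what "expressed as a utility function" means here: an agent ranks outcomes by a numeric utility and chooses the maximum. The utilities below are invented placeholders, not a real formalization of Friendliness or of extrapolated volition.

```python
# Hypothetical utility assignments over a tiny action space.
utility = {"help_human": 1.0, "do_nothing": 0.0, "harm_human": -10.0}

def choose(actions):
    """Pick the action with the highest assigned utility."""
    return max(actions, key=lambda a: utility[a])

print(choose(["do_nothing", "help_human", "harm_human"]))  # -> help_human
```

The meta-ethical question the passage raises is precisely what the placeholder numbers elide: where such a utility function would come from, and whether it can capture what humanity "objectively would want".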