Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents.[1]
The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine. A minimal sketch of the contrast follows.
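The contrast can be made concrete with a small sketch. Everything below (the rule set, the labeled judgments, and the function names) is a hypothetical illustration, not drawn from any cited system:

```python
# Top-down: explicit ethical principles are hand-coded as rules.
FORBIDDEN_ACTIONS = {"deceive_user", "cause_physical_harm"}

def top_down_permits(action: str) -> bool:
    """Check a proposed action against a fixed, hand-written rule set."""
    return action not in FORBIDDEN_ACTIONS

# Bottom-up: acceptability is inferred from observed human judgments
# (here, toy labeled examples) rather than from explicit rules.
observed_judgments = [
    ("share_anonymized_data", 1),  # humans judged this acceptable
    ("deceive_user", 0),           # humans judged this unacceptable
]

def bottom_up_permits(action: str) -> bool:
    """Mimic the majority human judgment observed for this action."""
    votes = [label for act, label in observed_judgments if act == action]
    # Default to refusing when no human judgment has been observed.
    return bool(votes) and sum(votes) / len(votes) >= 0.5

print(top_down_permits("deceive_user"))   # False: rule forbids it
print(bottom_up_permits("deceive_user"))  # False: humans rejected it
```

The design difference is where moral knowledge lives: in the top-down sketch it sits in the rule set, so coverage is only as good as the rules; in the bottom-up sketch it sits in the data, so coverage is only as good as the observed judgments.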
The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."[7] Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."
Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).
Moral Machine is an online platform, developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of Technology, that generates moral dilemmas and collects information on the decisions that people make between two destructive outcomes.
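The kind of record such a platform collects can be sketched as a simple data structure. The field names and scenario text below are illustrative assumptions, not the Moral Machine's actual schema:

```python
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Dilemma:
    """One generated scenario with two mutually exclusive destructive
    outcomes; respondents must choose one. Field names are hypothetical."""
    outcome_a: str
    outcome_b: str
    responses: Counter = field(default_factory=Counter)

    def record_choice(self, choice: str) -> None:
        """Tally one respondent's decision."""
        if choice not in (self.outcome_a, self.outcome_b):
            raise ValueError("choice must be one of the two outcomes")
        self.responses[choice] += 1

d = Dilemma(outcome_a="swerve, harming the passenger",
            outcome_b="continue, harming the pedestrian")
d.record_choice(d.outcome_a)
d.record_choice(d.outcome_b)
d.record_choice(d.outcome_b)
print(d.responses.most_common(1))  # majority preference for this dilemma
```

Aggregating such tallies across many dilemmas and respondents is what lets the platform compare moral preferences across scenarios and populations.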
A moral injury, researchers and psychologists are finding, can be as simple and profound as losing a loved comrade. Returning combat medics sometimes bear the guilt of failing to save someone badly wounded; veterans tell of the sense of betrayal when a buddy is hurt because of a poor decision made by those in charge.
In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly, considering the following: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine, and the difficulty of cashing out the set of moral values, that is, those that are more ideal than the ones human beings currently possess.
“The moral power of an army is so great that it can motivate men to get up out of a trench and step into enemy machine-gun fire,” wrote Dr. Jonathan Shay in his book Achilles in Vietnam. The entire military is “a moral construct,” said Shay, a retired VA psychiatrist and author.