enow.com Web Search

Search results

  2. Machine ethics - Wikipedia

    en.wikipedia.org/wiki/Machine_ethics

    Machine ethics (or machine morality, computational morality, or computational ethics) is a part of the ethics of artificial intelligence concerned with adding or ensuring moral behaviors of man-made machines that use artificial intelligence, otherwise known as artificially intelligent agents. [1]

  3. Ethics of artificial intelligence - Wikipedia

    en.wikipedia.org/wiki/Ethics_of_artificial...

    The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into ...

  4. Philosophy of artificial intelligence - Wikipedia

    en.wikipedia.org/wiki/Philosophy_of_artificial...

    The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." [7] Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."

  5. Robot ethics - Wikipedia

    en.wikipedia.org/wiki/Robot_ethics

    Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).

  6. Friendly artificial intelligence - Wikipedia

    en.wikipedia.org/wiki/Friendly_artificial...

    In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly, considering the following: the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine, the difficulty of cashing out the set of moral values, that is, those that are more ideal than the ones human ...

  7. Artificial consciousness - Wikipedia

    en.wikipedia.org/wiki/Artificial_consciousness

    Igor Aleksander suggested 12 principles for artificial consciousness: [34] the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is ...

  8. Trolley problem - Wikipedia

    en.wikipedia.org/wiki/Trolley_problem

    A platform called Moral Machine [44] was created by MIT Media Lab to allow the public to express their opinions on what decisions autonomous vehicles should make in scenarios that use the trolley problem paradigm. Analysis of the data collected through Moral Machine showed broad differences in relative preferences among different countries. [45]

  9. Moral Machine - Wikipedia

    en.wikipedia.org/wiki/Moral_Machine

    Moral Machine is an online platform, developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of Technology, that generates moral dilemmas and collects information on the decisions that people make between two destructive outcomes.