enow.com Web Search

Search results

  2. Robot ethics - Wikipedia

    en.wikipedia.org/wiki/Robot_ethics

    Robot ethics, sometimes known as "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as 'killer robots' in war), and how robots should be designed such that they act 'ethically' (this last concern is also called machine ethics).

  3. Machine ethics - Wikipedia

    en.wikipedia.org/wiki/Machine_ethics

    James H. Moor, one of the pioneering theoreticians in the field of computer ethics, defines four kinds of ethical robots. A researcher in the philosophy of artificial intelligence, philosophy of mind, philosophy of science, and logic, Moor classifies machines as ethical impact agents, implicit ethical agents, explicit ethical agents, or full ethical agents.

  4. Three Laws of Robotics - Wikipedia

    en.wikipedia.org/wiki/Three_Laws_of_Robotics

    For example, the First Law may forbid a robot from functioning as a surgeon, as that act may cause damage to a human; however, Asimov's stories eventually included robot surgeons ("The Bicentennial Man" being a notable example). When robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the necessity of ...

  5. Ethics of artificial intelligence - Wikipedia

    en.wikipedia.org/wiki/Ethics_of_artificial...

    The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use, and treat robots. [15] Robot ethics intersects with the ethics of AI. Robots are physical machines, whereas AI may exist only as software. [16] Not all robots function through AI systems, and not all AI systems are robots. Robot ethics considers ...

  6. Laws of robotics - Wikipedia

    en.wikipedia.org/wiki/Laws_of_robotics

    A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. [1]

  7. The Machine Question - Wikipedia

    en.wikipedia.org/wiki/The_Machine_Question

    The Machine Question: Critical Perspectives on AI, Robots, and Ethics is a 2012 nonfiction book by David J. Gunkel that discusses the evolution of theories of human ethical responsibility toward non-human things, the extent to which intelligent, autonomous machines can be considered to hold legitimate moral responsibilities, and what legitimate claims to moral consideration they can make.

  8. Moral agency - Wikipedia

    en.wikipedia.org/wiki/Moral_agency

    An example would be a young child who is old enough to understand right from wrong yet hits a sibling when angry. The act of hitting is open to moral consideration because the child is old enough to weigh whether it is the correct action to take and to judge the morality of their own behavior.

  9. Friendly artificial intelligence - Wikipedia

    en.wikipedia.org/wiki/Friendly_artificial...

    It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to bring about this behavior in practice and ensure it is adequately constrained.