The term "ideal utilitarianism" was first used by Hastings Rashdall in The Theory of Good and Evil (1907), but it is more often associated with G. E. Moore. In Ethics (1912), Moore rejects a purely hedonistic utilitarianism and argues that there is a range of values that might be maximized.
For example, rule utilitarianism was criticized for implying that in some cases an individual should pursue a course of action that would obviously not maximise utility. Conversely, act utilitarianism was criticized for not allowing for a "human element" in its calculations: it is sometimes too difficult (or impossible) for an ordinary person to calculate the consequences of every act in advance.
Act utilitarianism is a utilitarian theory of ethics that states that a person's act is morally right if and only if it produces the best possible results in that specific situation. Classical utilitarians, including Jeremy Bentham, John Stuart Mill, and Henry Sidgwick, define happiness as pleasure and the absence of pain.
Rule utilitarianism is a form of utilitarianism that says an action is right insofar as it conforms to a rule that leads to the greatest good, or that "the rightness or wrongness of a particular action is a function of the correctness of the rule of which it is an instance". [1]
Preference utilitarianism (also known as preferentialism) is a form of utilitarianism in contemporary philosophy. [1] Unlike value monist forms of utilitarianism, preferentialism values actions that best fulfil the personal interests of everyone affected by the action.
Utilitarianism is a consequentialist ethical theory, meaning that it holds that acts are justified insofar as they produce a desirable outcome. The overarching goal of utilitarianism—the ideal consequence—is to achieve the "greatest good for the greatest number as the result of human action". [82]
For example, Rawls' maximin considers a group's utility to be the utility of the member who is worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin: as soon as the monster has received enough utility to no longer be the worst off in the group, there is no need to accommodate it further.
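The contrast between total utilitarianism and maximin can be shown as a minimal Python sketch. The utility numbers and the group are hypothetical, chosen only to illustrate the two aggregation rules:

```python
# Two welfare aggregation rules over a group's utilities:
# total utilitarianism sums them; Rawls' maximin takes the minimum.

def total_utility(utilities):
    """Total utilitarianism: the group's score is the sum of utilities."""
    return sum(utilities)

def maximin_utility(utilities):
    """Rawls' maximin: the group's score is the worst-off member's utility."""
    return min(utilities)

# Hypothetical group, plus a "happy" utility monster with enormous utility.
group = [10, 12, 11]
with_monster = group + [1000]

# The total is dominated by the monster, so total utilitarianism
# directs further resources toward it.
print(total_utility(with_monster))    # 1033

# Maximin only tracks the worst-off member (utility 10), so once the
# monster is no longer worst off it cannot raise the group's score.
print(maximin_utility(with_monster))  # 10
```

Because `min` ignores every member except the worst off, feeding the monster more utility leaves the maximin score unchanged, which is exactly why the utility monster objection has no force against it.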