Lexical threshold" negative utilitarianism says that there is some disutility, for instance some extreme suffering, such that no positive utility can counterbalance it. [22] 'Consent-based' negative utilitarianism is a specification of lexical threshold negative utilitarianism, which specifies where the threshold should be located.
In ethical philosophy, utilitarianism is a family of normative ethical theories that prescribe actions that maximize happiness and well-being for the affected individuals. [1][2] In other words, utilitarian ideas encourage actions that lead to the greatest good for the greatest number.
Two-level utilitarianism is a utilitarian theory of ethics according to which a person's moral decisions should be based on a set of moral rules, except in certain rare situations where it is more appropriate to engage in a 'critical' level of moral reasoning.
Strong rule utilitarianism (SRU) gives a utilitarian account of the claim that moral rules should be obeyed at all places and times. SRU does not collapse into act utilitarianism the way weak rule utilitarianism does, but it shares weaknesses with similarly absolutist moral stances (notably, deontological ones).
Preference utilitarianism can be distinguished by its acknowledgement that every person's experience of satisfaction is unique. The theory, as outlined by R. M. Hare in 1981, [4] is controversial insofar as it presupposes some basis by which a conflict between A's preferences and B's preferences can be resolved (for example, by ...
Act utilitarianism is a utilitarian theory of ethics that states that a person's act is morally right if and only if it produces the best possible results in that specific situation. Classical utilitarians, including Jeremy Bentham, John Stuart Mill, and Henry Sidgwick, define happiness as pleasure and the absence of pain.
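The maximizing structure of that definition can be stated compactly (notation assumed for illustration, not taken from the source): an act $a$ is right iff $U(a) \ge U(a')$ for every alternative act $a'$ available in the situation, where $U$ gives the total utility of the act's results.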
The main problem for total utilitarianism is the "mere addition paradox": following total utilitarianism can favor a future containing a very large number of people with very low utility values over a smaller, much happier population. Parfit terms this "the repugnant conclusion", believing it to be intuitively undesirable. [4]
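A toy worked comparison (the numbers are illustrative, not from the source) makes the arithmetic explicit. With total utility $U = \sum_i u_i$:

$U_A = 10 \times 100 = 1000$ (world A: 10 people, each at utility 100)
$U_Z = 2000 \times 1 = 2000$ (world Z: 2000 people, each at utility 1)

Since $U_Z > U_A$, total utilitarianism ranks the crowded, barely-worth-living world Z above world A.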
For example, Rawls' maximin considers a group's utility to be the same as the utility of the member who is worst off. The "happy" utility monster of total utilitarianism is ineffective against maximin, because once the monster has received enough utility that it is no longer the worst-off member of the group, maximin gives no further reason to accommodate it.
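As a minimal sketch (notation assumed, not from the source), for individual utilities $(u_1, \dots, u_n)$ the two aggregation rules are:

$U_{\text{maximin}} = \min_i u_i \qquad U_{\text{total}} = \sum_i u_i$

A utility monster can inflate $\sum_i u_i$ without bound, but once its own $u_i$ rises above the group minimum it no longer affects $\min_i u_i$, so maximin stops rewarding further transfers to it.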