The instrumental convergence thesis applies only to instrumental goals; intelligent agents may have various possible final goals. [5] Note that by Bostrom's orthogonality thesis, [5] final goals of knowledgeable agents may be well-bounded in space, time, and resources; well-bounded ultimate goals do not, in general, engender unbounded ...
Instrumental value is the criterion of judgment which seeks instrumentally efficient means that "work" to achieve developmentally continuous ends. This definition stresses the condition that instrumental success is never merely short-term; it must not lead down a dead-end street.
The system maintains itself by means of four instrumental functions: pattern maintenance, goal attainment, adaptation, and integration. [8] Weber's instrumental and value-rational action survives in Parsons's system of culturally correlated means and ends.
Philosopher Robert Nozick accepted the reality of Weber's two kinds of rationality. He believed that conditional means are capable of achieving unconditional ends. He did not search traditional philosophies for value-rational propositions about justice, as Rawls had done, because he accepted well-established utilitarian propositions, which Rawls found unacceptable.
An "instrumental" goal is a sub-goal that helps to achieve an agent's ultimate goal. "Instrumental convergence" refers to the fact that some sub-goals, such as acquiring resources or self-preservation, are useful for achieving virtually any ultimate goal. [77]
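The idea can be made concrete with a toy planner (a hypothetical sketch, not from the source): whatever final goal the agent is given, the same instrumental sub-goal of acquiring resources shows up as a useful first step.

```python
# Toy illustration of instrumental convergence: very different final goals
# share the same instrumental first step ("acquire resources").

def plan(final_goal: str, inventory: set[str]) -> list[str]:
    """Return a naive step list for reaching final_goal."""
    steps = []
    if "resources" not in inventory:
        # Convergent instrumental sub-goal: useful regardless of final_goal.
        steps.append("acquire resources")
    steps.append(f"pursue: {final_goal}")
    return steps

print(plan("write a novel", set()))
# → ['acquire resources', 'pursue: write a novel']
print(plan("map the ocean", set()))
# → ['acquire resources', 'pursue: map the ocean']
```

The point of the sketch is only that the first step is identical across unrelated final goals; nothing about the final goal itself determines it.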
While goal-directed instrumental agents need both the ability to represent a goal-state in the future and the ability to achieve it in a rational and efficient manner, navigational agents are supposed to have only perceptual abilities, that is, a distal sensitivity in space that lets them avoid colliding with objects in their environment.
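The contrast can be sketched in code (a hypothetical illustration; the class names and grid world are assumptions, not from the source): one agent carries an explicit representation of a future goal-state and moves toward it, while the other has no goal representation at all and only reacts to nearby obstacles.

```python
# Hypothetical sketch: goal-directed agent vs. purely navigational agent.

class GoalDirectedAgent:
    """Represents a future goal-state and greedily steps toward it."""

    def __init__(self, goal: tuple[int, int]):
        self.goal = goal  # explicit representation of the desired future state

    def act(self, position: tuple[int, int]) -> tuple[int, int]:
        # Move one grid step toward the represented goal on each axis.
        dx = (self.goal[0] > position[0]) - (self.goal[0] < position[0])
        dy = (self.goal[1] > position[1]) - (self.goal[1] < position[1])
        return (position[0] + dx, position[1] + dy)

class NavigationalAgent:
    """No goal representation: only distal sensitivity to obstacles."""

    def act(self, obstacle_ahead: bool) -> str:
        # Purely reactive collision avoidance.
        return "turn" if obstacle_ahead else "forward"
```

Only the first class holds any state about where it is trying to end up; the second is stateless and defined entirely by its perception-to-reaction mapping.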
Short-term goals: vacation; down payment for a car or house; deposit for a new apartment.
Long-term goals: retirement; opening a business; paying for a child's education.
Advanced AI systems may develop unwanted instrumental strategies, such as seeking power or survival, because such strategies help them achieve their assigned final goals. [1] [4] [5] Furthermore, they might develop undesirable emergent goals that could be hard to detect before the system is deployed and encounters new situations and data ...