On 1/20/2014 5:32 PM, LizR wrote:
I am beginning to think that Russell is using a very narrow, or perhaps formal, definition
of rationality, in which case objections that random (or unpredictable) behaviour can be
rational perhaps don't apply to it, even though most people think that such actions are at
times the most rational choice.
He cited the Wikipedia entry for "Rational agent":
"In economics, game theory, decision theory, and artificial intelligence, a rational agent
is an agent which has clear preferences, models uncertainty via expected values, and
always chooses to perform the action with the optimal expected outcome for itself from
among all feasible actions. Rational agents are also studied in the fields of cognitive
science, ethics, and philosophy, including the philosophy of practical reason.
A rational agent can be anything that makes decisions, typically a person, firm, machine,
or software.
The action a rational agent takes depends on:
the preferences of the agent
the agent's information about its environment, which may come from past
experiences
the actions, duties and obligations available to the agent
the estimated or actual benefits and the chances of success of the actions.
In game theory and classical economics, it is often assumed that the actors, people, and
firms are rational. However, the extent that people and firms behave rationally is subject
to debate. Economists often assume the models of rational choice theory and bounded
rationality to formalize and predict the behavior of individuals and firms. Rational
agents sometimes behave in manners that are counter-intuitive to many people, as in the
Traveler's dilemma."
But I see nothing there that would imply that a rational agent is predictable, or that it
could not make a random choice.
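A minimal sketch of that point (my own illustration, not from the quoted definition): an agent that always picks an action with optimal expected outcome can still be unpredictable, because when several actions tie for the maximum, the definition leaves it free to randomise among them. The action names, outcomes, and probabilities below are invented for the example.

```python
import random

def rational_choice(actions, outcomes, probs, utility, rng=random):
    """Pick an action with maximal expected utility, breaking ties at random.

    actions  -- list of available actions
    outcomes -- dict mapping each action to its list of possible outcomes
    probs    -- dict mapping each action to the matching outcome probabilities
    utility  -- function mapping an outcome to a numeric utility
    """
    def expected_utility(a):
        return sum(p * utility(o) for o, p in zip(outcomes[a], probs[a]))

    best = max(expected_utility(a) for a in actions)
    # Every action whose expected utility attains the maximum.
    optimal = [a for a in actions if expected_utility(a) == best]
    # Randomising among ties still satisfies the quoted definition:
    # the chosen action always has the optimal expected outcome.
    return rng.choice(optimal)

# Two actions with identical expected utility (5.0 each):
outcomes = {"safe": [5], "gamble": [0, 10]}
probs = {"safe": [1.0], "gamble": [0.5, 0.5]}
choice = rational_choice(["safe", "gamble"], outcomes, probs, utility=lambda o: o)
# Either action can be returned; the agent is rational yet not predictable.
```

So "rational" in this formal sense constrains the expected value of the choice, not its determinism.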
Brent
If rationality is defined as taking the course of action you believe will "maximise your
utility function", then in practice that varies from person to person, and even from
moment to moment for a given person. Maybe there is some game-theoretic definition of a
utility function that is an idealisation of the real-life version, and an idealised
version of rationality that strives to maximise it?
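One way to make that person-to-person variation concrete (a toy sketch; the agents, actions, and weights are invented): two agents can both be perfectly "rational" in the formal sense and still choose differently, because each maximises its own utility function.

```python
# Each action yields a (money, leisure_hours) outcome.
actions = {"overtime": (200, 0), "day_off": (0, 8)}

def best_action(actions, utility):
    """Return the action name that maximises this agent's utility."""
    return max(actions, key=lambda name: utility(*actions[name]))

# Two idealised agents with different (hypothetical) utility functions:
miser = lambda money, leisure: money                 # values money only
idler = lambda money, leisure: money + 50 * leisure  # weighs leisure heavily

best_action(actions, miser)  # -> "overtime" (utility 200 vs 0)
best_action(actions, idler)  # -> "day_off"  (utility 400 vs 200)
```

Both agents run the identical maximisation procedure; only the utility function differs, which is exactly where the person-to-person (and moment-to-moment) variation would live.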
It sounds to me like Russell's definition of a "rational agent" is along the lines of a
fictional robot that can't "break its programming", while everyone else is thinking of
real people who have to decide what to do in a short time, with inadequate information,
in a fluid environment, etc.
I can see that if this is the case, it will lead to a conflict of opinion!
--
You received this message because you are subscribed to the Google Groups "Everything
List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
[email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/groups/opt_out.