Pei,

many thanks for your comments. Good input on rationality and AIXI.
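
To make scenario 3 concrete for anyone who wants to poke at it
numerically, here is a minimal sketch. The plan-quality numbers c1 and
c2 and the compromise payoff structure are illustrative assumptions of
mine, not part of the original write-up:

```python
# Toy model of scenario 3: F(1) = F(2), C(1) > C(2), and A(2) only
# trusts A(1) with probability T(2->1). The plan qualities c1, c2 and
# the compromise payoff structure below are illustrative assumptions.

def expected_payoff_for_a2(strategy, trust, c1=0.9, c2=0.5):
    """Expected utility A(2) assigns to one of its strategies.

    trust: A(2)'s probability that A(1) shares its utility function.
    c1, c2: plan quality A(1) and A(2) can achieve alone (assumed).
    """
    if strategy == "support":      # back A(1)'s plan with all resources
        # If A(1) is honest, A(2) enjoys the better plan; if A(1) is a
        # self-serving optimizer, A(2)'s utility is assumed to be zero.
        return trust * c1 + (1 - trust) * 0.0
    if strategy == "withhold":     # strategy 1: go it alone
        return c2
    if strategy == "compromise":   # strategy 6: partially pooled plan
        # The joint plan is assumed to split the quality difference,
        # while preserving A(2)'s solo payoff if A(1) defects.
        return trust * (c1 + c2) / 2 + (1 - trust) * c2
    raise ValueError(strategy)

for t in (1.0, 0.5, 0.0):
    print(t, {s: round(expected_payoff_for_a2(s, t), 2)
              for s in ("support", "withhold", "compromise")})
```

Under these assumed numbers, full trust makes supporting dominant, zero
trust leaves withholding no worse than anything else, and at T(2->1) =
0.5 the compromise beats both, which matches the intuition that
strategy 6 is the most likely outcome under partial trust.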

Kind regards,
Stefan

On Nov 14, 2007 10:13 PM, Pei Wang <[EMAIL PROTECTED]> wrote:

> Stefan,
>
> Though I agree with most of your analysis on inter-agent relationship,
> I don't share your conception of rationality.
>
> To me, "rationality" itself is relativistic, that is, what
> behavior/action is rational is always judged according to the
> assumptions and postulations on a system's goal, knowledge, resources,
> etc. There is no single "rationality" that can be used in all
> situations.
>
> Similar ideas have been argued by I.J. Good, H.A. Simon, and some others.
>
> In the context of AGI, AIXI is an important model of rationality, but
> not the only one. At least there are NARS and OSCAR, which are based
> on different assumptions about the system and its environment. Being
> impractical is not AIXI's only problem: as soon as one of its
> assumptions (infinite resources being only one of them) is dropped,
> its conclusions become inapplicable.
>
> Some people think "in theory" we should accept unrealistic
> assumptions, like infinite resources, since they lead to rigorous
> models; then, in implementation, the realistic restrictions (on
> resources etc.) can be introduced, which lead to approximations of the
> idealized model. What they fail to see is that when a new restriction
> is added, it may change the problem to the extent that the "ideal
> theory" becomes mostly irrelevant. To me, it is much better to start
> with more realistic assumptions in the first place, even though it
> will make the problem harder to solve.
>
> Pei
>
> On Nov 13, 2007 10:40 PM, Stefan Pernar <[EMAIL PROTECTED]> wrote:
> > Would be great if people could poke the following with their
> > metaphorical sticks:
> >
> > Imagine two agents A(i), each with a utility function F(i), a
> > capability level C(i), and no knowledge of the other agent's F and C
> > values. Both agents are given equal resources and are tasked with
> > devising the most efficient and effective way to maximize their
> > respective utility with those resources.
> >
> > Scenario 1: Both agents have identical utility functions, F(1) =
> > F(2), the same level of knowledge, cognitive complexity, and
> > experience - in short, capability C(1) = C(2) - and a high level of
> > mutual trust, T(1->2) = T(2->1) = 1. They will quickly agree on the
> > way forward, pool their resources, and execute their joint plan.
> > Rather boring.
> >
> > Scenario 2: Again we assume F(1) = F(2), but now C(1) > C(2), again
> > with T(1->2) = T(2->1) = 1. The more capable agent will devise a
> > plan; the less capable agent, trusting A(1), will contribute its
> > resources and execute the plan. A bit more interesting.
> >
> > Scenario 3: F(1) = F(2) and C(1) > C(2), but this time T(1->2) = 1
> > and T(2->1) = 0.5, meaning the less powerful agent assumes with a
> > probability of 50% that A(1) is in fact a self-serving optimizer
> > whose deviating plan will turn out to be detrimental to A(2), while
> > A(1) is certain that this is all just one big misunderstanding. The
> > optimal plan devised under scenario 2 will now face opposition from
> > A(2), although it would be in A(2)'s best interest to support it
> > with its resources in order to maximize F(2), while A(1) will see
> > A(2)'s objection as detrimental to maximizing their shared utility
> > function. Fairly interesting: based on a lack of trust and
> > differences in capability, each agent perceives the other agent's
> > plan as irrational from its respective point of view.
> >
> > Under scenario 3, both agents now have a variety of strategies at
> > their disposal:
> >
> > 1. Deny pooling of part or all of one's resources = "If we do not
> > do it my way, you can do it alone."
> > 2. Use resources to sabotage the other agent's plan = "I must stop
> > him with these crazy ideas!"
> > 3. Deceive the other agent in order to skew how the other agent
> > deploys strategies 1 and 2.
> > 4. Spend resources to explain the plan to the other agent = "OK -
> > let's help him see the light."
> > 5. Spend resources on self-improvement to understand the other
> > agent's plan better = "Let's have a closer look; the plan might not
> > be so bad after all."
> > 6. Strike a compromise to ensure a higher level of pooled resources
> > = "If we don't compromise, we both lose out."
> >
> > Number 1 is a given under scenario 3. Number 2 is risky,
> > particularly as it would further reduce trust on both sides if this
> > strategy were deployed and the other party found out; the same
> > holds for number 3. Number 4 seems like the way to go but may not
> > always work, particularly with large differences in C(i) among the
> > agents. Number 5 is a likely strategy given a fairly high level of
> > trust. Most likely, however, is strategy 6.
> >
> > Striking a compromise builds trust in repeated encounters and thus
> > promises less objection, and therefore a higher total payoff, the
> > next time around.
> >
> > Assuming the existence of an optimal path leading to the maximally
> > possible satisfaction of a given utility function, anything else
> > would be irrational. Such a maximally intelligent algorithm actually
> > exists in the form of Hutter's universal algorithmic agent AIXI. The
> > only problem is that executing this algorithm requires infinite
> > resources, making it rather impractical, as every decision will
> > always have to be made under resource constraints.
> >
> > Consequently, every decision will be irrational to the degree that
> > it differs from the unknowable optimal path that AIXI would produce.
> > Throw in a lack of trust and varying levels of capability among the
> > agents, and all agents will always have to adapt their plans and
> > strike a compromise based on the other agent's relativistic
> > irrationality, independent of their capabilities, in order to
> > minimize the other agent's objection cost and thus maximize their
> > respective utility functions.
> >
> > --
> > Stefan Pernar
> > 3-E-101 Silver Maple Garden
> > #6 Cai Hong Road, Da Shan Zi
> > Chao Yang District
> > 100015 Beijing
> > P.R. CHINA
> > Mobil: +86 1391 009 1931
> > Skype: Stefan.Pernar
>
>



-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=64931915-a43013
