Hi,

It follows that the AIXItl algorithm, applied to friendliness, would be
effectively more friendly than any other time- and space-bounded (t, l)
agent.

Personally I find that satisfying, in the sense that once
"compassion", "growth" and "choice" (or the classical "friendliness")
have been defined, an optimal algorithm will be available to achieve
the goal.

Best regards,

Stefan

Yes, but the problem is that AIXItl, in order to run effectively,
requires impractically massive amounts of space and time resources....

Also, the theorems about AIXItl show that AIXItl would (if given goal
G) be effectively better at achieving G than any other agent with the
same space and time resource limitations (t,l), **within a constant
factor**.  This constant factor may be large, and that matters....  So
the theorem really only tells you how the effectiveness of AIXItl
compares to the effectiveness of other systems as (t,l) becomes very
large ... it doesn't tell you much about the situation for specific,
realistic (t,l) values...
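To make the constant-factor point concrete: in Hutter's construction, AIXItl's
per-cycle runtime is on the order of t * 2^l (this gloss on where the overhead
comes from is mine, not a claim made above). A toy calculation shows how fast
that multiplicative "constant" explodes, which is why the asymptotic guarantee
says so little at realistic scales:

```python
# Toy illustration (my sketch, not from the thread): AIXItl's per-cycle
# time is of order t * 2^l, where l bounds the length of the candidate
# programs it enumerates.  The 2^l factor is a multiplicative slowdown
# relative to the best length-l agent -- "constant" in t, but enormous.
for l in (10, 50, 100):
    overhead = 2 ** l
    print(f"l = {l:3d}: slowdown factor 2^{l} = {overhead:.3e}")
```

Even at l = 100 (a very short program by any practical measure), the overhead
is around 10^30, so being "optimal within a constant factor" buys nothing at
realistic (t,l) values.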

In short: it's some pretty math with some conceptual evocativeness,
but not of any pragmatic value...

-- Ben

-------
AGIRI.org hosts two discussion lists: http://www.agiri.org/email
[singularity] = more general, [agi] = more technical
