>
> BEN>>>> [referring to Vlad's statement about AIXI's
> uncomputability] "Now now, it doesn't require infinite resources -- the
> AIXItl variant of AIXI only requires an insanely massive amount of
> resources, more than would be feasible in the physical universe, but not an
> infinite amount ;-) "
>
> ED>>>> So, from a practical standpoint, which is all I really care about,
> is it a dead end?
>

"Dead end" would be too strong IMO, though others might disagree.

However, the current form of AIXI-related math theory gives zero guidance
regarding how to make a practical AGI.  To get practical guidance out of
that theory would require some additional, extremely profound math
breakthroughs, radically different in character from the theory as it exists
right now.  This could happen.  I'm not counting on it, and I've decided not
to spend time working on it personally, as fascinating as the subject area
is to me.
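
For the curious: as I recall Hutter's analysis (take the exact form here as
my paraphrase, not a quote from the paper), the time-and-length-bounded
variant AIXItl considers all programs of length at most l, running each for
at most t steps per cycle, so its computation time per cycle grows roughly as

    t * 2^l

That is finite, but for any l large enough to encode interesting behavior,
2^l dwarfs the computational capacity of the physical universe -- which is
exactly why the theory, in its current form, offers no engineering guidance.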


>  Also, do you, or anybody else, know if Solomonoff (the only way I can
> remember the name is "Soul man on off", like Otis Redding with a microphone
> problem) Induction has the ability to deal with deep forms of non-literal
> similarity matching in its complexity calculations?  And if so, how?  And
> if not, isn't it brain dead?  And if it is brain dead, why is such a bright
> guy as Shane Legg spending his time on it?
>

Solomonoff induction is mentally all-powerful.  But it requires an infinite
amount of computational resources to achieve this ubermentality.
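
To make the "infinite resources" point concrete: Solomonoff's universal
prior weights every hypothesis by its description length.  In rough notation
(mine, for a universal prefix machine U):

    M(x)  =  SUM over { p : U(p) starts with x }  of  2^(-|p|)

i.e., the sum ranges over every program p whose output begins with the
observed sequence x, weighted by program length |p|.  Evaluating that sum
exactly means knowing which of infinitely many programs halt and what they
print -- the halting problem -- so no physically realizable system can
compute it, only approximate it.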

-- Ben G
