I think my criticism of Hutter's theorem may not have been that
strong.  I do think that Hutter's theorem may shed some light on why
the problem is difficult.  More importantly, it helps us to think
outside the box.  For instance, it might be the case that an effective
AI program cannot be completely defined.  It might need to be
constantly changing, so that the program itself could never be pinned
down as a fixed definition.  I am not saying that is the case, just
that it is a possibility.

But in one sense, a general AI program is typically not going to
halt.  It just keeps going until someone shuts it off.  So perhaps the
halting problem is a fly in the ointment.  On the other hand, the
halting problem does hinge on the question of whether a function can
be defined, and that issue is most definitely relevant to the problem.
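
To see the connection, recall the classic diagonalization argument.
Here is a minimal sketch of it in Python; halts() is a hypothetical
oracle, and the whole point of the argument is that no such function
can actually exist:

    def halts(program, data):
        # Hypothetical oracle: returns True if program(data)
        # eventually halts, False otherwise.  The construction
        # below shows this function cannot be written.
        ...

    def contrary(program):
        # Do the opposite of what the oracle predicts: if the
        # oracle says program halts on its own source, loop
        # forever; otherwise, halt immediately.
        if halts(program, program):
            while True:
                pass
        return

    # Feeding contrary to itself gives the contradiction:
    # contrary(contrary) halts if and only if it does not halt,
    # so halts() cannot exist.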

Whether or not an effective AGI program can be defined is not a
feasible present-day computational problem, and in that sense the
halting problem is relevant.  Deciding whether an AGI program is
feasible is a problem for a higher intelligence, not for present-day
computer intelligence.
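
For reference, the notion of "simple" that Matt describes below is
Kolmogorov complexity, and the prior that Vladimir credits to
Solomonoff can be written alongside it.  These are the standard
textbook definitions (U is a fixed universal Turing machine), not
anything taken from either post:

    K(x) = min { |p| : U(p) = x }
        (length of the shortest program that outputs x)

    M(x) = sum of 2^(-|p|) over programs p whose output begins with x
        (Solomonoff's algorithmic prior, which favors short programs)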

Jim Bromer



On Mon, Dec 1, 2008 at 2:38 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Mon, Dec 1, 2008 at 8:04 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>>
>> The value of AIXI is not that it solves the general intelligence problem, 
>> but rather
>> it explains why the problem is so hard.
>
> It doesn't explain why it's hard (is impossible "hard"?).  That you
> can't solve a problem exactly doesn't mean that there is no simple,
> satisfactory solution.
>
>
>> It also justifies a general principle that is
>> already used in science and in practical machine learning algorithms:
>> to choose the simplest hypothesis that fits the data. It formally defines
>> "simple" as the length of the shortest program that outputs a description
>> of the hypothesis.
>
> It's Solomonoff's universal induction, a much earlier result.  Hutter
> generalized Solomonoff's induction to decision-making and proved some
> new results, but the idea of a simplicity prior over hypotheses and
> the proof that it does well at learning are Solomonoff's.
>
> See ( http://www.scholarpedia.org/article/Algorithmic_probability )
> for introduction.
>
> --
> Vladimir Nesov
> [EMAIL PROTECTED]
> http://causalityrelay.wordpress.com/

