On Sun, Dec 7, 2008 at 7:59 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> I think my criticism of Hutter's theorem may not have been that
> strong.  I do think Hutter's theorem may shed some light on why the
> problem is difficult.  More importantly, it helps us think outside
> the box.  For instance, it might be the case that an effective AI
> program cannot be completely defined: it might need to be constantly
> changing, so that the program itself can never be pinned down.  I am
> not saying that is the case, just that it is a possibility.
>
> But, in one sense, a general AI program is not typically going to
> halt: it just keeps going until someone shuts it off.  So perhaps the
> halting problem is a fly in the ointment.  On the other hand, the
> halting problem hinges on whether a function can be defined, and that
> issue is most definitely relevant to the problem.
>
> Whether or not an effective AGI program can be defined is not a
> feasible present-day computational problem, and in that sense the
> halting problem is relevant.  The question of whether an AGI program
> is feasible is a problem for a higher intelligence, not for
> present-day computer intelligence.
>
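The halting-problem point in the quoted text can be made concrete. Halting is only semi-decidable: you can confirm that a program halts by running it to completion, but within any finite step budget you can never distinguish "runs forever" from "merely slow". A minimal Python sketch (the function names here are illustrative, not from the thread):

```python
def run_bounded(program, arg, max_steps):
    """Step a generator-based `program` on `arg` for at most `max_steps`
    steps.  Returns 'halts' if it finishes in time, 'unknown' otherwise."""
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)          # advance the program by one step
        except StopIteration:  # the program finished: halting is confirmed
            return "halts"
    return "unknown"           # may halt later -- or may run forever

def countdown(n):
    # Halts for any n >= 0.
    while n > 0:
        n -= 1
        yield

def spin(_):
    # Never halts.
    while True:
        yield

print(run_bounded(countdown, 5, 100))  # 'halts'
print(run_bounded(spin, None, 100))    # 'unknown' -- no budget settles it
```

This is why a non-halting AGI is not automatically exempt from the issue: the undecidable question is not "does it run forever?" for one program you are watching, but whether any total procedure can answer that question for all programs, and no step budget, however large, turns "unknown" into a proof of non-halting.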

Was this text even supposed to be coherent?

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/