On Thu, May 29, 2008 at 8:08 PM, John G. Rose <[EMAIL PROTECTED]> wrote:
>
> If an agent is shielded from memories about the processes going on in its
> own mind that are related to itself, if it is unaware of itself within its
> environment is it impossible for it to learn? Does it have to know about
> itself at all to be intelligent? Why enable consciousness within an
> intelligent system when it just needs to do knowledge work and not be
> concerned with itself and related goals such as self-preservation,
> self-improvement, etc.?
>

And if a spherical cow is suspended in a vacuum for 1 billion years,
will it dream of intelligent robots? I have a distinct impression that
people participating in this thread want to shroud themselves and
others in opaque and hopeless mystery, instead of trying to find the
right answers to the right questions.

Note that I didn't use the "self" thingie at all in my previous comment.

-- 
Vladimir Nesov
[EMAIL PROTECTED]


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
