Terren,

> I don't think any kind of algorithmic approach, which is to say, un-embodied, 
> will ever result in conscious intelligence. But an embodied agent that is 
> able to construct ever-deepening models of its experience such that it 
> eventually includes itself in its models, well, that is another story.

I don't see why an un-embodied system couldn't successfully use the
concept of self in its models. It's just another concept, except that
it's linked to real features of the system.

> We may argue about whether consciousness (mindfulness) is necessary for 
> general intelligence. I think it is, and that informs much of my perspective.

General intelligence can IMO be demonstrated even when the system
under evaluation doesn't [ATM] understand particular concepts like
"self", and even if it doesn't [ATM] have the ability to perceive a
relationship between itself and its actual environment (= stuff often
associated with consciousness). In fact, it can know relatively
little. Let's say I need to cut bread but don't have a knife. I
only have a few other tools, one of which (let's call it "T2") has
parameters similar to a knife's. Even though this particular AGI has
never heard of any of those other tools being used for cutting bread
(and is not self-aware in any sense), it can still (when asked for
advice) make a reasonable suggestion to try "T2" (because of the
similarity) = coming up with a novel idea and demonstrating general
intelligence.
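The kind of suggestion I have in mind can be sketched in a few lines of
code. This is only a toy illustration, not a claim about how an AGI would
work: the tool names, the feature table, and the overlap-based similarity
measure are all made up for the example, and a real system would learn
such representations rather than read them from a hand-coded table.

```python
# Toy sketch: suggest a substitute for a missing tool by feature similarity.
# All tool names and feature values below are hypothetical.

def similarity(a, b):
    """Fraction of shared features on which two tool descriptions agree."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[f] == b[f] for f in shared) / len(shared)

# The missing tool and the tools actually available (hypothetical features).
KNIFE = {"has_edge": True, "rigid": True, "graspable": True, "flat": True}
TOOLS = {
    "T1 (hammer)":  {"has_edge": False, "rigid": True,  "graspable": True, "flat": False},
    "T2 (spatula)": {"has_edge": True,  "rigid": True,  "graspable": True, "flat": True},
    "T3 (rope)":    {"has_edge": False, "rigid": False, "graspable": True, "flat": False},
}

def suggest_substitute(target, tools):
    """Return the available tool whose description is most similar to the target."""
    return max(tools, key=lambda name: similarity(tools[name], target))

print(suggest_substitute(KNIFE, TOOLS))  # T2 wins: it matches the knife on every feature
```

The point is only that a novel, reasonable suggestion ("try T2") falls out
of a similarity comparison over the system's existing knowledge, with no
self-model anywhere in the loop.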

Regards,
Jiri Jelinek


-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/