Jiri,

I think you're coming from a perspective that doesn't consider, or doesn't 
care about, the prospect of a conscious intelligence: an awake being capable 
of self-reflection and free will (or at least the illusion of it).

I don't think any kind of algorithmic approach, which is to say an unembodied 
one, will ever result in conscious intelligence. But an embodied agent that is 
able to construct ever-deepening models of its experience, such that it 
eventually includes itself in those models - well, that is another story. I 
think, by the way, that this is a valid description of humans.
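
To make that concrete, here's a rough toy sketch of the loop I have in mind 
(purely illustrative Python; every name in it - WorldModel, Agent, observe, 
act - is made up for this email, not taken from any real system, and a real 
self-model would of course be vastly richer):

class WorldModel:
    """A trivial 'model' that just accumulates observed facts."""
    def __init__(self):
        self.facts = {}

    def update(self, key, value):
        self.facts[key] = value

    def predict(self, key):
        return self.facts.get(key)

class Agent:
    def __init__(self):
        self.model = WorldModel()

    def observe(self, environment):
        # Deepen the model of external experience.
        for key, value in environment.items():
            self.model.update(key, value)
        # The step I care about: the agent folds a description of itself
        # (here, just a summary of its own modeling activity) back into
        # the very same model it uses for everything else.
        self.model.update("self", {"facts_known": len(self.model.facts)})

    def act(self):
        # Decisions can now refer to the self-model as well as the world.
        me = self.model.predict("self")
        return "explore" if me["facts_known"] < 10 else "reflect"

agent = Agent()
agent.observe({"light": "on", "door": "closed"})
print(agent.act())  # "explore" until the model (self-entry included) grows

The shape of the loop, not the toy details, is the point: the self shows up 
as just another thing the agent models.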

We may argue about whether consciousness (mindfulness) is necessary for general 
intelligence. I think it is, and that informs much of my perspective. When I 
say something like "mindless automaton", I'm implicitly suggesting that it 
won't be intelligent in a general sense, although it could be in a narrow sense 
(like a chess program).

Terren


--- On Thu, 8/28/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> From: Jiri Jelinek <[EMAIL PROTECTED]>
> Subject: Re: [agi] How Would You Design a Play Machine?
> To: agi@v2.listbox.com
> Date: Thursday, August 28, 2008, 10:39 PM
> Terren,
> 
> > is not embodied at all, in which case it is a mindless automaton
> 
> Researchers and philosophers define mind and intelligence in many different
> ways, so their classifications of particular AI systems differ. What really
> counts, though, are the problem-solving abilities of the system, not how
> it's labeled according to a particular definition of mind.
> 
> > So much talk about Friendliness implies that the AGI will have no ability
> > to choose its own goals.
> 
> Developer's choice. My approach:
> Main goals - definitely not;
> Sub-goals - sure, though with restrictions.
> 
> > It seems that AGI researchers are usually looking to create clever slaves.
> 
> We are talking about our machines. What else are they supposed to be?
> 
> > clever slaves. That may fit your notion of general intelligence, but not
> > mine.
> 
> To me, general intelligence is a cross-domain ability to gain knowledge in
> one context and correctly apply it in another [in terms of problem solving].
> The source of the primary goal(s) (/problem(s) to solve) doesn't, from my
> perspective, have anything to do with the level of the system's intelligence.
> It doesn't make it more or less intelligent. It's just a separate thing. The
> system gets the initial goal [from whatever source] and *then* it's time to
> apply its intelligence.
> 
> Regards,
> Jiri Jelinek
> 
> 