> Ben> My feeling on dog-level intelligence is that the *cognition*
> aspects of
> dog-level intelligence are really easy, but the perception and action
> components are significantly difficult and subtle.
>
> 1.) I thought it was well established by now that perception & action
> are *integral* aspects of human and/or dog-level intelligence (as
> appropriate to AGI). You can dramatically reduce the
> 'resolution'/ capacity
> (especially for proof-of-concept prototypes), but AGI without them seems
> misguided.

Yes, I agree with this statement.  I didn't mean to imply otherwise.  The
first-version Novamente will have perception and action mechanisms, but its
sensors and actuators will be much simpler than those of a dog.

> 2.) If 'dog-level intelligence' is so simple, why has no-one come near to
> achieving it? I see it as a crucial sub-set of higher-level intelligence
> (and thus our shared AGI ambitions).

I didn't say "dog-level intelligence" was simple, I said "dog-level
cognition" was simple.

But it is defined in terms of dog-level perception and action, which are not
simple.

>
> Ben> In other words, once a dog's brain has produced abstract patterns not
> tied to particular environmental stimuli, the stuff it does with these
> patterns is probably not all that fancy.  But the dog's brain is
> really good
> at recognizing and enacting complex patterns, and doing this recognizing &
> enacting in a coordinated way.
>
> Producing abstract patterns from stimuli and being 'really good at
> recognizing and enacting complex patterns' is a core AGI requirement - the
> basis for all higher-level ability.

I agree that it is a core AGI requirement, but not that it is the basis for
all higher-level ability.  To me, it is one among several foundational
abilities needed for higher-level cognitive ability.

>
> Ben> Peter Voss's (www.adaptiveai.com) approach to AI aims to emulate
> biological evolution on Earth, in the sense that it wants to start with a
> dog-level brain (very roughly speaking) and then incrementally build more
> cognition on top of this.  This is a reasonable approach, to be sure.
>
> I would not characterize our approach as 'emulating biological
> evolution'. I
> believe that roughly dog-level intelligence is the right level to aim at,
> because it includes much of the fundamental cognition needed for AGI while
> eliminating the 'distractions' of language, abstract thinking, and formal
> logic. (I see these as inappropriate problems to focus on at this stage -
> especially if they are *not* perception-based. The cart before the horse - for

OK, sorry if I mis-phrased.  Your approach is closer to biological evolution
than the Novamente approach, but that doesn't mean it "emulates biological
evolution" in a strict sense; my wording was probably too strong.

I don't think that non-perception-based abstract thinking is a distraction
for AI development.  I think that human abstract thought is more closely
tied to perception and action than abstract thought really has to be.

> Ben> But if I had to make a guess, I'd say this approach should probably
> begin with robotics, with real sensors and actuators and a system embodied
> in a real physical environment.  I am skeptical that simplistic simulated
> worlds
> provide enough richness to support development of robust dog-level
> intelligence... as perception and action oriented as dog
> intelligence is...
>
> I don't see any problem with using virtual environments for testing &
> proving basic abilities - one can get an enormous amount of
> complexity out
> of them these days. But in any case, our framework (and actual testing)
> seamlessly integrates virtual and real-world perception/ action.

I'm a bit skeptical of purely virtual worlds, in terms of the degree of
richness of experience they offer an AI system.  But then, this complaint
may not hold in 5 years time, what with the pace of progress...

However, today it's quite possible to put together a pretty cheap testing
framework involving manipulable camera eyes, robot arms, remote-controlled
buggies and the like -- and this is moving toward what I'd consider a viable
degree of richness, for a perception/action-centric approach to AGI
development....  These things are all surprisingly affordable these days --
hooking them up to PCs is a pain but not a huge expense and doesn't require
much special expertise.

I can see how your A2I2 testing framework could be made in such a way as to
support such doohickeys when you feel they're useful in your testing
process.

So I think we probably don't disagree on this aspect of things, just on the
more fundamental issue of the value of work on abstract cognition that is
not founded directly on perception/action.

My view is that you're taking a bio-morphic perspective on the mind, whereas
by envisioning a form of abstract cognition different from
strongly perception/action-derived abstract cognition, I'm looking to create
a more novel and fundamentally digital form of mind.

-- Ben

