Derek,

Thanks for your thoughts. Very welcome.

It’s impossible to reply properly in a brief note, which is all I have time for 
now. But there is a coherent philosophy behind the ideas I’m putting forward, 
and I’m very confident it provides the broad way forward for developmental 
robotics and AGI robotics.

The part you’re referring to is that graphics/icons/fluid schemas (of which 
there are myriad external physical examples in our culture) are the basic 
form of concepts/language. They are not just lines/outlines on a page or in 
the brain’s processing, but simultaneously lines/outlines of action and 
thought. GO TO THE KITCHEN is represented in the brain in the form of outlines, 
which then form the basis for an AGI/robot to pursue a line of action. Study 
how we graphically represent these concepts (study pictograms, ideograms and 
diagrams) and you get an idea of how the brain must work.

Re the order of development, we have to realise that reflective thought (which 
is what most AI-ers begin with) is something that comes much later in 
development. First we have to build a robot that can act in the world; a 
robot that could reflect on, let alone discuss, its past actions is a target 
wa-a-a-a-y into the future. The general assumption in AI is that intelligence 
begins with intellectual reflection. No, it begins with physical action, and 
with solving the immediate practical problems involved. Intellectual reflection 
can only flow from that, not the other way round, and the whole of evolution is 
testimony to this.

From: Derek Zahn 
Sent: Thursday, January 24, 2013 4:38 PM
To: AGI 
Subject: RE: [agi] The many different types of embodiment

Personally, I think this (along with epigenetic robotics) has the most promise 
of any academic AI study area.  I certainly don't think it is the only way to 
achieve significant success; my preference for this kind of approach is purely 
pragmatic.  I'm sure it is possible to hand-code a sufficiently broad and 
robust foundational concept set to serve as the basis for general-purpose 
conceptual modeling of the world, but it seems really hard to me, since those 
basic concepts appear to be quite complicated and not very penetrable by 
introspection.

And if a conceptual system needs to be learned nearly "from scratch", the 
question is how.  I think a simulated world could work fine, but coming up with 
a simulator providing the richness of detail needed seems way harder than 
building a decent robot -- which is not to say that building good robots is 
easy -- just that high-fidelity virtual reality is really hard.

Learning from scratch using just text might be possible, but I don't see how to 
do it.  The "grounding problem" is a glib reason given for why, but it pays to 
be more specific than that... So much human thought is based on metaphorical 
projection of spatial concepts, which we acquire from interaction with physical 
objects in physical space and from their causal interactions -- and those 
spatial concepts seem to make sense only with respect to world-modeling methods 
that are quite different from the logical (etc.) abstract world models we build 
on top of the physical ones.

It is kind of mysterious to me why this works so well... Why should useful 
abstract conceptual domains map so well onto concrete physical conceptual 
domains?  I call this "the unreasonable effectiveness of metaphor", and hope 
someday to understand it.  A lot of it comes from common properties of 
causality; a lot comes from the also somewhat mysterious effectiveness of 
mathematics; and a lot comes from the way we cherry-pick abstractions we are 
capable of effectively thinking about in those terms... But there is a deeper 
issue here too, though it is rather orthogonal to AGI per se.

My own slow and intermittent work involves exploring these types of issues so 
you could probably put me in the developmental robotics camp, although I find 
such research to be fraught with the same potential for methodological 
boneheadedness as other approaches to AI -- it's just harder to see them in 
foresight than in hindsight.  When reading papers about this work, be vigilant 
for oversimplifications and for unaddressed questions of "ability to scale" 
(see e.g. Subsumption for past failures in sexy robotics efforts that 
overlooked or hand-waved on such questions), as well as the usual problems of 
researchers (subconsciously) filling in more than is really there, and 
avoiding or downplaying the difficult questions.

So I'm building a robot platform... but so far I have just been assembling and 
upgrading a CNC machine shop to use for that purpose.

From a software perspective, I am designing and coding visual and spatial 
modalities that will hopefully be reasonably congruent with the ones in our 
brains and bodies.  Dev Robotics people sometimes say things like:

"... the kinds of categories and concepts a robot may develop through its 
interactions with the environment are likely to be quite different from our 
own." (from the paper you linked).

But that seems like a poor attitude to me; such differences should IMO be 
minimized to the extent possible if we want a "robot baby" to grow up capable 
of effectively absorbing knowledge from our vast cultural ideosphere (Internet 
etc), which is expressed in idiosyncratically human idioms.  That knowledge 
base took a hundred billion human lifetimes to build (both world knowledge 
and, more importantly, useful ways to think), and if we expect an AGI to 
rediscover all of it on its own... well, that seems rather ambitious to me.

It will probably be at least another year before I have this sensory-robotic 
platform built and coded -- and I expect I will need a similar development 
effort devoted to "low level" motor systems, if for no other reason than the 
apparent link in humans between those motor systems and abstract thought 
processes involving action (procedures, planning, etc, and maybe even "events").

Most of my limited "thinking" and reading time is spent on two areas, which the 
platform is supposed to help me explore:

1) What do "concepts" need to do, and how do they arise from experience / 
training?  Blending, categorizing, etc, etc -- we have a distressingly large 
number of cognitive processes involved with building and using concepts... What 
commonalities do they share?  What clues have we found in the brain?  How could 
the relatively short evolutionary path from non-generally-intelligent animals 
to us have created so many uniquely human mental abilities?  And, related, 
which concept-related features are present in other animals?  Stuff like 
that.... Call it a requirements specification for a conceptual modeling machine.

2) How do percepts lead to concepts?  Especially, how does embodied experience 
lead to learning the basic concept inventory (image schemas and similar 
concepts)?  A fair bit has been written about this, but not much that is 
specific and coherent.

Mike, as you can see, I'm sympathetic to many of your basic opinions about what 
is important, and I wish that instead of attacking "AGIers" with annoying 
insults and tossing out neologisms-of-the-week, you would put together a few 
details about what you think.  "Icons are fluid" doesn't say as much as you 
seem to think...

Anyway, this went on way too long for anybody to actually read I'm sure, but I 
think it's nice when folks on this list write a little bit about what they are 
working on once in a while...

Derek Zahn


--------------------------------------------------------------------------------
From: [email protected]
To: [email protected]
Subject: Re: [agi] The many different types of embodiment
Date: Thu, 24 Jan 2013 09:32:40 +0000


Wasn’t saying it’s new, just that it would be nice to have some comments on it 
and related disciplines, and for people to stop considering “developmental AI” 
(as opposed to developmental robotics) as a possibility: it is a totally 
outdated, impossible fantasy.

From: Piaget Modeler 
Sent: Thursday, January 24, 2013 1:36 AM
To: AGI 
Subject: RE: [agi] The many different types of embodiment

What is new about developmental robotics (or developmental AI, for that matter)?

Nothing new there.  To me it is the goal.

~PM


------------------------------------------------------------------------------------------------------------------------------------------------

> From: [email protected]
> To: [email protected]
> Subject: Re: [agi] The many different types of embodiment
> Date: Wed, 23 Jan 2013 23:37:52 +0000
> 
> Doesn't sound any clearer.
> 
> How about following developmental robotics rather than evo devo universe? 
> That seems to be the field inspiring a lot of European research.
> 
> http://www.psy.cmu.edu/~rakison/meedenconnsci06.pdf
> http://rossdawsonblog.com/weblog/archives/2011/03/developmental-robotics-the-cute-baby-robot-who-will-grow-up-to-be-just-like-you.html
> 
> I esp. liked the premise of the above paper:
> 
> "we shall eliminate direct programming from consideration; it is still the
> most effective method for solving a particular task quickly, but seems
> unlikely ever to lead to open-ended, general-purpose behaviour"
> 
> Any comments on this and related fields? Epigenetic robotics et al?
> 


      AGI | Archives  | Modify Your Subscription   




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
