Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 8:01 AM, Derek Zahn derekz...@msn.com wrote:

  Ben:

  Right.  My intuition is that we don't need to simulate the dynamics
  of fluids, powders and the like in our virtual world to make it adequate
  for teaching AGIs humanlike, human-level AGI.  But this could be
  wrong.

 I suppose it depends on what kids actually learn when making cakes,
 skipping rocks, and making a mess with play-dough.  Some might say that if
 they get conservation of mass and newton's law then they skipped all the
 useless stuff!



OK, but those "some" probably don't include any preschool teachers or
educational theorists.

That hypothesis is completely at odds with my own intuition from having
raised 3 kids and spent probably hundreds of hours helping out in daycare
centers, preschools, kindergartens, etc.

Apart from naive physics, which is rather well-demonstrated not to be
derived in the human mind/brain from basic physical principles, there is a
lot of learning about planning, scheduling, building, cooperating ...
basically, all the stuff mentioned in our AGI Preschool paper.

Yes, you can just take a robo-Cyc type approach and try to abstract, on
your own, what is learned from preschool activities and code it into the AI:
code in Newton's laws, axiomatic naive physics, planning algorithms, etc.
My strong prediction is you'll get a brittle AI system that can at best be
tuned into adequate functionality in some rather narrow contexts.



 But in the case where we are trying to roughly follow stages of human
 development with goals of producing human-like linguistic and reasoning
 capabilities, I very much fear that any significant simplification of the
 universe will provide an insufficient basis for the large sensory concept
 set underlying language and analogical reasoning (both gross and fine).
 Literally, I think you're throwing the baby out with the bathwater.  But, as
 you say, this could be wrong.



Sure... that can't be disproven right now, of course.

We plan to expand the paper into a journal paper where we argue against this
obvious objection more carefully -- basically arguing why the virtual-world
setting provides enough detail to support the learning of the critical
cognitive subcomponents of human intelligence.  But, as with anything in
AGI, even the best-reasoned paper can't convince a skeptic.




 It's really the only critique I have of the AGI preschool idea, which I do
 like because we can all relate to it very easily.  At any rate, if it turns
 out to be a valid criticism the symptom will be that an insufficiently rich
 set of concepts will develop to support the range of capabilities needed and
 at that point the simulations can be adjusted to be more complete and
 realistic and provide more human sensory modalities.  I guess it will be
 disappointing if building an adequate virtual world turns out to be as
 difficult and expensive as building high quality robots -- but at least it's
 easier to clean up after cake-baking.


Well, it's completely obvious to me, based on my knowledge of virtual worlds
and robotics, that building a high quality virtual world is orders of
magnitude easier than making a workable humanoid robot.

*So* much $$ has already been spent on humanoid robotics by large, rich and
competent companies, and they still suck.  It's just a very hard problem,
with a lot of very hard subproblems, and it will take a while to get worked
through.

On the other hand, making a virtual world such as I envision is more than a
spare-time project, but not more than the project of making a single
high-quality video game.  It's something that any one of these big Japanese
companies could do with a tiny fraction of their robotics budgets.  The
issue is a lack of perceived cool value and a lack of motivation.

Ben



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel

 It's an interesting idea, but I suspect it too will rapidly break down.
 Which activities can be known about in a rich, better-than-blind-Cyc way
 *without* a knowledge of objects and object manipulation? How can an agent
 know about reading a book, for example, if it can't pick up and manipulate a
 book? How can it know about adding and subtracting, if it can't literally
 put objects on top of each other, and remove them?  We humans build up our
 knowledge of the world's objects/physics from infancy.  Science also
 insists that all formal scientific knowledge of the world - all scientific
 disciplines - must ultimately be physics/objects-based.  Is there really an
 alternative?


And  just to be clear: in the AGI Preschool world I envision, picking up and
manipulating and stacking objects, and so forth, *would* be possible.  This
much is not hard to achieve using current robot-simulator tech.
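Mike's adding-and-subtracting example can be made concrete with a trivial sketch: "adding" is putting blocks on a table, "subtracting" is taking them off, and number is just what the agent counts there. (Illustrative only, not a claim about any particular AGI design.)

```python
# Toy grounding of arithmetic in object manipulation: counts emerge from
# stacking and removing blocks rather than from coded-in axioms.

class Table:
    def __init__(self):
        self.stack = []

    def put(self, block):
        self.stack.append(block)

    def remove(self):
        return self.stack.pop()

    def count(self):
        return len(self.stack)

table = Table()
for block in ["red", "blue", "green"]:   # put down 3 blocks...
    table.put(block)
for block in ["yellow", "white"]:        # ...then 2 more
    table.put(block)
print(table.count())                     # 5: "3 + 2" as a fact about objects
table.remove()
print(table.count())                     # 4: "5 - 1" likewise
```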

ben





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
I agree, but the good news is that game dev advances fast.

So, my plan with the AGI Preschool would be to build it in an open platform
such as OpenSim, and then swap in better and better physics engines as they
become available.

Some current robot simulators use ODE and this seems to be good enough to
handle a lot of useful robot-object and object-object interactions, though I
agree it's limited.

Still, making a dramatically better physics engine -- while a bunch harder
than making a nice AGI preschool using current virtual worlds and physics
engines -- is still a way, way easier problem than making a highly
functional (in terms of sensors and actuators) humanoid robot.
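The swap-in-a-better-engine plan amounts to coding the preschool world against an engine-neutral interface. A minimal sketch of the idea (the interface and the toy fallback engine are both hypothetical, not OpenSim's or ODE's actual API):

```python
from abc import ABC, abstractmethod

class PhysicsEngine(ABC):
    """Engine-neutral interface the preschool world could code against,
    so a better engine can be swapped in later (hypothetical API)."""

    @abstractmethod
    def step(self, dt: float) -> None: ...

    @abstractmethod
    def body_position(self, body_id: int) -> tuple: ...

class ToyEngine(PhysicsEngine):
    """Stand-in implementation: point masses falling onto a ground plane."""
    GRAVITY = -9.81

    def __init__(self):
        self.bodies = {}          # body_id -> [height, vertical velocity]

    def add_body(self, body_id, y=1.0):
        self.bodies[body_id] = [y, 0.0]

    def step(self, dt):
        for state in self.bodies.values():
            state[1] += self.GRAVITY * dt   # semi-implicit Euler step
            state[0] += state[1] * dt
            if state[0] < 0.0:              # crude, inelastic ground contact
                state[0], state[1] = 0.0, 0.0

    def body_position(self, body_id):
        return (self.bodies[body_id][0],)

engine = ToyEngine()             # later: swap in a better engine here
engine.add_body(1, y=2.0)
for _ in range(1000):            # 10 simulated seconds at dt=0.01
    engine.step(0.01)
print(engine.body_position(1))   # the dropped body has settled at 0.0
```

The point is only that the world code above the interface never changes when the engine underneath does.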

Also, the advantages of working in a virtual rather than physical world
should not be overlooked.  The ability to run tests over and over again, to
freely vary parameters and so forth, is pretty nice ... also the ability to
run 1000s of tests in parallel without paying humongous bucks for a fleet of
robots...

ben

On Sat, Dec 20, 2008 at 8:43 AM, Derek Zahn derekz...@msn.com wrote:


 Oh, and because I am interested in the potential of high-fidelity physical
 simulation as a basis for AI research, I did spend some time recently
 looking into options.  Unfortunately the results, from my perspective, were
 disappointing.

 The common open-source physics libraries like ODE, Newton, and so on, have
 marginal feature sets and frankly cannot scale very well performance-wise.
 Once I even did a little application whose purpose was to see whether a
 human being could learn to control an ankle joint to compensate for an
 impulse event and stabilize a simple body model (that is, to make it not
 fall over) by applying torques to the ankle.  I was curious to see (through
 introspection) how humans learn to act as process controllers.
 http://happyrobots.com/anklegame.zip for anybody bored enough to care.  It
 wasn't a very good test of the question so I didn't really get a
 satisfactory answer.  I did discover, though, that a game built around more
 appealing cases of the player learning to control physics-inspired processes
 could be quite absorbing.

 Beyond that, the most promising avenue seems to be physics libraries tied
 to graphics hardware being worked on by the hardware companies to help
 sell their stream processors.  The best example is Nvidia, who bought PhysX
 and ported it to their latest cards, giving a huge performance boost.  Intel
 has bought Havok and I can only imagine that they are planning on using that
 as the interface to some Larrabee-based physics engine.  I'm sure that ATI
 is working on something similar for their newer (very impressive) stream
 processing cards.

 At this stage, though, despite some interesting features and leaping
 performance, it is still not possible to do things like get realistic sensor
 maps for a simulated soft hand/arm, and complex object modifications like
 bending and breaking are barely dreamed of in those frameworks.  Complex
 multi-body interactions (like realistic behavior when dropping or otherwise
 playing with a ring of keys or realistic baby toys) have a long ways to go.

 Basically, I fear those of us who are interested in this are just waiting
 to ride the game development coattails and it will be a few years at least
 until performance that even begins to interest me will be available.

 Just my opinions on the situation.
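Derek's ankle game is essentially an inverted-pendulum stabilization problem; a toy version, with a PD controller standing in for the human player (every constant here is made up for illustration):

```python
import math

# Inverted pendulum ("body") kept upright by ankle torque after an impulse.
g, leg, mass = 9.81, 1.0, 70.0      # gravity, leg length (m), body mass (kg)
kp, kd = 2000.0, 400.0              # hand-tuned feedback gains

theta, omega = 0.0, 0.6             # the impulse event: sudden angular velocity
dt = 0.001
for _ in range(5000):               # 5 simulated seconds
    torque = -kp * theta - kd * omega                        # the controller
    alpha = (g / leg) * math.sin(theta) + torque / (mass * leg**2)
    omega += alpha * dt
    theta += omega * dt

print(abs(theta) < 0.05)            # True: the body did not fall over
```

With the torque term zeroed out, the same loop tips over immediately, which is the whole game.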





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn


 Some might say that if they get conservation of mass 
 and newton's law then they skipped all the useless stuff!
 OK, but those some probably don't include any preschool 
 teachers or educational theorists. That hypothesis is completely at odds 
 with my own intuition 
 from having raised 3 kids and spent probably hundreds of hours 
 helping out in daycare centers, preschools, kindergartens, etc.
 
Sorry, that was just kind of a joke.  Probably nobody actually holds the opinion 
I was lampooning, though I do see similar things said sometimes - as if inferring 
minimum-description-length root-level reductionisms were a realistic approach to 
learning to deal with the world.  It might even be true, but the humor was 
supposed to come from juxtaposing that idea with the AGI preschool.




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 Well, it's completely obvious to me, based on my knowledge of virtual worlds
 and robotics, that building a high quality virtual world is orders of
 magnitude easier than making a workable humanoid robot.

I guess that depends on what you mean by "high quality" and
"workable". Why does a robot have to be humanoid, BTW? I'd like a
robot that can make me a cup of tea, I don't particularly care if it
looks humanoid (in fact I suspect many humans would have less
emotional resistance to a robot that didn't look humanoid, since it's
more obviously a machine).

 On the other hand, making a virtual world such as I envision, is more than a
 spare-time project, but not more than the project of making a single
 high-quality video game.

GTA IV cost $5 million, so we're not talking about peanuts here.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn

Ben: Right.  My intuition is that we don't need to simulate the dynamics of 
fluids, powders and the like in our virtual world to make it adequate for 
teaching AGIs humanlike, human-level AGI.  But this could be wrong.I suppose 
it depends on what kids actually learn when making cakes, skipping rocks, and 
making a mess with play-dough.  Some might say that if they get conservation of 
mass and newton's law then they skipped all the useless stuff!
 
I think I agree with the plausibility of something you have said many times:  
that there may be many paths to AGI that are not similar at all to human 
development -- abstract paths to modelling the universe, teasing meaning from 
sheer statistics of the Chinese/Chinese dictionary of the raw HTML internet, 
who knows what.
 
But in the case where we are trying to roughly follow stages of human 
development with goals of producing human-like linguistic and reasoning 
capabilities, I very much fear that any significant simplification of the 
universe will provide an insufficient basis for the large sensory concept set 
underlying language and analogical reasoning (both gross and fine).  Literally, 
I think you're throwing the baby out with the bathwater.  But, as you say, this 
could be wrong.
 
It's really the only critique I have of the AGI preschool idea, which I do like 
because we can all relate to it very easily.  At any rate, if it turns out to 
be a valid criticism the symptom will be that an insufficiently rich set of 
concepts will develop to support the range of capabilities needed and at that 
point the simulations can be adjusted to be more complete and realistic and 
provide more human sensory modalities.  I guess it will be disappointing if 
building an adequate virtual world turns out to be as difficult and expensive 
as building high quality robots -- but at least it's easier to clean up after 
cake-baking.
 
 




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 10:44 AM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  Well, it's completely obvious to me, based on my knowledge of virtual
 worlds
  and robotics, that building a high quality virtual world is orders of
  magnitude easier than making a workable humanoid robot.

 I guess that depends on what you mean by high quality and
 workable. Why does a robot have to be humanoid, BTW? I'd like a
 robot that can make me a cup of tea, I don't particularly care if it
 looks humanoid (in fact I suspect many humans would have less
 emotional resistance to a robot that didn't look humanoid, since it's
 more obviously a machine).



It doesn't have to be humanoid ... but apart from rolling instead of
walking,
I don't see any really significant simplifications obtainable from making it
non-humanoid.

Grasping and manipulating general objects with robot manipulators is
very much an unsolved research problem.  So is object recognition in
realistic conditions.

So, to make an AGI robot preschool, one has to solve these hard
research problems first.

That is a viable way to go if one's not in a hurry --
but anyway in the robotics context any talk
of preschools is drastically premature...




  On the other hand, making a virtual world such as I envision, is more
 than a
  spare-time project, but not more than the project of making a single
  high-quality video game.

 GTA IV cost $5 million, so we're not talking about peanuts here.


Right, but that is way cheaper than making a high-quality humanoid robot

Actually, $$ aside, we don't even **know how** to make a decent humanoid
robot.

Or, a decently functional mobile robot **of any kind**

Whereas making a software based AGI Preschool of the type I described is
clearly
feasible using current technology, w/o any research breakthroughs

And I'm sure it could be done for $300K not $5M using OSS and non-US
outsourced labor...

ben g







Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Mike Tintner


Bob:  Even with crude or no real
simulation ability in an environment such as Second Life, using some
simple symbology to stand for "pick up screwdriver" you can still try
to tackle problems such as autobiographical memory - how does the
agent create a coherent story out of a series of activities, and how
can it use that story in future to improve its skills or communication
effectiveness.

It's an interesting idea, but I suspect it too will rapidly break down. 
Which activities can be known about in a rich, better-than-blind-Cyc way 
*without* a knowledge of objects and object manipulation? How can an agent 
know about reading a book, for example, if it can't pick up and manipulate a 
book? How can it know about adding and subtracting, if it can't literally 
put objects on top of each other, and remove them?  We humans build up our 
knowledge of the world's objects/physics from infancy.  Science also 
insists that all formal scientific knowledge of the world - all scientific 
disciplines - must ultimately be physics/objects-based.  Is there really an 
alternative?







RE: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-20 Thread John G. Rose
 From: Mike Tintner [mailto:tint...@blueyonder.co.uk]
 
 Sound silly? Arguably the most essential requirement for a true human-
 level
 GI is to be able to consider any object whatsoever as a thing. It's a
 cognitively awesome feat . It means we can conceive of literally any
 thing
 as a thing - and so bring together, associate and compare immensely
 diverse objects such as, say, an amoeba, a bus, a car, a squid, a poem,
 a
 skyscraper, a box, a pencil, a fir tree, the number 1...
 
 Our thingy capacity makes us supremely adaptive. It means I can set
 you a
 creative problem like go and get me some *thing* to block this doorway
 [or
 hole] and you can indeed go and get any of a vastly diverse range of
 appropriate objects.
 
 How are we able to conceive of all these forms as things? Not by any
 rational means, I suggest, but by the imaginative means of drawing them
 all
 mentally or actually as similar adjustable gloops or blobs.
 
 Arnheim provides brilliant evidence for this:
 
 a young child in his drawings uses circular shapes to represent almost
 any
 object at all: a human figure, a house, a car, a book, and even the
 teeth of
 a saw, as can be seen in Fig x, a drawing by a five year old. It would
 be a
 mistake to say that the child neglects or misrepresents the shape of
 these
 objects. Only to adult eyes is he picturing them as round. Actually,
 intended roundness does not exist before other shapes, such as
 straightness
 or angularity are available to the child. At the stage when he begins
 to
 draw circles, shape is not yet differentiated. The circle does not
 stand for
 roundness but for the more general quality of thingness - that is,
 for the
 compactness of a solid object as distinguished from the nondescript
 ground.
 [Art and Visual Perception]
 

Even for things and objects the mathematics is inherent. There is
plurality, partitioning, grouping, attributes... interrelatedness. Is a wisp
of smoke a thing, or a wave on the ocean, or a sound echoing through the
mountains? Is everything one big thing?

Perhaps creativity involves zooming out from the precise definition of
things in order to make their interrelatedness less restricting. You can't
find a solution to those complex problems when you are stuck in all the
details; you can't rationalize your way out of the rules, as there may be a
non-local solution or connection that needs to be made. 

The young child is continuously exercising creativity as things are blobs or
circles and creativity combined with trial and error rationalizes things
into domains and rules...

John







Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 It doesn't have to be humanoid ... but apart from rolling instead of
 walking,
 I don't see any really significant simplifications obtainable from making it
 non-humanoid.

I can think of several. For example, you could give it lidar to
measure distances with -- this could then be used as input to its
vision system, making it easier for the robot to tell which objects are
near or far, instead of binocular vision with 2 video cameras. It could
have multiple ears, which would help it tell where a
sound is coming from.
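The multiple-ears point is the classic time-difference-of-arrival idea: with two microphones a known distance apart, the inter-mic delay of a sound fixes its bearing. A self-contained toy (made-up geometry, brute-force cross-correlation):

```python
import math

c = 343.0        # speed of sound, m/s
d = 0.2          # microphone separation, m
fs = 44100       # sample rate, Hz

true_angle = math.radians(30)
true_delay = d * math.sin(true_angle) / c      # arrival-time difference, s
shift = round(true_delay * fs)                 # ... in whole samples

# A short click "recorded" by both mics, one copy delayed.
n = 256
left = [0.0] * n
left[50] = 1.0
right = [0.0] * n
right[50 + shift] = 1.0

def best_lag(a, b, max_lag=40):
    """Delay (in samples) that best aligns a with b, by cross-correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
    return max(range(-max_lag, max_lag + 1), key=corr)

lag = best_lag(left, right)
est_angle = math.degrees(math.asin(lag / fs * c / d))
print(round(est_angle))   # recovers the 30-degree bearing
```

The same arithmetic is why lidar-style direct depth is such a shortcut: it hands the agent the geometry that stereo or binaural processing must otherwise infer.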

To the best of my knowledge, no robot that's ever been used for
anything practical has been humanoid.

 Grasping and manipulating general objects with robot manipulators is
 very much an unsolved research problem.  So is object recognition in
 realistic conditions.

What sort of visual input do you plan to have in your virtual environment?

 So, to make an AGI robot preschool, one has to solve these hard
 research problems first.

 That is a viable way to go if one's not in a hurry --
 but anyway in the robotics context any talk
 of preschools is drastically premature...


  On the other hand, making a virtual world such as I envision, is more
  than a
  spare-time project, but not more than the project of making a single
  high-quality video game.

 GTA IV cost $5 million, so we're not talking about peanuts here.

 Right, but that is way cheaper than making a high-quality humanoid robot

Is it? I suspect one with tracks, two robotic arms, various sensors
for light and sound, etc, could be made for less than $10,000 -- this
would be something that could move around and manipulate a blocks
world. My understanding is that all, or nearly all, the difficulty
comes in programming it. Which is where AI comes in.

 Actually, $$ aside, we don't even **know how** to make a decent humanoid
 robot.

 Or, a decently functional mobile robot **of any kind**

Is that because of hardware or software issues?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Derek Zahn derekz...@msn.com:
 Ben:

 Right.  My intuition is that we don't need to simulate the dynamics
 of fluids, powders and the like in our virtual world to make it adequate
 for teaching AGIs humanlike, human-level AGI.  But this could be
 wrong.

 I suppose it depends on what kids actually learn when making cakes, skipping
 rocks, and making a mess with play-dough.

I think that the important cognitive abilities involved are at a
simpler level than that.

Consider an object, such as a sock or a book or a cat. These objects
can all be recognised by young children, even though the visual input
coming from them changes depending on the angle they're viewed from. More
fundamentally, all these objects can change shape, yet humans can
still effortlessly recognise them to be the same thing. And this
ability doesn't stop with humans -- most (if not all) mammalian
species can do it.

Until an AI can do this, there's no point in trying to get it to play
at making cakes, etc.
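One tiny illustration of the invariance problem Philip raises: recognise a 2-D "object" (a set of feature points) regardless of how it is rotated. Pairwise distances don't change under rotation, so a sorted distance list is a crude viewpoint-invariant signature. (A toy with hypothetical shapes; real recognition under deformation is far harder.)

```python
import math

def signature(points):
    """Rotation-invariant descriptor: sorted list of pairwise distances."""
    return sorted(
        round(math.dist(p, q), 6)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )

def rotate(points, angle):
    s, c = math.sin(angle), math.cos(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

book = [(0, 0), (2, 0), (2, 3), (0, 3)]        # a rectangle, the "book"
cat = [(0, 0), (1, 0), (0.5, 2), (1.5, 1.2)]   # some other shape

seen = rotate(book, 1.1)                       # same book, new viewpoint
print(signature(seen) == signature(book))      # True
print(signature(seen) == signature(cat))       # False
```

This handles only rigid rotation; the sock-changing-shape case Philip describes defeats it, which is exactly his point.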

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
Well, there is massively more $$ going into robotics dev than into AGI dev,
and no one seems remotely near to solving the hard problems

Which is not to say it's a bad area of research, just that it's a whole
other huge confusing R&D can of worms

So I still say, the choices are

-- virtual embodiment, as I advocate

-- delay working on AGI for a decade or so, and work on robotics now instead
(where by robotics I include software work on low-level sensing and actuator
control)

Either choice makes sense but I prefer the former as I think it can get us
to the end goal faster.

About the adequacy of current robot hardware -- I'll tell you more in 9
months or so ... a project I'm collaborating on is going to be using AI
(including OpenCog) to control a Nao humanoid robot.  We'll have 3 of them,
they cost about US$14K each or so.   The project is in China but I'll be
there in June-July to play with the Naos and otherwise collaborate on the
project.

My impression is that with a Nao right now, camera-eye sensing is fine so
long as lighting conditions are good ... audition is OK in the absence of
masses of background noise ... walking is very awkward and grasping is
possible but limited.

The extent to which the limitations of current robots are hardware vs
software based is rather subtle, actually.

In the case of vision and audition, it seems clear that the bottleneck is
software.

But, with actuation, I'm not so sure.  The almost total absence of touch and
kinesthetics in current robots is a huge impediment, and puts them at a huge
disadvantage relative to humans.  Things like walking and grasping as humans
do them rely extremely heavily on both of these senses, so in trying to deal
with this stuff without these senses (in any serious form), current robots
face a hard and odd problem...
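The touch-and-grasping point can be made concrete with a toy closed-loop grasp controller: the fingers close until a simulated tactile signal says "firm enough", rather than moving to a preset width. The spring-contact model and every number here are invented for illustration.

```python
def grasp(object_width, target_force=2.0, stiffness=500.0, step=0.001):
    """Close a two-finger gripper; return (aperture, contact force) at stop."""
    aperture = 0.10                        # fully open: 10 cm
    force = 0.0
    while force < target_force:
        aperture -= step                   # close a little
        squeeze = object_width - aperture  # how far we press into the object
        force = stiffness * squeeze if squeeze > 0 else 0.0
    return aperture, force

# The same controller grips objects of unknown, different widths firmly but
# gently, because the tactile signal (not a position target) stops it.
for width in (0.03, 0.06):
    aperture, force = grasp(width)
    print(round(aperture, 3), round(force, 1))
```

Remove the force feedback and the only alternative is commanding a position, which crushes soft objects and drops hard ones -- the disadvantage touchless robots face.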

ben

On Sat, Dec 20, 2008 at 11:42 AM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  It doesn't have to be humanoid ... but apart from rolling instead of
  walking,
  I don't see any really significant simplifications obtainable from making
 it
  non-humanoid.

 I can think of several. For example, you could give it lidar to
 measure distances with -- this could then be used as input to its
 vision system making it easier for the robot to tell which objects are
 near or far. Instead of binocular vision, it could have 2 video
 cameras. It could have multiple ears, which would help it tell where a
 sound is coming from.

 The the best of my knowledge, no robot that's ever been used for
 anything practical has ever been humanoid.

  Grasping and manipulating general objects with robot manipulators is
  very much an unsolved research problem.  So is object recognition in
  realistic conditions.

 What sort of visual input do you plan to have in your virtual environment?

  So, to make an AGI robot preschool, one has to solve these hard
  research problems first.
 
  That is a viable way to go if one's not in a hurry --
  but anyway in the robotics context any talk
  of preschools is drastically premature...
 
 
   On the other hand, making a virtual world such as I envision, is more
   than a
   spare-time project, but not more than the project of making a single
   high-quality video game.
 
  GTA IV cost $5 million, so we're not talking about peanuts here.
 
  Right, but that is way cheaper than making a high-quality humanoid robot

 Is it? I suspect one with tracks, two robotic arms, various sensors
 for light and sound, etc, could be made for less than $10,000 -- this
 would be something that could move around and manipulate a blocks
 world. My understanding is that all, or nearly all, the difficulty
 comes in programming it. Which is where AI comes in.
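 That split is easy to illustrate: the blocks-world *bookkeeping* really
 is a few lines of code, and everything hard lives in the perception and
 motor control that the sketch below simply assumes away. (A toy state
 model for illustration, not any particular robot's API.)

```python
# Toy blocks world: the state is a list of stacks, each listed bottom-to-top.
# Moving a block is trivial to *program*; sensing where the blocks are and
# physically grasping them is the hard part, and is ignored here.

def move(state, block, dest):
    """Move `block` (which must be on top of its stack) onto the block
    `dest`, or onto the table if dest is None. Returns a new state."""
    stacks = [list(s) for s in state]
    src = next(s for s in stacks if s and s[-1] == block)
    src.pop()
    if dest is None:
        stacks.append([block])
    else:
        next(s for s in stacks if s and s[-1] == dest).append(block)
    return [s for s in stacks if s]  # drop emptied stacks

state = [["A", "B"], ["C"]]       # B is on A; C is on the table
state = move(state, "B", "C")     # put B on C
state = move(state, "A", None)    # put A on the table
print(state)  # → [['C', 'B'], ['A']]
```

 A planner on top of this (STRIPS-style or otherwise) is likewise
 well-trodden territory; it's closing the loop with real sensors and
 actuators that remains unsolved.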

  Actually, $$ aside, we don't even **know how** to make a decent humanoid
  robot.
 
  Or, a decently functional mobile robot **of any kind**

 Is that because of hardware or software issues?

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Robots with a sense of touch [WAS Re: [agi] AGI Preschool....]

2008-12-20 Thread Richard Loosemore

Philip Hunt wrote:

2008/12/20 Ben Goertzel b...@goertzel.org:

Well, there is massively more $$ going into robotics dev than into AGI dev,
and no one seems remotely near to solving the hard problems

Which is not to say it's a bad area of research, just that it's a whole
other huge confusing R&D can of worms

So I still say, the choices are

-- virtual embodiment, as I advocate

-- delay working on AGI for a decade or so, and work on robotics now instead
(where by robotics I include software work on low-level sensing and actuator
control)

Either choice makes sense but I prefer the former as I think it can get us
to the end goal faster.


That makes sense


But, with actuation, I'm not so sure.  The almost total absence of touch and
kinesthetics in current robots is a huge impediment, and puts them at a huge
disadvantage relative to humans.


Good point.

I wonder how easy it would be to provide a robot with a sensor that
gives a sense of touch? Maybe something the thickness of a sheet of
paper, criss-crossed with horizontal and vertical wires that aren't
electrically connected, would work -- if there were a measurable
difference in capacitance when the wires were pressed closer together
or pulled further apart.
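One way to picture reading such a sheet out in software: treat each wire
crossing as a small capacitor whose value rises as the layers are pressed
together (capacitance grows as the gap shrinks), and threshold each reading
against its rest value. All numbers below are invented for illustration:

```python
# Sketch of the criss-crossed-wire sheet read as a capacitive grid:
# each row/column crossing acts as a tiny capacitor, and pressing the
# layers closer together raises the capacitance there.

BASELINE_PF = 10.0  # assumed rest capacitance at each crossing, in pF

def pressure_map(readings_pf, threshold_pf=0.5):
    """Turn a grid of capacitance readings (pF) into a binary touch map."""
    return [[1 if c - BASELINE_PF > threshold_pf else 0 for c in row]
            for row in readings_pf]

# A fingertip pressing near the centre of a 4x4 patch of the sheet:
readings = [
    [10.0, 10.0, 10.0, 10.0],
    [10.0, 11.1, 10.8, 10.0],
    [10.0, 10.9, 11.3, 10.0],
    [10.0, 10.0, 10.0, 10.0],
]
print(pressure_map(readings))
# → [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
```

Graded (rather than binary) output would follow from reporting the
capacitance deltas directly, giving something closer to real pressure
sensing.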



How about:

http://www.geekologie.com/2006/06/nanoparticles_give_robots_prec.php

or

http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=163701010




Richard Loosemore




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel


 Consider an object, such as a sock or a book or a cat. These objects
 can all be recognised by young children, even though the visual input
 coming from them changes depending on the angle they're viewed from.
 More fundamentally, all these objects can change shape, yet humans can
 still effortlessly recognise them to be the same thing. And this
 ability doesn't stop with humans -- most (if not all) mammalian
 species can do it.

 Until an AI can do this, there's no point in trying to get it to play
 at making cakes, etc.



Well, it seems to me that current virtual worlds are just fine for exploring
this kind of vision processing
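As a toy version of the invariance Philip describes -- calling two views
"the same thing" despite rotation, translation, and rescaling -- one can
compare point sets by a normalised distance-from-centroid signature. A
deliberately simplistic Python sketch (recognition under genuine shape
change, as with a cat, is of course vastly harder):

```python
import math

def shape_signature(points):
    """A signature invariant to translation, rotation, and uniform scaling."""
    # Translation invariance: measure distances from the centroid.
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    d = sorted(math.hypot(x - cx, y - cy) for x, y in points)  # rotation inv.
    m = sum(d) / len(d)
    return [v / m for v in d]  # scale invariance: divide by mean distance

def same_shape(a, b, tol=1e-6):
    sa, sb = shape_signature(a), shape_signature(b)
    return len(sa) == len(sb) and all(abs(x - y) < tol for x, y in zip(sa, sb))

# A "cup" outline, and the same outline rotated 40 degrees and doubled in size:
cup = [(0, 0), (4, 0), (4, 3), (3, 3), (1, 3), (0, 3), (5, 1), (5, 2)]
th = math.radians(40)
moved = [(2 * (x * math.cos(th) - y * math.sin(th)),
          2 * (x * math.sin(th) + y * math.cos(th))) for x, y in cup]
print(same_shape(cup, moved))  # → True
```

This kind of hand-built invariant is exactly what a virtual world lets
you experiment with cheaply, before worrying about noisy real cameras.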

However, I have long been perplexed by the obsession of so many AI folks
with vision processing.

I mean: yeah, it's important to human intelligence, and some aspects of
human cognition are related to human visual perception

But, it's not obvious to me why so many folks think vision is so critical to
AI, whereas other aspects of human body function are not.

For instance, the yogic tradition and related Eastern ideas would suggest
that *breathing* and *kinesthesia* are the critical aspects of mind.
Together with touch, kinesthesia is what lets a mind establish a sense of
self, and of the relation between self and world.

In that sense kinesthesia and touch are vastly more fundamental to mind than
vision.  It seems to me that a mind without vision could still be a
basically humanlike mind.  Yet, a mind without touch and kinesthesia could
not, it would seem, because it would lack a humanlike sense of its own self
as a complex dynamic system embedded in a world.

Why then is there constant talk about vision processing and so little talk
about kinesthetic and tactile processing?

Personally I don't think one needs to get into any of this sensorimotor
stuff too deeply to make a thinking machine.  But, if you ARE going to argue
that sensorimotor aspects are critical to humanlike AI because they're
critical to human intelligence, why harp on vision to the exclusion of other
things that seem clearly far more fundamental??

Is the reason just that AI researchers spend all day staring at screens and
ignoring their physical bodies and surroundings?? ;-)

ben g





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 However, I have long been perplexed at the obsession with so many AI folks
 with vision processing.

I wouldn't say I'm obsessed with it. On its own, vision processing
does nothing, the same as all other input processing -- it's only when
a brain/AI uses that processing to create output that it is actually
doing any work.

The important thing about vision, IMO, is not vision itself, but the
way that vision interfaces with a mind's model of the world. And
vision isn't really that different in principle from the other sensory
modalities that a human or animal has -- they are all inputs, that go
to building a model of the world, through which the organism makes
decisions.

 But, it's not obvious to me why so many folks think vision is so critical to
 AI, whereas other aspects of human body function are not.

I don't think any human body functions are critical to AI. IMO it's a
perfectly valid approach to AI to build programs that deal with
digital symbolic information -- e.g. programs like copycat or eurisko.

 For instance, the yogic tradition and related Eastern ideas would suggest
 that *breathing* and *kinesthesia* are the critical aspects of mind.
 Together with touch, kinesthesia is what lets a mind establish a sense of
 self, and of the relation between self and world.

Kinesthesia/touch/movement are clearly important sensory modalities in
mammals, given that they are utterly fundamental to moving around in
the world. Breathing less so -- I mean you can do it if you're
unconscious or brain dead.

 Why then is there constant talk about vision processing and so little talk
 about kinesthetic and tactile processing?

Possibly because people are less conscious of them than of vision.

 Personally I don't think one needs to get into any of this sensorimotor
 stuff too deeply to make a thinking machine.

Me neither. But if the thinking machine is to be able to solve certain
problems (when connected to a robot body, of course) it will have to
have sophisticated systems to handle touch, movement and vision. By
certain problems I mean things like making a cup of tea, or a cat
climbing a tree, or a human running over uneven ground.

 But, if you ARE going to argue
 that sensorimotor aspects are critical to humanlike AI because they're
 critical to human intelligence, why harp on vision to the exclusion of other
 things that seem clearly far more fundamental??

Say I asked you to imagine a cup.

(Go on, do it now).

Now, when you imagined the cup, did you imagine what it looks like, or
what it feels like to the touch? For me, it was the former. So I don't
think touch is clearly more fundamental than vision, in terms of how it
interacts with our internal model of the world.

 Is the reason just that AI researchers spend all day staring at screens and
 ignoring their physical bodies and surroundings?? ;-)

:-)

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Building a machine that can learn from experience

2008-12-20 Thread Charles Hixson

Ben Goertzel wrote:


Hi,



Because some folks find that they are not subjectively
sufficient to explain everything they subjectively experience...

That would be more convincing if such people were to show evidence
that they understand what algorithmic processes are and can do.
 I'm almost tempted to class such verbalizations as meaningless
noise, but that's probably too strong a reaction.



Push comes to shove, I'd have to say I'm one of those people.
But you aren't one who asserts that that *IS* the right answer.  Big 
difference.  (For that matter, I have a suspicion that there are 
non-algorithmic aspects to consciousness.  But I also suspect that they 
are implementation details.)

...



ben g







