Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-29 Thread Robert Swaine
Mike, 
 

Six  2003
Seven  1996
Eight 2001
Eight and a half 
 
Good point with the movies; only a hardcore movie fan would make that 
association early in his trials to figure out the pattern as movie dates.  In 
this case you gave a hint, and such a hint would tell the system to widen its 
attention spotlight to include movies, so entertainment, events, celebration, 
etc. would come under attention, based on the structure the movie concept's 
parent has in its domain content.
 
Thinking imaginatively to find hard solutions, as you say, is possible with 
this system: by telling it to think outside the box to other domains, it can 
learn this pattern of domain-hopping based on the reward of a success, or by 
being authorized to value cross-domain attention search.  Thinking, for the 
system, is: shifting its attention to different regions (within the 4 domains), 
sizing and orienting the attention scale, and setting the focus depth (of 
details); it can then read the contents of what comes up from that region and 
Compare, Contrast, Combine it to analyze or synthesize it.  Thinking bigger or 
narrower is almost literal.
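
A rough sketch in Python (the names and structures here are just an 
illustration, not the actual implementation) of those attention operations:

# Toy sketch: "thinking" as attention operations -- pick a domain and
# region, set the spotlight scale and focus depth, read out the contents,
# and hand them to Compare / Contrast / Combine (3-C).

DOMAINS = {
    "feature": {
        "entertainment": {"movies": {"titles": ["Six", "Seven", "Eight"],
                                     "dates": [2003, 1996, 2001]}},
        "number": {"quantities": [6, 7, 8]},
    },
}

def attend(domain, region=None, scale=1, focus_depth=0):
    """Return what the spotlight covers: 'scale' widens the set of regions,
    'focus_depth' controls how much detail is read out."""
    regions = [region] if region else list(DOMAINS[domain])[:scale]
    contents = {r: DOMAINS[domain][r] for r in regions}
    if focus_depth == 0:
        return {r: list(v) for r, v in contents.items()}   # labels only
    return contents                                        # full detail

# Thinking "bigger" is literally widening the spotlight:
print(attend("feature", "entertainment"))    # narrow and shallow
print(attend("feature", scale=2))            # widened to more regions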
 
Like humans, this system stops a behavior (e.g., stops searching) because it 
runs out of motivation value, not ideas to search.  Many known or described 
systems can lend themselves to brute-force thinking when unsure of a solution; 
this structure allows it to do so elegantly, using human-centric concept 
domains first (it is easier for us to communicate with it this way, by saying 
"build a damn good engine" as humans do, rather than 0010101101 or any other 
non-human language).
 
It can and does re-write the concepts and content in its domains as it learns, 
but it starts with the domains humans give it.  E.g., I knew what movies were 
by having lived in a number of situations where this concept was built up, so 
that later I could learn about independent films, live performances, or new 
types of entertainment that give similar or unfamiliar emotions.
 
Further rationale:
1) What humans do: have a bias (value system) that makes sense relative to 
our biological architecture, and generate all human knowledge in this 
representation structure (natural language: an ambiguous, low-logic language).
 
2) What an early AGI can do: learn the human bias by having a similar 
architecture, which includes the value bias for the patterns humans seek; 
obtain as much of the recorded knowledge in the world from humans; and generate 
more, faster, new and better knowledge.  "Better", because it knows our value 
system, and it also knows humans well enough to convince them in a discussion, 
unlike most of us, that "better" is what it wants us to do (very bad!).
 
For natural language processing: humans readily communicate in songs and 
poems, and understand them.  Many songs and poems do not make any logical 
sense, and few songs have word order and story elements that are reasonable.  
The model makes sense of them by looking for patterns where humans do: in the 
beats (the situational borders that structure all input) and in the value 
(emotional meaning) of the song's or poem's content.
 
Hope some of this helps
Robert
 


--- On Sun, 12/28/08, Mike Tintner tint...@blueyonder.co.uk wrote:

From: Mike Tintner tint...@blueyonder.co.uk
Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] 
AGI Preschool: sketch of an evaluation framework for early stage AGI systems 
aimed at human-level, roughly humanlike AGI
To: agi@v2.listbox.com
Date: Sunday, December 28, 2008, 11:38 PM



Robert,
 
Thanks for your detailed, helpful replies. I like your approach of operating 
in multiple domains for problem-solving. But if the domains are known 
beforehand, then it's not truly creative problem-solving - where you do have to 
be prepared to go in search of the appropriate domains - and thus truly cross 
domains rather than simply combining preselected ones. I gave you a perhaps 
exaggerated example just to make the point. You had to realise that the correct 
domain to solve my problem was that of movies - the numbers were the titles of 
movies and the dates they came out. If you're dealing with real-world rather 
than just artificial creative problems like our two, you may definitely have to 
make that kind of domain switch - solving any scientific detective problem, 
say, like that of binding in the brain, may require you to think in a 
surprising, new domain, for which you will have to search long and hard (and 
possibly without end).






Mike,
Very good choice.
 
 But the system always *knows* these domains beforehand  - and that it must 
 consider them in any problem?
 
 
YES - the domains' content structure (which is what you mean) is human-centric, 
provided by living a child's life and loading the value system with biases such 
as "humans are warm" and "candy is really sweet".  By further being pushed 
through a Western-culture grade-level curriculum we value the visual feature 
symbols 2003 and 1996 as numbers, then as dates.  The content models (concept 
patterns) are built up from

Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-29 Thread Robert Swaine
 
The paper as a link instead of attachment:
http://mindsoftbioware.com/yahoo_site_admin/assets/docs/Swaine_R_Story_Understander_Model.36375123.pdf
 
The paper gives a quick view of the Human-centric representation and behavioral 
systems approach for problem-solving, reasoning as giving meaning (human 
values) to stories and games...  Indexing relations via spatially related 
registers is its simulated substrate.
 
cheers,
Robert




Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-28 Thread Mike Tintner
Robert,

What kind of problems have you designed this to solve? Can you give some 
examples?
  Robert:

A brief paper on an AGI system for human-level  ...had only 2 pages to 
fit in.

If you are working on a system, you probably hope it will one day help 
design a better world, better tools, better inventions.  "Better" is a 
subjective human value.  A place for, or a human-like representation of, at 
least rough, general human values (biases, likes) in the AGI is essential.

The paper gives a quick view of the Human-centric representation and 
behavioral systems approach for problem-solving, reasoning as giving meaning 
(human values) to stories and games...  Indexing relations via spatially 
related registers is its simulated substrate.

Happy Holidays,
Robert

...all the human values were biased, unlike the very objective AGI 
systems designed on the Mudfish's home planet; AGI systems that objectively 
knew that sticky mud is beautiful,  large oceans of gooey mud..how enchanting!  
Pure clean water, now that's fishy!
   





Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-28 Thread Robert Swaine
Mike,
 
Mike wrote:
What kind of problems have you designed this to solve? Can you give some 
examples?
 
Natural language understanding, path finding, game playing
 
Any problem that can be represented as a situation in the four component 
domains (value - role - relation - feature models) can be 3-C'd (compared, 
contrasted, combined) to give a resulting situation (frame pattern).  What is 
combined, compared, or contrasted?  Only the regions under attention, at their 
focus detail level, are examined.  What is placed and represented in the 
regions determines what components can be 3-C analyzed... a general computing 
paradigm using 3-C (analogous to AND - OR - NOT).
 
 
Example:
Here's a pattern example you may not have seen before; by 3-C you can discover 
the pattern and how to make a new example:
 
As spoken aloud:
five and nine    [is]   fine
two and six [is]   twix
five and seven  [is]   fiven
 
Take "five and seven = fiven".
When the system compares the resultant "fiven" to "five", the result is that 
"five" is at the start of the situation.
When it compares "fiven" and "seven", the result is that "ven" is at the end 
position.
 
resulting situation PATTERN = 
[situation 1][focus inward][start position]    combined with 
[situation 2][focus inward][end position]
(Spatial and sequence positions are a key part of the representation system.)
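
As a rough Python sketch (the names are mine, not the system's), the comparison 
step above could look like:

# Compare the blended result against each input word: the common prefix
# with word 1 gives the start contribution, the common suffix with word 2
# gives the end contribution -- the pattern described above.

def common_prefix(a, b):
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1
    return a[:i]

def common_suffix(a, b):
    i = 0
    while i < min(len(a), len(b)) and a[-1 - i] == b[-1 - i]:
        i += 1
    return a[len(a) - i:]

def discover_pattern(word1, word2, blend):
    return {"from word 1": ("start", common_prefix(blend, word1)),
            "from word 2": ("end", common_suffix(blend, word2))}

print(discover_pattern("five", "seven", "fiven"))   # start 'five', end 'ven'
print(discover_pattern("two", "six", "twix"))       # start 'tw',   end 'ix'
print(discover_pattern("five", "nine", "fine"))     # start 'fi',   end 'ine'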
 
How was the correct (reasoning) method chosen?
This result was by comparison; it could have been by contrasting.  All three of 
Compare, Contrast and Combine happen simultaneously.  The winner is whichever 
resulting situation makes sense to the system, i.e. has the most activation in 
the value area (some direct or indirect value from past experience, or value 
given by the authority system in the value region: e.g. the fearful or 
attractive spectrum).
 
How were the correct region and focus detail level chosen?
The attention region in the example was the sound region, and the focus detail 
was at the phoneme (syllable) level; it could instead have looked for patterns 
in the number values, the emotions related to each word, the letter patterns, 
hand motions, eye position when spoken, etc.  The regions are biased by the 
value system's current index (an amygdala/septum analog): e.g. when you see 
"five", the quantity region is given a lower threshold, and the associated 
focus level gives the content on the 1 - 10 scale.  The index region weights 
are re-organized only by stronger reward/failure (the authority system); 3-C 
results can act on the index, changing the content connection weights.
 
Now compare apples to oranges for an encore; what do you get?  A color, a 
taste, a mass, a new fruit... your attention determines the result.
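
A toy sketch of that, with made-up attribute values:

# Comparing apple to orange gives a different result depending on which
# region the attention is focused on.
APPLE  = {"color": "red",    "taste": "sweet-tart",   "mass_g": 180}
ORANGE = {"color": "orange", "taste": "sweet-citrus", "mass_g": 130}

def compare(a, b, attended_region):
    # only the attended region is examined; everything else is ignored
    return (a[attended_region], b[attended_region])

print(compare(APPLE, ORANGE, "color"))    # a color comparison
print(compare(APPLE, ORANGE, "mass_g"))   # a mass comparison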
 
All regions are being matched for patterns in the 2 primary index modules 
(action selection and emotional value; others can be integrated seamlessly).
 
Five and seven is not "fiven", it is twelve; but in this situation "fiven" 
makes sense given the circumstances.  Sense and meaning are contextual for the 
model, as for humans.
 
 
Hope this sheds light.  A detailed paper has been in the works.
Robert
 
--- On Sun, 12/28/08, Mike Tintner tint...@blueyonder.co.uk wrote:

From: Mike Tintner tint...@blueyonder.co.uk
Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] 
AGI Preschool: sketch of an evaluation framework for early stage AGI systems 
aimed at human-level, roughly humanlike AGI
To: agi@v2.listbox.com
Date: Sunday, December 28, 2008, 4:49 PM

Robert,
 
What kind of problems have you designed this to solve? Can you give some 
examples?
Robert:
 
A brief paper on an AGI system for human-level  ...had only 2 pages to fit in.
 
If you are working on a system, you probably hope it will one day help design a 
better world, better tools, better inventions.  "Better" is a subjective human 
value.  A place for, or a human-like representation of, at least rough, 
general human values (biases, likes) in the AGI is essential.
 
The paper gives a quick view of the Human-centric representation and behavioral 
systems approach for problem-solving, reasoning as giving meaning (human 
values) to stories and games...  Indexing relations via spatially related 
registers is its simulated substrate.
 
Happy Holidays,
Robert
 
...all the human values were biased, unlike the very objective AGI systems 
designed on the Mudfish's home planet; AGI systems that objectively knew that 
sticky mud is beautiful,  large oceans of gooey mud..how enchanting!  Pure 
clean water, now that's fishy!
  





Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-28 Thread Mike Tintner
Robert:
Example:
Here's a pattern example you may not have seen before, but by 3C you discover 
the pattern and how to make an example:

As spoken aloud:
five and nine   [is]   fine
two and six     [is]   twix
five and seven  [is]   fiven


Robert,

So, if I understand, you're designing a system to deal with problems concerning 
objects, which have multiple domain associations.  For example, words as above 
are associated with their sounds, letter patterns, and perhaps meanings. But 
the system always *knows* these domains beforehand  - and that it must consider 
them in any problem?

It couldn't, say, find the pattern to a problem like:

Six  2003
Seven  1996
Eight 2001
Eight and a half   ? 

where it wouldn't know any domain relevant to solving the problem, and would 
first have to *find* the appropriate domain?  (In creative, human-level 
intelligence problems you often have to do this.)




Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-28 Thread Robert Swaine
Mike,
Very good choice.
 
 But the system always *knows* these domains beforehand  - and that it must 
 consider them in any problem?
 
 
YES - the domains' content structure (which is what you mean) is human-centric, 
provided by living a child's life and loading the value system with biases such 
as "humans are warm" and "candy is really sweet".  By further being pushed 
through a Western-culture grade-level curriculum we value the visual feature 
symbols 2003 and 1996 as numbers, then as dates.  The content models (concept 
patterns) are built up from any basic feature to form instances from the basic 
content of the four domains, such as dates of leap years, century marks, 
millennia or anniversaries.
 
Problems are more like:
  --  ice cream favorite red happee  -- 
What this group of words means has everything to do with what the reader knows 
and values beforehand.  And what he values will determine what his attention 
is on (the food, the emotions, the color, the positions) and how deep the focus 
is: on the entire situation (sentence), a group of them, a single word, or a 
letter.  Humans value from the top, so we'll likely think of cherry ice cream 
before we see:
    the occurrence pattern of the letter "e" in every word of that 'sentence' 
above.
 
 
 
Good choice for your problem: 
Six  2003
Seven  1996
Eight 2001
Eight and a half   ?   (I see a number of patterns, such as 00 99, multiply, 
add a word to the end - but I haven't gotten the complete formula)
 
For the system, it is biased; it makes sense for itself, by its internal values.
 
The answer the system chooses is the one that makes sense given what it knows 
and values.  Sure, it can and will be used for general pattern mining by 
comparing and contrasting within lines, line-to-line, number-to-text, 
text-to-number, date-to-word, month-to-number, middle-part to end, end-to-end, 
etc., until a resulting comparison yields a pattern that it values (from 
experience or being told).  However, the value system controlling attention 
prevents any combinatorial explosion - animals only search through the models 
that have value (directly or indirectly) to the problem situation, thus 
limiting the total guesses we could even make (it looks for patterns it 
already knows).
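
A toy sketch of that value-gated search (made-up names and numbers, just to 
show the idea):

# Candidate comparisons are only explored while they carry enough value,
# and the whole search stops when motivation runs out -- not when ideas do.

candidates = ["line-to-line", "number-to-text", "text-to-number",
              "date-to-word", "middle-to-end", "end-to-end"]

# made-up prior values attached to each comparison type from "experience"
value = {"line-to-line": 0.7, "number-to-text": 0.6, "text-to-number": 0.2,
         "date-to-word": 0.5, "middle-to-end": 0.1, "end-to-end": 0.1}

def search(motivation=1.0, value_threshold=0.4, cost_per_try=0.35):
    tried = []
    # highest-valued comparisons first; low-valued ones are never reached
    for c in sorted(candidates, key=value.get, reverse=True):
        if motivation <= 0:
            break                  # behavior stops: motivation ran out
        if value[c] < value_threshold:
            continue               # value gate: skip low-value models
        tried.append(c)
        motivation -= cost_per_try
    return tried

print(search())   # ['line-to-line', 'number-to-text', 'date-to-word']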
 
To solve problems it has not been taught, or can't see a pattern for:
 
1) If self-motivated, because a reward/avoidance is strong:
It keeps looking for patterns via 3-C by persisting in its behavior (doing the 
same old thing) and failing.
If a value happens to occur in one of the results as it keeps going, it will 
see that something was different.  It has access to its own actions (the role 
and relation domains), and this different action stands out (auto-contrast) and 
becomes of greater value due to the associated difference (non-failure).  It 
keeps trying until the motivation runs out (the energy level decays), or other 
values or past experience exceed its model of how long it should take.
 
2) If instructed how to solve it by trying x, y or z:
"Widen your attention, expand your focus" - then it has a larger set of regions 
in which to try to find a pattern it values.  If set, it can examine regions of 
the instruction (x, y, and z) and see what was different from what it was 
trying (if the comparison yields a high enough value, it will try those as 
well).  "Try going left and up."  O.K. - auto-contrast: I was trying only up; 
the difference is to add one more direction; I can try left and up and back, 
etc.
 
 

Creativity and reason come from the 3-C mechanism.
 
Creativity in the model is to combine any sets of domain content and give the 
result a respective value from its experience and domain models. 
 
Example: Combine the form of a computer mouse, the look of diamonds, the 
function of a steering wheel, and the feel of leather: what do you get?  Focus 
on each region and combine, then e-valuate (compare it to objects, functions).  
What's your result?
Models in my experience say that it's a luxury-car controller; while you might 
say it would be something in an art gallery, etc. (art: value without 
function/role).
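
A toy sketch of that combine-then-evaluate step (illustrative names and scores 
only):

# Pull one attribute from each attended region, combine them into a new
# "object", then e-valuate it against known models from experience; the
# overlap with a known model acts as the value signal.

combination = {
    "form":     "computer mouse",
    "look":     "diamonds",
    "function": "steering wheel",
    "feel":     "leather",
}

experience = {
    "luxury-car controller": {"function": "steering wheel", "feel": "leather"},
    "art-gallery piece":     {"look": "diamonds"},
}

def evaluate(combo, models):
    scores = {name: sum(combo.get(k) == v for k, v in feats.items())
              for name, feats in models.items()}
    return max(scores, key=scores.get), scores

print(evaluate(combination, experience))
# ('luxury-car controller', {'luxury-car controller': 2, 'art-gallery piece': 1})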
 
 
Anyway, Ben's pre-school for AGI is one of the means to bias such a system with 
experience and human values; another way is to try to properly represent human 
experience (static and dynamic) and then essentially implant memories and 
experience instead of just declarative facts.
 
Robert
 

--- On Sun, 12/28/08, Mike Tintner tint...@blueyonder.co.uk wrote:

From: Mike Tintner tint...@blueyonder.co.uk
Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] 
AGI Preschool: sketch of an evaluation framework for early stage AGI systems 
aimed at human-level, roughly humanlike AGI
To: agi@v2.listbox.com
Date: Sunday, December 28, 2008, 8:38 PM





Robert: 
Example:
Here's a pattern example you may not have seen before, but by 3C you discover 
the pattern and how to make an example:
 
As spoken aloud:
five and nine    [is]   fine
two and six [is]   twix
five and seven  [is]   fiven
 
 
Robert,
 
So

Re: Human-centric AGI approach-paper (was Re: Indexing and Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-28 Thread Mike Tintner
 it to objects, 
functions).  What's your result?
Models in my experience say that it's a luxury-car controller; while 
you might say it would be something in an art galleryy, etc (art, value without 
function/role).


Anyway, Bens, pre-school for AGI is one of the means to bias such a 
system with experience and human values; another way is to try to properly 
represent human experience (static and dynamic) and then essentially implanting 
memories and experience instead of just declarative facts.

Robert


--- On Sun, 12/28/08, Mike Tintner tint...@blueyonder.co.uk wrote:

  From: Mike Tintner tint...@blueyonder.co.uk
  Subject: Re: Human-centric AGI approach-paper (was Re: Indexing and 
Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI 
systems aimed at human-level, roughly humanlike AGI
  To: agi@v2.listbox.com
  Date: Sunday, December 28, 2008, 8:38 PM


  Robert: 
  Example:
  Here's a pattern example you may not have seen before, but by 3C you 
discover the pattern and how to make an example:

  As spoken aloud:
  five and nine[is]   fine
  two and six [is]   twix
  five and seven  [is]   fiven


  Robert,

  So, if I understand, you're designing a system to deal with problems 
concerning objects, which have multiple domain associations.  For example, 
words as above are associated with their sounds, letter patterns, and perhaps 
meanings. But the system always *knows* these domains beforehand  - and that it 
must consider them in any problem?

  It couldn't say find the pattern to a problem like:

  Six  2003
  Seven  1996
  Eight 2001
  Eight and a half   ? 

  where it wouldn't know any domain relevant to solving the problem, 
and would first have to *find* the appropriate domain?. (In creative, 
human-level intelligence problems you often have to do this).



Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 8:01 AM, Derek Zahn derekz...@msn.com wrote:

  Ben:

  Right.  My intuition is that we don't need to simulate the dynamics
  of fluids, powders and the like in our virtual world to make it adequate
  for teaching AGIs humanlike, human-level AGI.  But this could be
  wrong.

 I suppose it depends on what kids actually learn when making cakes,
 skipping rocks, and making a mess with play-dough.  Some might say that if
 they get conservation of mass and newton's law then they skipped all the
 useless stuff!



OK, but those some probably don't include any preschool teachers or
educational theorists.

That hypothesis is completely at odds with my own intuition from having
raised 3 kids and spent probably hundreds of hours helping out in daycare
centers, preschools, kindergartens, etc.

Apart from naive physics, which is rather well-demonstrated not to be
derived in the human mind/brain from basic physical principles, there is a
lot of learning about planning, scheduling, building, cooperating ...
basically, all the stuff mentioned in our AGI Preschool paper.

Yes, you can just take a robo-Cyc type approach and try to abstract, on
your own, what is learned from preschool activities and code it into the AI:
code in Newton's laws, axiomatic naive physics, planning algorithms, etc.
My strong prediction is you'll get a brittle AI system that can at best be
tuned into adequate functionality in some rather narrow contexts.



 But in the case where we are trying to roughly follow stages of human
 development with goals of producing human-like linguistic and reasoning
 capabilities, I very much fear that any significant simplification of the
 universe will provide an insufficient basis for the large sensory concept
 set underlying language and analogical reasoning (both gross and fine).
 Literally, I think you're throwing the baby out with the bathwater.  But, as
 you say, this could be wrong.



Sure... that can't be disproven right now, of course.

We plan to expand the paper into a journal paper where we argue against this
obvious objection more carefully -- basically arguing why the virtual-world
setting provides enough detail to support the learning of the critical
cognitive subcomponents of human intelligence.  But, as with anything in
AGI, even the best-reasoned paper can't convince a skeptic.




 It's really the only critique I have of the AGI preschool idea, which I do
 like because we can all relate to it very easily.  At any rate, if it turns
 out to be a valid criticism the symptom will be that an insufficiently rich
 set of concepts will develop to support the range of capabilities needed and
 at that point the simulations can be adjusted to be more complete and
 realistic and provide more human sensory modalities.  I guess it will be
 disappointing if building an adequate virtual world turns out to be as
 difficult and expensive as building high quality robots -- but at least it's
 easier to clean up after cake-baking.


Well, it's completely obvious to me, based on my knowledge of virtual worlds
and robotics, that building a high quality virtual world is orders of
magnitude easier than making a workable humanoid robot.

*So* much $$ has been spent on humanoid robotics before, by large, rich and
competent companies, and they still suck.  It's just a very hard problem,
with a lot of very hard subproblems, and it will take a while to get worked
through.

On the other hand, making a virtual world such as I envision, is more than a
spare-time project, but not more than the project of making a single
high-quality video game.  It's something that any one of these big Japanese
companies could do with a tiny fraction of their robotics budgets.  The
issue is a lack of perceived cool value and a lack of motivation.

Ben





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel

 It's an interesting idea, but I suspect it too will rapidly break down.
 Which activities can be known about in a rich, better-than-blind-Cyc way
 *without* a knowledge of objects and object manipulation? How can an agent
 know about reading a book,for example,  if it can't pick up and manipulate a
 book? How can it know about adding and subtracting, if it can't literally
 put objects on top of each other, and remove them?  We humans build up our
 knowledge of the world objects/physics up from infancy.  Science also
 insists that all formal scientific knowledge of  the world  - all scientific
 disciplines - must be ultimately physics/objects-based.  Is there really an
 alternative?


And  just to be clear: in the AGI Preschool world I envision, picking up and
manipulating and stacking objects, and so forth, *would* be possible.  This
much is not hard to achieve using current robot-simulator tech.

ben





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
I agree, but the good news is that game dev advances fast.

So, my plan with the AGI Preschool would be to build it in an open platform
such as OpenSim, and then swap in better and better physics engines as they
become available.
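
A minimal sketch of the kind of swap-in abstraction I mean (hypothetical
names, not OpenSim's or ODE's real APIs):

# A thin physics-engine interface lets the preschool swap engines as
# better ones become available, without touching the rest of the world.
from abc import ABC, abstractmethod

class PhysicsEngine(ABC):
    @abstractmethod
    def step(self, world_state, dt):
        """Advance the simulated world by dt seconds."""

class SimpleRigidBodyEngine(PhysicsEngine):
    def step(self, world_state, dt):
        # crude placeholder: integrate positions from velocities only
        for body in world_state["bodies"]:
            body["pos"] = [p + v * dt
                           for p, v in zip(body["pos"], body["vel"])]
        return world_state

class Preschool:
    def __init__(self, engine):
        self.engine = engine      # swap in a better engine here later
        self.world = {"bodies": [{"pos": [0.0, 1.0, 0.0],
                                  "vel": [0.0, -0.1, 0.0]}]}

    def tick(self, dt=0.05):
        self.world = self.engine.step(self.world, dt)

school = Preschool(SimpleRigidBodyEngine())
school.tick()
print(school.world["bodies"][0]["pos"])   # ~[0.0, 0.995, 0.0]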

Some current robot simulators use ODE and this seems to be good enough to
handle a lot of useful robot-object and object-object interactions, though I
agree it's limited.

Still, making a dramatically better physics engine -- while a bunch harder
than making a nice AGI preschool using current virtual worlds and physics
engines -- is still a way, way easier problem than making a highly
functional (in terms of sensors and actuators) humanoid robot.

Also, the advantages of working in a virtual rather than physical world
should not be overlooked.  The ability to run tests over and over again, to
freely vary parameters and so forth, is pretty nice ... also the ability to
run 1000s of tests in parallel without paying humongous bucks for a fleet of
robots...

ben

On Sat, Dec 20, 2008 at 8:43 AM, Derek Zahn derekz...@msn.com wrote:


 Oh, and because I am interested in the potential of high-fidelity physical
 simulation as a basis for AI research, I did spend some time recently
 looking into options.  Unfortunately the results, from my perspective, were
 disappointing.

 The common open-source physics libraries like ODE, Newton, and so on, have
 marginal feature sets and frankly cannot scale very well performance-wise.
 Once I even did a little application whose purpose was to see whether a
 human being could learn to control an ankle joint to compensate for an
 impulse event and stabilize a simple body model (that is, to make it not
 fall over) by applying torques to the ankle.  I was curious to see (through
 introspection) how humans learn to act as process controllers.
 http://happyrobots.com/anklegame.zip for anybody bored enough to care.  It
 wasn't a very good test of the question so I didn't really get a
 satisfactory answer.  I did discover, though, that a game built around more
 appealing cases of the player learning to control physics-inspired processes
 could be quite absorbing.

 Beyond that, the most promising avenue seems to be physics libraries tied
 to graphics hardware being worked on by the hardware companies to help
 sell their stream processors.  The best example is Nvidia, who bought PhysX
 and ported it to their latest cards, giving a huge performance boost.  Intel
 has bought Havok and I can only imagine that they are planning on using that
 as the interface to some Larrabee-based physics engine.  I'm sure that ATI
 is working on something similar for their newer (very impressive) stream
 processing cards.

 At this stage, though, despite some interesting features and leaping
 performance, it is still not possible to do things like get realistic sensor
 maps for a simulated soft hand/arm, and complex object modifications like
 bending and breaking are barely dreamed of in those frameworks.  Complex
 multi-body interactions (like realistic behavior when dropping or otherwise
 playing with a ring of keys or realistic baby toys) have a long ways to go.

 Basically, I fear those of us who are interested in this are just waiting
 to ride the game development coattails and it will be a few years at least
 until performance that even begins to interest me will be available.

 Just my opinions on the situation.





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn


 Some might say that if they get conservation of mass 
 and newton's law then they skipped all the useless stuff!
 OK, but those some probably don't include any preschool 
 teachers or educational theorists. That hypothesis is completely at odds 
 with my own intuition 
 from having raised 3 kids and spent probably hundreds of hours 
 helping out in daycare centers, preschools, kindergartens, etc.
 
Sorry, that was just kind of a joke.  Probably nobody actually has the opinion 
I was lampooning, though I do see similar things said sometimes, as if inferring 
minimum-description-length, root-level reductionisms is a realistic approach to 
learning to deal with the world.  It might even be true, but the humor was 
supposed to come from juxtaposing that idea with the AGI preschool.




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 Well, it's completely obvious to me, based on my knowledge of virtual worlds
 and robotics, that building a high quality virtual world is orders of
 magnitude easier than making a workable humanoid robot.

I guess that depends on what you mean by "high quality" and
"workable". Why does a robot have to be humanoid, BTW? I'd like a
robot that can make me a cup of tea, I don't particularly care if it
looks humanoid (in fact I suspect many humans would have less
emotional resistance to a robot that didn't look humanoid, since it's
more obviously a machine).

 On the other hand, making a virtual world such as I envision, is more than a
 spare-time project, but not more than the project of making a single
 high-quality video game.

GTA IV cost $5 million, so we're not talking about peanuts here.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn

Ben: Right.  My intuition is that we don't need to simulate the dynamics of 
fluids, powders and the like in our virtual world to make it adequate for 
teaching AGIs humanlike, human-level AGI.  But this could be wrong.
 
I suppose it depends on what kids actually learn when making cakes, skipping 
rocks, and making a mess with play-dough.  Some might say that if they get 
conservation of mass and Newton's laws then they skipped all the useless stuff!
 
I think I agree with the plausibility of something you have said many times:  
that there may be many paths to AGI that are not similar at all to human 
development -- abstract paths to modelling the universe, teasing meaning from 
sheer statistics of the Chinese/Chinese dictionary of the raw HTML internet, 
who knows what.
 
But in the case where we are trying to roughly follow stages of human 
development with goals of producing human-like linguistic and reasoning 
capabilities, I very much fear that any significant simplification of the 
universe will provide an insufficient basis for the large sensory concept set 
underlying language and analogical reasoning (both gross and fine).  Literally, 
I think you're throwing the baby out with the bathwater.  But, as you say, this 
could be wrong.
 
It's really the only critique I have of the AGI preschool idea, which I do like 
because we can all relate to it very easily.  At any rate, if it turns out to 
be a valid criticism the symptom will be that an insufficiently rich set of 
concepts will develop to support the range of capabilities needed and at that 
point the simulations can be adjusted to be more complete and realistic and 
provide more human sensory modalities.  I guess it will be disappointing if 
building an adequate virtual world turns out to be as difficult and expensive 
as building high quality robots -- but at least it's easier to clean up after 
cake-baking.
 
 




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 10:44 AM, Philip Hunt cabala...@googlemail.com wrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  Well, it's completely obvious to me, based on my knowledge of virtual
 worlds
  and robotics, that building a high quality virtual world is orders of
  magnitude easier than making a workable humanoid robot.

 I guess that depends on what you mean by high quality and
 workable. Why does a robot have to be humanoid, BTW? I'd like a
 robot that can make me a cup of tea, I don't particularly care if it
 looks humanoid (in fact I suspect many humans would have less
 emotional resistance to a robot that didn't look humanoid, since it's
 more obviously a machine).



It doesn't have to be humanoid ... but apart from rolling instead of
walking,
I don't see any really significant simplifications obtainable from making it
non-humanoid.

Grasping and manipulating general objects with robot manipulators is
very much an unsolved research problem.  So is object recognition in
realistic conditions.

So, to make an AGI robot preschool, one has to solve these hard
research problems first.

That is a viable way to go if one's not in a hurry --
but anyway in the robotics context any talk
of preschools is drastically premature...




  On the other hand, making a virtual world such as I envision, is more
 than a
  spare-time project, but not more than the project of making a single
  high-quality video game.

 GTA IV cost $5 million, so we're not talking about peanuts here.


Right, but that is way cheaper than making a high-quality humanoid robot

Actually, $$ aside, we don't even **know how** to make a decent humanoid
robot.

Or, a decently functional mobile robot **of any kind**

Whereas making a software based AGI Preschool of the type I described is
clearly
feasible using current technology, w/o any research breakthroughs

And I'm sure it could be done for $300K not $5M using OSS and non-US
outsourced labor...

ben g





RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Derek Zahn

Oh, and because I am interested in the potential of high-fidelity physical 
simulation as a basis for AI research, I did spend some time recently looking 
into options.  Unfortunately the results, from my perspective, were 
disappointing.
 
The common open-source physics libraries like ODE, Newton, and so on, have 
marginal feature sets and frankly cannot scale very well performance-wise.  
Once I even did a little application whose purpose was to see whether a human 
being could learn to control an ankle joint to compensate for an impulse event 
and stabilize a simple body model (that is, to make it not fall over) by 
applying torques to the ankle.  I was curious to see (through introspection) 
how humans learn to act as process controllers.  
http://happyrobots.com/anklegame.zip for anybody bored enough to care.  It 
wasn't a very good test of the question so I didn't really get a satisfactory 
answer.  I did discover, though, that a game built around more appealing cases 
of the player learning to control physics-inspired processes could be quite 
absorbing.
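
A toy sketch of the underlying control problem (not the anklegame code, just 
an illustration): an inverted-pendulum "body" pivoting at the ankle, hit by an 
impulse, with corrective torque applied at the ankle to keep it from falling.

import math

g, m, L = 9.81, 70.0, 1.0        # gravity, body mass (kg), COM height (m)
I = m * L * L                    # moment of inertia about the ankle
theta, omega = 0.0, 0.5          # lean angle (rad); omega is the impulse
dt = 0.01

def ankle_torque(theta, omega, kp=2000.0, kd=400.0):
    # stand-in for the player: a simple proportional-derivative correction
    return -kp * theta - kd * omega

for _ in range(500):             # simulate 5 seconds
    torque = ankle_torque(theta, omega)
    alpha = (m * g * L * math.sin(theta) + torque) / I   # angular accel.
    omega += alpha * dt
    theta += omega * dt

print(f"lean after 5 s: {theta:.4f} rad")   # close to zero => stabilized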
 
Beyond that, the most promising avenue seems to be physics libraries tied to 
graphics hardware being worked on by the hardware companies to help sell 
their stream processors.  The best example is Nvidia, who bought PhysX and 
ported it to their latest cards, giving a huge performance boost.  Intel has 
bought Havok and I can only imagine that they are planning on using that as the 
interface to some Larrabee-based physics engine.  I'm sure that ATI is working 
on something similar for their newer (very impressive) stream processing cards.
 
At this stage, though, despite some interesting features and leaping 
performance, it is still not possible to do things like get realistic sensor 
maps for a simulated soft hand/arm, and complex object modifications like 
bending and breaking are barely dreamed of in those frameworks.  Complex 
multi-body interactions (like realistic behavior when dropping or otherwise 
playing with a ring of keys or realistic baby toys) have a long ways to go.
 
Basically, I fear those of us who are interested in this are just waiting to 
ride the game development coattails and it will be a few years at least until 
performance that even begins to interest me will be available.
 
Just my opinions on the situation.
 




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Mike Tintner


Bob:  Even with crude or no real
simulation ability in an environment such as Second Life, using some
simple symbology to stand for "pick up screwdriver", you can still try
to tackle problems such as autobiographical memory - how does the
agent create a coherent story out of a series of activities, and how
can it use that story in future to improve its skills or communication
effectiveness.

It's an interesting idea, but I suspect it too will rapidly break down. 
Which activities can be known about in a rich, better-than-blind-Cyc way 
*without* a knowledge of objects and object manipulation? How can an agent 
know about reading a book, for example, if it can't pick up and manipulate a 
book? How can it know about adding and subtracting, if it can't literally 
put objects on top of each other, and remove them?  We humans build up our 
knowledge of the world's objects/physics from infancy.  Science also 
insists that all formal scientific knowledge of  the world  - all scientific 
disciplines - must be ultimately physics/objects-based.  Is there really an 
alternative? 







Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 It doesn't have to be humanoid ... but apart from rolling instead of
 walking,
 I don't see any really significant simplifications obtainable from making it
 non-humanoid.

I can think of several. For example, you could give it lidar to
measure distances with -- this could then be used as input to its
vision system making it easier for the robot to tell which objects are
near or far. Instead of binocular vision, it could have 2 video
cameras. It could have multiple ears, which would help it tell where a
sound is coming from.

To the best of my knowledge, no robot that's ever been used for
anything practical has been humanoid.

 Grasping and manipulating general objects with robot manipulators is
 very much an unsolved research problem.  So is object recognition in
 realistic conditions.

What sort of visual input do you plan to have in your virtual environment?

 So, to make an AGI robot preschool, one has to solve these hard
 research problems first.

 That is a viable way to go if one's not in a hurry --
 but anyway in the robotics context any talk
 of preschools is drastically premature...


  On the other hand, making a virtual world such as I envision, is more
  than a
  spare-time project, but not more than the project of making a single
  high-quality video game.

 GTA IV cost $5 million, so we're not talking about peanuts here.

 Right, but that is way cheaper than making a high-quality humanoid robot

Is it? I suspect one with tracks, two robotic arms, various sensors
for light and sound, etc, could be made for less than $10,000 -- this
would be something that could move around and manipulate a blocks
world. My understanding is that all, or nearly all, the difficulty
comes in programming it. Which is where AI comes in.

 Actually, $$ aside, we don't even **know how** to make a decent humanoid
 robot.

 Or, a decently functional mobile robot **of any kind**

Is that because of hardware or software issues?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Derek Zahn derekz...@msn.com:
 Ben:

 Right.  My intuition is that we don't need to simulate the dynamics
 of fluids, powders and the like in our virtual world to make it adequate
 for teaching AGIs humanlike, human-level AGI.  But this could be
 wrong.

 I suppose it depends on what kids actually learn when making cakes, skipping
 rocks, and making a mess with play-dough.

I think that the important cognitive abilities involved are at a
simpler level than that.

Consider an object, such as a sock or a book or a cat. These objects
can all be recognised by young children, even though the visual input
coming from them changes with the angle they're viewed from. More
fundamentally, all these objects can change shape, yet humans can
still effortlessly recognise them to be the same thing. And this
ability doesn't stop with humans -- most (if not all) mammalian
species can do it.

Until an AI can do this, there's no point in trying to get it to play
at making cakes, etc.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
Well, there is massively more $$ going into robotics dev than into AGI dev,
and no one seems remotely near to solving the hard problems

Which is not to say it's a bad area of research, just that it's a whole
other huge confusing R&D can of worms

So I still say, the choices are

-- virtual embodiment, as I advocate

-- delay working on AGI for a decade or so, and work on robotics now instead
(where by robotics I include software work on low-level sensing and actuator
control)

Either choice makes sense but I prefer the former as I think it can get us
to the end goal faster.

About the adequacy of current robot hardware -- I'll tell you more in 9
months or so ... a project I'm collaborating on is going to be using AI
(including OpenCog) to control a Nao humanoid robot.  We'll have 3 of them,
they cost about US$14K each or so.   The project is in China but I'll be
there in June-July to play with the Naos and otherwise collaborate on the
project.

My impression is that with a Nao right now, camera-eye sensing is fine so
long as lighting conditions are good ... audition is OK in the absence of
masses of background noise ... walking is very awkward and grasping is
possible but limited

The extent to which the limitations of current robots are hardware vs
software based is rather subtle, actually.

In the case of vision and audition, it seems clear that the bottleneck is
software.

But, with actuation, I'm not so sure.  The almost total absence of touch and
kinesthetics in current robots is a huge impediment, and puts them at a huge
disadvantage relative to humans.  Things like walking and grasping as humans
do them rely extremely heavily on both of these senses, so in trying to deal
with this stuff without these senses (in any serious form), current robots
face a hard and odd problem...

ben

On Sat, Dec 20, 2008 at 11:42 AM, Philip Hunt cabala...@googlemail.com wrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  It doesn't have to be humanoid ... but apart from rolling instead of
  walking,
  I don't see any really significant simplifications obtainable from making
 it
  non-humanoid.

 I can think of several. For example, you could give it lidar to
 measure distances with -- this could then be used as input to its
 vision system making it easier for the robot to tell which objects are
 near or far. Instead of binocular vision, it could have 2 video
 cameras. It could have multiple ears, which would help it tell where a
 sound is coming from.

 The the best of my knowledge, no robot that's ever been used for
 anything practical has ever been humanoid.

  Grasping and manipulating general objects with robot manipulators is
  very much an unsolved research problem.  So is object recognition in
  realistic conditions.

 What sort of visual input do you plan to have in your virtual environment?

  So, to make an AGI robot preschool, one has to solve these hard
  research problems first.
 
  That is a viable way to go if one's not in a hurry --
  but anyway in the robotics context any talk
  of preschools is drastically premature...
 
 
   On the other hand, making a virtual world such as I envision, is more
   than a
   spare-time project, but not more than the project of making a single
   high-quality video game.
 
  GTA IV cost $5 million, so we're not talking about peanuts here.
 
  Right, but that is way cheaper than making a high-quality humanoid robot

 Is it? I suspect one with tracks, two robotic arms, various sensors
 for light and sound, etc, could be made for less than $10,000 -- this
 would be something that could move around and manipulate a blocks
 world. My understanding is that all, or nearly all, the difficulty
 comes in programming it. Which is where AI comes in.

  Actually, $$ aside, we don't even **know how** to make a decent humanoid
  robot.
 
  Or, a decently functional mobile robot **of any kind**

 Is that because of hardware or software issues?

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel


 Consider an object, such as a sock or a book or a cat. These objects
 can all be recognised by young children, even though the visual input
 coming from them changes with the angle they're viewed from. More
 fundamentally, all these objects can change shape, yet humans can
 still effortlessly recognise them to be the same thing. And this
 ability doesn't stop with humans -- most (if not all) mammalian
 species can do it.

 Until an AI can do this, there's no point in trying to get it to play
 at making cakes, etc.



Well, it seems to me that current virtual worlds are just fine for exploring
this kind of vision processing

However, I have long been perplexed by the obsession of so many AI folks
with vision processing.

I mean: yeah, it's important to human intelligence, and some aspects of
human cognition are related to human visual perception

But, it's not obvious to me why so many folks think vision is so critical to
AI, whereas other aspects of human body function are not.

For instance, the yogic tradition and related Eastern ideas would suggest
that *breathing* and *kinesthesia* are the critical aspects of mind.
Together with touch, kinesthesia is what lets a mind establish a sense of
self, and of the relation between self and world.

In that sense kinesthesia and touch are vastly more fundamental to mind than
vision.  It seems to me that a mind without vision could still be a
basically humanlike mind.  Yet, a mind without touch and kinesthesia could
not, it would seem, because it would lack a humanlike sense of its own self
as a complex dynamic system embedded in a world.

Why then is there constant talk about vision processing and so little talk
about kinesthetic and tactile processing?

Personally I don't think one needs to get into any of this sensorimotor
stuff too deeply to make a thinking machine.  But, if you ARE going to argue
that sensorimotor aspects are critcial to humanlike AI because they're
critical to human intelligence, why harp on vision to the exclusion of other
things that seem clearly far more fundamental??

Is the reason just that AI researchers spend all day staring at screens and
ignoring their physical bodies and surroundings?? ;-)

ben g





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 However, I have long been perplexed at the obsession with so many AI folks
 with vision processing.

I wouldn't say I'm obsessed with it. On its own, vision processing
does nothing, the same as all other input processing -- it's only when
a brain/AI uses that processing to create output that it is actually
doing any work.

The important thing about vision, IMO, is not vision itself, but the
way that vision interfaces with a mind's model of the world. And
vision isn't really that different in principle from the other sensory
modalities that a human or animal has -- they are all inputs, that go
to building a model of the world, through which the organism makes
decisions.

 But, it's not obvious to me why so many folks think vision is so critical to
 AI, whereas other aspects of human body function are not.

I don't think any human body functions are critical to AI. IMO it's a
perfectly valid approach to AI to build programs that deal with
digital symbolic information -- e.g. programs like copycat or eurisko.

 For instance, the yogic tradition and related Eastern ideas would suggest
 that *breathing* and *kinesthesia* are the critical aspects of mind.
 Together with touch, kinesthesia is what lets a mind establish a sense of
 self, and of the relation between self and world.

Kinesthesia/touch/movement are clearly important sensory modalities in
mammals, given that they are utterly fundamental to moving around in
the world. Breathing less so -- I mean you can do it if you're
unconscious or brain dead.

 Why then is there constant talk about vision processing and so little talk
 about kinesthetic and tactile processing?

Possibly because people are less conscious of it than vision.

 Personally I don't think one needs to get into any of this sensorimotor
 stuff too deeply to make a thinking machine.

Me neither. But if the thinking machine is to be able to solve certain
problems (when connected to a robot body, of course) it will have to
have sophisticated systems to handle touch, movement and vision. By
certain problems I mean things like making a cup of tea, or a cat
climbing a tree, or a human running over uneven ground.

 But, if you ARE going to argue
 that sensorimotor aspects are critical to humanlike AI because they're
 critical to human intelligence, why harp on vision to the exclusion of other
 things that seem clearly far more fundamental??

Say I asked you to imagine a cup.

(Go on, do it now).

Now, when you imagined the cup, did you imagine what it looks like, or
what it feels like to the touch? For me, it was the former. So I don't
think touch is clearly more fundamental, in terms of how it interacts
with our internal model of the world, than vision is.

 Is the reason just that AI researchers spend all day staring at screens and
 ignoring their physical bodies and surroundings?? ;-)

:-)

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/19 Ben Goertzel b...@goertzel.org:

 What I'd like to see is a really  nicely implemented virtual world
 preschool for AIs ... though of course building such a thing will be a lot
 of work for someone...

Why a virtual world preschool and not a real one?

A virtual world, if not programmed accurately, may be subtly
different from the real world, so that for example an AGI is capable
of picking up and using a screwdriver in the virtual world but not
the real world, because the real world is more complex.

If you want your AGI to be able to use a screwdriver, you probably
need to train it in the real world (at least some of the time).

If you don't care whether your AGI can use a screwdriver, why have one
in the virtual world?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Well, there is a major question whether one can meaningfully address AGI via
virtual-robotics rather than physical-robotics.

No one can make a convincing proof either way right now.

But, it's clear that if one wants to go the physical-robotics direction, now
is not the time to be working on preschools and cognition.  In that case, we
need to be focusing on vision and grasping and walking and such.

OTOH, if one wants to go the virtual-robotics direction (as is my
intuition), then it is possible to bypass many of the lower-level
perception/actuation issues and focus on preschool-level learning, reasoning
and conceptual creation.

And there's no need to write a paper on the eventual possibility of putting
robots in real preschools: that's obvious.  But it's also far beyond the
scope of contemporary robots, as would be universally agreed.  Whereas a
virtual preschool is not as *obviously* far beyond the scope of contemporary
AGI designs (at least according to some experts, like me), which is what makes
it more interesting in the present moment...

ben g

-- Ben G

On Fri, Dec 19, 2008 at 5:12 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/19 Ben Goertzel b...@goertzel.org:
 
  What I'd like to see is a really  nicely implemented virtual world
  preschool for AIs ... though of course building such a thing will be a
 lot
  of work for someone...

 Why a virtual world preschool and not a real one?

 A virtual world, if not programmed accurately, may be subtly
 different from the real world, so that for example an AGI is capable
 of picking up and using a screwdriver in the virtual world but not
 the real world, because the real world is more complex.

 If you want your AGI to be able to use a screwdriver, you probably
 need to train it in the real world (at least some of the time).

 If you don't care whether your AGI can use a screwdriver, why have one
 in the virtual world?

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





RE: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Derek Zahn

Hi Ben.
 
 OTOH, if one wants to go the virtual-robotics direction (as is my intuition), 
 then it is possible to bypass many of the lower-level perception/actuation 
 issues and focus on preschool-level learning, reasoning and conceptual 
 creation.
 
And yet, in your paper (which I enjoyed), you emphasize the importance of not
providing a simplistic environment (with the screwdriver example).  Without
facing the low-level sensory world (either through robotics or through very
advanced simulations feeding senses essentially equivalent to those of
humans), I wonder if a targeted human-like AGI will be able to acquire the
necessary concepts that children absorb and use as much of the metaphorical
basis for their thought -- slippery, soft, hot, hard, rough, sharp, and on
and on.
 
I assume you have some sort of middle ground in mind... what's your thinking
about how much you can "cheat" in this way (beyond what is conveniently
doable, I mean)?
 
Thanks!
 
 




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
It's a hard problem, and the answer is to "cheat" as much as possible, but
no more than that.

We'll just have to feel this out via experiment...

My intuition is that current virtual worlds and game worlds are too crude,
but current robot simulators are not.

I.e., I doubt one needs serious fluid dynamics in one's simulation ... I
doubt one needs bodies with detailed internal musculature ... but I think
one does need basic Newtonian physics and the ability to use tools, break
things in half (but not necessarily realistic cracking behavior), balance
things and carry them and stack them and push them together Lego-like and so
forth...
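
(To make the level I have in mind concrete, here is a deliberately crude
Python sketch -- rigid blocks with mass and size, a rough stability rule, and
a break-in-half operation with no crack modelling.  All names and numbers are
invented for illustration only, not a spec for an actual simulator:)

from dataclasses import dataclass

@dataclass
class Block:
    name: str
    mass: float    # kg
    width: float   # m

def stack_is_stable(stack):
    # Crude Newtonian-ish rules: a block can't carry more than ten times its
    # own mass, and can't support a block wider than itself.
    for i, base in enumerate(stack):
        load = sum(b.mass for b in stack[i + 1:])
        if load > 10 * base.mass:
            return False
        if i + 1 < len(stack) and stack[i + 1].width > base.width:
            return False
    return True

def break_in_half(block):
    # No cracking dynamics -- just two half-mass, half-width pieces.
    return (Block(block.name + "_a", block.mass / 2, block.width / 2),
            Block(block.name + "_b", block.mass / 2, block.width / 2))

tower = [Block("base", 2.0, 0.4), Block("mid", 1.0, 0.3), Block("top", 0.5, 0.2)]
print(stack_is_stable(tower))    # True -- narrower, lighter blocks on top
print(break_in_half(tower[2]))   # two smaller blocks, no crack pattern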

I could probably frame a detailed argument as to WHY I think the line should
be drawn right there, in terms of the cognitive tasks supported by this
level of physics simulation.  That would be an interesting followup paper, I
guess.

The crux of the argument would be that all the basic tasks required in an
AGI Preschool could be sensibly formulated using only this level of physics
simulation, in a way that doesn't involve cheating... (but the proper
contextualization/formalization of "doesn't involve cheating" would require
some thought)

ben


On Fri, Dec 19, 2008 at 7:54 PM, Derek Zahn derekz...@msn.com wrote:

  Hi Ben.

  OTOH, if one wants to go the virtual-robotics direction (as is my
 intuition),
  then it is possible to bypass many of the lower-level
 perception/actuation
  issues and focus on preschool-level learning, reasoning and conceptual
 creation.

 And yet, in your paper (which I enjoyed), you emphasize the importance of
 not providing
 a simplistic environment (with the screwdriver example).  Without facing
 the low-level
 sensory world (either through robotics or through very advanced simulations
 feeding
 senses essentially equivalent to those of humans), I wonder if a targeted
 human-like
 AGI will be able to acquire the necessary concepts that children absorb and
 use as much of the metaphorical basis for their thought -- slippery, soft,
 hot, hard, rough, sharp, and on and on.

 I assume you have some sort of middle ground in mind... what's your
 thinking about
 how much you can cheat in this way (beyond that of what is conveniently
 doable
 I mean)?

 Thanks!






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 I.e., I doubt one needs serious fluid dynamics in one's simulation ... I
 doubt one needs bodies with detailed internal musculature ... but I think
 one does need basic Newtonian physics and the ability to use tools, break
 things in half (but not necessarily realistic cracking behavior), balance
 things and carry them and stack them and push them together Lego-like and so
 forth...

Needs for what purpose? I can see three uses for a virtual world:

1. to mimic the real world accurately enough that the AI can use the
virtual world instead, and by using it become proficient in dealing
with the real world, because it is cheaper than a real world.
Obviously to program a virtual world this real is a big up-front
investment, but once the investment is made, such a world may well be
cheaper and easier to use than our real one.

2. to provide a useful bridge between humans and the AGI, i.e. the
virtual world will be similar enough to the real world that humans
will have a common frame of reference with the AGI.

3. to provide a toy domain for the AI to think about and become
proficient in. (Of course there's no reason why a toy domain needs to
be anything like a virtual world; it could for example be a software
modality that can see/understand source code as easily and fluently
as humans interpret visual input -- see the toy sketch after this list.)
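
(Toy sketch of the parenthetical in 3, using Python's ast module: the
"percept" here is just a count of syntactic node types, which is purely
illustrative and obviously far short of "understanding" code:)

import ast
from collections import Counter

def perceive_source(code):
    """Return a crude 'percept' of a Python snippet: counts of node types."""
    tree = ast.parse(code)
    return Counter(type(node).__name__ for node in ast.walk(tree))

snippet = "def f(x):\n    return [y * y for y in range(x)]\n"
print(perceive_source(snippet).most_common(5))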

AIUI you're mostly thinking in terms of 2 or 3. Fair comment?

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 8:42 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  I.e., I doubt one needs serious fluid dynamics in one's simulation ... I
  doubt one needs bodies with detailed internal musculature ... but I think
  one does need basic Newtonian physics and the ability to use tools, break
  things in half (but not necessarily realistic cracking behavior), balance
  things and carry them and stack them and push them together Lego-like and
 so
  forth...

 Needs for what purpose? I can see three uses for a virtual world:

 1. to mimic the real world accurately enough that the AI can use the
 virtual world instead, and by using it become proficient in dealing
 with the real world, because it is cheaper than a real world.
 Obviously to program a virtual world this real is a big up-front
 investment, but once the investment is made, such a world may well be
 cheaper and easier to use than our real one.


I think this will come along as a side-effect of achieving the other goals,
to some extent.  But it's not my main goal, no.




 2. to provide a useful bridge between humans and the AGI, i.e. the
 virtual world will be similar enough to the real world that humans
 will have a common frame of reference with the AGI.



Yes...
to allow the AGI to develop progressively greater intelligence
in a manner that humans can easily comprehend, so that we can
easily participate and encourage its growth (via teaching and via
code changes, knowledge entry, etc.)



 3. to provide a toy domain for the AI to think about and become
 proficient in.


Not just to become proficient in the domain, but become proficient
in general humanlike cognitive processes.

The point of a preschool is that it's designed to present all important
adult human cognitive processes in simplified forms.


 (Of course there's no reason why a toy domain needs to
 be anything like a virtual world, it could for example be a software
 modality that can see/understand source code as easily and fluently
 as humans interpret visual input.)

 AIUI you're mostly thinking in terms of 2 or 3. Fair comment?

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Derek Zahn derekz...@msn.com:

 And yet, in your paper (which I enjoyed), you emphasize the importance of
 not providing
 a simplistic environment (with the screwdriver example).  Without facing the
 low-level
 sensory world (either through robotics or through very advanced simulations
 feeding
 senses essentially equivalent to those of humans), I wonder if a targeted
 human-like
 AGI will be able to acquire the necessary concepts that children absorb and
 use as much o
 f the metaphorical basis for their thought -- slippery, soft, hot, hard,
 rough, sharp, and on and on.

Evolution has equipped humans (and other animals) with a good
intuitive understanding of many of the physical realities of our
world. The real world is not just slippery in the physical sense, it's
slippery in the non-literal sense too. For example, I can pick up an
OXO cube (a solid object), crush it so it becomes powder, pour it into
my stew, and stir it in so it dissolves. My mind can easily and
effortlessly track that in some sense it's the same OXO cube and in
another sense it isn't.

Another example: my cat can distinguish between surfaces that are safe
to sit on, and others that are too wobbly, even if they look the same.

An animal's intuitive physics is a complex system. I expect that in
humans a lot of this machinery is re-used to create intelligence. (It
may be true, and IMO probably is true, that it's not necessary to
re-create this machinery to make an AGI.)
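
(For concreteness, here's a minimal Python sketch of the "same cube / not the
same cube" bookkeeping -- one persistent identity token with a mutable
physical state, so both readings can be recovered. The representation is
invented for this example only:)

from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    identity: str                      # stays fixed across transformations
    state: str                         # "solid", "powder", "dissolved", ...
    history: list = field(default_factory=list)

    def transform(self, new_state):
        self.history.append(self.state)
        self.state = new_state

cube = TrackedObject("oxo_cube_17", "solid")
cube.transform("powder")      # crushed
cube.transform("dissolved")   # stirred into the stew

print(cube.identity)   # the same object, in the identity sense
print(cube.state)      # not the same at all, in the physical sense
print(cube.history)    # ['solid', 'powder']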


-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:


 3. to provide a toy domain for the AI to think about and become
 proficient in.

 Not just to become proficient in the domain, but become proficient
 in general humanlike cognitive processes.

 The point of a preschool is that it's designed to present all important
 adult human cognitive processes in simplified forms.

So it would be able to transfer its learning to the real world and
(when given a robot body) be able to go into a kitchen it's never seen
before and make a cup of tea? (In other words, will the simulation be
deep enough to allow that?)

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Right.  My intuition is that we don't need to simulate the dynamics
of fluids, powders and the like in our virtual world to make it adequate
for teaching AGIs humanlike, human-level intelligence.  But this could be
wrong.

It also could be interesting to program an artificial chemistry that
emulated certain aspects of real chemistry -- not to be realistic, but
to have enough complexity to be vaguely analogous.

After all, I mean: preschoolers have fun and learn a lot mixing flour and
butter and
eggs and so forth, but how realistic does the physics of such things really
have to be to
give a generally comparable learning experience???
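
(Just to show the shape of the thing, a toy artificial-chemistry sketch in
Python -- a small rule table mapping pairs of substances to products, with no
claim to realism; the substances and rules are made up for the example:)

RULES = {
    frozenset(["flour", "water"]): "dough",
    frozenset(["dough", "heat"]): "bread",
    frozenset(["butter", "sugar"]): "cream",
}

def mix(a, b):
    # Unknown combinations just make a mess -- room for a learner to find rules.
    return RULES.get(frozenset([a, b]), "mess")

print(mix("flour", "water"))   # dough
print(mix("dough", "heat"))    # bread
print(mix("eggs", "bread"))    # mess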

ben



 Evolution has equipped humans (and other animals) with a good
 intuitive understanding of many of the physical realities of our
 world. The real world is not just slippery in the physical sense, it's
 slippery in the non-literal sense too. For example, I can pick up an
 OXO cube (a solid object), crush it so it becomes powder, pour it into
 my stew, and stir it in so it dissolves. My mind can easily and
 effortlessly track that in some sense it's the same OXO cube and in
 another sense it isn't.

 Another example: my cat can distinguish between surfaces that are safe
 to sit on, and others that are too wobbly, even if they look the same.

 An animal's intuitive physics is a complex system. I expect that in
 humans a lot of this machinery is re-used to create intelligence. (It
 may be true, and IMO probably is true, that it's not necessary to
 re-create this machinery to make an AGI.)


 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Well, that's a really easy example, right?  For making tea, the answer
would probably be yes.

Baking a cake is a harder example.  An AGI trained in a virtual world could
certainly follow a recipe to make a passable cake.  But it would never learn
to be a **really good** baker in the virtual world, unless the virtual world
were fabulously realistic in its simulation (and we don't know how to make
it that good, right now).  Being a really good baker requires a lot of
intuition for subtle physical properties of ingredients, not just following
a recipe and knowing the primitive basics of naive physics...

ben g

On Fri, Dec 19, 2008 at 8:56 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
 
  3. to provide a toy domain for the AI to think about and become
  proficient in.
 
  Not just to become proficient in the domain, but become proficient
  in general humanlike cognitive processes.
 
  The point of a preschool is that it's designed to present all important
  adult human cognitive processes in simplified forms.

 So it would be able to transfer its learning to the real world and
 (when given a robot body) be able to go into a kitchen it's never seen
 before and make a cup of tea? (In other words, will the simulation be
 deep enough to allow that?)

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Philip Hunt
2008/12/20 Ben Goertzel b...@goertzel.org:

 Baking a cake is a harder example.  An AGI trained in a virtual world could
 certainly follow a recipe to make a passable cake.  But it would never learn
 to be a **really good** baker in the virtual world, unless the virtual world
 were fabulously realistic in its simulation (and we don't know how to make
 it that good, right now).  Being a really good baker requires a lot of
 intuition for subtle physical properties of ingredients, not just following
 a recipe and knowing the primitive basics of naive physics...

A sense of taste would probably help too.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Ahhh... ***that's*** why everyone always hates my cakes!!!  I never realized
you were supposed to **taste** the stuff ... I thought it was just supposed
to look funky after you throw it in somebody's face ;-)

On Fri, Dec 19, 2008 at 9:31 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  Baking a cake is a harder example.  An AGI trained in a virtual world
 could
  certainly follow a recipe to make a passable cake.  But it would never
 learn
  to be a **really good** baker in the virtual world, unless the virtual
 world
  were fabulously realistic in its simulation (and we don't know how to
 make
  it that good, right now).  Being a really good baker requires a lot of
  intuition for subtle physical properties of ingredients, not just
 following
  a recipe and knowing the primitive basics of naive physics...

 A sense of taste would probably help too.

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Although, I note, I know a really good baker who makes great cakes in spite
of the fact that she does not eat sugar and hence does not ever taste most
of the stuff she makes...

But she *used to* eat sugar, so to an extent she can go on memory

Sorta like how Beethoven kept composing after he went deaf, I suppose ;-)

On Fri, Dec 19, 2008 at 9:42 PM, Ben Goertzel b...@goertzel.org wrote:


 Ahhh... ***that's*** why everyone always hates my cakes!!!  I never
 realized you were supposed to **taste** the stuff ... I thought it was just
 supposed to look funky after you throw it in somebody's face ;-)


 On Fri, Dec 19, 2008 at 9:31 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  Baking a cake is a harder example.  An AGI trained in a virtual world
 could
  certainly follow a recipe to make a passable cake.  But it would never
 learn
  to be a **really good** baker in the virtual world, unless the virtual
 world
  were fabulously realistic in its simulation (and we don't know how to
 make
  it that good, right now).  Being a really good baker requires a lot of
  intuition for subtle physical properties of ingredients, not just
 following
  a recipe and knowing the primitive basics of naive physics...

 A sense of taste would probably help too.

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread J. Andrew Rogers


On Dec 19, 2008, at 6:43 PM, Ben Goertzel wrote:


Although, I note, I know a really good baker who makes great cakes  
in spite of the fact that she does not eat sugar and hence does not  
ever taste most of the stuff she makes...


But she *used to* eat sugar, so to an extent she can go on memory



Fortunately, baking is more about process control than flavor control.  
Unlike normal cooking, which is significantly fine-tuned by taste, the  
taste of baked goods is pretty invariant.  On the other hand, baking  
requires a lot of attention to detail and process precision that  
normal cooking does not.  Which is why I am merely an adequate baker  
instead of a great one. :-)


Cheers,

J. Andrew Rogers


