I see no argument in your text against my main claim: that an AGI should be
able to learn chess from playing chess alone. Replies that do not address
this claim are what I call straw-man replies.

 

My main point against embodiment is simply the huge effort it requires. You
could work for years with this approach and a particular AGI concept before
recognizing that it doesn't work.

 

If you first apply your AGI concept in a small domain, even one that is not
necessarily AGI-complete, you arrive much faster at a benchmark for whether
your concept is even worth the difficult studies that embodiment requires.

 

Chess is a very good domain for this benchmark because it is very easy to
program, yet very difficult to outperform human intelligence in.

 

- Matthias

 

 

 

 

From: David Hart [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 22 October 2008 09:43
To: agi@v2.listbox.com
Subject: Re: [agi] If your AGI can't learn to play chess it is no AGI

 

Matthias, 

You've presented a straw man argument to criticize embodiment. As a
counter-example, in the OCP AGI-development plan, embodiment is not
primarily used to provide domains (via artificial environments) in which an
AGI might work out abstract problems, directly or comparatively (not to
discount the potential utility of this approach in many scenarios). Rather,
it is used to provide an environment for the grounding of symbols (including
concepts important for doing mathematics), similar to the way in which
humans, from infants through to adults, learn through play and through
guided education.

'Abstraction' is so named because it involves generalizing from the
specifics of one or more domains (d1, d2), and is useful when it can be
applied (with *any* degree of success) to other domains (d3, ...). Virtual
embodied interactive learning utilizes virtual objects and their properties
as a way of generating these specifics for artificial minds to use to build
abstractions, to grok the abstractions of others, and ultimately to build a
deep understanding of our reality (yes, 'deep' in this sense is used in a
very human-mind-centric way).

Of course, few people claim that machine learning with the help of virtually
embodied environments is the ONLY way to approach building an AI capable of
doing mathematics (and communicating with humans about mathematics), but
it is an approach that has *many* good things going for it, including
proving tractable via measurable incremental improvements (even though it is
admittedly still at a *very* early stage).

-dave

On Wed, Oct 22, 2008 at 4:20 PM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

It seems to me that many people think that embodiment is very important for
AGI.

For instance, some people seem to believe that you can't be a good
mathematician if you haven't had some embodied experience.

 

But this would have a rather strange consequence:

If you give your AGI a difficult mathematical problem to solve, then it
would answer:

 

"Sorry, I still cannot solve your problem, but let me walk with my body
through the virtual world. 

Hopefully, I will then understand your mathematical question, and even more
hopefully I will be able to solve it after some further embodied
experience."

 

AGI is the ability to solve different problems in different domains. But
such an AGI would need to gain experience in domain d1 in order to solve
problems of domain d2. Does this really make sense if all the information
necessary to solve the problems of d2 is contained in d2? I think an AGI
which has to gain experience in d1 in order to solve a problem of domain d2,
when d2 contains everything needed to solve that problem, is no AGI. How
would such an AGI even know which experiences in d1 are necessary to solve
the problem in d2?

 

In my opinion a real AGI must be able to solve a problem of a domain d
without leaving this domain, provided that everything needed to solve the
problem is available within d.

 

From this we can define a simple benchmark which is not sufficient for AGI
but which is *necessary* for a system to be an AGI system:

 

The domain of chess contains everything there is to know about chess. So
when it comes to becoming a good chess player, learning chess from playing
chess must be sufficient. Thus, an AGI which is not able to improve its
chess abilities from playing chess alone is no AGI.

 

Therefore, my first steps in the roadmap towards AGI would be the following:

1. Design the architecture of your AGI.

2. Implement the software for your AGI.

3. Test whether your AGI is able to become a good chess player by learning
in the domain of chess alone.

4. If your AGI can't even learn to play good chess, then it is no AGI, and
it would be a waste of time to run experiments with your system in more
complex domains.
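To make step 3 concrete: domain-internal learning from self-play alone can
be sketched in a few dozen lines. A real chess learner is of course far
larger, so the sketch below substitutes tic-tac-toe as a stand-in domain;
the function names and the Monte-Carlo self-play scheme are purely
illustrative assumptions of mine, not a prescription for any particular AGI
architecture. The point is only that the learner touches nothing outside
the game itself.

```python
import random

EMPTY = 0
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 1 or 2 if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    """Indices of the empty squares."""
    return [i for i, v in enumerate(board) if v == EMPTY]

def train(episodes=50000, epsilon=0.1, seed=0):
    """Monte-Carlo self-play: both sides share one action-value table.
    The final reward is backed up along the game history with alternating
    sign, because the game (like chess) is zero-sum."""
    rng = random.Random(seed)
    q, n = {}, {}
    for _ in range(episodes):
        board, player, history = [EMPTY] * 9, 1, []
        while True:
            state, legal = tuple(board), moves(board)
            if rng.random() < epsilon:          # explore
                a = rng.choice(legal)
            else:                               # exploit learned values
                a = max(legal, key=lambda m: q.get((state, m), 0.0))
            board[a] = player
            history.append((state, a))
            w = winner(board)
            if w is not None or not moves(board):
                r = 1.0 if w is not None else 0.0   # last mover won, or draw
                for key in reversed(history):       # sample-average update
                    n[key] = n.get(key, 0) + 1
                    q[key] = q.get(key, 0.0) + (r - q.get(key, 0.0)) / n[key]
                    r = -r                          # opponent's perspective
                break
            player = 3 - player
    return q

def count_losses(q, games=500, seed=1):
    """Greedy play from the learned table against a uniformly random
    opponent, alternating who moves first; returns the number of losses."""
    rng = random.Random(seed)
    losses = 0
    for g in range(games):
        board, player = [EMPTY] * 9, 1
        agent = 1 if g % 2 == 0 else 2
        while True:
            legal = moves(board)
            if player == agent:
                state = tuple(board)
                a = max(legal, key=lambda m: q.get((state, m), 0.0))
            else:
                a = rng.choice(legal)
            board[a] = player
            w = winner(board)
            if w is not None:
                losses += (w != agent)
                break
            if not moves(board):
                break
            player = 3 - player
    return losses
```

The benchmark question is then exactly the one posed above: does
count_losses drop well below what an unlearned player would suffer, using
nothing but play within the domain? A random player loses roughly 44% of
such games; a system that cannot beat that from self-play alone has, by
this argument, failed the necessary condition.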

 

-Matthias

 

 

 

 


 




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=117534816-b15a34
Powered by Listbox: http://www.listbox.com
