Hi Ed,

As most of you already know, the problem I am trying to solve involves knowledge 
and skill acquisition to achieve AGI.  The proposed solution is a bootstrap 
English dialog system, backed by a knowledge base built upon OpenCyc and greatly 
elaborated with lexical information from WordNet, Wiktionary, and The CMU 
Pronouncing Dictionary.  A multitude of volunteers would subsequently mentor 
the many agents that will comprise Texai.

At some point, the various cognitive architectures you mentioned will achieve 
natural language capability, so perhaps my approach is subsumed by them.  But 
my focus is on implementing this capability before any other, in the hope 
that doing so minimizes the amount of programming that I alone must perform.
 
-Steve


Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

----- Original Message ----
From: Ed Porter <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, April 19, 2008 10:35:43 AM
Subject: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

 WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

With the work done by Goertzel et al., Pei Wang, Joscha Bach
<http://www.micropsi.org/> , Sam Adams, and others who spoke at AGI 2008, I
feel we pretty much conceptually understand how to build powerful AGIs.  I'm
not necessarily saying we know all the pieces of the puzzle, but rather that
we know enough to start building impressive intelligences, and once we build
them we will be in a much better position to find out what the other
missing conceptual pieces of the puzzle are --- if any.

As I see it --- the major problem is in selecting, from all we know, the
parts necessary to build a powerful artificial mind, at the scale needed, in
a way that works together well, efficiently, and automatically.  This would
include a lot of parameter tuning, and determining which of the competing
techniques for accomplishing the same end are most efficient at the scale
and in the context needed.

But I don't see any major aspect of the problem that we don't already have
what appear to be good ways of addressing, once we have all the pieces put
together.

I ASSUME --- HOWEVER --- THERE ARE AT LEAST SOME SUCH MISSING CONCEPTUAL
PARTS OF THE PUZZLE --- AND I AM JUST FAILING TO SEE THEM.

I would appreciate it if those on this list could point out what significant
conceptual aspects of the AGI problem are not dealt with by a reasonable
synthesis drawn from works like those of Goertzel et al., Pei Wang, Joscha
Bach, and Stan Franklin --- other than the problems acknowledged above.

IT WOULD BE VALUABLE TO HAVE A DISCUSSION OF --- AND MAKE A LIST OF --- WHAT
--- IF ANY --- MISSING CONCEPTUAL PIECES EXIST IN AGI.  If any such good
lists already exist, please provide pointers to them.

I WILL CREATE A SUMMARIZED LIST OF ALL THE SIGNIFICANT MISSING PIECES OF THE
AGI PUZZLE THAT ARE SENT TO THE AGI LIST UNDER THIS THREAD NAME, CREDITING
THE PERSON WHO SENT EACH SUCH SUGGESTION --- WITH THE DATE OF THEIR POST IF
IT CONTAINS A VALUABLE DESCRIPTION OF THE UNSOLVED PROBLEM NOT CONTAINED IN
MY SUMMARY --- AND I WILL POST IT BACK TO THE LIST.  I WILL TRY TO COMBINE
SIMILAR SUGGESTIONS WHERE POSSIBLE TO MAKE THE LIST MORE CONCISE AND
FOCUSED.

For purposes of creating this list of missing conceptual issues --- let us
assume we have very powerful hardware --- but hardware that is realistic
within at least a decade (1).  Let us also assume we have a good massively
parallel OS and programming language to realize our AGI concepts on such
hardware.  We do this to remove the absolute barriers to human-level
intelligence created by the limited hardware current AGI scientists have to
work with, and to allow a system to have the depth of representation and
degree of massively parallel inference necessary for human-level thought.

------------------------------------------
(1) Let us say the hardware has 100TB of RAM --- theoretical peak rates of
1000 TOps/sec and 1000T random memory reads or writes/sec --- and a
cross-sectional bandwidth of 1T 64-byte messages/sec (with the total number
of such messages per second going up the shorter the distance they travel
within the 100TB memory space).  Assume in addition a tree network for
global broadcast and global math and control functions, with a total latency
to and from the entire 100TB of several microseconds.  In ten years such
hardware may sell for under two million dollars.  It is probably more than
is needed for human-level AGI, but it gives us room to be inefficient, and
significantly frees us from having to think compulsively about locality of
memory.
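For concreteness, the footnote's figures imply a few back-of-envelope numbers.  This is only a sketch using the constants assumed above; it is not a real hardware specification.

```python
# Back-of-envelope arithmetic for the hypothetical machine in footnote (1).
# All constants are the ones assumed in the footnote, not real hardware.

RAM_BYTES = 100e12       # 100 TB of RAM
OPS_PER_SEC = 1000e12    # 1000 TOps/sec theoretical peak
MSGS_PER_SEC = 1e12      # 1T cross-sectional messages/sec
MSG_BYTES = 64           # 64-byte messages

# Aggregate cross-sectional bandwidth implied by the message rate.
xsec_bandwidth_tb = MSGS_PER_SEC * MSG_BYTES / 1e12
print(xsec_bandwidth_tb)   # 64.0 TB/sec

# Ops per byte of RAM per second --- a rough measure of how
# compute-rich the machine is relative to its memory.
ops_per_byte = OPS_PER_SEC / RAM_BYTES
print(ops_per_byte)        # 10.0
```

So the assumed machine moves roughly 64 TB/sec across its bisection and can apply about 10 operations per byte of RAM per second, which is why the footnote can claim there is room to be inefficient.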

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com
