--- SUMMARY OF POSTS SO FAR RE --- WHAT ARE THE MISSING CONCEPTUAL PIECES IN
AGI? 
===================================
>===Matt Mahoney -- Sat 4/19/2008 12:10 PM ====>

- Lack of well defined goals.  What defines AGI?  A better spam filter?  A
robotic housemaid?  Automating all human labor?

- Inability to reverse engineer the human brain.  Why do we need 10^15
synapses to implement 10^9 bits of long term memory?

- Hardware cost.  It takes about a million PCs to simulate a human brain
sized neural network.

- Software cost.  Software does not obey Moore's law.  Why would training an
AI cost less than raising a child or training an employee?

===================================
>===Stephen Reed -- Sat 4/19/2008 12:39 PM ====> 
"knowledge and skill acquisition to achieve AGI.  The proposed solution is a
bootstrap English dialog system, backed by a knowledge base based upon
OpenCyc and greatly elaborated with lexical information from WordNet,
Wiktionary, and The CMU Pronouncing Dictionary." 

===================================
>===Charles D. Hixon -- Sat 4/19/2008 12:51 PM ====>
[the missing piece is the ability to learn and use a group of specialized
AIs to function as an AGI -- my summary]

---"missing, or at least not emphasized, is that general intelligence is
inefficient compared to specialized techniques....(new para).... a good AGI
will have a rather large collection of specialized AIs that are a part of
it's toolbox.  When it encounters a new environment or problem, one of the
things it will be doing, as it solves the problem of how to deal with this
new problem, is build a specialized AI to handle dealing with that
problem....(new para) my expectation that this approach will be necessary
across a wide gamut of AGI designs, and that unitary minds will be scarce to
non-existent.

===================================
>===Ben Goertzel -- Sat 4/19/2008 5:03 PM ====>
"There are no apparent missing conceptual pieces in the Novamente
approach..."

"...there are certainly places where only a high-level conceptual design has
been sketched out (with an intuitive plausibility argument,...."

"Any one of these places could, after more implementation and
experimentation, get revealed to be **concealing** a conceptual problem that
isn't now apparent.  We'll discover that as we go."

===================================
>===Richard Loosemore -- Sat 4/19/2008 7:57 PM ====>
"So, I am looking for your concrete reason (not gut instinct, but 
concrete reason) to claim that "we know enough ...etc.".
[Richard lists no major unsolved conceptual problems, but implies there are
many - my interpretation]


==================================================
==== ED PORTER ====> 
NONE OF THE ABOVE POINT OUT ANY MAJOR MISSING CONCEPTUAL UNDERSTANDINGS
NECESSARY FOR AGI. 
==================================================

That does not mean there are no issues or significant current tasks
remaining, but if, like me, one believes in the basic premise behind the
Novamente-like approaches, none of the above allegedly missing
understandings are the type of missing conceptual pieces I was asking about.
Let me quickly address each of the above alleged missing pieces.

===== Re Matt Mahoney's comments ======
===================================
>====Matt Mahoney ====>"- Lack of well defined goals.  What defines AGI?  A
better spam filter?  A robotic housemaid?  Automating all human labor?"

====ED PORTER ====> The following is a pretty sufficient list of reasonably
well defined goals --- a machine with generally human-level capabilities in
each of the following:

- NL understanding and generation
- speech recognition and generation
- visual recognition, scene understanding, and imagination
- understanding and writing of computer programs
- learning, appropriately recalling, reasoning, predicting, and imagining
from experience gained from a combination of vision, hearing, other senses,
and from reading or hearing declarative knowledge
- the ability to deal with knowledge bases similar to, and as large as, a
human's
- creative and aesthetic problem solving, planning, design, and artistic
creation
- appropriate learning, refining, scoring, and selection of mental and
physical behaviors
- and all other desired traits of the human mind we know.

There has been substantial thinking about almost all of these types of
problems, and except for special front ends for sensory processing, it seems
a general Novamente-like system, with the addition of a Growing Neural Gas
mechanism for learning in vector spaces, should be able to handle all of this.
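Since a Growing Neural Gas mechanism is called out above as the main
addition needed for learning in vector spaces, here is a minimal sketch of
the standard GNG algorithm (Fritzke, 1995) in Python.  The parameter values
and the simple list/dict data structures are illustrative assumptions on my
part, not anything taken from Novamente.

import numpy as np

class GrowingNeuralGas:
    def __init__(self, dim, eps_b=0.05, eps_n=0.005, max_age=50,
                 insert_every=100, alpha=0.5, beta=0.995):
        self.w = [np.random.rand(dim), np.random.rand(dim)]  # node vectors
        self.err = [0.0, 0.0]                                 # error per node
        self.edges = {}                                       # (i, j) -> age
        self.eps_b, self.eps_n = eps_b, eps_n
        self.max_age, self.insert_every = max_age, insert_every
        self.alpha, self.beta = alpha, beta
        self.step = 0

    def _neighbors(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

    def update(self, x):
        self.step += 1
        # 1. find the two nodes nearest to the input x
        d = [np.linalg.norm(x - w) for w in self.w]
        s1, s2 = (int(i) for i in np.argsort(d)[:2])
        # 2. accumulate error at the winner; move it and its neighbors toward x
        self.err[s1] += d[s1] ** 2
        self.w[s1] += self.eps_b * (x - self.w[s1])
        for n in self._neighbors(s1):
            self.w[n] += self.eps_n * (x - self.w[n])
        # 3. refresh the winner/runner-up edge, age the others, prune old edges
        for e in list(self.edges):
            if s1 in e:
                self.edges[e] += 1
        self.edges[tuple(sorted((s1, s2)))] = 0
        self.edges = {e: a for e, a in self.edges.items() if a <= self.max_age}
        # 4. periodically grow a new node where accumulated error is largest
        if self.step % self.insert_every == 0:
            q = max(range(len(self.w)), key=lambda i: self.err[i])
            nbrs = self._neighbors(q)
            if nbrs:
                f = max(nbrs, key=lambda n: self.err[n])
                self.w.append(0.5 * (self.w[q] + self.w[f]))
                self.err[q] *= self.alpha
                self.err[f] *= self.alpha
                self.err.append(self.err[q])
                r = len(self.w) - 1
                self.edges.pop(tuple(sorted((q, f))), None)
                self.edges[tuple(sorted((q, r)))] = 0
                self.edges[tuple(sorted((f, r)))] = 0
        # 5. let all accumulated errors decay slowly
        self.err = [e * self.beta for e in self.err]

# Hypothetical usage: learn the shape of points drawn from a ring.
gng = GrowingNeuralGas(dim=2)
for _ in range(5000):
    angle = np.random.rand() * 2 * np.pi
    gng.update(np.array([np.cos(angle), np.sin(angle)]))
print(len(gng.w), "nodes learned")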

>====Matt Mahoney ====>- Inability to reverse engineer the human brain.  Why
do we need 10^15 synapses to implement 10^9 bits of long term memory?

====ED PORTER ====> What we have learned from reverse engineering the human
mind has been very valuable, but it appears we already know enough to create
powerful minds, if not totally human-like ones --- although more information
would only help.

With regard to the 10^15 synapses, it is believed that only about 1% of them
actually represent anything more than the potential to learn a new
connection.  So it is arguable that the brain only has about 10^13 active
synapses.  It is my belief, based on the type of representation I think is
likely to be needed to provide the depth of grounded experiential world
knowledge --- including a large vocabulary of patterns and the appropriate
linking between them --- that 10^12 to 10^14 bytes would be required.  (It
should be noted, however, that a synapse in the brain not only represents
one or more weights, but also the equivalent of a pointer, which would
normally cost 4 to 8 bytes.)
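A quick back-of-envelope recomputation of these figures.  The 1%
active-synapse fraction and the 4-to-8-byte pointer cost come from the
paragraph above; the assumption of roughly one weight byte per stored link
is my own illustrative guess.

total_synapses  = 1e15          # synapses in the human brain
active_fraction = 0.01          # ~1% thought to carry real information
bytes_per_link  = 1 + 8         # ~1 byte of weight + a 4-8 byte pointer

active_synapses = total_synapses * active_fraction     # ~1e13
bytes_needed    = active_synapses * bytes_per_link     # ~1e14

print(f"active synapses : {active_synapses:.0e}")      # 1e+13
print(f"bytes to store  : {bytes_needed:.0e}")         # ~1e14, i.e. within
                                                       # the 1e12-1e14 range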

Matt is certainly right to suggest that the sizing of the AGI problem
remains open --- but I don't see that as a real conceptual problem.  We will
most probably start by building smaller systems and keep increasing their
size until we get the desired performance.  Whether the computer needed to
provide human-level AGI costs $3K or $10M in 10 years, it is still going to
be a commercially valuable product.  Remember, when these machines have all
the powers of a human, they will be thousands to billions of times faster
than humans at many other tasks, and the combination of these two different
types of capabilities, linked by a bandwidth between them much greater than
any human could have, will make AGIs extremely valuable.

>====Matt Mahoney ====>- Hardware cost.  It takes about a million PCs to
simulate a human brain sized neural network.

====ED PORTER ====> I have been saying for almost 40 years that not having
mondo hardware is the most important barrier to AI.  But it is not a missing
conceptual piece.  A lot of great engineering is required, but as I wrote in
my Fri 4/18/2008 6:36 PM post, that barrier is almost certain to come down
drastically over the next decade.

>====Matt Mahoney ====>- Software cost.  Software does not obey Moore's law.
Why would training an AI cost less than raising a child or training an
employee?

====ED PORTER ====> I am not an expert on software costs.  But it is clear
industry is already generously funding research into OSs and tools for
programming massively parallel hardware.  Intel's business model depends on
it.  

But a key concept of AGI is that not that much code will be written by
humans, because a good AGI system, in effect, does most of its own coding by
recording experience --- including behavioral experiences --- learning
generalizations --- learning what works and what doesn't --- and by other
means such as doing genetic searches.
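As a concrete illustration of the "genetic searches" just mentioned, here is
a minimal generate-score-recombine loop in Python.  The bit-string encoding
and the trivial fitness function are placeholders of my own; a real system
would search over program- or behavior-like structures.

import random

def genetic_search(fitness, length=20, pop_size=30, generations=50,
                   mutation_rate=0.02):
    # start from a random population of candidate behaviors (bit strings)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]          # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            # occasional mutation flips a bit
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Hypothetical goal: evolve a behavior whose encoding is all 1s.
best = genetic_search(fitness=sum)
print(best, sum(best))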

Ben Goertzel gave an estimate of what he thought it would cost to program
Novamente, and I forget the exact figure, but I think it was in the 5-20
million dollar range (Ben, correct me if I am wrong), using a significant
percentage of non-American programmers to keep costs down.  Even if that
estimate were one hundred times too small, it would still be trivial
compared to the benefit it is likely to produce.

The cost of training proto-AGIs can be relatively minor because they can
learn in virtual worlds and from media on the web.  Properly training the
first human-level AGIs, even if it costs many times that of training an
individual human child, would be a trivial expense compared to the huge
potential economic benefit --- particularly because such an education, once
learned by the system, could be copied very inexpensively to thousands or
millions of other similar machines.

===== Re: Stephen Reed's comments ===== 
===================================
What Stephen referred to does not appear to be a major missing conceptual
piece.  Instead, I see it as an important, currently missing capability,
which requires a lot of good thinking to create.

According to a Novamente-type approach, what Stephen is doing is something
that should be learned automatically by Novamente's general learning
algorithms once they have been written, debugged, tuned, run on sufficiently
large hardware, and properly trained up.

Since Novamente-like systems of the size necessary to have this power
probably won't exist for five or more years, Stephen is providing a very
valuable service by creating these tools with his own mind, and if they work
they will provide great tools for many in the AGI field.

===== Re: Charles D. Hixon's comments ==== 
===================================
Charles's comments seem to say it is important for AGIs to be able to learn
to appropriately and cooperatively use multiple different specialized AIs,
because specialized AIs are so much more efficient.

Multiple people have worked on something like this.  For example, Google "A
Cognitive Substrate for Natural Language Understanding" by Nicholas L.
Cassimatis et al., and "The Panalogy Architecture for Commonsense Computing"
by Push Singh.  A Novamente system would not only develop multiple
functionally different sub-intelligences using a fairly uniform
architecture, but it would also learn how to use them efficiently in
a coordinated way.
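As a toy illustration of that coordination idea --- a general system routing
problems to a toolbox of specialized AIs and learning from outcomes which
specialist to prefer for which kind of problem --- here is a minimal Python
sketch.  The module names and the simple score-averaging rule are my own
illustrative assumptions, not part of Novamente's, Cassimatis's, or Singh's
architectures.

from collections import defaultdict

class SpecialistToolbox:
    def __init__(self):
        self.specialists = {}                   # name -> callable solver
        self.score = defaultdict(lambda: 0.5)   # (kind, name) -> est. success
        self.counts = defaultdict(int)

    def register(self, name, solver):
        self.specialists[name] = solver

    def solve(self, problem_kind, problem):
        # pick the specialist with the best track record on this problem kind
        name = max(self.specialists,
                   key=lambda n: self.score[(problem_kind, n)])
        result, success = self.specialists[name](problem)
        # update the running estimate of how well this specialist does here
        key = (problem_kind, name)
        self.counts[key] += 1
        self.score[key] += (float(success) - self.score[key]) / self.counts[key]
        return result

# Hypothetical usage: each specialist returns (answer, did_it_succeed).
toolbox = SpecialistToolbox()
toolbox.register("parser",  lambda p: (p.split(), True))
toolbox.register("planner", lambda p: (["noop"], False))
print(toolbox.solve("language", "the cat sat on the mat"))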

One of the key concepts of AGI is that a similar general architecture should
be able to handle many of the problems of building a human-level
intelligence, and that there are too many special areas of knowledge to have
programmers design all of them.

Although the human cortex is surprisingly uniform --- and the cortical,
basal ganglia, cerebellum, and thalamus feedback loops connected mainly to
the frontal cortex are relatively uniform --- the human brain as a whole is
far from uniform.  It uses special-purpose front ends for some of its
sensory perception, and presumably a human-level Novamente system would as
well.

The brain also has multiple special-purpose modules.  So there is a possible
question as to what extra features a Novamente-like architecture would need
to have added to it to function at a human level.  But right now, except for
obvious ones, it is not clear there is any great conceptual question about
this.  If and when the need for such additions to Novamente-like
architectures arises, they will probably be fairly straightforward to solve.

===== Re: Ben Goertzel's comments ====== 
===================================
I agree with Ben's comments.  They point out there are no obvious major
missing conceptual parts, although there is a lot of good thinking,
engineering, tuning, and refining to be done.  Ben also points out we might
find major missing conceptual parts as we build Novamente-like systems,
based on what we learn from experiences the current conceptual architecture
can't handle.

===== Re: Richard Loosemore's comments = 
===================================
I copy my comments from my prior response to Richard's comments (if you read
that post, there is no need to read further):

RICHARD,

I can't provide "concrete reasons" why Novamente and roughly similar
approaches will work --- precisely because they are designed to operate in
the realm Wolfram calls computationally irreducible --- meaning they cannot
be modeled properly by anything substantially less complex than themselves.
Thus, whether or not they will work cannot be formally proven in advance.

I assume it would have been equally hard to provide "concrete reasons" why
Hofstadter's Copycat and his similar systems would work before they were
built.  But they did work.

Since computational irreducibility is something you make a big point about
in one of your own papers you frequently cite --- you --- of all people ---
should not deny others the right to believe in approaches to AGI before they
are supported by concrete proof.

But I do have what I consider rational reasons for believing Novamente-like
systems will work.

One of them is that you have failed to describe to me any major conceptual
problem for which the AGI community does not have what appear to be valid
approaches.

For more reasons why I believe a Novamente-like approach might work, I copy
the following portion of a former post from about 5 months ago ---
describing my understanding of how Novamente itself would probably work,
based on reading material from Novamente and on my own other reading and
thinking.

=======

Novamente starts with a focus on "General Intelligence", which it defines as
"the ability to achieve complex goals in complex environments."  It is
focused on automatic, interactive learning, experiential grounding, self
understanding, and both conscious (focus-of-attention) and unconscious
(currently less attended) thought.

It records experience, finds repeated patterns in it, makes generalizations
and compositions out of such patterns -- all through multiple
generalizational and compositional levels -- based on spatial, temporal, and
learned-pattern-derived relationships.  It uses a novel form of inference,
firmly grounded in Bayesian mathematics, for deriving inferences from many
millions of activated patterns at once.  This provides probabilistic
reasoning much more powerful and flexible than any prior Bayesian
techniques.  
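As a toy illustration of the kind of probabilistic evidence pooling
described above --- many activated patterns each lending some support to a
hypothesis --- here is a minimal Bayesian log-odds combination in Python.
This is NOT Novamente's actual inference machinery (its Probabilistic Logic
Networks rules are considerably more elaborate); the independence assumption
and the example numbers are my own.

import math

def combine_evidence(prior, likelihood_ratios):
    """Update P(hypothesis) given many independent pieces of evidence,
    each summarized as a likelihood ratio P(e|H) / P(e|not H)."""
    log_odds = math.log(prior / (1.0 - prior))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1.0 + odds)

# Hypothetical usage: three activated patterns weakly support the
# hypothesis, one weakly contradicts it.
print(combine_evidence(0.1, [2.0, 1.5, 3.0, 0.7]))   # ~0.41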

Patterns -- which can include behaviors (including mental behaviors) -- are
formed, modified, generalized, and deleted all the time.  They have to
compete for their computational resources and continued existence.  This
results in a self-organizing network of similarity, generalizational, and
compositional patterns and relationships that all must continue to prove
their worth in a survival-of-the-fittest, goal-oriented,
experiential-knowledge ecology.  

Reinforcement learning is used to weight patterns, both for general
long-term and context-specific importance, based on the direct or indirect roles
they have played in achieving the system's goals in the past.  These
indications of importance -- along with a deep memory for past similar
experiences, goals, contexts, and similar inferencing and learning patterns
-- significantly narrow and focus attention, avoiding the pitfalls of
combinatorial explosion, and resulting in context-appropriate decision
making.  Genetic learning algorithms, made more efficient by the system's
experience and probabilistic inferencing, give the system the ability to
learn new behaviors, classifiers, and creative ideas. 
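A minimal sketch of the reinforcement-style importance weighting just
described: patterns that contributed to a rewarded episode have their
importance nudged upward, while everything else slowly decays and so must
keep proving its worth.  The update rule and parameter values are my own
illustrative assumptions, not Novamente's actual attention-allocation
mechanism.

def update_importance(importance, contributing_patterns, reward,
                      learning_rate=0.1, decay=0.999):
    """importance: dict pattern_id -> float in [0, 1]."""
    # slow global decay: every pattern must keep proving its worth
    for p in importance:
        importance[p] *= decay
    # credit assignment: patterns active in a rewarded episode move toward 1
    for p in contributing_patterns:
        old = importance.get(p, 0.5)
        importance[p] = old + learning_rate * reward * (1.0 - old)
    return importance

# Hypothetical usage: two patterns helped achieve a goal, one did not.
imp = {"grasp-cup": 0.5, "cup-near-handle": 0.4, "irrelevant": 0.3}
imp = update_importance(imp, ["grasp-cup", "cup-near-handle"], reward=1.0)
print(imp)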

Taken together all these features, and many more, will allow the system to
automatically learn, reason, plan, imagine, and create with a sophistication
and power never before possible -- not even for the most intelligent of
humans. 

Of course, it will take some time for the first such systems to learn the
important aspects of world knowledge.  Most of the valuable patterns in the
minds of such machines will come from machine learning and not human
programming.  Such learning can be greatly sped up if such machines are
taught, at least partially, the way human children are.  But once knowledge
has been learned by such systems, much of it can be quickly replicated into,
or shared between, other machines.

=======


-----Original Message-----
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Saturday, April 19, 2008 11:36 AM
To: agi@v2.listbox.com
Subject: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

With the work done by Goertzel et al., Pei Wang, Joscha Bach
<http://www.micropsi.org/>, Sam Adams, and others who spoke at AGI 2008, I
feel we pretty much conceptually understand how to build powerful AGIs.  I'm
not necessarily saying we know all the pieces of the puzzle, but rather that
we know enough to start building impressive intelligences, and once we build
them we will be in a much better position to find out what the other missing
conceptual pieces of the puzzle are --- if any.

As I see it --- the major problem is selecting, from all we know, the parts
necessary to build a powerful artificial mind, at the scale needed, in a way
that works together well, efficiently, and automatically.  This would
include a lot of parameter tuning and determining which competing techniques
for accomplishing the same end are most efficient at the scale and in the
context needed.

But I don't see any major aspects of the problem that we don't already
appear to have good ways of addressing, once we have all the pieces put
together.

I ASSUME --- HOWEVER --- THERE ARE AT LEAST SOME SUCH MISSING CONCEPTUAL
PARTS OF THE PUZZLE --- AND I AM JUST FAILING TO SEE THEM.

I would appreciate it if those on this list could point out what significant
conceptual aspects of the AGI problem are not dealt with by a reasonable
synthesis drawn from works like those of Goertzel et al., Pei Wang, Joscha
Bach, and Stan Franklin --- other than the problems acknowledged above.

IT WOULD BE VALUABLE TO HAVE A DISCUSSION OF --- AND MAKE A LIST OF --- WHAT
--- IF ANY --- MISSING CONCEPTUAL PIECES EXIST IN AGI.  If any such good
lists already exist, please provide pointers to them.

I WILL CREATE A SUMMARIZED LIST OF ALL THE SIGNIFICANT MISSING PIECES OF THE
AGI PUZZLE THAT ARE SENT TO THE AGI LIST UNDER THIS THREAD NAME, CREDITING
THE PERSON SENDING EACH SUCH SUGGESTION, ALONG WITH THE DATE OF THEIR POST
IF IT CONTAINS A VALUABLE DESCRIPTION OF THE UNSOLVED PROBLEM INVOLVED NOT
CONTAINED IN MY SUMMARY --- AND I WILL POST IT BACK TO THE LIST.  I WILL TRY
TO COMBINE SIMILAR SUGGESTIONS WHERE POSSIBLE TO MAKE THE LIST MORE CONCISE
AND FOCUSED.

For purposes of creating this list of missing conceptual issues --- let us
assume we have very powerful hardware --- but hardware that is realistic
within at least a decade (1).  Let us also assume we have a good massively
parallel OS and programming language to realize our AGI concepts on such
hardware.  We do this to remove the absolute barriers to human-level
intelligence created by the limited hardware current AGI scientists have to
work with, and to allow systems to have the depth of representation and
degree of massively parallel inference necessary for human-level thought.

------------------------------------------
(1) Let us say the hardware has 100TB of RAM --- theoretical throughput of
1000T ops/sec --- 1000T random memory reads or writes/sec --- and a
cross-sectional bandwidth of 1T 64-byte messages/sec (with the total number
of such messages per second going up the shorter the distance they travel
within the 100TB memory space).  Assume in addition a tree net for global
broadcast and global math and control functions with a total latency to and
from the entire 100TB of several microseconds.  In ten years such hardware
may sell for under two million dollars.  It is probably more than is needed
for human-level AGI, but it gives us room to be inefficient, and
significantly frees us from having to think compulsively about locality of
memory.
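A quick arithmetic check that this assumed hardware covers the
representation-size estimate given earlier in this post (10^12 to 10^14
bytes).  The figures below simply restate the footnote's specs; the
comparison is my own.

ram_bytes         = 100e12          # 100 TB of RAM
ops_per_sec       = 1000e12         # 1000 T ops/sec
xsec_msgs_per_sec = 1e12            # 1 T 64-byte messages/sec
xsec_bandwidth    = xsec_msgs_per_sec * 64   # bytes/sec across the machine

print(f"RAM                 : {ram_bytes:.0e} bytes")     # 1e+14
print(f"cross-section BW    : {xsec_bandwidth:.0e} B/s")  # 6.4e+13
print(f"upper size estimate : {1e14:.0e} bytes")          # fits in RAM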

-------------------------------------------