Ed Porter wrote:
RICHARD,

I can't provide "concrete reasons" why Novamente and roughly similar
approaches will work --- precisely because they are designed to operate in
the realm Wolfram calls computationally irreducible --- meaning it cannot be
modeled properly by anything substantially less complex than itself.  Thus,
whether or not it will work cannot be formally proven in advance.

Just a few small details:

As I understand it, Ben has argued vehemently that Novamente is not subject to the computational irreducibility (complex systems) issue.

And complexity does not mean that "it cannot be modeled properly by anything substantially less complex than itself". What it does mean is that it cannot be explained in an "analytic" manner.

I would not ask anyone to formally prove that the systems at AGI 2008 will work (me, of all people!). Not formal proof, just something other than a hunch.


I assume it would have been equally hard to provide "concrete reasons" why
Hofstadter's Copycat and his similar systems would work before they were
built.  But they did work.

Not really germane: Copycat was unbelievably simple compared to AGI systems, and Hofstadter would never have claimed ahead of time that it would do anything, because it was an experimental system.


Since computational irreducibility is something you make a big point about in
one of your own papers, which you frequently cite, you --- of all people ---
should not deny anyone the right to believe in approaches to AGI before they
are supported by concrete proof.

Quite the reverse: all of these people deny that the complexity issue is at all relevant to their systems. You cannot say, on their behalf, that complexity is an excuse for not being able to predict why the systems should work, while at the same time they protest that my complex systems analysis is wrong. ;-)


But I do have what I consider rational reasons for believing a
Novamente-like system will work.
One of them is that you have failed to describe to me any major conceptual
problem for which the AGI community does not have what appear to be valid
approaches.

Me?  What did my question have to do with me?  I asked about your optimism.

But since you mention me, I have (if I understand your statement, which was a little confusingly worded) described a major, crippling reason why the AGI community does not have valid approaches.


For more reasons why I believe a Novamente-like approach might work, I copy
the following portion of a former post from about 5 months ago ---
describing my understanding of how Novamente itself would probably work,
based on reading material from Novamente and my own other reading and
thinking.

But this general description below does not really say why everything is on track to succeed.



Richard Loosemore





=======

Novamente starts with a focus on "General Intelligence", which it defines as
"the ability to achieve complex goals in complex environments."  It is
focused on automatic, interactive learning, experiential grounding,
self-understanding, and both conscious (focus-of-attention) and unconscious
(currently less attended) thought.

It records experience, finds repeated patterns in it, makes generalizations
and compositions out of such patterns -- all through multiple
generalizational and compositional levels -- based on spatial, temporal, and
learned-pattern-derived relationships.  It uses a novel form of inference,
firmly grounded in Bayesian mathematics, for deriving inferences from many
millions of activated patterns at once.  This provides probabilistic
reasoning much more powerful and flexible than any prior Bayesian
techniques.
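
(A toy sketch, in Python, of the kind of evidence-revision step being
described. This is not Novamente's actual inference engine; the
strength/count representation of a pattern's probability is an assumption
made purely for illustration.)

    # Hedged illustration only -- not Novamente's PLN. Each pattern is
    # assumed to carry a 'strength' (estimated frequency) and a 'count'
    # (amount of evidence behind that estimate).

    def revise(strength_a, count_a, strength_b, count_b):
        """Merge two independent estimates of the same pattern's
        probability, weighting each by the evidence supporting it."""
        total = count_a + count_b
        strength = (strength_a * count_a + strength_b * count_b) / total
        return strength, total

    # e.g. one context saw the pattern hold 8/10 times, another 30/50:
    s, n = revise(0.8, 10, 0.6, 50)   # -> (0.633..., 60)
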
Patterns -- which can include behaviors (including mental behaviors) -- are
formed, modified, generalized, and deleted all the time.  They have to
compete for their computational resources and continued existence.  This
results in a self-organizing network of similarity, generalizational, and
compositional patterns and relationships, which all must continue to prove
their worth in a survival-of-the-fittest, goal-oriented,
experiential-knowledge ecology.
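
(A minimal sketch of that pattern ecology, again in Python, with invented
names -- Pattern, importance, DECAY, MAX_PATTERNS are mine, not Novamente's:
each housekeeping cycle, every pattern's importance decays, and the weakest
patterns are culled to free resources.)

    DECAY = 0.99           # assumed per-cycle importance decay
    MAX_PATTERNS = 100000  # assumed resource budget

    class Pattern:
        def __init__(self, name, importance=1.0):
            self.name = name
            self.importance = importance

    def housekeeping(patterns):
        """One cycle of the survival-of-the-fittest resource competition."""
        for p in patterns:
            p.importance *= DECAY              # everything slowly fades
        patterns.sort(key=lambda p: p.importance, reverse=True)
        del patterns[MAX_PATTERNS:]            # the weakest are deleted
        return patterns
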
Reinforcement learning is used to weight patterns, both for general
long-term and context-specific importance, based on the direct or indirect
roles they have played in achieving the system's goals in the past.  These
indications of importance -- along with a deep memory for past similar
experiences, goals, contexts, and similar inferencing and learning patterns
-- significantly narrow and focus attention, avoiding the pitfalls of
combinatorial explosion, and resulting in context-appropriate decision
making.  Genetic learning algorithms, made more efficient by the system's
experience and probabilistic inferencing, give the system the ability to
learn new behaviors, classifiers, and creative ideas.
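
(Continuing the same illustrative sketch: the reinforcement step described
above might look like the following, where the contribution figures are
hypothetical and credit assignment is vastly simplified.)

    def reward_patterns(active, reward, rate=0.1):
        """Boost the importance of each pattern that played a role in
        achieving a goal, in proportion to its degree of involvement.

        active: dict mapping Pattern -> contribution in [0, 1]."""
        for pattern, contribution in active.items():
            pattern.importance += rate * reward * contribution

    # Usage, with hypothetical patterns, after a goal worth reward=1.0:
    #   reward_patterns({grasp_cup: 0.9, reach_motion: 0.5}, reward=1.0)
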
Taken together, all these features, and many more, will allow the system to
automatically learn, reason, plan, imagine, and create with a sophistication
and power never before possible -- not even for the most intelligent of
humans.

Of course, it will take some time for the first such systems to learn the
important aspects of world knowledge.  Most of the valuable patterns in the
minds of such machines will come from machine learning and not human
programming.  Such learning can be greatly sped up if such machines are
taught, at least partially, the way human children are.  But once knowledge
has been learned by such systems, much of it can be quickly replicated into,
or shared between, other machines.
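
(Read literally, that replication claim amounts to serializing one system's
learned pattern store and loading it into another instance. A hedged sketch,
reusing the hypothetical Pattern class from above:)

    import pickle

    def export_knowledge(patterns, path):
        """Dump learned patterns to a file for another machine to load."""
        with open(path, "wb") as f:
            pickle.dump([(p.name, p.importance) for p in patterns], f)

    def import_knowledge(path):
        """Rebuild the pattern store in a second instance."""
        with open(path, "rb") as f:
            return [Pattern(name, imp) for name, imp in pickle.load(f)]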

=======

-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Saturday, April 19, 2008 7:57 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

Ed Porter wrote:
WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

With the work done by Goertzel et al, Pei, Joscha Bach
<http://www.micropsi.org/>, Sam Adams, and others who spoke at AGI 2008, I
feel we pretty much conceptually understand how to build powerful AGI's.
I'm not necessarily saying we know all the pieces of the puzzle, but rather
that we know enough to start building impressive intelligences.

Ed, can you please specify *precisely* what, in the talks at AGI 2008, leads you to the conclusion that "we know enough to start building impressive intelligences"?

Some might say that everything shown at AGI 2008 could be interpreted as lots of people working on the problem, but that in ten years' time those same people will come back and report that they are still working on the problem, with no substantial difference from what they showed this year.

The reason that some people would say this is that if you went back ten years, you could find people achieving forms of AI that exhibited no *substantial* difference from anything shown at AGI 2008.

So, I am looking for your concrete reason (not gut instinct, but concrete reason) to claim that "we know enough ...etc.".



Richard Loosemore

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com
