Josh,

Thank you for your reply, copied below.  It was – as have been many of
your posts – thoughtful and helpful.

I did have a question about the following section:

“THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND WHATNOT,
BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY CIVILIZATION HAS
(MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO SCIENCE AS THE
METHODOLOGY OF CHOICE FOR ITS SAGES.”

“THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”

My question is: if a machine’s world model includes the system’s model of
itself and its own learned mental representations and behavior patterns, is
it not possible that modifying these learned representations and behaviors
could be enough to provide what you are talking about -- without requiring
changes to its code at some deeper level?

For example, it is commonly said that humans and their brains have changed
very little in the last 30,000 years -- that if a newborn from that age
were raised in our society, nobody would notice the difference.  Yet in
the last 30,000 years the sophistication of mankind’s understanding of,
and ability to manipulate, the world has grown exponentially.  There have
been tremendous changes in code at the level of learned representations
and learned mental behaviors, such as advances in mathematics, science,
and technology, but there has been very little, if any, significant
change in code at the level of inherited brain hardware and software.

Take mathematics and algebra, for example.  These are learned mental
representations and behaviors that let a human manage levels of complexity
they could not otherwise even begin to handle.  But my belief is that when
executing such behaviors or recalling such representations, the basic
brain mechanisms involved -- probability-, importance-, and temporally-based
inference; instantiating general patterns in a context-appropriate way;
context-sensitive, pattern-based memory access; learned patterns of
sequential attention shifts; and so on -- are all virtually identical to those
used by our ancestors 30,000 years ago.
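
To make that distinction more concrete, here is a minimal toy sketch in
Python.  It is purely illustrative -- the names and the rule format are my
own invention, not taken from any actual AGI system -- but it shows how an
engine whose code is fixed can still "improve" if everything it does is
driven by learned representations stored as data:

# Toy sketch: a fixed "engine" whose behavior is driven entirely by learned
# representations stored as data.  The engine code below never changes;
# improvement happens by adding or revising the rules it runs on, including
# rules that encode mental procedures (e.g., algebra-like rewriting).

from dataclasses import dataclass, field

@dataclass
class Rule:
    # A learned representation: if every fact in `pattern` is present,
    # assert `conclusion`.  `strength` is a learned importance/probability.
    pattern: frozenset
    conclusion: str
    strength: float = 1.0

@dataclass
class Agent:
    facts: set = field(default_factory=set)
    rules: list = field(default_factory=list)   # the learned, modifiable layer

    def infer(self, max_steps: int = 10) -> set:
        # Fixed inference machinery: repeatedly fire the strongest applicable rule.
        for _ in range(max_steps):
            applicable = [r for r in self.rules
                          if r.pattern <= self.facts and r.conclusion not in self.facts]
            if not applicable:
                break
            self.facts.add(max(applicable, key=lambda r: r.strength).conclusion)
        return self.facts

    def learn_rule(self, pattern, conclusion, strength=1.0) -> None:
        # "Self-improvement" at the representation level: the rule base changes,
        # while the engine code above stays exactly as it is.
        self.rules.append(Rule(frozenset(pattern), conclusion, strength))

# Usage: the same fixed engine, before and after acquiring an algebra-like rule.
agent = Agent(facts={"x + 2 = 5"})
agent.learn_rule({"x + 2 = 5"}, "x = 3", strength=0.9)
print(agent.infer())   # {'x + 2 = 5', 'x = 3'}

The point of the sketch is only that the "program" an agent runs can live in
its learned data rather than in its source code, and that is the level at
which I am suggesting the self-modification might take place.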

I think in the coming years there will be lots of changes in AGI code at a
level corresponding to the inherited human brain level.  But once human-level
AGI has been created -- with what will obviously have to be a learning
capability as powerful, adaptive, exploratory, creative, and as capable of
building upon its own advances as that of a human -- it is not clear to me
that it would require further changes at a level equivalent to the inherited
human brain level to continue to operate and learn as well as a human, any
more than the tremendous advances of human civilization over the last 30,000
years have required them.

Your implication that civilization has improved itself by moving “from
religion to philosophy to science” seems to suggest that the level of
improvement you say is needed might actually be at the level of learned
representation, including learned representations of mental behaviors.



As a minor note, I would like to point out the following concerning your
statement that:

“ALL AI LEARNING SYSTEMS TO DATE HAVE BEEN "WIND-UP TOYS"”

I think a lot of early AI learning systems, although clearly toys when
compared with humans in many respects, were amazingly powerful considering
that many of them ran on roughly fly-brain-level hardware.  As I have been
saying for decades, I know what the missing ingredient in AI has been -- it's
computational horsepower.  And it is coming fast.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 10:14 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
> The below is a good post:

Thank you!

> I have one major question for Josh.  You said
>
> “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS
> TO DO,  WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
> TECHNIQUES. THAT'S THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING,
> GÖDEL-INVOKING COMPLEX CORE OF THE WHOLE PROBLEM.”
>
> Could you please elaborate on exactly what the “complex core of the
> whole problem” is that you still think is currently missing.

No, but I will try to elaborate inexactly.   :^)

Let me quote Tom Mitchell, Head of Machine Learning Dept. at CMU:

"It seems the real problem with current AI is that NOBODY to my knowledge
is
seriously trying to design a 'never ending learning' machine." (Private
communication)

By which he meant what we tend to call "RSI" here. I think the "coming up
with new representations and techniques" part is pretty straightforward, the
question is how to do it. Search works, a la a GA, if you have billions of
years and trillions of organisms to work with. I personally am too impatient,
so I'd like to understand how the human brain does it in billions of seconds
and 3 pounds of mush.

Another way to understand the problem is to say that all AI learning
systems
to date have been "wind-up toys" -- they could learn stuff in some small
space of possibilities, and then they ran out of steam. That's what
happened
famously with AM and Eurisko.

I conjecture that this will happen with ANY fixed learning process. That
means that for RSI, the learning process must not only improve the world
model and whatnot, but must improve (=> modify) *itself*. Kind of the way
civilization has (more or less) moved from religion to philosophy to science
as the methodology of choice for its sages.

That, of course, is self-modifying code -- the dark place in a computer
scientist's soul where only the Kwisatz Haderach can look.   :^)

> Why for example would a Novamente-type system’s representations and
> techniques not be capable of being self-referential in the manner you
> seem to be implying is both needed and currently missing?

It might -- I think it's close enough to be worth the experiment. BOA/Moses
does have a self-referential element in the Bayesian analysis of the GA
population. Will it be enough to invent elliptic function theory and
zero-knowledge proofs and discover the Krebs cycle and gamma-ray bursts and
write Finnegan's Wake and Snow Crash? We'll see...

Josh

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49342602-539383
