>I'm not clear on why he thinks human level intelligence is
>"understandable", or even what he means by this.

As Ben stated, there are really two different issues of "understanding":
1. Understanding the code and process.
2. Understanding the intelligence (and the actions it takes).

1. You're right, and it is true of most complex systems (jets, computers, the 
Internet) that no one person knows or understands everything about how they 
work. But they do work, they can be created by groups, and individual parts 
can be understood if studied. A general understanding of what a jet is and 
does is attainable.

2. However, understanding the intelligence would seem, on one level, to be a 
requirement for having a human-level AGI. That is, if we have the AGI there and 
it is acting in a bizarre and strange manner, and it can't explain why, then we 
cannot really say it is a human-level AGI; we can only see that it is a machine 
that acts randomly (we have enough of those already).
  But to understand the intelligence, and for the AGI to be useful in the 
world, it really does need to be able to explain itself. It should list and 
give the reasons why it wants to put a fin underneath the jet at a 45° angle: 
that it will increase stability, along with a graph or the math behind its 
justification. This smaller subset problem can then more easily be understood 
by a person or a group of experts. At some point an AGI may become so advanced 
that the reasons behind any one action are so complex that we cannot understand 
or follow them.
But at that point I believe it will have surpassed the label of "human level" 
and become something more. Then we would have to trust the machine based on 
past performance, or other procedures would be used, or the machine might have 
to prove in tests that its suggestions are good and work.

James Ratcliff


Eric Baum <[EMAIL PROTECTED]> wrote: 

Pei> According to my belief, the way to create AGI is to have a
Pei> general theory of intelligence, which should cover the common
Pei> principle under all kinds of intelligent systems, including human
Pei> intelligence, computer intelligence, etc., even alien
Pei> intelligence and superhuman AGI. Therefore, this theory should
Pei> also cover your AGI0 to AGIn.

According to my belief, for which I also claim to have published a strong
case, we have such a theory, in which the common principle underlying
intelligence is that of Occam programs, which are computationally hard
to extract. (By "Occam program" I don't mean a program in the Occam
language, but a program constructed according to an extrapolated Occam's
razor.)
Also according to this belief, "understanding" consists of having
such an Occam program, one that exploits underlying structure in order to
generalize. According to this belief, unfortunately, the Occam
program underlying our intelligence is itself unlikely to admit any more
compact Occam program that understands it, and thus may be inherently not
understandable.
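Roughly, the idea can be illustrated with a minimal sketch (a toy example of
the description-length intuition, not the actual construction; it assumes only
standard Python): of two hypotheses that both fit the observed data, the one
with the shorter description is the one that generalizes.

```python
# Toy illustration of the Occam-program idea: of two hypotheses that both
# fit the observed data, the one with the shorter description generalizes.

train = [(n, 2 * n) for n in range(10)]   # observed data: a doubling rule
unseen = [(50, 100), (71, 142)]           # cases outside the observations

# Hypothesis A: a compact rule -- short description length.
rule_src = "lambda n: 2 * n"
rule = eval(rule_src)

# Hypothesis B: rote memorization -- description grows with the data.
table = dict(train)

# Both hypotheses fit the training data perfectly...
assert all(rule(n) == y for n, y in train)
assert all(table[n] == y for n, y in train)

# ...but only the compact ("Occam") rule extends to unseen inputs.
assert all(rule(n) == y for n, y in unseen)
assert all(table.get(n) is None for n, _ in unseen)

# And it is the shorter of the two descriptions.
assert len(rule_src) < len(repr(table))
print("compact rule generalizes; lookup table does not")
```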

According to this picture, if we can succeed in creating a Human Level
Intelligence (according to this picture, there roughly speaking
doesn't  exist any truly "general" intelligence) the way we will do 
that will be by building some structures/code that then computes and 
builds other structures/code that comprises the code of the Human Level
Intelligence. The actual Human Level Intelligence will likely not be
understandable in any meaningful sense.

Ben's comments, and to some extent his approach to AGI (building
code and then hoping that, when run, it will produce a complex set of
patterns that do stuff), seem somewhat related to this, except that for
some reason he stipulates that human intelligence is understandable.
I'm not clear on why he thinks human level intelligence is
"understandable", or even what he means by this.

Richard> efforts (some people seem to think that there is something
Richard> inherently impossible about a human being able to design
Richard> something smarter than itself, but that idea is really just
Richard> science-fiction hearsay, not grounded in any real
Richard> limitations).

Well, no it is grounded in real limitations. I doubt, Richard, that
even you think you could "design" a human level intelligence by hand,
any more than you could personally design a Mirage jet, the blueprints
for which filled a warehouse. At the very least you would want to use
a computer, and write code for the computer, and have the computer do
a lot of the design for you by running the code. At the end of that
process, you wouldn't necessarily "understand" much about how that
design worked. And if the very guts of the reason that design worked
are because it contains programs that were output by finding
approximate solutions to computationally intractable problems,
you'd be in real trouble.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;



_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       
