RICHARD LOOSEMORE====> There is a high prima facie *risk* that intelligence
involves a significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect),


ED PORTER=====> Richard, "prima facie" means obvious on its face.  The above
statement, and those that followed it below, may be obvious to you, but they
are not obvious to a lot of us, and I at least have not seen (perhaps because
of my own ignorance, but perhaps not) any evidence that they are obvious.
Apparently Ben does not find your position obvious either, and Ben is no
dummy.

Richard, have you ever considered that it might be "turtles all the way
down"?  By that I mean experiential patterns, such as those that could be
represented by Novamente atoms (nodes and links), forming a gen/comp hierarchy
"all the way down".  In such a system each level is quite naturally derived
from the levels below it by learning from experience.  There is a lot of
dynamic activity, but much of it is quite orderly, like that in Hecht-Nielsen's
Confabulation.  There is no reason why there has to be a "GLOBAL-LOCAL
DISCONNECT" of the type you envision, i.e., one that is impossible to
architect around until one has exhaustively explored the space of possible
global-local disconnects (just think how large that exploration space might
be).
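
To make that picture concrete, here is a minimal toy sketch in Python of what
I mean by a gen/comp hierarchy built "all the way down" from experience.  This
is purely illustrative and entirely my own invention -- the Atom class and the
learn_level/confabulate functions are not Novamente code or Hecht-Nielsen's
actual Confabulation algorithm -- but it shows the shape of the claim: each
level's atoms are composed from co-occurring atoms one level down, and a
simple confabulation-style winner-take-most pass keeps the dynamics orderly.

from collections import Counter
from itertools import combinations

class Atom:
    """A node standing for a pattern; 'parts' links it to lower-level atoms."""
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = tuple(parts)   # composition links to the level below

def learn_level(episodes, min_count=2):
    """Compose new higher-level atoms from pairs of atoms that co-occur often."""
    pair_counts = Counter()
    for episode in episodes:        # an episode is a set of currently active atoms
        for a, b in combinations(sorted(episode, key=lambda x: x.name), 2):
            pair_counts[(a, b)] += 1
    return [Atom(f"[{a.name}+{b.name}]", (a, b))
            for (a, b), n in pair_counts.items() if n >= min_count]

def confabulate(candidates, evidence):
    """Winner-take-most: pick the candidate whose parts best match the evidence."""
    def support(atom):
        return sum(1 for p in atom.parts if p in evidence)
    return max(candidates, key=support) if candidates else None

if __name__ == "__main__":
    # Level 0: primitive experiential atoms.
    edge, corner, red, round_ = (Atom(n) for n in ("edge", "corner", "red", "round"))
    episodes = [{edge, corner}, {edge, corner}, {red, round_}, {red, round_}, {edge, red}]

    # Level 1 is derived from level 0 purely by counting co-occurrence in
    # experience; the same learn_level() call can be stacked again to build
    # level 2, level 3, and so on.
    level1 = learn_level(episodes)
    print("learned:", [a.name for a in level1])

    # Orderly dynamics: given partial evidence, confabulation settles on one winner.
    winner = confabulate(level1, evidence={edge, corner})
    print("winner:", winner.name if winner else None)

Obviously the real systems use weighted links and probabilistic cogency rather
than raw co-occurrence counts, but the point stands: nothing at any level
requires a mechanism different from the one that built the level beneath it.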

So if you have prima facie evidence to support your claim (other than your
paper, which I have read and which does not meet that standard), then present
it.  If you make me eat my words, you will have taught me something
sufficiently valuable that I will relish the experience.

Ed Porter




-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 9:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Benjamin Goertzel wrote:
> On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Benjamin Goertzel wrote:
>> [snip]
>>> And neither you nor anyone else has ever made a cogent argument that
>>> emulating the brain is the ONLY route to creating powerful AGI.  The
>>> closest thing to such an argument that I've seen was given by Eric Baum
>>> in his book "What Is Thought?", and I note that Eric has backed away
>>> somewhat from that position lately.
>> This is a pretty outrageous statement to make, given that you know full
>> well that I have done exactly that.
>>
>> You may not agree with the argument, but that is not the same as
>> asserting that the argument does not exist.
>>
>> Unless you were meaning "emulating the brain" in the sense of emulating
>> it ONLY at the low level of neural wiring, which I do not advocate.
> 
> I don't find your argument, nor Eric's, nor anyone else's, that
> brain-emulation is the "golden path" very strongly convincing...
> 
> However, I found Eric's argument, by reference to the compressed nature of
> the genome, more convincing than your argument via the hypothesis of
> irreducible emergent complexity...
> 
> Sorry if my choice of words was not adequately politic.  I find your
> argument interesting, but it's certainly just as speculative as the various
> AGI theories you dismiss....  It basically rests on a big assumption, which
> is that the complexity of human intelligence is analytically irreducible
> within pragmatic computational constraints.  In this sense it's less an
> argument than a conjectural assertion, albeit an admirably bold one.

Ben,

This is even worse.

The argument I presented was not a "conjectural assertion"; it made the 
following coherent case:

   1) There is a high prima facie *risk* that intelligence involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect), and

   2) Because of the unique and unusual nature of complexity there is 
only a vanishingly small chance that we will be able to find a way to 
assess the exact degree of risk involved, and

   3) (A corollary of (2)) If the problem were real, but we were to 
ignore this risk and simply continue with an "engineering" approach 
(pretending that complexity is insignificant), then the *only* evidence 
we would ever get that irreducibility was preventing us from building a 
complete intelligence would be the fact that we would simply run around 
in circles all the time, wondering why, when we put large systems 
together, they didn't quite make it, and

   4) Therefore we need to adopt a "Precautionary Principle" and treat 
the problem as if irreducibility really is significant.


Whether you like it or not - whether you've got too much invested in the 
contrary point of view to admit it, or not - this is a perfectly valid 
and coherent argument, and your attempt to push it into some lesser 
realm of a "conjectural assertion" is profoundly insulting.




Richard Loosemore

