----- Original Message ----
From: Starglider <[EMAIL PROTECTED]>
To: [email protected]
Sent: Wednesday, October 25, 2006 2:32:27 PM
Subject: Re: [singularity] Defining the Singularity

>All AGIs implemented on general purpose computers will have access to
>'conventional computing capability' unless (successfully) kept in a sandbox
>- and even then anything with a Turing-complete substrate has the potential
>to develop such capability internally. 'Access to' isn't the same thing as
>'augmented with' of course, but I'm not sure exactly what you mean by this
>(and I'd rather wait for you to explain than guess).

I was referring to one possible implementation of AGI that is part neural or 
brainlike and part conventional computer (or network), combining the strengths 
of both.  In this architecture the neural part has the capability to write 
programs and run them on the conventional part, in the same way that humans 
interact with computers.  This seems to me to be the most logical way to build 
an AGI, and probably the most dangerous.  Vinge described four ways in which 
the Singularity could happen (quoting from 
http://mindstalk.net/vinge/vinge-sing.html):
1. There may be developed computers that are "awake" and superhumanly
   intelligent.
2. Large computer networks (and their associated users) may "wake up" as a
   superhumanly intelligent entity.
3. Computer/human interfaces may become so intimate that users may reasonably
   be considered superhumanly intelligent.
4. Biological science may provide means to improve natural human intellect.
Vinge listed these possibilities in order from least to most interaction with 
humans.  I believe that less interaction means less monitoring and control, and 
therefore a greater chance that something will go wrong.  As long as human 
brains remain an essential component of a superhuman intelligence, it seems 
less likely that this combined intelligence will destroy itself.  If the AGI is 
external to or independent of human existence, the risk is much greater.  But 
if you follow the work of people trying to develop AGI, that is where we are 
headed, if they are successful.  We already have hybrid computational systems 
that depend on human cooperation with machines.  For some of these hybrid 
systems, such as customer service, the airline reservation/travel agency 
system, or the management of large corporations, there is economic pressure to 
automate the human parts of the computation.
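
To make the neural-plus-conventional arrangement concrete, here is a minimal 
Python sketch of the loop I have in mind.  All of the names are hypothetical 
and the "neural" part is just a stub that emits a trivial program; a real 
system would also need sandboxing and resource limits.

import subprocess
import sys
import tempfile

def neural_part_writes_program(task: str) -> str:
    # Stub for the neural/brainlike component.  In a real system this
    # would be a learned model that emits code for the given task.
    return f'print("working on: {task}")'

def conventional_part_runs_program(source: str, timeout_s: float = 5.0) -> str:
    # The conventional computer executes the generated code, just as a
    # human programmer runs code on an ordinary machine.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=timeout_s)
    return result.stdout

if __name__ == "__main__":
    program = neural_part_writes_program("sort a list of numbers")
    print(conventional_part_runs_program(program), end="")

Nothing in that loop requires a human between the two halves, which is what 
makes it both powerful and hard to monitor.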

Consider this possibility.  We build an AGI that is part neural, part 
conventional computer, modeled after a system of humans with programming skills 
and a network of computers.  Even if you could prove friendliness (which you 
can't), you would still have the software engineering problem.  Program 
specifications are written in natural language, which is ambiguous, imprecise 
and incomplete.  People make assumptions.  People make mistakes.  Neural models 
of people will make mistakes.  Each time the AGI programs a more intelligent 
AGI, it will make programming errors.  Proving program correctness is 
undecidable in general (it is as hard as the halting problem), so the problem 
will not go away no matter how smart the AGI is.  Using heuristic methods won't 
help, because after the first cycle of the AGI programming itself, the level of 
sophistication of the software will be beyond our capability to understand (or 
else we would have written it that way in the first place).  You will have no 
choice but to trust the AGI to detect its own errors.
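
For what it's worth, the reduction behind that claim can be sketched in a few 
lines of Python.  The verifies() function is hypothetical; the whole point is 
that no such total, always-correct verifier can exist for arbitrary programs.

def verifies(program_source: str, spec: str) -> bool:
    # Hypothetical general-purpose correctness verifier.  Stubbed here
    # only so the sketch is self-contained; no such function can exist.
    raise NotImplementedError("no general correctness verifier exists")

def halts(program_source: str, input_data: str) -> bool:
    # If verifies() existed, it would decide the halting problem,
    # because "the program terminates on this input" is itself a
    # correctness specification.
    spec = f"the program terminates when run on input {input_data!r}"
    return verifies(program_source, spec)

# Turing's diagonal argument: a program that asks halts() about itself
# and then does the opposite.  halts(DIAGONAL, DIAGONAL) could be
# neither True nor False without contradiction.
DIAGONAL = '''
def d(src):
    if halts(src, src):
        while True:
            pass
'''

The argument applies no matter who (or what) runs the verifier, which is why 
more intelligence does not make it go away.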
 
-- Matt Mahoney, [EMAIL PROTECTED]



