=> Singularity automatic assumption here earlier (give me 1000 times more time 
than Einstein to think up Relativity Theory and I still couldn't; give me 1000 
times more data and I'll be seeing less, not more, forest), let me add 
corollaries to/musings about Jef's argument:  (1) if (by force) we confine a 
super-AGI to a single problem situation or even our own limited environment for 
"long enough" (ignore the ethical slavery aspect for a moment), won't it go 
crazy - just like many geniuses go crazy, or at the very least become very 
eccentric, after a relatively short life of intensive intellectual creativity?
For any advanced system such as this that we expect to interact and learn, we 
would not be able to put it in a room alone and say "Go".
Given that, yes, it would eventually devolve into some bad data and bad 
conclusions.
We will have to interact with, guide, and work with these intelligences to 
ensure they are not diverging down a different path.
A simple example of that is a path-finding algorithm: if we see that it is 
trying to reach a northern city along a road but is going many miles south for 
no reason, we can change things and say, hey, go this way instead, by changing 
the algorithm, the data, or the direction it is given - as in the rough sketch 
below.
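To make that concrete, here is a minimal sketch of the kind of correction I 
mean, assuming a toy grid map and an A*-style planner; all the names here 
(find_path, blocked, and so on) are invented for illustration and don't refer 
to any real system:

    # Toy sketch only: a grid path-finder whose route a human can veto.
    # 'blocked' holds cells a person has ruled out ("don't go that way").
    from heapq import heappush, heappop

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def find_path(start, goal, passable, blocked=frozenset()):
        # A*-style search; re-planning after a correction is just another call.
        frontier = [(manhattan(start, goal), 0, start, [start])]
        seen = set()
        while frontier:
            _, cost, cell, path = heappop(frontier)
            if cell == goal:
                return path
            if cell in seen:
                continue
            seen.add(cell)
            x, y = cell
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nxt in passable and nxt not in blocked and nxt not in seen:
                    heappush(frontier,
                             (cost + 1 + manhattan(nxt, goal), cost + 1, nxt, path + [nxt]))
        return None

    grid = {(x, y) for x in range(5) for y in range(5)}
    route = find_path((2, 0), (2, 4), grid)             # planner's first attempt
    corrections = frozenset({(2, 2)})                   # human: "not through there"
    route = find_path((2, 0), (2, 4), grid, blocked=corrections)  # re-plan

The point is not the algorithm itself but that the human correction is just 
another input the system has to respect when it plans again.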


 (2) will we recognise the difference between AGI genius and AGI craziness even 
at the early stage in its life? We hardly recognize it in human geniuses 
(and remember that the parameters in a normal human only need to be slightly 
off before (s)he is considered crazy - it'll be hard enough to get the 
parameters right for our human-level AGI).
This is why AGIs will need a high degree of accountability: they will need to 
explain their reasons to humans and experts, and be able to justify why they 
suggest a certain route.
For instance, in the movie "Idiocracy" the future humans use Gatorade to water 
all the crops.
When asked why, they reply, "Cause it's got the stuff a body needs," and don't 
know anything other than that catch phrase.
Crops may actually do well on the ingredients of Gatorade, but if a computer 
suggested this and didn't have any explanation why, we would definitely think 
it was crazy, unless it had many experiments that showed good effects.
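
Tying the accountability point above to something concrete, here is a toy 
sketch of the kind of check I have in mind; the function, the reviewer, and the 
thresholds are all hypothetical, only meant to illustrate "no explanation, no 
acceptance":

    # Toy sketch: accept a suggestion only if it comes with an explanation
    # that a human reviewer buys, or with repeated experimental evidence.
    def accept_suggestion(explanation, experiments=(), reviewer=lambda text: False):
        if not explanation:
            return False                # a bare catch-phrase is rejected outright
        if len(experiments) >= 3 and all(r == "good" for r in experiments):
            return True                 # strong evidence can stand in for understanding
        return reviewer(explanation)    # otherwise a human expert has to be convinced

    # The Gatorade case: a slogan, no experiments, reviewer unconvinced -> rejected.
    print(accept_suggestion("cause it's got the stuff a body needs",
                            experiments=(),
                            reviewer=lambda text: "because" in text))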


 (3) once/if it goes off into its own super-intelligence space (likely to be in 
intellectual domains such as maths), I doubt that we will ever be able to 
recognize what it does (try reading an advanced maths, physics or 
theology/philosophy book).
Correct :}  Unfortunately, once it reaches some point in the future, it will 
suggest something and explain it, but the explanation itself could be beyond 
our comprehension.  What we do at that point is unknown.

This type of work is being done with Project Halo 
(http://www.projecthalo.com), where the AIs had to pass a chemistry exam. It 
was not enough that they could answer the questions correctly, which they did 
very well; they also had to explain their answers in words, concisely and in a 
way that was easy to understand. This proved to be much harder, but they still 
passed an AP Chemistry exam.


Working with my 4-year-old daughter, I see her doing some very crazy things, 
but I know she is interacting with the world and seeing what works and what 
doesn't.  She put a shirt over her nightgown - no harm, no foul, it didn't hurt 
or cause any trouble - but I had to tell her, no, you don't do that.
I think that is the way much of the AGI's learning will need to happen.
If we can create a very basic framework that allows complex interactions with 
the AGI, gives the AGI great freedom to try things on its own, and lets it be 
corrected or offered other suggestions, that would allow it to grow naturally 
in its environment. A rough sketch of such a loop follows.
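
The sketch below is purely illustrative; the Agent/supervisor split and every 
name in it are made up for this email, not a description of any real framework:

    # Toy sketch: the AGI proposes, tries things freely, and accepts corrections.
    class Agent:
        def __init__(self):
            self.disallowed = set()           # things it has been told not to do

        def propose(self, options):
            for action in options:
                if action not in self.disallowed:
                    return action             # free to try anything not yet ruled out
            return None

        def correct(self, action):
            self.disallowed.add(action)       # remember "no, you don't do that"

    def learning_loop(agent, situations, supervisor):
        for options in situations:
            action = agent.propose(options)
            ok, suggestion = supervisor(action)   # human feedback on what it tried
            if not ok:
                agent.correct(action)             # it won't repeat that choice
                if suggestion is not None:
                    action = suggestion           # take the human's alternative

    def supervisor(action):
        if action == "shirt over nightgown":
            return (False, "just the nightgown")
        return (True, None)

    learning_loop(Agent(), [["shirt over nightgown", "just the nightgown"]], supervisor)

The key design choice is that the agent keeps its freedom to explore; the human 
only rules things out or offers alternatives after the fact, the same way I 
corrected my daughter.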

James Ratcliff


Jean-Paul Van Belle <[EMAIL PROTECTED]> wrote:    Since I voiced my concern 
with the AGI
  
    
 Department of Information Systems

 Email: [EMAIL PROTECTED]
 Phone: (+27)-(0)21-6504256
 Fax: (+27)-(0)21-6502280
 Office: Leslie Commerce 4.21


 
 >"Jef Allbright" [EMAIL PROTECTED]> 2007/04/15 21:40:06 >>
>While such a machine intelligence will quickly far exceed human
>capabilities, from its own perspective it will rapidly hit a wall due
>to having exhausted all opportunities for effective interaction with
>its environment.  It could then explore an open-ended possibility
>space à la Schmidhuber, but such increasingly detached exploration
>will be increasingly detached from "intelligence" in an effective
>sense.
>>On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:
>> However, to me "Singularity" is a stronger claim than "superhuman
>> intelligence". It implies that the intelligence of AI will increase
>> exponentially, to a point that is shorter than what we can perceive or
>> understand. That is what I'm not convinced.




_______________________________________
James Ratcliff - http://falazar.com
Looking for something...
       

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
