RE: [agi] Real world effects on society after development of AGI????

2004-01-18 Thread Yan King Yin
From: Brad Wyble [EMAIL PROTECTED]

  1) AI is a tool and we're the user, or
  2) AI is our successor and we retire, or
  3) The Friendliness scenario, if it's really feasible.
 
 This collapse of a huge spectrum of possibilities into three
 human-society-based categories isn't all that convincing to me...

Yes, a list like this should always include  

4) Something else

Yeah, I missed some possibilities. It is possible
to create an AI that is like a friend, and it's also
possible to consolidate resources to create one FAI
for a group of people rather than fAIs for individuals.
So the question is: what are the merits of each of these
options?

The simpler alternative is to make utility AIs (UAI)
that solve specific problems and have rigid goal
structures, e.g. an AI to design flying cars, or to design
anti-ageing therapies, etc. I tend to favor this view.

One can even ask a UAI how to maximize happiness
or something similar. It seems that UAIs are all we
need - or are there some categories of problems
that an FAI/fAI is better suited to solve?

YKY






Re: [agi] Real world effects on society after development of AGI????

2004-01-15 Thread Yan King Yin
From: Brad Wyble [EMAIL PROTECTED]

Philip wrote:
 
 The significant acceleration of the mentation rate is only possible with 
  the introduction of 'Lamarckian' upgrading of the mentation systems, e.g. 
 the introduction of AGI technology either as new AGI entities or as 
 augmentation of the human brain.  I think most people now expect the 
 former to emerge well before the latter.

I think this is a difficult question. Robin Hanson spent 9 years
studying AI and he's of the view that uploading will come first.
I studied uploading for a while, and I also find uploading to be
extremely difficult =(

(Uploading is difficult mainly because we have to figure out all
biochemical nuances before we can be very confident the upload is
indeed human. Not sure what the main difficulty in AGI is...)

Well there's a possible hybrid situation, in which a person sits atop a 
network of sophisticated tools that vastly increases their effective 
intelligence (i.e. Gargoyle).  This situation exists right now, but it's 
possible that advances in HCI will enable computer aids to do much better.  

I think this is a much more convincing scenario than any
superintelligence scenario. Eliezer is the only person
I know who's studying Friendliness seriously, and I don't
think he can even state the problem precisely. It's insanely
difficult trying to deal with an SI.

 This means that the majority of humans will be left behind in the 
 acceleration of the mentation rate - at least for a while.

Or it means that all of us will be left behind forever.

1) AI is a tool and we're the user, or
2) AI is our successor and we retire, or
3) The Friendliness scenario, if it's really feasible.

YKY






RE: [agi] Real world effects on society after development of AGI????

2004-01-15 Thread Brad Wyble
Ben wrote:

  1) AI is a tool and we're the user, or
  2) AI is our successor and we retire, or
  3) The Friendliness scenario, if it's really feasible.
 
 This collapse of a huge spectrum of possibilities into three
 human-society-based categories isn't all that convincing to me...
 

Yes, a list like this should always include  

4) Something else



Re: [agi] Real world effects on society after development of AGI????

2004-01-14 Thread Brad Wyble
On Tue, 13 Jan 2004, deering wrote:

 Brad, I completely agree with you that the computer/human crossover
 point is meaningless and all the marbles are in the software engineering
 not the hardware capability.  I didn't emphasize this point in my
 argument because I considered it a side issue and I was trying to keep
 the email from being any longer than necessary.  But even when someone
 figures out how to write the software of the mind, you still need the
 machine to run it on.  I believe in the creative ability of the whole
 AGI research ecosystem to be able to deliver the software when the
 hardware is available.  I believe that the human mind is capable of
 solving this design/engineering problem, and will solve it at the
 earliest opportunity presented by hardware availability.


You seem to contradict yourself, saying first that the hardware crossover 
point is meaningless, then implying that we'll solve the design problem at 
the first opportunity.  

I won't reiterate my stance; you know what it is :)


 
 Regarding nanotechnology development, I think we are approaching
 nano-assembly capability much faster than you seem to be aware.  Check
 out the nanotech news
 
 http://nanotech-now.com/

Being able to make these bits and bobs in the lab is a different problem 
from having autonomous little nanorobots doing it.  You then have problems 
of power distribution, intelligent coordination, and heat dissipation. It's 
quite a ways off in my opinion.

Again, I'm not sure whether this or AGI will come first; they are both 
Hard with a capital H.  

 
 Regarding science, Yes, turtles all the way down.  Probably.  But atoms
 are so handy.  Everything of any usefulness is made of atoms.  To go
 below atoms to quarks and start manipulating them and making stuff other
 than the 92 currently stable atoms has such severe theoretical obstacles
 that I can't imagine solving them all.  Granted, I may be lacking
 imagination, or maybe I just know too much about quarks to ignore all
 the practical problems.  Quarks are not particles.  You can't just pull
 them apart and start sticking them together any way you want.  Quarks
 are quantified characteristics of the particles they make up.  We have
 an existence proof that you can make neat stuff out of atoms.  Atoms are
 stable.  Quarks are more than unstable, they don't even have a separate
 existence.  I realize that my whole argument has one great big gaping
 hole, We don't know what we don't know.  Okay, but what I do know
 about quarks leads me to believe that we are not going to have quark
 technology.  On a more general vein, we have known for some time that
 areas of scientific research are shutting down.  Mechanics is finished.  
 Optics is finished.  Chemistry is finished.  Geology is basically
 finished.  We can't predict earthquakes but that's not because we don't
 know what is going on.  Meteorology we understand but can't calculate; not
 science's fault.  Oceanography, ecology, biology: all that is left to
 figure out is the molecular biology, and they are done.  Physics goes on,
 and on, and on, but to no practical effect beyond QED and that is all
 about electrons and photons and how they interact with atoms, well
 roughly.

Perhaps some of them have evolved into different kinds of science that you 
no longer recognize as such.  That's not the same thing as shutting down.


 
 
 I don't expect this clarification to change your mind.  I think we are
 going to have to agree to disagree and wait and see.
 

Yes indeedy. :)


 
 See you after the Singularity.
 

Ah, but the Singularity says you can't make that prediction :)

-Brad



Re: [agi] Real world effects on society after development of AGI????

2004-01-13 Thread Brad Wyble
On Mon, 12 Jan 2004, deering wrote:
 
 Brad, you are correct.  The definition of the Singularity cited by
 Vernor Vinge is the creation of greater-than-human intelligence.  And
 his quite logical contention is that if this entity is more intelligent
 than us, we can't possibly predict what it will think or do, hence the
 incomprehensibility.  Many people subscribe to this statement as if it
 were scripture; not me.  A few years later, Ray Kurzweil noticed that
 the advancement of knowledge in molecular biotechnology and the
 miniaturization of electrical and mechanical systems were following
 curves that closely matched the graphs for the advancement of
 computational capacity.  It appears from the graph data that
 computational capacity of desktop computers will surpass human brains at

I keep making this point as often as it has to be made: surpassing the 
computational capacity of the brain is not even close to sufficient to 
develop AGI.  The hard part, the real limitation, is the engineering of 
the type that Ben's doing.  

Software engineering will be our biggest hurdle for decades after we cross 
the brain CPU barrier.
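For what it's worth, the crossover claim itself is just back-of-envelope
arithmetic. A minimal sketch in Python, assuming a brain-equivalent of
1e16 ops/s, a 2004 desktop at 1e10 ops/s, and an 18-month doubling time
(all round illustrative figures, not measurements):

import math

# All three figures are illustrative assumptions; published estimates
# of brain-equivalent capacity span several orders of magnitude.
BRAIN_OPS = 1e16         # assumed human-brain-equivalent ops/s
DESKTOP_OPS_2004 = 1e10  # assumed 2004 desktop capacity, ops/s
DOUBLING_YEARS = 1.5     # assumed Moore's-law doubling time

doublings = math.log2(BRAIN_OPS / DESKTOP_OPS_2004)
year = 2004 + doublings * DOUBLING_YEARS
print(f"{doublings:.1f} doublings -> crossover around {year:.0f}")
# ~20 doublings, roughly 2034 under these assumptions -- which, per
# the point above, still says nothing about when the software exists.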


 about the same time as miniaturization reaches positional molecular
 assembly and knowledge of molecular biotechnology reaches completeness.  
 If you think about it, the fact that these three areas of technological
 advancement are tracking together toward specific goals is not
 surprising.  They are all very closely tied to each other.  The advance
 of miniaturization of electrical and mechanical systems is producing
 the tools for the investigation of living organisms at the molecular
 level.  It is also producing the hardware for the advancement of
 computational capacity.  The more powerful computers are providing the
 control systems for the automation of molecular biotechnology speeding
 up the assimilation of knowledge.  Computers and nanotechnology are
 progressing lockstep; scientists need more powerful computers to advance
 nanotechnology, computers need more miniaturized circuits to become more
 powerful.  And molecular biotechnology is dependent on both the
 advancement of computational capacity and the advancement of nanotech
 tools.  So is born the concept of the three technology Singularity.


A good point; however, nanomanufacturing has some special challenges, as 
does mind design.  One is likely much harder than the other; I just don't 
know which.

 
 1.  Intelligence will top out at levels of great efficiency, accuracy,
 and speed; and the best types of thought processes will be similar to
 ways of thinking used by our best geniuses, a mode of thought that is
 not beyond our comprehension, merely beyond our perfect execution.


It will be beyond your comprehension.  I don't know about you, but I 
cannot comprehend the way hardcore theoretical mathematicians think about 
equations.  I feel like a cat staring at its master.

In the same way, you spend most of your day thinking about topics that are 
utterly incomprehensible to people of 100 years ago.   

So it will be with your children and theirs.

 
 2.  Physical technology will reach a limit at the complete control of
 the positioning of atoms in fabrication, maintenance, and functioning,
 including molecular scaled robots and machinery.

You sound like physicists 100 years ago who thought that the proton, 
neutron, and electron were the end of the road.  Why is it so hard to 
imagine that we can put quantum particles to use?  Or that there are 
layer upon layer of sub-quantum particles?  

I think it's turtles all the way down, and so far history has proved me 
right.  

 
 3.  Science will reach a limit with the completion of cataloging and
 understanding of all molecular processes in living organisms.
 

This is largely irrelevant.   Science is so much more than cataloging 
living organisms.  It is completely open-ended.  


 When I say limits, I don't mean that we will stop innovating, merely
 that we will have all of the basic knowledge and capability we are ever
 going to have, and what is left is art.  Sure we will still be inventing
 better mouse traps, but not whole new areas of science.
 

Sorry.  I can't see how you've demonstrated this.

 Given these limits, the Singularity becomes very comprehensible.  We
 know what basic capabilities we will have.  We can plan how we want to
 use them.  We get to decide what principles our society will be based
 on, and how we will implement them.
 

I couldn't disagree more.

 Okay, you can start laughing now.


Just shaking my head in wonder is all :)




Re: [agi] Real world effects on society after development of AGI????

2004-01-13 Thread deering



Brad, I completely agree with you that the 
computer/human crossover point is meaningless and all the marbles are in the 
software engineering not the hardware capability. I didn't emphasize this 
point in my argument because I considered it a side issue and I was trying to 
keep the email from being any longer than necessary. But even when someone 
figures out how to write the software of the mind, you still need the machine to 
run it on. I believe in the creative ability of the whole AGI research 
ecosystem to be able to deliver the software when the hardware is 
available. I believe that the human mind is capable of solving this 
design/engineering problem, and will solve it at the earliest opportunity 
presented by hardware availability. 

Regarding nanotechnology development, I think we are approaching nano-assembly capability much 
faster than you seem to be aware. Check out the nanotech news 


http://nanotech-now.com/

Regarding science, Yes, turtles all the way 
down. Probably. But atoms are so handy. Everything of any 
usefulness is made of atoms. To go below atoms to quarks and start 
manipulating them and making stuff other than the 92 currently stable atoms has 
such severe theoretical obstacles that I can't imagine solving them all. 
Granted, I may be lacking imagination, or maybe I just know too much about 
quarks to ignore all the practical problems. Quarks are not 
particles. You can't just pull them apart and start sticking them together 
any way you want. Quarks are quantified characteristics of the particles 
they make up. We have an existence proof that you can make neat stuff out 
of atoms. Atoms are stable. Quarks are more than unstable, they 
don't even have a separate existence. I realize that my whole argument has 
one great big gaping hole, "We don't know what we don't know." Okay, but 
what I do know about quarks leads me to believe that we are not going to have 
quark technology. On a more general vein, we have known for some time that 
areas of scientific research are shutting down. Mechanics is 
finished. Optics is finished. Chemistry is finished. Geology 
is basically finished. We can't predict earthquakes but that's not because 
we don't know what is going on. Meteorology we understand but can't 
calculate; not science's fault. Oceanography, ecology, biology: all that 
is left to figure out is the molecular biology, and they are done. Physics 
goes on, and on, and on, but to no practical effect beyond QED and that is all 
about electrons and photons and how they interact with atoms, well 
roughly.


I don't expect this clarification to change your 
mind. I think we are going to have to agree to disagree and wait and 
see.


See you after the Singularity.


Mike Deering.







RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Ben Goertzel

 I think that creating AGIs is only half the job.  The other half is
 organising their successful introduction into society.  I would strongly
 recommend that once the coding side of AGI development is looking
 good that *all* the parties engaged in creating AGIs ensure that
 effective efforts are made to manage the introduction of AGIs into
 society. And I think the AGIs should be engaged in this task too.

A lot of this has to do with who is funding the development of the initial
AGIs.  If they are developed with funding from a certain branch of
government or industry, this biases the nature of the AGIs' initial forays
into society...

For example, consider the two scenarios where AGIs are developed by

a) the US Army
b) Sony's toy division

In the one case, AGIs are introduced to the world as super-soldiers (or
super virtual fighter pilots, super strategy analyzers, etc.); in the other
case, as robot companions for their children...

Depending on how fast AGI intelligence accelerates, this may or may not make
a difference in terms of the ultimate role of AGI in society.  If there's a
slow takeoff, then it probably won't make a big difference, because there
will be time for AGI to infuse through society one way or another.  If
there's a fast takeoff, then it may  make a big difference, and the nature
of the socialization the AGI gets will be quite different in case b from
case a.
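
To make the slow-versus-fast distinction concrete, here is a toy growth
model (not anything Ben proposed): exponential improvement versus
improvement whose rate feeds back on capability. The constants are
arbitrary illustrations, not predictions:

def simulate(feedback, steps=30, dt=0.1, rate=0.5):
    """Toy capability curve: dx/dt = rate * x**feedback.

    feedback == 1.0 is ordinary exponential growth (slow takeoff);
    feedback > 1.0 is super-exponential and blows up in finite time
    (hard takeoff) -- with these constants, around t = 4.
    """
    x = 1.0
    for _ in range(steps):
        x += rate * (x ** feedback) * dt
    return x

slow = simulate(feedback=1.0)   # ~4.3 after 3 time units
fast = simulate(feedback=1.5)   # ~10 with the same constants,
print(slow, fast)               # and accelerating every step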

-- Ben G




RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Hi Ben,

 For example, consider the two scenarios where AGIs are developed by
 a) the US Army
 b) Sony's toy division
 
 In the one case, AGIs are introduced to the world as super-soldiers (or
 super virtual fighter pilots, super strategy analyzers, etc.); in the other
 case, as robot companions for their children...
 
  the nature of the socialization the AGI gets will be quite different
 in case b from case a. 

The Sony option is starting to look good! :)

Better, in fact, than working as the manager of the computer players in 
most advanced computer games, since so many of these games are no 
more peaceful than the US Army!

If AGIs get involved in running aspects of computer games, my feeling 
is that the games they contribute to would have to be chosen *very* 
carefully - unless AGIs have a brilliant capacity to stop the work they do 
from significantly reshaping their ethics.  Maybe instilling this capacity 
is one essential general element in the implementation of friendliness 
regardless of what work they do.  The implementation of this capacity 
might need to be quite subtle since AGIs would need to be able to learn 
and refine their ethics in the light of experience and yet certain types of 
work that violate their ethics shouldn't result in the emergence of 
unfriendliness.  (I think some AGIs will be able to get work as ethics 
counsellors to their AGI colleagues!  In fact it could be a growth 
industry.)

Cheers, Philip



Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread deering



How long the transition from the emergence of AGI 
to full integration into society will take is debatable. If the transition is 
deformed by interference from government then things could get really screwed 
up, but I think there is at least an even chance that governments will let it 
develop according to free economic forces. For the remainder of this message I 
will assume a natural free development.


The first stage will be the initial training with 
basic knowledge, including specific information about the human 
environment. This training stage will not need to be repeated for each 
new AGI, since computers can copy their information.

The second stage will be the replacement of workers 
with robots. All jobs are susceptible to replacement by robots. I 
hope you have a fat 401K or other assets; you're going to need them. The 
cost of all products and services will drop precipitously. The cost of a 
product or service consists of the labor cost, plus the cost of the machinery 
used in its production amortized over the total number of products 
produced. The reason the cost of electronic equipment has dropped is the 
increasing automation of the factories. When robots take all the jobs in 
the factories, labor costs will drop to zero, leaving the equipment cost. 
The equipment cost consists of the cost of the raw materials plus the labor 
costs to convert them into equipment. The labor costs will disappear, 
leaving only the raw materials cost. The cost of the raw materials is 
primarily the labor cost to extract them from the ground or recycle them from 
the dump. If you look at the whole economy from the mine to Wal-Mart you find 
that labor makes up almost all of the cost. And as more manufacturing 
capacity is built, the cost of production drops. These are the basics 
of the 'abundance economy'. Obviously our current social structure will 
need to make significant adjustments in the transition to the 'abundance 
economy'.
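
The recursion in this argument (labor at each stage, plus inputs from 
the stage below) is easy to make concrete. A toy calculation in Python, 
with invented stage names and labor shares:

# Toy model of the supply-chain argument above.  The stages and
# labor shares are invented for illustration, not real figures.
stages = [
    ("retail",        0.30),  # direct labor share at each stage;
    ("manufacturing", 0.50),  # the remainder is inputs bought from
    ("raw materials", 0.80),  # the stage below it
]

def labor_share(stages):
    """Total fraction of the final cost that is ultimately labor."""
    share, passthrough = 0.0, 1.0
    for _, direct_labor in stages:
        share += passthrough * direct_labor
        passthrough *= 1.0 - direct_labor
    return share

print(f"labor fraction of final price: {labor_share(stages):.0%}")
# -> 93% here.  Automate every stage and, on this toy model, only the
# residual 7% (rents, energy, etc.) remains -- the 'abundance economy'.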

The last stage is the integration of super-human 
AGI into government and decision-making positions at the top of the societal 
control structures.


Mike Deering.






Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread deering



Well, the third stage requires a higher level of 
intelligence by the AGIs than the second stage, so, if the advancement of AGI 
intelligence from human-level to super-human levels is rapid, then yes, it is 
possible that the integration of AGIs into top-level decision-making positions 
could occur before the replacement of all workers, which involves mid- and 
low-level decision making. 

Mike Deering.






Re: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Why not get a few AGIs jobs modelling the widespread 
introduction of AGIs - under a large number of scenario conditions - to 
find the transition paths that don't result in mayhem and chaos, for us 
humans and for them too.

Cheers, Philip



RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Ben Goertzel

Philip,

I think that modeling of transition scenarios could be interesting, but I
also think we need to be clear about what its role will be: a stimulant to
thought about transition scenarios.  I think it's extremely unlikely that
such models are going to be *accurate* in any significant sense.  Current
economic models are notoriously ineffective, and the things they're modeling
are a LOT simpler and better-understood.  It's true that in the future we'll
have better computers and early AGIs to help with the modeling but -- even
so --

ben g


 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Behalf Of Philip Sutton
 Sent: Sunday, January 11, 2004 8:06 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [agi] Real world effects on society after development of
 AGI


 Hi Mike/Owen,

 I was quite serious about the need to carefully model lots of transition
 strategies for the introduction of AGIs.

 I've been interested in economic systems modelling for years and my
 sense is that our current discussion is missing huge elements of
 completeness and connectivity between issues.  This is illustrative of
 the limitations of humans (including me!) when dealing with complex
 systems.

 If we work with the early AGIs to model large numbers of transition
 scenarios we will end up, I anticipate, with a much more robust idea of
 what might happen and what might be better ways to go forward - i.e.
 most bearable along the way and most likely to arrive at future
 conditions that are widely seen to be desirable.

 I'm not suggesting that we cut off discussion on this topic.  I
 think it is
 one of the most critical questions we could be discussing.  But I think
 we need to treat the issue even more seriously by increasing the
 resources we bring to it.

 Cheers, Philip







RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Ben Goertzel

Brad,

Regarding the Singularity, I personally view the sort of discussion we've
been having as a discussion about the late pre-Singularity period.

Regarding AGIs' gradual ascendance to superiority over humans: my guess
is that AGIs will first attain superiority over humans in specialized
domains like scientific theorizing, economic planning, military strategy,
airplane piloting, etc.  They will initially be very expensive, and not
usable for the vast majority of jobs for this reason.  So initially they
will be superior to humans in many senses, but also more expensive and
rarer.  Then (at some rate, which is hard to determine) they will get
cheaper and will eliminate more and more human jobs.

In this scenario, one question is whether the scientist/engineer/planner
AGIs will first

a) come up with cheap ways to mass-manufacture AGIs, cheap nanomanufacturing
of other commodities, etc.

b) come up with ways to make themselves supersupersuperintelligent, hence
triggering the real Singularity

or whether these two will happen essentially simultaneously.

The hard takeoff scenario assumes the two will happen roughly
simultaneously, shortly after the first roughly-human-level AGI appears.
I'm not so sure...
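
One hedged way to frame the (a)-versus-(b) question is as a race between
two coupled feedback loops. In the Python sketch below, every constant
(starting cost, thresholds, rates) is an invented placeholder and the
dynamics are purely illustrative:

# Toy race between (a) cheap mass manufacture and (b) recursive
# self-improvement.  Every number here is an invented placeholder.
cost = 1_000_000.0      # unit cost of one AGI, in dollars
smarts = 1.0            # capability, in human-equivalents
CHEAP, SUPER = 10_000.0, 100.0   # thresholds for (a) and (b)

years = 0               # years after the first human-level AGI
while cost > CHEAP and smarts < SUPER:
    cost /= 1.0 + 0.3 * smarts    # smarter AGIs cut production costs
    smarts *= 1.0 + 0.2 * smarts  # ...and design smarter successors
    years += 1

first = "(a) cheap AGIs" if cost <= CHEAP else "(b) superintelligence"
print(f"{first} arrives first, {years} years in")
# With these rates the two thresholds fall within about a year of each
# other -- the 'roughly simultaneously' case.  Nudging the 0.3 vs 0.2
# constants changes which comes first, which is the whole uncertainty.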

-- Ben G

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Behalf Of Brad Wyble
 Sent: Sunday, January 11, 2004 7:37 PM
 To: [EMAIL PROTECTED]
 Subject: Re: [agi] Real world effects on society after development of
 AGI


 Firstly, I think this discussion is confounding the two issues of
 AGI development
 with full scale nanomanufacturing.  The former may precipitate
 the latter,
 but they will likely be separated by a number of decades in
 either case.

 Secondly, as for the replacement of all jobs by AGI, if you're of the
 mindset that
 human intelligence will be outclassed by AGIs that rapidly, then we are
 probably in bad shape no matter how you slice it.  AGIs will empower the
 few to control the many to an unprecedented degree with the usual
 unfortunate consequences for the masses that come with unchecked power.


 Myself, I think it's going to be a far more gradual affair, and that it
 will be quite a while before AGIs become as good as we hope they
 will be.
 There's a huge gray area between Baby AGI and human equivalence and the
 transition from one to the other will not be instantaneous.  And what the


 And finally, some elements of the belief in the singularity have religious
 undertones to an unsettling degree.  By this, I mean a belief in a utopian
 future based more on faith and desire than objective foresight.

 If memory serves, the singularity, as originally proposed, only says that
 it will be impossible to predict what will happen afterwards.  This is
 precisely the opposite of what I'm seeing bandied about in this
 discussion with the singularity being cited as the precipitating event.









RE: [agi] Real world effects on society after development of AGI????

2004-01-11 Thread Philip Sutton
Ben,

 I think that modeling of transition scenarios could be interesting,
 but I also think we need to be clear about what its role will be: a
 stimulant to thought about transition scenarios.  I think it's
 extremely unlikely that such models are going to be *accurate* in any
 significant sense. 

I completely agree.  It's not predictive power in the crystal ball sense 
that I'm after but the ability to think through consequences and develop 
backcasting strategies (how to make preferred futures possible) in a 
much more complex way that is nevertheless manageable and 
effective.  Also, the ability to consider masses of scenarios is, I think, 
important.

It might also be important to be able to do this kind of 
modelling/thinking in a way that people can join in as 'within-model' 
agents, e.g. via a hybrid modelling/role-play process.  Then we can tap 
some of the unpredictable creativity of people but hold the whole 
process together in a coherent way using the special capabilities of 
AGIs.
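
Mechanically, 'masses of scenarios' could start as crude Monte Carlo:
sample transition parameters, run a toy dynamic, and keep the settings
that avoid mayhem. Everything in this sketch - the two parameters, the
employment dynamic, and the 70% threshold - is an invented placeholder,
not a real model:

import random

# Crude scenario sweep over a toy labor-transition dynamic.
def run_scenario(automation_rate, retraining_rate, years=30):
    employed = 1.0                      # fraction of workforce employed
    for _ in range(years):
        employed -= automation_rate * employed
        employed += retraining_rate * (1.0 - employed)
    return employed

random.seed(42)
survivors = []
for _ in range(10_000):
    a = random.uniform(0.00, 0.15)      # jobs automated away per year
    r = random.uniform(0.00, 0.15)      # displaced workers re-absorbed per year
    if run_scenario(a, r) > 0.7:        # 'no mayhem' threshold (arbitrary)
        survivors.append((a, r))

print(f"{len(survivors)} of 10,000 sampled transitions stay above "
      f"70% employment after 30 years")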

Cheers, Philip
