Indeed, displacement of the human labor force has been under way since
the beginning of the Industrial Revolution (if not before).  That is,
in a sense, the definition of technology.  And, indeed, the jump from a
labor-based to an automation-based economy would entail a necessary
paradigm shift on a number of levels: economic, sociological,
psychological, ethical...

 

My concern is that, if humanity is suddenly presented with the strong
possibility of such change as a result of, say, an announcement of the
Turing Test being passed, the resulting shock could produce a reaction
which inhibits the move toward comprehensive automation.

 

This is why I, for one, am not in favor of these discussions being
carried on in private only by the initiated, interested, or
self-appointed.  This whole issue is too "big" for any small group, and
they don't own it (though they may think they do).  The public needs to
be exposed, perhaps gradually.  The realization of civil rights, for
instance, was not achieved in a short period but, rather, over a long
interval characterized by struggle, debate, and, finally, acceptance
(grudging acceptance for many).

 

At any rate, as you point out, there doesn't seem to be any discussion
of a transitional plan, either organizational or psychological, from
one economic paradigm to another.  What happens when an irresistible
force meets an immovable object?  I would rather not find out, and
instead remove the object(s), which will entail a major undertaking
that has yet to be initiated.

 

Jon 

 

________________________________

 

Today, May 29, 2007, 3 hours ago | [EMAIL PROTECTED]

<http://www.listbox.com/member/archive/11983/2007/20070529113615:56792D0E-0DFA-11DC-89E0-EB86945AD530/>

The rubber has already hit the road. Automation and computer
displacement of jobs is an old story. The real challenge, in my mind,
is how the world at large will shift to a post-scarcity economics, and
at what point. More importantly, what are the least disruptive and most
beneficial steps along the way? Clearly we cannot jump to livable
financial and material benefits for all, regardless of employment, in
one great leap. So what is the gradation along the way?



________________________________


From: Jonathan H. Hinck [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 29, 2007 10:02 AM
To: [email protected]
Subject: RE: [singularity] The humans are dead...

 

Sorry, me again.  I was thinking specifically along the lines of a
movement which could present to humanity the (potential) benefits of an
automated world where, among other things, wage slavery and its
resulting inequities and hardships are abolished and supplanted by
machines (to use the most general term) which monitor and take care of
humanity.

 

My concern, as a futurist, is that if the discussion is framed in terms
of immediate economic dislocation (which it often seems to be), then
there will more likely be a less-than-positive reaction from John Q.
Public, who will view machines as an immediate threat.

 

If, therefore, an automated future is going to have any chance in the
political arena (where it will end up sooner or later), then perhaps
Futurists need to pre-empt the Luddites (and pseudo-Luddites) somehow,
before the rubber hits the road.

 

Jon

 

-----Original Message-----
From: Jonathan H. Hinck [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, May 29, 2007 9:15 AM
To: [email protected]
Subject: RE: [singularity] The humans are dead...

 

Is a broad-based political/social movement possible at this time, one
that would (1) raise consciousness regarding the potential of A.I. and
its future implications and, in turn, (2) stimulate public discussion
of this whole issue?  Or is there simply too much disagreement (or, as
Ben put it, too much disregard for the "geeks") for this to be
possible?  (Please, please, thoughts anyone?)

 

Jon

 

-----Original Message-----
From: Keith Elis [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 28, 2007 9:19 PM
To: [email protected]
Subject: RE: [singularity] The humans are dead...

 

 

Ben Goertzel wrote:

 

> Right now, no one cares what a bunch of geeks and freaks
> say about AGI and the future of humanity.
>
> But once a powerful AGI is actually created by person X, the prior
> mailing list posts of X are likely to be scrutinized, and
> interpreted by people whose points of view are as far from
> transhumanism as you can possibly imagine ... but who
> may have plenty of power in the world...

 

This understanding of the world we live in is where my posts on this
topic originate.

 

But Ben is being optimistic. I would suggest the situation is even
worse. If person X actually comes up with some exciting result short of
real AGI, and it gets widespread media attention, the same scrutiny
will begin in earnest. If you're person X, and you haven't been
careful, you might not get the chance to complete your work the way you
want to.

 

Regardless, the powers that be won't have to go sifting through
thousands of emails either. Think of all the people you've disagreed
with over the years. Think of all the people who could be jealous of
your achievement. Sour grapes and outright dislike will see to it that
your views on *many* things get to the wrong people.

 

Keith

 

 

-----

This list is sponsored by AGIRI: http://www.agiri.org/email

To unsubscribe or change your options, please go to:

http://v2.listbox.com/member/?&;

 


