Keith,
Shane, you might not believe this, but I'm on your side.
You might be on my side, but are you on humanity's side?
What I mean is: Sure, if I avoid debates about issues that
I think are going to be very important then that might save
my skin in the future if somebody wants to take my
On 5/29/07, Shane Legg wrote:
snip
But then what happens? Potentially very important issues,
indeed probably the most important ones since they are
likely to be some of the most scary, disappear out of the
scope of open discussion. Instead these issues get worked
through in private behind
Keith Elis wrote:
Richard Loosemore wrote:
Your email could be taken as threatening to set up a website to promote
violence against AI researchers who speculate on ideas that, in your
judgment, could be considered scary.
I'm on your side, too, Richard.
I understand this, and I
Is a broad-based political/social movement to (1) raise consciousness
regarding the potential of A.I. and its future implications and to, in
turn, (2) stimulate public discussion about this whole issue possible at
this time? Or is there simply too much disagreement (or, as Ben put it,
too much
To clarify, I meant too much disagreement internally (within the A.I.
community) or too much disregard for the geeks externally (in the world
at large).
Jon
-----Original Message-----
From: Jonathan H. Hinck [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 9:15 AM
To:
Ben Goertzel wrote:
But once a powerful AGI is actually created by person X, the prior
mailing list posts of X are likely to be scrutinized, and
interpreted by people whose points of view are as far from
transhumanism as you can possibly imagine ... but who
may have plenty of power in the
Sorry, me again. I was thinking specifically along the lines of a movement
which could present to humanity the (potential) benefits of an automated
world where, among other things, wage slavery and its resulting
inequities and hardships are abolished and supplanted by machines (to
use the most
While I have my own doubts about Eliezer's approach and likelihood of
success and about the extent of his biases and limitations, I don't
consider it fruitful to continue to bash Eliezer on various lists
once you feel seriously slighted by him or convinced that he is
hopelessly mired or
On 5/29/07, Stathis Papaioannou [EMAIL PROTECTED] wrote:
On 29/05/07, Jef Allbright [EMAIL PROTECTED] wrote:
I. Any instance of rational choice is about an agent acting so as to
promote its own present values into the future. The agent has a model
of its reality, and this model will
Indeed, displacement of the human labor force began with the beginning
of the industrial revolution (if not before). This is the definition of
technology. And, indeed, the jump from a labor-based to an
automation-based economy would entail a necessary paradigm shift on a
number of levels:
Jon, regarding your politics post -
My impression is that, as a general principle, proposals for
radical change, of almost any kind, are not well-received by
the general public, and that such change is more likely to
occur if its ideology, presentation, and development are
broken into
On 5/29/07, Richard Loosemore [EMAIL PROTECTED] wrote:
I know of people from outside these lists who have taken a look at some
of Eliezer's writings. These people would go much further than I would:
they think he is an insane, ill-informed megalomaniac who is able to
distract people from his
On May 29, 2007, at 11:36 AM, Jonathan H. Hinck wrote:
Indeed, displacement of the human labor force began with the
beginning of the industrial revolution (if not before). This is the
definition of technology. And, indeed, the jump from a labor-based
to an automation-based economy
On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote:
But does there need to be consensus among the experts for a public
issue to be raised? Regarding other topics that have been on the
public discussion plate for a while, how often has this been the
case? Perhaps with regard to issues