--- Eugen Leitl <[EMAIL PROTECTED]> wrote:

> On Thu, Jun 07, 2007 at 07:24:32AM -0700, Michael Anissimov wrote:
> 
> > > You've been sounding like a broken record for a while. It's
> > > because speed kills. What or who is doing the killing is not
> > > important.
> > 
> > Who needs politeness or respect for your fellow man when we can
> > make ourselves feel great by putting others down?
> 
> That's not an argument. That rapid environmental changes are
> dangerous is an argument. You need an argument to refute that
> argument.

Rapid environmental changes are scary, but they're only dangerous if
uncontrolled. E.g., flying from Sweden to Mexico is a very rapid
environmental change, but it's not dangerous. Where would the danger
come from?

> > > Dude, I-current wouldn't trust me a picometer if I was much,
> > > much smarter. Neither should you.
> > 
> > People are suspicious of outsiders, including superintelligence.
> > This
> 
> We don't have to suspect that evolutionary dynamics is full of
> extinctions, we *know* it. If you think you can sustainably strip
> darwinian regime I'd like to see an argument how you propose to do
> that.

Evolutionary dynamics is not how an AI operates;
observations about how many species evolution does or
does not kill off therefore do not apply.

> > is nothing new.  So superintelligence will have to prove itself
> > out in the real world.
> 
> I think people who have a good chance of precipitating a hard
> takeoff runaway are dangerous, and need watching. As long as people
> are bipedal primates, the dynamics should be s l o w. Slowing
> things down is a hard problem, but this doesn't mean we shouldn't
> try.

If you try to slow world technology development, then unless you have
a heck of a lot of resources (e.g., control of a world government),
you are doomed to failure, because someone else is going to develop
the technology very quickly and use it to overpower you.

> > > Or we ourselves will evaporate, together with a few cm of Earth
> > > regolith. Sorry, I'd rather not take the chances.
> > 
> > If AI is likely to come first, as I believe, then it's in our best
> > interests to make it as friendly as possible.  Not dismiss the
> > problem
> 
> How do you define friendly? I keep asking this question, and I keep
> asking it for a very good reason. Once you give me your definition,
> I will explain the reasons.

See Coherent Extrapolated Volition at
http://sl4.org/wiki/CoherentExtrapolatedVolition. It's very long for a
reason (the question is complicated).

> > because we're suspicious of all AI.
> 
> I'm not dismissing it because I'm suspicious, I'm dismissing it
> because people who keep repeating the 'friendly friendly friendly'
> mantra are dangerously deluded, and need a reality check.
> 
> > > I'm sorry, I'm not religious. Try the next door down the hall.
> > 
> > All I said was that superintelligence could be wiser and more
> > charismatic than human beings.  If you disagree, you must believe
> > that
> 
> Wise and charismatic are always relative. I don't believe that
> *you* can make superintelligences which are wise and charismatic
> against bipedal primates. (The *you* includes anyone who walks on
> two legs, not just Michael A.).

Any successful superintelligence project will almost
certainly be a group effort. I agree that nobody could
do it alone; the project is simply too large.

> I'm also not interested in repeated assertions. I'm interested in
> how you can make it so, spelled out formally. Your first step is
> describing 'friendly' in a formal system, constructively. Your
> second step is using that constructive definition as a source of
> development constraints. Your third step is building an open-ended
> supercritical seed which utilizes results from your third step,
> asserting insertion into the 'friendly' behaviour space region
> target, while maintaining a sufficient fitness delta to anything
> else which is not you. (While you're at that, I could use an answer
> to P=NP, too -- shouldn't take you more than a minute).

Right, because I'm sure you can do the equivalent for whatever system
you're arguing for. Seriously: if you're good enough to specify how
the system would work, in full technical detail, shouldn't you be
working on it instead of arguing over a mailing list? It is kind of
important.
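
For what it's worth, here is the smallest toy sketch I can imagine of
what your steps one and two would even look like: a made-up "friendly"
predicate over actions, used as a hard constraint on what the system
may optimize over. Every name and string in it is an invented
placeholder of mine, not anyone's actual proposal; the hard part is
exactly the part that is stubbed out.

from typing import Callable, Iterable

Action = str

def is_friendly(action: Action) -> bool:
    # Step one would demand a constructive, formal definition here.
    # This placeholder just blacklists two obviously bad strings.
    forbidden = {"strip the regolith", "eat all the wood"}
    return action not in forbidden

def constrained_choice(candidates: Iterable[Action],
                       score: Callable[[Action], float]) -> Action:
    # Step two: the definition acts as a hard constraint, so the
    # system only maximizes its score over actions that pass it.
    allowed = [a for a in candidates if is_friendly(a)]
    if not allowed:
        raise ValueError("no action satisfies the constraint")
    return max(allowed, key=score)

print(constrained_choice(
    ["plant trees", "strip the regolith", "write a proof"],
    score=len))

The gap between this toy and your step three (an open-ended,
self-improving seed that provably stays inside the constraint) is the
whole problem, which is rather the point.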

> If you can make a good case even for the first step, I'm willing
> to listen. If you can't make even that first step, I continue to
> point and laugh. You can continue to pout, but this doesn't make
> your case any stronger.
> 
> > humans are the wisest and most charismatic possible beings in the
> 
> Do you understand the difference between kinetic and thermodynamic
> reaction control? I'm only interested in kinetic bottlenecks,
> because it's the only one that counts.
> 
> > universe.  Who's the religious one here?  Who's being the
> > Copernicus and who's being the Church?  The universe does not
> > revolve around humans.  We do not have the monopoly on morality
> > or cleverness.  This is Transhumanism 101.
> 
> This is not an argument. This is waffle. (I agree that humanity is
> a random anchor, but I happen to be a member of that set, and as
> long as I and my kids are that, I can't help about that particular
> bias. If we all are dead the point is moot anyway).
>  
> > > Why the animal chauvinism, sheep? You've never met an
> > > intelligent human, so why are you judging them? Oh, wait...
> > 
> > This is bullying rhetoric, and it's uncalled for.  And if calling
> > someone on being a bully makes me a wuss, so be it.
> 
> I see I'm being misunderstood. My point was that iterated
> interactions between very asymmetrical players have no measurable
> payoffs for the bigger player.

Because you're defining "payoff" in a very conventional sense: more
money, or more power, or more social recognition. Keep in mind that
you can design the AI goal system to do literally anything. And of
course, in keeping with your standards, I expect to see a full
mathematical proof of this, with "interaction", "player", and
"payoff" all rigorously defined.
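
To make that concrete, here is a toy sketch (all names and numbers
are my own invented illustrations, not a model of any real agent) of
a one-sided game. Whether the stronger player gets any "payoff" out
of the weaker one, and what it therefore does to it, depends entirely
on the utility function it was built with, and that function is a
design choice.

# The strong player picks the split that maximizes its own utility.
def best_split(utility, candidate_splits):
    return max(candidate_splits, key=lambda s: utility(*s))

def material_gain(strong_share, weak_share):
    # Conventional payoff: the strong player counts only its own take.
    return strong_share

def other_regarding(strong_share, weak_share):
    # Designed payoff: the weak player's outcome is weighted heavily.
    return strong_share + 10.0 * weak_share

splits = [(0.99, 0.01), (0.5, 0.5), (0.01, 0.99)]
print(best_split(material_gain, splits))    # -> (0.99, 0.01)
print(best_split(other_regarding, splits))  # -> (0.01, 0.99)

Iterating the game doesn't change this: the asymmetry tells you who
is able to take everything, not whether their goal system makes them
want to.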

> Because of this the biosphere only gives, and the humans only take.

No other species in the world even *attempts* to maintain the
biosphere, since no other species even knows what the heck a
biosphere is. At least some of us make the attempt. Do you think that
termites, given the technological power to eat all the wood in the
world, would refrain from eating it for fear of damaging the
ecosystem?

> With bigger players than us, we only get a chance to see what the
> receiving end of habitat destruction looks like. It's not personal,
> but it still kills fine.

By any chance, do you support the Voluntary Human
Extinction Movement?

> When you've got your points one, two and three done, let me know.
> I'm the first person to admit I don't know everything.