Tom McCabe wrote:
--- Samantha  Atkins <[EMAIL PROTECTED]> wrote:


Out of the bazillions of possible ways to configure
matter, only a ridiculously tiny fraction are more intelligent than a cockroach. Yet it did not take any grand design effort up front to arrive at a world overrun with beings as intelligent as ourselves.

The four billion years of evolution doesn't count as a
"grand design effort"?

Not in the least. No designer. The point is that good/interesting outcomes occur without heavy conscious design. Remember, the context was a claim that "Friendly" AI could only arise by such conscious design.
So how does your argument show that Friendly AI (or at least relatively Friendly AI) can only be arrived at by intense up-front Friendly design?

Baseline humans aren't Friendly; this has been
thoroughly proven already by the evolutionary
psychologists. If I were to propose an alternative to
CEV that included as many extraneous,
evolution-derived instincts as humans have for an FAI,
you would all (correctly) denounce me as nuts.

Bear with me. We don't know what "Friendly" is beyond "doesn't destroy all humans almost immediately." If you think Friendly means way "nicer" than humans, then you get into even swampier territory. And again, the point wasn't about humans being Friendly in the first place.


For a rather stupid unlimited optimization process
this might be the case, but that is a pretty weak notion of an AGI.

How intelligent the AGI is isn't correlated with how
complicated the AGI's supergoal is.

I would include in intelligence the ability to reason about consequences and implications. You seem to be speaking of something much more robotic, and to my thinking much less intelligent. In particular it seems to be missing much self-reflection.

A very intelligent
AI may have horrendously complicated proximal goals
(subgoals), but they still serve the supergoal even
after the AGI has become vastly more intelligent than
us. And I strongly suspect that even most horrendously
complicated supergoals will result in more
energy/matter/computing power being seen as desirable.

Without being put into context? I would consider that not very intelligent.

If A and B are very unlikely, then major effort
toward A and B is unlikely to bear fruit in time to counter the existential risks, apart from AGI itself, that we are already prone to, especially the risk of having too little effective intelligence without AGI. MNT by itself would be the end of old age, physical scarcity, and most diseases relatively quickly.

It would also be the end of us relatively quickly. If
you can make a supercar with MNT, you can make a
supertank. If you can make an electromagnetic
Earth-based space launch system, you can make an
electromagnetic rail gun, and so forth. To quote
Albert Einstein: "I know not with what weapons WWIII
will be fought, but WWIV will be fought with sticks
and stones."

Here we go with the assertion of the only true way again. It no more follows that MNT - FAI => Certain Doom than it followed that Nuclear Bomb - FAI => Certain Doom. And MNT has a lot more positive potential, like raising the standard of living far above subsistence all over the earth, ending aging, curing all diseases, and so on. I think we might be a little busy with the Golden Age to be in a hurry to do ourselves in.

It would also give us the means, if we
have sufficient intelligence, to combat many other existential risks. But ultimately we are limited by available intelligence. In a faster and more complex world wielding greater and greater powers, having our intelligence capped by the lack of AGI is a very serious existential
threat.

So, er, you agree with me?

I agree about the importance of vastly more intelligence, but I don't agree that pre-computing how to make it "Friendly" is tractable. In the short run a Golden Age phenomenon should follow MNT for a while, if we are lucky, even without AGI.

Serious enough that I believe it is very suboptimal for high-powered, brilliant researchers to be chasing an impossible or
highly unlikely goal.

If we do get powerful, superintelligent AGI, scenario
B is mandatory if we aren't going to be blown to bits,
with scenario A being highly desirable for extra
safety. Even if we incur a 90% chance of death through
nanowar if we have to wait another decade for the
necessary research, it's better than a 99.99999999999%
chance of getting turned into paperclips.

You have no valid means to make such a laughably exact statement, and the "paperclip" argument is way dated.

- s

