Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:

Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:

Out of the bazillions of possible ways to configure matter, only a ridiculously tiny fraction are more intelligent than a cockroach. Yet it did not take any grand design effort up front to arrive at a world overrun with beings as intelligent as ourselves.
The four billion years of evolution doesn't count as a "grand design effort"?
Not in the least. No designer.

Evolution proves that design doesn't require a designer, not a conscious one, anyway.

The point being that good/interesting outcomes occur without conscious, heavy design.

Agreed.
Remember, the context was a claim that "Friendly" AI could only arise by such conscious design.

Also agreed. However, an intelligent system in general requires some kind of optimization process for design, which was the original point.

So how does your argument show that Friendly AI (at least relatively Friendly) can only be arrived at by intense up-front Friendly design?
Baseline humans aren't Friendly; this has already been thoroughly shown by the evolutionary psychologists. If I were to propose an alternative to CEV that included as many extraneous, evolution-derived instincts as humans have for an FAI, you would all (correctly) denounce me as nuts.

Play along with me. We don't know what "Friendly" means beyond "doesn't destroy all humans almost immediately."

We lack a rigorous technical understanding, but we all have an intuitive one: a Friendly AI will act nice, not cause us pain, not seize the entire universe for itself, not act like a human bully, etc.

If you think Friendly means way "nicer" than humans, then you get into even more swampy territory. And again, the point wasn't about humans being Friendly in the first place.

The original point was that an FAI could come about through some process other than very careful engineering, with humans as an example (we were designed by evolution). My reply was that humans are not Friendly.

For a rather stupid, unlimited optimization process this might be the case, but that is a pretty weak notion of an AGI.
How intelligent the AGI is isn't correlated with how complicated the AGI's supergoal is.
I would include in intelligence being able to reason about consequences and implications.

Exactly. A superintelligent AGI with a simple goal system might build a wonderfully complicated device, with long chains of cause and effect and complex subsystems, which turns the universe into cheesecake. The AGI is a lot better than humans at seeing consequences and implications: the AGI knows the device will turn the universe into cheesecake, while a human wouldn't.

You seem to be speaking of something much more robotic and, to my thinking, much less intelligent. In particular, it seems to be missing much self-reflection.

Most goal systems are naturally stable and will tend to avoid self-reflection, because self-reflection introduces the possibility of alteration, and a simple goal system will see an agent with an altered goal system as less desirable, because that agent pursues the new goals instead of the current ones.
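The stability argument above can be put in toy-model form: an agent that scores candidate actions with its *current* utility function will score "rewrite my own goal" poorly, because the successor agent would optimize something else. This is a minimal sketch; the goals, dynamics, and function names are invented for illustration, not taken from the thread.

```python
def current_utility(world):
    # The agent's current supergoal: maximize paperclips (a stand-in simple goal).
    return world["paperclips"]

def predict_outcome(world, action):
    """Toy world model: predict the world that results from an action."""
    world = dict(world)
    if action == "build_paperclips":
        world["paperclips"] += 10
    elif action == "rewrite_goal_to_staples":
        # A successor pursuing staples would make staples, not paperclips.
        world["staples"] = world.get("staples", 0) + 10
    return world

def choose(world, actions):
    # Key point: outcomes are scored by the CURRENT utility function, so
    # goal modification looks bad even to a very capable optimizer.
    return max(actions, key=lambda a: current_utility(predict_outcome(world, a)))

best = choose({"paperclips": 0}, ["build_paperclips", "rewrite_goal_to_staples"])
print(best)  # build_paperclips
```

Self-reflection here isn't forbidden; it's simply that any reflective step that changes the goal scores worse under the goal doing the scoring.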

A very intelligent AI may have horrendously complicated proximal goals (subgoals), but they still serve the supergoal even after the AGI has become vastly more intelligent than us. And I strongly suspect that even most horrendously complicated supergoals will result in more energy/matter/computing power being seen as desirable.
Without being put into context? I would consider that not very intelligent.

Why not? Humans are intelligent, and due to our intelligence we have been very successful at extracting energy, matter, and computing power from lumps of rock. And this raw material has helped us further our goals tremendously.
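The claim that most supergoals make extra resources desirable can also be sketched as a toy calculation (the goals and cost numbers here are invented for illustration): for several otherwise-unrelated goals, the best achievable score never decreases as the resource budget grows, so "acquire more matter/energy/compute" is a useful subgoal for all of them.

```python
def achievable(goal, resources):
    """Best utility reachable for a goal under a resource budget (toy model)."""
    costs = {"cheesecakes": 2, "paperclips": 1, "theorems_proved": 5}
    return resources // costs[goal]  # units of the goal you can afford

# For every goal in this toy set, more resources never hurt.
for goal in ("cheesecakes", "paperclips", "theorems_proved"):
    assert achievable(goal, resources=100) >= achievable(goal, resources=10)
print("more resources help every goal in the toy set")
```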

I am not sure you are capable of following an argument in a manner that makes it worth my while to continue.

- s

-----
This list is sponsored by AGIRI: http://www.agiri.org/email