> That *still* doesn't work.
>
> 1) "Hard-wired" rules are a pipe dream. It consists of mixing
> mechanomorphism ("machines only do what they're told to do") with
> anthropomorphism ("I wish those slaves down on the plantation would stop
> rebelling"). The only hard-wired level of organization is code, or, in a
> seed AI, physics. Once cognition exists it can no longer be usefully
> described using the adjective "hard-wired". This is like saying you can
> make Windows XP stable by hard-wiring it not to crash, presumably by
> including the precompilation statement #define BUGS 0.
I think Eliezer is being very clear here, and I agree with him. Humans effectively have
hard-wired rules (the heart beats faster when O2 is low) that can't really be overwritten
(yet), but it is a mistake to treat this as an analogous situation.
Hyperintelligences, by the very definition that allows them to become such entities,
will be able to understand and modify themselves far more easily than we can.
>
> 2) Any individual ethic that "cannot be overridden" - if we are speaking
> about a successfully implemented design property of the system, and not a
> mythical hardwiring - will never be any stronger, smarter, or more
> reliable than the frozen goal system of its creator as it existed at the
> time of producing that ethic. In particular, odd things start happening
> when you take an intelligence of order X and try to control it using goal
> patterns that were produced by an intelligence of order << X. You say
> "cannot be overridden", I hear "cannot be renormalized".
>
Agreed again. If a hyperintelligent system can modify itself down to the nano-level,
then in implementing the three laws we are effectively attempting to out-smart it into
not *deciding* to overwrite them (since it certainly has the ability to do so).
We cannot succeed at this, by the definition of hyperintelligence that I think most
of us agree on.
> 3) A society of selfish AIs may develop certain (not really primatelike)
> rules for enforcing cooperative interactions among themselves; but you
> cannot prove for any entropic specification, and I will undertake to
> *disprove* for any clear specification, that this creates any rational
> reason to assign a greater probability to the proposition that the AI
> society will protect human beings.
>
No, you can't guarantee anything. But if AIs are in any way grounded in our
mentality, actively protective tendencies (as opposed to merely non-anti-humanitarian
ones) have a good chance of evolving. This is a good argument for modeling AGIs on our
brains. While humans are not angels by any means, I do believe that we would, *as a
community*, look after lesser beings once we had achieved a comfortable level of
need-fulfillment. I say this by analogy to our current situation, in which developed
countries are starting to show an interest in preserving the environment and
endangered species, even at great expense and inconvenience.
> 4) As for dependence on human suppliers, if you're talking about
> transhumans of any kind, AIs, uploads, what-have-you, transhumans
> dependent on a human economy is a pipe dream. (Order custom proteins from
> an online DNA synthesis and peptide sequencer; build nanotech; total time
> of dependence on human economy, 48 hours.)
Agreed again.
Eliezer, I think your quest for a surefire guarantee against a singularity that
eliminates mankind is unfulfillable. We will be rolling the dice in creating AGIs,
and the best we can do is load them.
-Brad