Bill Hibbard wrote:
I never said perfection, and in my book I make it clear that
the task of a super-intelligent machine learning behaviors
to promote human happiness will be very messy. That's why
it needs to be super-intelligent.

The problem with laws is that they are inevitably ambiguous.
They are analogous to the expert system approach to AI, which
cannot cope with the messiness of the real world. Human laws
require intelligent judges to resolve their ambiguities. Who
will supply the intelligent judgement for applying laws to
super-intelligent machines?

I agree wholeheartedly that the stakes are high, but I think
the safer approach is to build ethics into the fundamental
driver of super-intelligent machines, which will be their
reinforcement values.
*takes deep breath*

You're thinking of the logical entailment approach, and the problem with that, as it appears to you, is that no simple set of built-in principles can entail everything the SI needs to know about ethics - right?

Like, the complexity of everything the SI needs to do is some very high quantity, while the complexity of the principles that are supposed to entail it is small, right?

If SIs have behaviors that are reinforced by a set of values V, what is the internal mechanism that an SI uses to determine the amount of V? Let's say that the SI contains an internal model of the environment, which I think is what you mean by temporal credit assignment, et cetera, and the SI has some predicate P that applies to this internal model and predicts the amount of "human happiness" that exists. Or perhaps you weren't thinking of a system that complex; perhaps you just want a predicate P that applies to immediate sense data, like the human sense of pleasurable tastes.
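To make the two options concrete, here is a minimal sketch in Python (my own illustration, not anything Bill has proposed; every name in it - SenseData, WorldModel, happiness_from_senses, and so on - is hypothetical) of the two places the predicate P could live: over immediate sense data, or over the agent's internal model of the environment.

# Hypothetical sketch of two reward mechanisms for a reinforcement learner:
# a predicate P over raw sense data versus a predicate P over the agent's
# internal model of the environment. All names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SenseData:
    """Immediate observations, e.g. a crude proxy like detected smiles."""
    smile_count: int = 0


@dataclass
class WorldModel:
    """The agent's learned internal model of the environment, the thing
    temporal credit assignment operates over: latent state, not raw input."""
    predicted_wellbeing: Dict[str, float] = field(default_factory=dict)


def happiness_from_senses(obs: SenseData) -> float:
    # Predicate P over raw sense data: cheap to specify, but easily
    # satisfied by things that are not human happiness.
    return float(obs.smile_count)


def happiness_from_model(model: WorldModel) -> float:
    # Predicate P over the internal model: closer to what we mean by
    # "human happiness", but now P has to correctly interpret a rich,
    # learned representation - which is where the complexity hides.
    return sum(model.predicted_wellbeing.values())


def reinforcement_step(value_estimate: float, reward: float,
                       learning_rate: float = 0.1) -> float:
    """One value update: whatever P returns as reward is what actually
    shapes the agent's behavior."""
    return value_estimate + learning_rate * (reward - value_estimate)


if __name__ == "__main__":
    obs = SenseData(smile_count=3)
    model = WorldModel(predicted_wellbeing={"alice": 0.7, "bob": 0.4})

    v = 0.0
    v = reinforcement_step(v, happiness_from_senses(obs))
    v = reinforcement_step(v, happiness_from_model(model))
    print(f"value estimate after two updates: {v:.3f}")

The point of the sketch is only this: the sense-data version of P is short but trivially gameable, while the model-based version pushes all of the complexity into how P reads the learned representation - which is exactly the quantity I'm asking you to estimate.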

What is the complexity of the predicate P?

I mean, I'm sure it seems very straightforward to you to determine when "human happiness" is occurring...

--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
