Ian Parker wrote:

> What we would want
> in a "friendly" system would be a set of utilitarian axioms.

"If we program a machine for winning a war, we must think well what
we mean by winning."

(Norbert Wiener, Cybernetics, 1948)

> It is also important that AGI is fully axiomatic
> and proves that 1+1=2 by set theory, as Russell did.

Quoting two important statements from

http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms

"Gödel's first incompleteness theorem showed that Principia could not
be both consistent and complete."

and

"Gödel's second incompleteness theorem shows that no formal system
extending basic arithmetic can be used to prove its own consistency."

So, in effect, your AGI is either crippled but safe, or powerful but
potentially behaving differently from your axiomatic intentions.

> We will need morality to be axiomatically defined.

As constraints, possibly. But we can only check the AGI at runtime
(i.e., while it is active) for certain behaviors; we can't prove in
advance whether it will break the constraints or not.

Don't get me wrong: we can do a lot with such formal specifications,
and we should use them where necessary or appropriate, but we have to
understand that the set of guaranteed behaviors is a proper subset of
the set of all possible behaviors the AGI can execute. It's heuristics
in the end.
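
To make the point concrete, here is a minimal sketch of a runtime
constraint monitor (Python; the action and constraint names are
invented for illustration and don't refer to any real AGI interface):

    # Runtime constraint monitor: it can veto an action it observes,
    # but it proves nothing about actions it never gets to observe.
    from typing import Callable, Iterable, Iterator

    Action = dict                           # e.g. {"name": "plan", "cost": 7}
    Constraint = Callable[[Action], bool]   # True means "allowed"

    def run_with_monitor(actions: Iterable[Action],
                         constraints: list[Constraint]) -> Iterator[Action]:
        for action in actions:
            if all(check(action) for check in constraints):
                yield action                # pass the action through
            else:
                raise RuntimeError("constraint violated: %r" % (action,))

    # Example constraint: stay within a (made-up) resource budget.
    def within_budget(action: Action) -> bool:
        return action.get("cost", 0) <= 100

    # A violation is only caught at the moment it is attempted:
    safe = list(run_with_monitor([{"name": "plan", "cost": 7}],
                                 [within_budget]))

The monitor constrains what it actually sees; everything it never sees
stays outside the guaranteed subset.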

> Unselfishness going wrong is in fact a frightening thought. It would in
> AGI be a symptom of incompatible axioms.

Which can happen in a complex system.

> Suppose system A is monitoring system B. If system B's
> resources are being used up, A can shut down processes in A. I talked
> about computer gobbledygook. I also have the feeling that with AGI we
> should be able to get intelligible advice (in NL) about what was going
> wrong. For this reason it would not be possible to overload AGI.

This isn't going to guarantee that systems A, B, etc. behave in all
ways as intended, unless they are all special-purpose systems (here:
narrow AI). If A, B, etc. are AGIs, then this checking is just a
heuristic, not a guarantee or a proof.
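
A sketch of why I call it a heuristic (again Python; the threshold and
the shutdown hook are placeholders, not a real process-control API):

    # "System A monitoring system B" as a simple resource watchdog.
    class Watchdog:
        def __init__(self, memory_limit_mb: float, shutdown) -> None:
            self.memory_limit_mb = memory_limit_mb
            self.shutdown = shutdown        # callback that stops system B

        def poll(self, observed_memory_mb: float) -> None:
            # Purely reactive: the overload is detected only after it
            # occurs, and only along the one dimension A measures.
            if observed_memory_mb > self.memory_limit_mb:
                self.shutdown()

    dog = Watchdog(memory_limit_mb=512.0,
                   shutdown=lambda: print("shutting down system B"))
    dog.poll(600.0)                         # triggers the shutdown callback

Anything B does that never shows up in the measured resource is
invisible to A.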

> In a resource limited society freeloading is the biggest issue.

All societies are and will be constrained by limited resources.

> The fundamental fact about Western crime is that very little of it is
> to do with personal gain or greed.

I'm not so sure this statement is correct. It feels wrong given what I
know about human behavior.

>> Unselfishness gone wrong is a symptom, not a cause. The causes for
>> failed states are different.
>
> Axiomatic contradiction. Cannot occur in a mathematical system.

See above...


