Ian Parker wrote:

>> "If we program a machine for winning a war, we must think well what
>> we mean by winning."
>
> I wasn't thinking about winning a war, I was much more thinking about
> sexual morality and men kissing.

If we program a machine for doing X, we must think well what we mean
by X.

Now clearer?

> "Winning" a war is achieving your political objectives in the war. Simple
> definition.

Then define your political objectives. No holes, no ambiguity, no
forgotten cases. Or does the AGI ask for our feedback during the
mission? If so, down to what level of detail?

> The axioms which we cannot prove
> should be listed. You can't prove them. Let's list them and all the
> assumptions.

And then what? Cripple the AGI by applying just those theorems we can
prove? That, of course, excludes all the ones we're uncertain about.
And it's not so much a single theorem that's problematic as a system of
axioms and inference rules, one that changes its properties when you
modify it or that is incomplete from the start.

Example (deliberately plain, just to make clear what I'm talking about):

The natural numbers N are closed under addition. But N is not closed
under subtraction, since n - m falls outside N whenever m > n.
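
A minimal sketch of this in Lean 4, assuming Lean's built-in Nat, which
papers over the gap by truncating rather than leaving it undefined:

-- N is closed under addition, but not under subtraction.
-- Lean's Nat "repairs" this with truncated subtraction: n - m = 0 for m > n.
#eval (2 + 3 : Nat)   -- 5: the sum stays inside N
#eval (2 - 3 : Nat)   -- 0, not -1: the true difference lies outside N
example : (2 - 3 : Nat) = 0 := rfl   -- the convention, provable by computation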

You can prove the theorem that subtracting a positive number from
another number decreases it:

http://us2.metamath.org:88/mpegif/ltsubpos.html

but you can still have a formal system that runs into problems. In the
case of N it's the missing closure, i.e., an undefined area. Now
transfer this simple example to formal systems in general: you have to
verify each formal system as a whole, not just a single theorem. The
behavior of an AGI isn't a single theorem but a system.
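
To make that concrete, a hedged Lean 4 sketch (same assumptions as
above): the cited theorem, transferred verbatim from the reals to N,
becomes false, because the truncation changes the system around it:

-- Over the reals the theorem holds: 0 < b implies a - b < a.
-- Over Nat with truncated subtraction it fails:
-- a = 0, b = 3 gives 0 - 3 = 0, and 0 < 0 is false.
example : ¬ ((0 : Nat) - 3 < 0) := by decide
-- The single theorem was sound; the surrounding system falsifies it.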

> The heuristics could be tested in an off line system.

Exactly. But by definition heuristics are incomplete: the solution
space they explore is smaller than the set of all solutions. There is
no guarantee of the optimal solution, just probabilities < 1, educated
guesses.
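
A toy illustration in Python (hypothetical, just to picture a solution
space smaller than the set of all solutions): a greedy coin-change
heuristic that passes offline tests yet misses the optimum on some inputs.

# Hypothetical sketch: greedy coin change as a heuristic.
# Fast and easy to test offline, but the combinations it considers
# are a strict subset of what exhaustive search would find.
def greedy_change(coins, amount):
    """Always take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result if amount == 0 else None

# With coins {1, 3, 4} and amount 6:
#   greedy finds 4 + 1 + 1 (3 coins), the optimum is 3 + 3 (2 coins).
print(greedy_change([1, 3, 4], 6))   # [4, 1, 1] -- valid, but not optimal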

>>> Unselfishness going wrong is in fact a frightening thought. It would
>>> in AGI be a symptom of incompatible axioms.
>>
>> Which can happen in a complex system.
>
> Only if the definitions are vague.

I bet against this.

> Better to have a system based on "*democracy*" in some form or other.

The rules you mention are goals and constraints. But they are
heuristics that you have to check at runtime.


