On 9/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
1. It is not possible for a less intelligent entity (human) to predict the
behavior of a more intelligent entity.  A state machine cannot simulate
another machine with more states than itself.
(...)

I think you should add "in the general case" to the statement above.
In particular cases a less intelligent entity is perfectly able to
predict the behavior of a more intelligent one. My cats, for instance,
are less intelligent than me (or so I hope ;-) and they can predict
several of my actions and make decisions based on that: "Lúcio has
finished dinner, so he won't be in the kitchen again tonight; I had
better meow for more food now".

I guess they can predict that from previous cases: the countless
times I finished dinner, turned the kitchen light off and went to my
bedroom. Which, by the way, may hint at a way to predict the
friendliness of an AI in the same cat-like, statistical way:

- Start the AI inside a virtual environment that approximates
reality, but don't tell the AI that it's virtual.
- Observe a significant number of the AI's actions (and reactions) in
that virtual environment.
- If the AI is judged friendly, restart it, this time in a real
environment.
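
Just to make that concrete, here is a rough toy sketch of such a
screening loop in Python. Everything in it is a placeholder I made up
for illustration (the "environment" is a stub, the "AI" acts at
random, the friendliness judgement is a trivial check); it only shows
the shape of the procedure, not a real testbed:

import random

class ToyVirtualEnvironment:
    """Stand-in for a VR world approximating reality; the AI is never told it is virtual."""
    def observe(self):
        # A trivially simple "world state" observation.
        return {"food_bowl_empty": random.random() < 0.5}
    def apply(self, action):
        # A real testbed would update the simulated world here.
        pass

def toy_ai_policy(observation):
    # Stand-in for the AI under test; here it just picks an action at random.
    return random.choice(["help_human", "ignore_human", "harm_human"])

def judged_friendly(action):
    # Stand-in for the (ultimately subjective) human judgement of friendliness.
    return action != "harm_human"

def screen_for_friendliness(n_observations=10000, required_rate=0.999):
    env = ToyVirtualEnvironment()
    friendly = 0
    for _ in range(n_observations):
        action = toy_ai_policy(env.observe())
        env.apply(action)
        if judged_friendly(action):
            friendly += 1
    rate = friendly / n_observations
    # Cat-like statistics: trust is just the frequency observed so far.
    return rate >= required_rate, rate

if __name__ == "__main__":
    passed, rate = screen_for_friendliness()
    print("observed friendly rate: %.4f, restart in real world: %s" % (rate, passed))

With a random policy this toy will of course fail the threshold almost
every time; the point is only to show where the empirical, subjective
judgement of friendliness plugs into the loop.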

Which, by the way, bears on another point from your list:

2. A rigorous proof that an AI will be friendly requires a rigorous definition of 
"friendly".

People (and even science) are often satisfied with proofs that are
empirical rather than rigorous. And I think that the definition of
friendliness may be intrinsically subjective. The VR testbed would
accommodate both empiricism and subjectivism in proving friendliness.
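
Just to put a number on what "empirical" could mean here (my own
back-of-the-envelope figure, with the big assumption that the observed
actions behave like independent trials): if the AI is watched taking N
actions in the VR and none of them is unfriendly, the usual "rule of
three" gives about 3/N as a 95% upper bound on its per-action rate of
unfriendly behavior. A million clean observations would therefore
support a bound of roughly 0.0003% per action, and nothing stronger;
statistical confidence, not proof in the rigorous sense.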
