On 9/13/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Then I'm afraid it is hopeless.

Well, the formal approach to Friendly AI is of course hopeless, but only in the same sense that a formal approach to AGI in general is hopeless.

1. It is not possible for a less intelligent entity (human) to predict the behavior of a more intelligent entity.  A state machine cannot simulate another machine with more states than itself.

This is a common enough meme that it's worth debunking again.

We can't in general predict even the behavior of a less intelligent entity. For example, my ability to predict what my brother's cat will do isn't noticeably better than random. Oh, I can predict what it _won't_ do - for example, I can confidently predict it won't fly to the moon or earn a degree in physics. But not what it will do.

Similarly, while I obviously (more or less by definition) can't predict what moves a better player than me will make on the chessboard, I can't predict what moves a worse player will make either. I suspect if anything my prediction record would be worse in the latter case.

I can sometimes predict what an entity more or less intelligent than me will do, if I know its goals: I can predict it will end up achieving them. Note that this becomes more reliable the _more_ intelligent the entity is. Given a chess position with mate in 3, the better the player, the more confident I am about which move they will make.
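To make that concrete, here's a toy model in Python (a sketch with invented numbers, nothing rigorous): treat a player of a given skill as choosing moves with probability proportional to exp(skill * move_value), and watch how predictable the choice becomes as skill rises.

import math

# Toy model: a player of strength `skill` picks a move with probability
# proportional to exp(skill * move_value). All numbers below are made up.
def move_probabilities(move_values, skill):
    weights = [math.exp(skill * v) for v in move_values]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical position: the first move mates in 3, the rest are weak.
values = [10.0, 2.0, 1.0, 0.5]

for skill in [0.0, 0.5, 1.0, 2.0]:
    p_best = move_probabilities(values, skill)[0]
    print("skill=%.1f  P(mating move)=%.3f" % (skill, p_best))

At skill 0 the player moves at random and every move is equally likely; by skill 2 the mating move is a near-certainty. A perfectly random player is the least predictable one.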

As for the state machine argument, consider the following program:

i = 0
while True:      # count upward forever
    i = i + 1
    print(i)

run on a machine with a googolplex bytes of memory at a googolplex operations per second. That machine has far more states than me, yet I can quite confidently predict its actions.
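The trick, of course, is that prediction is not simulation. I don't need to track the machine's states; I only need to know its program. A trivial Python sanity check of that claim, using the loop above:

# Predicting the counter without simulating it: the nth line printed
# (1-indexed) is simply n, no matter how many states the machine has.
def predicted_output(n):
    return n

# Compare against actually running the loop for a few steps.
i = 0
actual = []
while len(actual) < 5:
    i = i + 1
    actual.append(i)

assert actual == [predicted_output(n) for n in range(1, 6)]
print("prediction matches:", actual)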

2. A rigorous proof that an AI will be friendly requires a rigorous definition of "friendly".

Of course. "Friendly" is just a smokescreen; a rigorous proof that an AI will make paperclips requires a rigorous definition of paperclips, which we also don't have. A rigorous proof that an AI will be intelligent requires a rigorous definition of intelligence, which we especially don't have. That entire line of thinking is a dead end.

3. Assuming (2), proving this property runs into Godel's incompleteness theorem for any AI system with a Kolmogorov complexity over about 1000 bits.  See http://www.vetta.org/documents/IDSIA-12-06-1.pdf

Ditto.

4. There is no experimental evidence that consciousness exists.  You believe that it does because animals that lacked an instinct for self-preservation and fear of death were eliminated by natural selection.

There is overwhelming evidence that consciousness exists. Every waking hour, not only do I experience my own consciousness, but other people act as though they were conscious too.

Mind you, I don't believe it will be feasible to create conscious or self-willed (e.g. RPOP) AI in the foreseeable future. But that's another matter.
