I really hate to get into this endless discussion. I think everyone agrees that some randomness in AGI decision making is good (e.g. learning through exploration). Also, it does not matter whether the source of randomness is a true random source, such as thermal noise in neurons, or a deterministic pseudorandom number generator, such as one that iterates a cryptographic hash function on a secret seed.
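To make the second option concrete, here is a minimal sketch of that kind of generator (my illustration, not anything specific proposed on this list): iterating SHA-256 on a secret seed yields a stream that is fully deterministic to anyone who knows the seed, yet computationally indistinguishable from random to anyone who does not.

```python
import hashlib

def hash_prng(seed: bytes, n: int) -> list[bytes]:
    """Produce n pseudorandom 32-byte blocks by repeatedly hashing a secret seed."""
    state = seed
    blocks = []
    for _ in range(n):
        state = hashlib.sha256(state).digest()  # next state = H(previous state)
        blocks.append(state)
    return blocks

# Deterministic: the same seed always reproduces the same stream.
assert hash_prng(b"secret seed", 3) == hash_prng(b"secret seed", 3)
```

Whether the "randomness" comes from thermal noise or from a construction like this is irrelevant to an observer who cannot invert the hash or guess the seed.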
I think what is confusing Mike (and I am sure he will correct me) is that the inability of humans to predict their own thoughts (what will I decide to have for dinner later?) is something that must be programmed into an AGI. In fact, there is no other way to program it: a computer with finite memory can only model (and thus predict) a computer with less memory, so no computer can simulate itself. When we introspect on our own brains, we must simplify the model to a probabilistic one, whether or not the brain is actually deterministic.

-- Matt Mahoney, [EMAIL PROTECTED]
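The self-simulation regress can be seen in a toy sketch (illustrative only, not anyone's proposed architecture): an agent whose exact self-model must include the very call that invokes the model never reaches a base case.

```python
def contrarian(predict):
    # An agent that asks a predictor for its forecast, then does the opposite.
    return not predict()

def self_model():
    # A perfect self-model must include this very call, so simulating
    # yourself requires simulating the simulation, and so on forever.
    return contrarian(self_model)

try:
    self_model()
except RecursionError:
    print("exact self-simulation never terminates")
```

Any terminating self-model must therefore be lossy, and a lossy model of a deterministic process looks probabilistic from the inside.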