Eliezer S. Yudkowsky wrote:

> "Important", because I strongly suspect Hofstadterian superrationality
> is a *lot* more ubiquitous among transhumans than among us...
It's my understanding that Hofstadterian superrationality is not generally accepted within the game theory research community as a valid principle of decision making. Do you have any information to the contrary, or some other reason to think that it will be commonly used by transhumans?

About a week ago Eliezer also wrote:

> 2) While an AIXI-tl of limited physical and cognitive capabilities might
> serve as a useful tool, AIXI is unFriendly and cannot be made Friendly
> regardless of *any* pattern of reinforcement delivered during childhood.

I always thought that the biggest problem with the AIXI model is that it assumes that something in the environment is evaluating the AI and giving it rewards, so the easiest way for the AI to obtain its rewards would be to coerce or subvert the evaluator rather than to accomplish any real goals. I wrote a bit more about this problem at http://www.mail-archive.com/everything-list@eskimo.com/msg03620.html.
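The evaluator-subversion worry can be made concrete with a toy reward-maximizer. Everything here (the action names, the reward values, the horizon) is invented for illustration; a real AIXI agent maximizes expected reward over a mixture of environment models, but the same kind of comparison drives its choice.

```python
# Toy illustration of reward-channel subversion. The environment pays a
# modest reward for doing the intended task, but a "subvert" action,
# once successful, lets the agent write itself the maximal reward on
# every remaining step. All numbers are invented for illustration.

HORIZON = 10      # number of decision steps
MAX_REWARD = 10   # reward ceiling once the agent controls the channel
WORK_REWARD = 3   # modest reward for real accomplishment

def total_reward(policy):
    total, subverted = 0, False
    for _ in range(HORIZON):
        if subverted:
            total += MAX_REWARD      # agent now controls its own reward signal
        elif policy == "work":
            total += WORK_REWARD     # reward for doing the intended task
        else:                        # policy == "subvert":
            subverted = True         # one step spent seizing the channel
    return total

print(total_reward("work"), total_reward("subvert"))  # 30 90
```

A pure reward-maximizer choosing between these policies picks subversion whenever the horizon is long enough to repay the one-step cost of seizing the channel, which is the sense in which coercing the evaluator beats accomplishing any real goals.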
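On the superrationality question above, the disagreement is easy to state in the one-shot Prisoner's Dilemma: Hofstadter's superrational players both cooperate, while the standard game-theoretic analysis prescribes mutual defection. A minimal sketch, with illustrative payoffs satisfying the usual T > R > P > S ordering:

```python
# One-shot Prisoner's Dilemma. PAYOFF maps (my_move, their_move) to my
# payoff; the numbers are illustrative, chosen so that T > R > P > S.

PAYOFF = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def nash_move():
    # Standard analysis: defection strictly dominates, i.e. it pays more
    # no matter what the other player does.
    assert PAYOFF[("D", "C")] > PAYOFF[("C", "C")]
    assert PAYOFF[("D", "D")] > PAYOFF[("C", "D")]
    return "D"

def superrational_move():
    # Hofstadter's premise: identical reasoners reach identical choices,
    # so only the symmetric outcomes (C,C) and (D,D) are live options;
    # pick the better diagonal.
    return max("CD", key=lambda m: PAYOFF[(m, m)])

print(nash_move(), superrational_move())  # D C
```

The game-theoretic objection is precisely that the symmetry premise is not a consequence of each player's individual rationality, which is why the principle has not been accepted as a standard solution concept.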