On Dec 21, 2007 10:36 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> The problem here seems to be that we can't agree on a useful definition of
> intelligence. As a practical matter, we are interested in an agent meeting
> goals in a specific environment, or a finite set of environments, not all
> possible environments. In the case of environments having bounded space and
> time complexity, Hutter proved there is a computable (although intractable)
> solution, AIXItl. In the case of a set of environments having bounded
> algorithmic complexity where the goal is prediction, Legg proved in
> http://www.vetta.org/documents/IDSIA-12-06-1.pdf that there again is a
> solution. So in either case, there is one agent that does better than all
> others over a finite set of environments, thus an upper bound on intelligence
> by these measures.
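For reference, the measure defined in the Legg and Hutter report cited above
has, in sketch form, the shape

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^\pi_\mu

where \pi is the agent, E the class of environments, K(\mu) the Kolmogorov
complexity of environment \mu, and V^\pi_\mu the expected total reward \pi
earns in \mu (notation follows Legg and Hutter; treating E as a
bounded-complexity class is a paraphrase of the setting described in the
quote, not a claim about the paper's exact statement). Bounding the
complexity of the environments in E is what makes a maximizing agent, and
hence the claimed upper bound, exist over that class.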
Matt,

The problem with referring to these works this way is that the statements
you are trying to justify are fairly obvious and don't need these particular
works to support them. The only difference is the use of particular terms
such as 'intelligence', which is in itself arbitrary and doesn't say
anything. You have to refer to specific mathematical structures.

> If you prefer to use the Turing test rather than a more general test of
> intelligence, then superhuman intelligence is not possible by his
> definition, because Turing did not define a test for it. Humans cannot
> recognize intelligence superior to their own. For example, adult humans
> easily recognized superior intelligence when William James Sidis (see
> http://en.wikipedia.org/wiki/William_James_Sidis ) was reading newspapers
> at 18 months and was admitted to Harvard at age 11, but you would not
> expect children his own age to recognize it. Likewise, when Sidis was an
> adult, most people merely thought his behavior was strange, rather than
> intelligent, because they did not understand it.

I don't 'prefer' any such test; I don't know of any satisfactory solution
to this problem. Intelligence is 'what brains do'; that is all we can say
at the current level of theory, and I suspect that is the end of the story
until we are fairly close to a solution. You can discuss elaborations
within a particular approach, but then again you'd have to provide more
specifics.

> More generally, you cannot test for universal intelligence without
> environments of at least the same algorithmic complexity as the agent
> being tested, because otherwise (as Legg showed) simpler agents could pass
> the same tests.

For the real world this is a useless observation. And no, it doesn't model
your example with humans above; that is just a superficial similarity.
(A toy sketch of the "simpler agents could pass the same tests" point
follows below.)

-- 
Vladimir Nesov
mailto:[EMAIL PROTECTED]
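To make the quoted point concrete, here is a minimal sketch in Python (a toy
example, not from Legg's paper; the test battery and agent names are
invented for illustration): once a test is a fixed object of bounded
complexity, an agent that merely memorizes it is indistinguishable, on that
test, from one that computes its answers.

    # Toy illustration: a fixed test battery of bounded complexity.
    # Each item pairs a "question" string with its expected answer.
    TEST_BATTERY = [
        ("2 + 2", "4"),
        ("reverse 'abc'", "cba"),
        ("next prime after 7", "11"),
    ]

    def computing_agent(question):
        # Stands in for an agent that genuinely computes its answers.
        if question == "2 + 2":
            return str(2 + 2)
        if question == "reverse 'abc'":
            return "abc"[::-1]
        if question == "next prime after 7":
            n = 8
            while any(n % d == 0 for d in range(2, n)):
                n += 1
            return str(n)
        return "?"

    # A lookup table tuned to this exact battery: no general capability,
    # and no more complex than the test itself.
    lookup_agent = dict(TEST_BATTERY).get

    def score(agent):
        # Number of battery items the agent answers correctly.
        return sum(agent(q) == a for q, a in TEST_BATTERY)

    print(score(computing_agent))  # 3
    print(score(lookup_agent))     # 3: the test cannot tell them apart

Scaling the battery up changes nothing so long as it stays fixed and of
bounded complexity, which is the content of the quoted remark.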
