Kaj,

> (Disclaimer: I do not claim to know the sort of maths that Ben and
> Hutter and others have used in defining intelligence. I'm fully aware
> that I'm dabbling in areas that I have little education in, and might
> be making a complete fool of myself. Nonetheless...)
I'm currently writing my PhD thesis, in which, at Hutter's request, I provide what should be an easy-to-understand explanation of AIXI and the universal intelligence measure. Hopefully this will make the subject more accessible to people outside complexity theory. I'll let this list know when it's out.

> The intelligence of a system is a function of the number of different
> arbitrary goals (functions that the system maximizes as it changes over
> time) it can carry out and the degree to which it can succeed in those
> different goals (how much it manages to maximize the functions in
> question) in different environments, as compared to other systems.
This is essentially what Hutter and I do. We measure the performance of the system in a given environment (which includes the goal) and then sum these performances over environments. The only additional thing is that we weight each term according to the complexity of its environment. We use Kolmogorov complexity, but you could replace this with another complexity measure to get a computable intelligence measure; see, for example, the work of Hernandez, which I reference in my papers on this. Once I've finished my thesis, one thing I plan to do is write a program to test the universal intelligence of agents.

> This would eliminate a thermostat from being an intelligent system,
> since a thermostat only carries out one goal.
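[An editorial aside: in the Legg–Hutter papers referenced in this thread, the complexity-weighted sum described above takes roughly the form]

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

[where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected performance (cumulative reward) of agent $\pi$ in $\mu$. Because $2^{-K(\mu)}$ is largest for simple environments, simple goals dominate the sum; and because $K$ is uncomputable, so is $\Upsilon$, which is why swapping in a computable complexity measure yields a computable variant.]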
Not really; it just means that the thermostat has an intelligence of one on your scale. I see no problem with this. In my opinion the important thing is that an intelligence measure orders things correctly: a thermostat should be more intelligent than a system that does nothing, a small machine learning algorithm smarter still, a mouse smarter still, and so on...

> Humans would be classified as relatively intelligent, since they can be
> given a wide variety of goals to achieve. It also has the benefit of
> assigning narrow-AI systems a very low intelligence, which is what we
> want it to do.
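[An editorial sketch: the ordering argument above, do-nothing < thermostat < learning algorithm, can be illustrated with a toy complexity-weighted sum. The environment names, complexity values, and per-agent scores below are all invented for illustration, and the integer "complexity" is a stand-in for the uncomputable Kolmogorov complexity.]

```python
# Toy illustration (not the authors' code): rank hypothetical agents by a
# weighted sum of per-environment scores, where simpler environments get
# exponentially more weight (2 ** -complexity), echoing the 2^-K(mu) weighting.

environments = {  # name: (complexity proxy in bits, score of each agent in [0, 1])
    "fixed_point":   (2, {"nothing": 0.0, "thermostat": 1.0, "learner": 1.0}),
    "drifting_goal": (5, {"nothing": 0.0, "thermostat": 0.2, "learner": 0.9}),
    "maze":          (9, {"nothing": 0.0, "thermostat": 0.0, "learner": 0.6}),
}

def universal_score(agent):
    # Sum of 2^-complexity(env) * score(agent, env) over all environments.
    return sum(2 ** -k * scores[agent] for k, scores in environments.values())

ranking = sorted(["nothing", "thermostat", "learner"], key=universal_score)
print(ranking)  # ['nothing', 'thermostat', 'learner']
```

[The thermostat scores above zero because it does succeed in the simplest environment, so it outranks the do-nothing agent but not the more general learner, which is exactly the ordering property being discussed.]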
Agreed. If you want to read about the intelligence measure that I have developed with Hutter, check out the following.

A summary set of talk slides:
http://www.vetta.org/documents/Benelearn-UniversalIntelligence-Talk.pdf

Or, for a longer paper:
http://www.vetta.org/documents/ui_benelearn.pdf

Unfortunately the full-length journal paper (50 pages) is still in review, so I'm not sure when that will come out. But my PhD thesis will contain this material, and that should be ready in a few months' time.

Cheers
Shane

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936
