Shane,
I think the topic deserves far more time and thought in AI than it currently gets.
Fully agree. On this topic the situation in mainstream AI is even worse than in the new AGI community. Will you write something about it for AGI-08?
If an optimisation algorithm searches only some of a solution space (because of a lack of computing power to search all of it) and then returns a solution, does this system have some intelligence according to your definition?
It depends. If it always searches the same part of the space, it isn't intelligent; if it searches different parts of the space in a context- and experience-sensitive manner, it is intelligent; and if it not only searches among listed alternatives but also finds new alternatives, it is much more intelligent.
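To make the distinction concrete, here is a minimal Python sketch of my own (the region structure, scoring function, and epsilon-greedy rule are all toy assumptions, not anyone's actual system):

```python
import random

def fixed_partial_search(candidates, score, budget):
    """Examines the same initial slice of the space every time:
    no sensitivity to context or experience."""
    return max(candidates[:budget], key=score)

def adaptive_partial_search(regions, score, budget, eps=0.1):
    """Spends the same limited budget, but steers later samples toward
    regions whose earlier samples scored well (epsilon-greedy heuristic).
    Here `regions` is a list of tuples of candidate solutions."""
    stats = {r: [0.0, 0] for r in regions}  # region -> [total score, samples]
    best, best_score = None, float("-inf")
    for _ in range(budget):
        if random.random() < eps:
            region = random.choice(regions)  # occasionally explore at random
        else:                                # otherwise exploit the best average so far
            region = max(regions, key=lambda r: stats[r][0] / (stats[r][1] or 1))
        candidate = random.choice(region)
        s = score(candidate)
        stats[region][0] += s
        stats[region][1] += 1
        if s > best_score:
            best, best_score = candidate, s
    return best
```

The first function gives a fixed-region answer no matter what; the second redirects its limited search in the light of experience, which is the minimal sense in which I would call a partial search "intelligent".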
Both AIXI and universal intelligence are too far away from reality to be directly implemented. I think we all agree on that. In their current form their main use is for theoretical study.
I understand your motivation. You hope AIXI will serve a role for AI like the one the Turing Machine plays for computer science --- the TM is surely unrealistic, yet nobody denies its usefulness as a theoretical construction. I wish AIXI could evolve into such a role, though in its current form I'm afraid its assumptions are too idealized even for this purpose --- if a boundary is too loose, it will have no actual impact on the systems designed within it.
In the case of universal intelligence I think there is some hope, since the C-Test is based on quite similar ideas and has been used to construct an intelligence test with sensible results. Sometime after my thesis I'm going to code up an intelligence test based on universal intelligence and see how well various AI algorithms perform.
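So that I'm sure we picture the same construction, here is a toy Python sketch of how I imagine such a test being approximated (the Environment/Agent interfaces, the finite environment sample, and the length-based stand-in for the 2^-K(env) weight are all my own simplifications, not your definition):

```python
def universal_intelligence_estimate(agent_factory, environments, steps=100):
    """Toy approximation of a universal-intelligence score: total reward
    earned in each sampled environment, weighted by 2^-complexity, where
    `complexity` stands in for the Kolmogorov complexity K(env)."""
    total = 0.0
    for env in environments:       # each env offers reset(), step(action), complexity
        agent = agent_factory()    # a fresh agent instance per environment
        observation, reward = env.reset(), 0.0
        earned = 0.0
        for _ in range(steps):
            action = agent.act(observation, reward)
            observation, reward = env.step(action)
            earned += reward
        total += 2.0 ** (-env.complexity) * earned  # simpler environments count more
    return total
```

If that is roughly the shape of it, then my question below concerns the `reward` value that `env.step` hands back.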
I'll be very interested in your progress. Besides the problem of resources, another issue is the form of the reward. I see that you have tried hard to establish a formal model covering all AI systems, which I appreciate. However, besides the input/output of the system, you assume that the rewards to be maximized come from the environment in numerical form, which is an assumption not widely accepted outside the reinforcement learning community. For example, NARS may interpret certain input as reward and certain other input as punishment, but that depends on many factors inside the system and is not objective at all. For this kind of system (and I'm sure NARS isn't the only one), how can your evaluation framework be applied?

Cheers,

Pei