Shane Legg wrote:
> A better idea, I think, would be to test the system on *all* problems
> that can be described in n bits or less (or use a large random sample
> from this space). Then your system is guaranteed to be completely
> general in a computational sense.
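As an aside, the quoted proposal can be made concrete with a small sketch: enumerate every bit string of length n or less as a problem description, or sample uniformly from that space when n is large. This is only an illustration under an assumed encoding; the mapping from bit strings to actual test tasks is left abstract here, and the function names are mine, not Shane's.

```python
import random
from itertools import product

def all_problems(n):
    """Yield every bit string of length 1..n; each string stands in for
    one problem description (the string-to-task mapping is left abstract)."""
    for length in range(1, n + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def sample_problems(n, k, rng=random.Random(0)):
    """Draw k descriptions uniformly from the space of strings of length <= n.
    A length L is chosen with probability proportional to 2**L (there are
    2**L strings of that length), then L random bits are drawn."""
    total = 2 ** (n + 1) - 2          # number of strings of length 1..n
    out = []
    for _ in range(k):
        r = rng.randrange(total) + 2  # uniform over 2 .. 2**(n+1)-1
        length = r.bit_length() - 1   # bit_length weights lengths correctly
        out.append("".join(rng.choice("01") for _ in range(length)))
    return out

problems = list(all_problems(3))
print(len(problems))  # prints 14, i.e. 2 + 4 + 8
```

For any realistic n, exhaustive enumeration blows up as 2^(n+1) - 2, which is why the quoted message suggests a large random sample instead.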
Sounds good to me. Perhaps my motivation in thinking about test problems is to give advance notice to those who may in the future claim to have developed, or to be on the path to developing, human-level AI or AGI. Given my lack of credentials in this arena, though, I would feel a little sheepish professing to be a judge (I doubt many here would put much weight on a fresh Loebner bronze medal).

Still, there may be some benefit in developing Shane's list. The Loebner-style Turing test is fraught with difficulties, but it is the only defined milestone I am aware of (aside from lesser, already-solved problems such as playing grandmaster-level chess). A collection of tests serving as milestones could be useful for guiding, gauging, and judging. Tests of various types and difficulties could occupy the space, and if that space were coherently defined and populated by people respected in the field, we would have a sophisticated means by which to discuss progress. Of course, it would not hurt to give each one a substantial cash prize value :-)

On the subject of whether an AGI is a Turing machine: it struck me that an AGI will change based on its interaction with the physical universe. Its internal state will be continuously changing due to input from the vastly complex real world, making it unknowable to the extent that we don't know everything about what it interacts with. We could predict its behavior only if we knew its complete history right up to the very instant of action, which may be no easier than knowing what a bored human will do in the next five seconds.

Kevin Copple
