> From: William Pearson [mailto:[EMAIL PROTECTED]]
> Subject: Re: [agi] An AGI Test/Prize
>
> I do not think such things are possible. Any problem that we know
> about and can define, can be solved with a giant look up table, or
> more realistically, calculated by an unlearning TM. Unless you are of
> the opinion that learning is unnecessary for intelligence? In which
> case what you want may be possible.
>
> Any appearance of learning can also be faked by GLUT and unlearning
> TMs, using time as an input. If you want to rigorously define
> intelligence, you will need to look at how the internals change and
> base a definition on that. My current thinking is based on which
> search spaces the system moves through while trying to map input to
> output, and how it makes use of information from the outside to change
> what it does.
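
The GLUT point is easy to make concrete. Below is a minimal Python
sketch, my own illustration rather than anything from the thread: a toy
agent whose answers change with experience, and a look-up table, built
offline over a bounded horizon, that reproduces its behaviour exactly
while learning nothing at run time. The input alphabet, the horizon,
and all names are assumptions made just for this sketch.

    from itertools import product

    def make_learner():
        # A trivial "learning" agent: its reply to an input depends on
        # how many times it has seen that input before, so its
        # observable behaviour changes with experience.
        counts = {}
        def step(x):
            counts[x] = counts.get(x, 0) + 1
            return counts[x]
        return step

    # Build the GLUT offline: for every possible input history up to a
    # bounded horizon, replay the learner and record what it would
    # output. The alphabet and horizon are arbitrary choices here.
    INPUTS, HORIZON = ("a", "b"), 4

    def build_glut():
        table = {}
        for history in product(INPUTS, repeat=HORIZON):
            agent = make_learner()
            for t, x in enumerate(history):
                # Keyed on (past inputs, current input); the time step
                # is implicit as the length of the past-input prefix.
                table[(history[:t], x)] = agent(x)
        return table

    GLUT = build_glut()

    def glut_agent(past, x):
        # No internal change ever happens here: pure table lookup.
        return GLUT[(tuple(past), x)]

    # The two agents are behaviourally indistinguishable on any trace
    # within the horizon, even though only one of them "learns".
    agent, past = make_learner(), []
    for x in ("a", "a", "b", "a"):
        assert agent(x) == glut_agent(past, x)
        past.append(x)
    print("the GLUT reproduces the learner's behaviour exactly")

Of course the table only covers its fixed horizon and its size grows
exponentially with that horizon, which is why the GLUT is a
philosophical device rather than a practical design, and why any
behavioural test over a bounded interaction can in principle be passed
by it.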
Whether or not learning is necessary for intelligence would depend on
the exact definition of intelligence. Even a minimally intelligent
engine would contain internal information. What is the minimal internal
state it would need to start with, if any? Is the system intelligent
before it has received any input? There could be a very simple
mathematical definition of intelligence.

John
