I guess I am mundane. I don't spend a lot of time thinking about a definition of intelligence. Goertzel's is good enough for me.
Instead I think in terms of what I want these machines to do -- which includes human-level:

- NL understanding and generation (including discourse level)
- Speech recognition and generation (including appropriate pitch and volume modulation)
- Non-speech auditory recognition and generation
- Visual recognition and real-time video generation
- World-knowledge representation, understanding, and reasoning
- Computer program understanding and generation
- Common sense reasoning
- Cognition
- Context sensitivity
- Automatic learning
- Intuition
- Creativity
- Inventiveness
- Understanding human nature and human desires and goals (not expecting full human-level here)
- Ability to scan and store and, over time, convert and incorporate into learned deep structure vast amounts of knowledge, including ultimately all available recorded knowledge

To do such thinking I have come up with a fairly uniform approach to all these tasks, so I guess you could call that approach something approaching "a theory of intelligence". But I mainly think of it as a theory of how to get certain really cool things done.

I don't expect to get everything listed all at once, but, barring some major setback, this will probably all happen (with perhaps a partial exception on the last item) within twenty years, and with the right people getting big money substantially all of it could happen in ten.

In addition, as we get closer to the threshold, I think intelligence (at least from our perspective) should include:

- helping make individual people, human organizations, and human governments more intelligent, happy, cooperative, and peaceful
- helping create a transition into the future that is satisfying for most humans

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]

-----Original Message-----
From: John G. Rose [mailto:[EMAIL PROTECTED]]
Sent: Saturday, October 20, 2007 1:27 PM
To: agi@v2.listbox.com
Subject: RE: [agi] An AGI Test/Prize

Interesting background on some thermodynamics history. But basic definitions of intelligence -- not talking about reinventing particle physics here -- a basic, workable definition; not rigorous mathematical proof, just something simple. AI, AGI, c'mon, not asking for too much. In my mind it is not looking that sophisticated at the atomic level, and it seems like it is VERY applicable for implementation, if not required for testing. Though Hutter and Legg are apparently working diligently on this stuff and have a lot of papers.

John

> I largely agree. It's worth pointing out that Carnot published "Reflections on the Motive Power of Fire" and established the science of thermodynamics more than a century after the first working steam engines were built. That said, I opine that an intuitive grasp of some of the important elements in what will ultimately become the science of intelligence is likely to be very useful to those inventing AGI.

Yeah, most certainly....

> However, an intuitive grasp -- and even a well-fleshed-out qualitative theory supplemented by heuristic back-of-the-envelope calculations and prototype results -- is very different from a defensible, rigorous theory that can stand up to the assaults of intelligent detractors.... I didn't start seriously trying to design & implement AGI until I felt I had a solid intuitive grasp of all related issues. But I did make a conscious choice to devote more effort to utilizing my intuitive grasp to try to design and create AGI, rather than to creating better general AI theories.... Both are worthy pursuits, and both are difficult. I actually enjoy theory better. But my sense is that the heyday of AGI theorizing is gonna come after AGI experimentation has progressed a good bit further than it has today...
-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=8660244&id_secret=55773358-059800