Jim Bromer wrote:
Loosemore said,
"It is very important to understand that the paper I wrote was about the
methodology of AGI research, not about specific theories/models/systems
within AGI. It is about the way that we come up with ideas for systems
and the way that we explore those systems, not about the content of
anyone's particular ideas."
And Abram said,
"A revised version of my argument would run something like this. As the
approximation problem gets more demanding, it gets more difficult to
devise logical heuristics. Increasingly, we must rely on intuitions
tested by experiments. There then comes a point when making the
distinction between the heuristic and the underlying search becomes
unimportant; the method is all heuristic, so to speak. At this point
we are simply using "messy" methods,"
I wondered whether Abram was talking about the way an AI program should
work, the way research into AI should work, or both.
Jim Bromer
I interpreted him (see parallel post) to be referring still to the
question of how to deal with planning systems, where there is a
formalism (the logic substructure) which cannot be allowed to run its
methods to completion (because they would take too long) and which
therefore has to use "approximation methods", or heuristics, to guess
which are the most likely best planning choices. When the system is
required to do more real-world-type performance (as in an AGI, rather
than a narrow AI), its behavior will be dominated by the heuristics.
He then went on to talk about methodology: do we just use intuitions to
pick heuristics, or do we make the methodology more systematic by
engaging in automatic searches of the space of possible heuristics?
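To make that contrast concrete, here is a minimal sketch (my own toy illustration, not anything from the thread) of what "an automatic search of the space of possible heuristics" could look like in the planning setting Abram describes: a weighted A* planner on an obstacle-free grid, where the "space of heuristics" is reduced to a set of candidate heuristic weights, and we pick the weight empirically rather than by intuition. All function names and parameters here are assumptions for illustration.

```python
import heapq

def astar(start, goal, weight):
    """Weighted A* on a 20x20 obstacle-free grid.
    f = g + weight * manhattan(n, goal); returns (path_cost, nodes_expanded)."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(weight * h(start), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g, expanded
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry
        expanded += 1
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < 20 and 0 <= nxt[1] < 20:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + weight * h(nxt), ng, nxt))
    return None, expanded

def pick_heuristic(candidates=(0.0, 0.5, 1.0, 2.0)):
    """'Automatic search of the heuristic space': test each candidate
    weight empirically, keep the one that expands fewest nodes while
    still finding a cheapest path."""
    results = {w: astar((0, 0), (19, 19), w) for w in candidates}
    optimal = min(cost for cost, _ in results.values())
    return min((w for w in results if results[w][0] == optimal),
               key=lambda w: results[w][1])
```

The point of the sketch is only that the choice of heuristic is itself made by a systematic, measured search over candidates, rather than by a designer's intuition.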
My perspective on that question would back up one step: if it is a
complex system we are dealing with, we should have been using
systematic, automatic searches of the design space BEFORE, when we were
choosing whether or not to do planning with a Logic+Heuristics design!
But of course, that would be wildly, extravagantly infeasible. So,
instead, I propose to start from a basic design that is as similar as
possible to the human design, and then do our systematic, automatic
search (of the space of mechanism-designs) in an outward direction from
that human-cognition baseline. If intelligence involves even a small
amount of complexity, it could well be that this is the only feasible
way to ever get an intelligence up and running.
Treat it, in other words, as a calculus of variations problem.
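As a loose illustration of "searching outward from a human-cognition baseline" (my own toy stand-in, not a proposal from this thread): encode a candidate design as a parameter vector, start at a fixed baseline, and accept only local variations that improve a hypothetical evaluation function. The encoding, the `fitness` function, and all names here are assumptions.

```python
import random

def explore_from_baseline(baseline, fitness, step=0.1, iters=200, seed=0):
    """Local search outward from a fixed baseline design.

    `baseline` is a toy stand-in for a human-like design, encoded as a
    parameter vector; `fitness` is a hypothetical evaluation of a
    candidate design. Only improving moves are accepted, so the search
    stays anchored near the baseline unless improvement pulls it away.
    """
    rng = random.Random(seed)
    current, best = list(baseline), fitness(baseline)
    for _ in range(iters):
        candidate = [x + rng.uniform(-step, step) for x in current]
        score = fitness(candidate)
        if score > best:
            current, best = candidate, score
    return current, best
```

The analogy to a calculus-of-variations problem is only that one explores small perturbations of a known-good starting configuration rather than the whole design space at once.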
Richard Loosemore.
-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/