On Sat, Apr 5, 2008 at 12:24 AM, William Pearson <[EMAIL PROTECTED]> wrote:
> On 01/04/2008, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>  >
>  > This question supposes a specific kind of architecture, where these
>  >  things are in some sense separate from each other.
>
>  I am agnostic to how much things are separate. At any particular time
>  a machine can be doing less or more of each of these things. For
>  example in humans it is quite common to talk of concentration.
>
>  E.g. "I'm sorry I wasn't concentrating on what you said, could you repeat it."
>  "Stop thinking about the girl, concentrate on the problem at hand."
>
>  Do you think this is meaningful?

It is in some sense, but you need to distinguish levels of
description. The implementation of the system doesn't have a
"thinking-about-the-girl" component, but once the system acquires
certain behaviors, you can say that the process now running is a
"thinking-about-the-girl" process. If, along with learning this
process, the system forms a mechanism for moving attention elsewhere,
you can evoke that mechanism by, for example, sending the phrase "Stop
thinking about the girl" to its sensory input. But these specific
mechanisms are learned; what you need to provide as a system designer
is a way for them to form in the general case.
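
To make this concrete, here is a toy sketch (all names hypothetical,
not a proposal for a real architecture): the attention shift is a
learned association from sensory input to a "move attention" action,
while the designer provides only the general association machinery.

```python
# Toy sketch: "stop thinking about the girl" as a learned trigger,
# not a built-in component of the system.

class ToyMind:
    def __init__(self):
        self.focus = None    # currently active process
        self.triggers = {}   # learned mapping: phrase -> new focus

    def learn_trigger(self, phrase, new_focus):
        # The mechanism for moving attention is itself learned; the
        # designer supplies only this general association machinery.
        self.triggers[phrase] = new_focus

    def hear(self, phrase):
        # Sensory input can evoke a previously learned attention shift.
        if phrase in self.triggers:
            self.focus = self.triggers[phrase]

mind = ToyMind()
mind.focus = "thinking-about-the-girl"
mind.learn_trigger("Stop thinking about the girl", "problem-at-hand")
mind.hear("Stop thinking about the girl")
print(mind.focus)  # -> problem-at-hand
```

The point of the sketch is that "thinking-about-the-girl" appears only
as a learned content of the system, never as a component of its
implementation.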

Also, your list contained 'reasoning', 'seeing past experiences and
how they apply to the current one', 'searching for new ways of doing
things' and 'applying each heuristic'. Only in some architectures will
these things be explicit parts of the system design. From my
perspective, it's analogous to adding special machine instructions for
'Internet browsing' to a general-purpose processor, when the browser
is just one of thousands of applications that can run on it, and would
be inadequately complex for the processor to handle anyway.

You need to ration resources, but these are anonymous modelling
resources that don't have inherent 'bicycle properties' or
'language-processing properties'. Some of them happen to correlate
with the things we want them to, by virtue of being placed in contact
with sensory input that can communicate the structure of those things.
Resources are used to build inference structures within the system
that allow it to model hidden processes, which in turn should allow it
to achieve its goals. If there are high-level resource allocation
rules to be discovered, these rules will look at the goals and the
formed inference structures and determine that certain changes are
good for overall performance. Discussing such rules requires at least
some notion of the makeup of the inference process and its relation to
goals.
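
As a minimal illustration (again hypothetical, with made-up stream
names and a made-up error model): an allocation rule that hands out
generic units to whichever model currently predicts worst looks only
at performance, never at domain labels, so the units acquire their
apparent 'about-ness' purely through what they end up modelling.

```python
# Toy sketch: anonymous modelling resources allocated by prediction
# error alone. The rule never mentions bicycles or language; it only
# sees which model is currently performing worst.

streams = {"stream_a": 0.9, "stream_b": 0.3}  # current prediction error
units = {"stream_a": 0, "stream_b": 0}        # generic units assigned

for _ in range(10):
    worst = max(streams, key=streams.get)  # highest-error model
    units[worst] += 1                      # grant it one more unit
    streams[worst] *= 0.7                  # assume more resources shrink error

print(units)  # -> {'stream_a': 7, 'stream_b': 3}
```

The resulting distribution of units over streams is an effect of the
errors encountered, not of any built-in assignment of resources to
subject matters.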

Even worse, goals can be implicit in the inference system itself and
be learned starting from a blank slate, in which case the way
resources get distributed describes the goals, and not the other way
around. In this case the 'ultimate metagoal' can be the formation of
coherent models (including models of the system's goals within its
model of its own behavior), at which point high-level modularity and
goal-directed resource allocation disappear in a puff of mind
projection fallacy.

-- 
Vladimir Nesov
[EMAIL PROTECTED]

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com
