On 05/04/2008, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Sat, Apr 5, 2008 at 12:24 AM, William Pearson <[EMAIL PROTECTED]> wrote:
>  > On 01/04/2008, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>  >  >  This question supposes a specific kind of architecture, where these
>  >  >  things are in some sense separate from each other.
>  >
>  >  I am agnostic to how much things are separate. At any particular time
>  >  a machine can be doing less or more of each of these things. For
>  >  example in humans it is quite common to talk of concentration.
>  >
>  >  E.g. "I'm sorry I wasn't concentrating on what you said, could you repeat it."
>  >  "Stop thinking about the girl, concentrate on the problem at hand."
>  >
>  >  Do you think this is meaningful?
>
> It is in some sense, but you need to distinguish levels of
>  description. The implementation of the system doesn't have a
>  "thinking-about-the-girl" component,

Who ever said it did? All I have said is that there need to be the
mechanisms for an economy, not exactly what the economic agents are. I
don't know what they should be; it is most likely body/environment
specific.

> but when the system acquires certain
>  behaviors, you can say that the process that is going on now is a
>  "thinking-about-the-girl" process. If, along with learning this
>  process, you form a mechanism for moving attention elsewhere, you can
>  evoke that mechanism by, for example, sending the phrase "Stop thinking
>  about the girl" to sensory input. But these specific mechanisms are
>  learned; what you need to do as a system designer is provide ways for
>  their formation in the general case.

You also need a way to decide that something should get more attention
than something else. Being told to attend to something is not always
enough.

>  Also, your list contained 'reasoning', 'seeing past experiences and
>  how they apply to the current one', 'searching for new ways of doing
>  things' and 'applying each heuristic'. Only in some architectures will
>  these things be explicit parts of system design.

I don't have them as explicit parts of system design, I have nothing
that people would call a cognitive design at the moment. I am not so
interested in thinking at the moment as building a more *useful*
system (although under some circumstances a thinking system will be a
useful one).

> From my perspective,
>  it's analogous to adding special machine instructions for handling
>  'Internet browsing' to a general-purpose processor, where the browser
>  is just one of thousands of applications that can run on it, and it
>  would be inadequately complex for the processor anyway.

I'd agree; I'm just adding a very loose economy. Any actor is allowed
to exist in an economy; I was just giving some examples of potential
ways to separate things. If they don't fit in your system, ignore them
and add what does fit.

>  You need to ration resources, but these are anonymous modelling
>  resources that don't have inherent 'bicycle-properties' or
>  'language-processing-properties'.

So does whatever allows your system to differentiate between bicycle
and non-bicycle somehow manage not to take up resources when it is not
being used?

> Some of them happen to correlate
>  with things we want them to, by virtue of being placed in contact with
>  sensory input that can communicate the structure of those things.
>  Resources are used to build inference structures within the system
>  that allow it to model hidden processes, which in turn should allow it
>  to achieve its goals.

I'm still not seeing why it should model the right hidden processes.
Stick your system in the real world, which processes (from other
people, the weather, fluid dynamics, itself) should it try and model?
Why do some people have a lot more elaborate models of these things
than other people?

> If there are high-level resource allocation
>  rules to be discovered, these rules will look at goals and formed
>  inference structures and determine that certain changes are good for
>  overall performance.

What happens if two rules conflict? Which rule wins? What happens if
rules can only be discovered experimentally?

> Discussion of such rules needs at least some
>  notions about the makeup of the inference process and its relation to goals.

I'm not creating rules to determine how resources are distributed;
that would not be a free market economy. I agree that the creation of
such rules will come about when the cognitive system is being designed,
but they would be local to each agent.

>  Even worse, goals can be implicit in the inference system itself and be
>  learned starting from a blank slate,

There is no useful system that is a blank slate. All learning systems
have a bias, as you well know, and so carry implicit information about
the world.

I would view an economy as having an implicit goal. The closest thing
to an explicit goal for an agent in my economy is "to survive", but it
is in no way hard binding. To survive, credit is needed to purchase
resources (including memory to stay in and processing power to earn
more credit), and to earn credit you need to please the consumer (the
user or some other utility function). An agent would also need to "pay"
credit to other agents (otherwise it is a pretty poor economy), and this
would have to be decided/learned on the fly as the population of agents
changes.
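To make that credit flow concrete, here is a minimal sketch in Python. Every name and number in it (rent, reward, initial credit) is my own illustrative assumption, not a fixed part of the design: agents pay rent for the memory and processing they occupy, the consumer pays for the output it values most, and agents whose credit runs out simply disappear.

```python
import random

class Agent:
    def __init__(self, name, credit=10.0):
        self.name = name
        self.credit = credit

    def act(self):
        # Placeholder behaviour: how well this agent happens to please
        # the consumer on this cycle.
        return random.random()

def run_economy(agents, utility, rent=1.0, reward=3.0, steps=100):
    """One illustrative market cycle, repeated: agents pay rent for the
    resources they occupy; the consumer (a utility function) pays the
    agent whose output it values most; bankrupt agents are removed."""
    for _ in range(steps):
        if not agents:
            break
        outputs = {a: a.act() for a in agents}
        best = max(outputs, key=lambda a: utility(outputs[a]))
        best.credit += reward              # consumer pays for the best output
        for a in agents:
            a.credit -= rent               # rent on memory/processing power
        agents = [a for a in agents if a.credit > 0]   # "survival"
    return agents
```

Note that survival stays implicit, as described above: no rule says "maximise credit"; agents that happen to please the consumer simply persist.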

> in which case the way resources
>  got distributed describes the goals, and not the other way around.

I think I see what you are saying. I would have to say I am not
explicitly giving rules on how to distribute resources. I give each
agent the ability to purchase resources, and they have to figure out
which are the best resources for them to purchase.
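One way (of many) an agent might do that figuring out is sketched below; the resource names, prices and value estimates are hypothetical, and the explore/exploit split is just one simple learning strategy, not something fixed by the design:

```python
import random

def choose_purchases(budget, prices, value_estimates, explore=0.1):
    """Spend a credit budget on resources: usually buy the resource with
    the best estimated value per credit, but occasionally explore, so the
    value estimates themselves can be learned over time."""
    basket = []
    remaining = budget
    affordable = lambda: [r for r, p in prices.items() if p <= remaining]
    while affordable():
        options = affordable()
        if random.random() < explore:
            pick = random.choice(options)      # try something new
        else:
            # exploit: best estimated return per credit spent
            pick = max(options, key=lambda r: value_estimates[r] / prices[r])
        basket.append(pick)
        remaining -= prices[pick]
    return basket, remaining
```

For example, an agent with 7 credits facing `{'memory': 2, 'cpu': 3}` and a much higher value estimate for cpu would, with exploration off, spend its budget on cpu and keep the change.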

>  In
>  this case the 'ultimate metagoal' can be the formation of coherent models
>  (including models of the system's goals in its model of its own behavior),
>  at which point high-level modularity and goal-directed resource
>  allocation disappear in a puff of mind projection fallacy.

Can you be clearer about what you mean here?

 Will Pearson

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/