[agi] The resource allocation problem

2008-04-07 Thread Jim Bromer
Will Pearson wrote:
The resource allocation problem and why it needs to be solved first
...
Is there one right way of deciding these things when you have limited
resources? At time A you might want more reasoning done (while in a
debate) and at time B more visual processing (while driving).

An intelligent system needs to solve this problem for itself, as only
it will know what is important for the problems it faces. That is, it
is a local problem. It also requires resources itself. If resources
are tight, then only very approximate methods of determining how many
resources to spend on each activity can be afforded.

Due to this, the resource management should not be algorithmic, but
free to adapt to the amount of resources at hand. I'm intent on an
economic solution to the problem, where each activity is an economic
actor.
===

I don't know exactly what Will meant when he said that resource
management should not be algorithmic, but I feel that resource
management can be done logically, since the management of allocating
and combining resources, and then finding them when needed, can be
separated from the conceptual complexities that are necessary for
advanced AI.  While the initial management decisions about resources
would depend on these conceptual complexities, this dependence does
not have to subsequently reflect every dynamic of that complexity.

A better way to say this might be that Will's idea of resource
management can be seen as comprising two distinct parts: one that is
fully integrated into the conceptual complexity necessary for such a
system, and another that manages the resources once the conceptual
interrelations are decided on.  This second part can be logical.
Although many of these interrelations are going to be changing, the
logical part can take over again after the conceptual relations are
decided on.

The only problem with this analysis is that current logical methods of
indexing are probably not adequate for the kind of complexity that we
are currently thinking of, because the Frame Problem is relevant to
the problem of quickly finding relevant data in a massive collection
of distributed data that is highly associative and interrelated.  You
can add indexing to alleviate the burden of the problem, but this only
works up to the point where the cost of maintaining the additional
indexing overtakes the decrease in lookup complexity that the indexing
can offer, and this point can be reached pretty quickly.
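
To make that trade-off concrete, here is a toy cost model (entirely
illustrative: the constants and the geometric lookup speed-up are my
assumptions, not anything measured or claimed in the thread). Each
added index cuts average lookup cost but must itself be maintained on
every write, so total cost falls and then rises again:

```python
# Toy model of the indexing trade-off: each extra index speeds up
# lookups but must be maintained on every write. All constants are
# illustrative, not taken from the discussion.

def total_cost(k, lookups=1000, writes=200,
               base_lookup=16.0, index_upkeep=1.5):
    """Total cost of a workload when k indexes are maintained."""
    lookup_cost = lookups * base_lookup / (2 ** k)   # lookups get cheaper
    upkeep_cost = writes * index_upkeep * k          # every write pays for all k
    return lookup_cost + upkeep_cost

costs = {k: total_cost(k) for k in range(8)}
best_k = min(costs, key=costs.get)
print(best_k, costs[best_k])   # → 5 2000.0
```

Up to best_k each new index is a net win; past it, the upkeep
overtakes the lookup savings, which is the point at which adding more
indexing stops helping.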
Jim Bromer

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] The resource allocation problem

2008-04-05 Thread Richard Loosemore

William Pearson wrote:

On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:

On Tue, Apr 1, 2008 at 6:30 PM, William Pearson [EMAIL PROTECTED] wrote:
  The resource allocation problem and why it needs to be solved first
 
   How much memory and processing power should you apply to the following 
things?:
 
   Visual Processing
   Reasoning
   Sound Processing
   Seeing past experiences and how they apply to the current one
   Searching for new ways of doing things
   Applying each heuristic
 


This question supposes a specific kind of architecture, where these
 things are in some sense separate from each other.


I am agnostic to how much things are separate. At any particular time
a machine can be doing less or more of each of these things. For
example in humans it is quite common to talk of concentration.

E.g. "I'm sorry I wasn't concentrating on what you said, could you repeat it."
"Stop thinking about the girl, concentrate on the problem at hand."

Do you think this is meaningful?


I recently wrote a paper (with Trevor Harley) which included some 
comments of relevance to this issue.


This is known as "attention" rather than "concentration", and there is a 
huge literature on it (in the cognitive psychology area).


Our recent paper was analyzing a neuroscience claim that related to the 
location of attentional mechanisms in the brain.  One thing that I 
pointed out was that people differ greatly in their interpretation of 
exactly what kind of mechanism is involved, and some of them (like me) 
see attention as being more of a distributed, flexible "spotlight" 
rather than a specific place.


That means that if my proposal is correct then resources are extremely 
flexible for some mechanisms like attention.


But that does not mean it is flexible for everything.

In the above list of different mechanisms I would say that any decision 
about resource allocation cannot be made at this date.  I think we need 
to discover this empirically, by finding out what the actual performance 
limitations are.  Of course this is easy for me to say, given that my 
architecture involves extreme flexibility anywhere.



Richard Loosemore


If they are but
 aspects of the same process, with modalities integrated parts of
 reasoning, resources can't be rationed on such a high level. Rather
 underlying low-level elements should be globally restricted and
 differentiate to support different high-level processes (so that
 certain portion of them gets mainly devoted to visual processing,
 high-level reasoning, language, etc.).



It boils down to the same thing. If more low-level elements
(neuron-equivalents?) are devoted to a task, it is in general being
given more potential memory, processing power and bandwidth. How is
that decided? In some connectionist systems, I would associate it with
the stability-plasticity problem described here in section 6.

http://www.cns.bu.edu/Profiles/Grossberg/Gro1987CogSci.pdf

The nutshell is: if you have new learning, should it overwrite the
old? If so, which information? If not, please can I have your infinite
memory system.

Assuming you are implementing this on a normal computer you can easily
see that it all boils down to these resources.

In the brain not all elements can work at peak effectiveness at the
same time (consider the perils of driving while on the mobile *). So
even if you had devoted the elements, further decisions need to be
made at run time. Different regions of elements become the resources
to be rationed. For example, short-term memory becomes a resource you
have to decide how to use.

Also, oxygen would seem to be a resource in short supply within the brain.

The question remains the same: how should a system choose what to do
or what to be?

  Will

* http://www.bmj.com/cgi/content/abstract/bmj.38537.397512.55v1



Re: [agi] The resource allocation problem

2008-04-05 Thread Vladimir Nesov
On Sat, Apr 5, 2008 at 12:24 AM, William Pearson [EMAIL PROTECTED] wrote:
 On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
  
   This question supposes a specific kind of architecture, where these
things are in some sense separate from each other.

  I am agnostic to how much things are separate. At any particular time
  a machine can be doing less or more of each of these things. For
  example in humans it is quite common to talk of concentration.

  E.g. "I'm sorry I wasn't concentrating on what you said, could you repeat it."
  "Stop thinking about the girl, concentrate on the problem at hand."

  Do you think this is meaningful?

It is in some sense, but you need to distinguish levels of
description. The implementation of the system doesn't have a
thinking-about-the-girl component, but when the system acquires
certain behaviors, you can say that the process going on now is a
thinking-about-the-girl process. If, along with learning this process,
you form a mechanism for moving attention elsewhere, you can evoke
that mechanism by, for example, sending the phrase "Stop thinking
about the girl" to sensory input. But these specific mechanisms are
learned; what you need to do as a system designer is provide ways for
their formation in the general case.

Also, your list contained 'reasoning', 'seeing past experiences and
how they apply to the current one', 'searching for new ways of doing
things' and 'applying each heuristic'. Only in some architectures will
these things be explicit parts of the system design. From my
perspective, it's analogous to adding special machine instructions for
handling 'Internet browsing' to a general-purpose processor, where the
browser is just one of thousands of applications that can run on it,
and it would be inadequately complex for the processor anyway.

You need to ration resources, but these are anonymous modelling
resources that don't have inherent 'bicycle-properties' or
'language-processing-properties'. Some of them happen to correlate
with the things we want them to, by virtue of being placed in contact
with sensory input that can communicate the structure of those things.
Resources are used to build inference structures within the system
that allow it to model hidden processes, which in turn should allow it
to achieve its goals. If there are high-level resource allocation
rules to be discovered, these rules will look at the goals and the
formed inference structures and determine that certain changes are
good for overall performance. Discussion of such rules needs at least
some notion of the makeup of the inference process and its relation to
goals.

Even worse, goals can be implicit in the inference system itself and
be learned starting from a blank slate, in which case the way
resources get distributed describes the goals, and not the other way
around. In this case the 'ultimate metagoal' can be the formation of
coherent models (including models of the system's goals in its model
of its own behavior), at which point high-level modularity and
goal-directed resource allocation disappear in a puff of mind
projection fallacy.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The resource allocation problem

2008-04-05 Thread William Pearson
On 05/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Sat, Apr 5, 2008 at 12:24 AM, William Pearson [EMAIL PROTECTED] wrote:
   On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:


This question supposes a specific kind of architecture, where these
  things are in some sense separate from each other.
  
I am agnostic to how much things are separate. At any particular time
a machine can be doing less or more of each of these things. For
example in humans it is quite common to talk of concentration.
  
E.g. "I'm sorry I wasn't concentrating on what you said, could you repeat it."
"Stop thinking about the girl, concentrate on the problem at hand."
  
Do you think this is meaningful?


 It is in some sense, but you need to distinguish levels of
  description. Implementation of system doesn't have a
  thinking-about-the-girl component,

Who ever said it did? All I have said is that there need to be the
mechanisms for an economy, not exactly what the economic agents are. I
don't know what they should be. It is body/environment specific, most
likely.

 but when system obtains certain
  behaviors, you can say that this process that is going now is a
  thinking-about-the-girl process. If, along with learning this
  process, you form a mechanism for moving attention elsewhere, you can
  evoke that mechanism by, for example, sending a phrase Stop thinking
  about the girl to sensory input. But these specific mechanisms are
  learned, what you need as a system designer is provide ways of their
  formation in general case.

You also need a way to decide that something should get more attention
than something else. Being told to attend to something is not always
enough.

  Also, your list contained 'reasoning', 'seeing past experiences and
  how they apply to the current one', 'searching for new ways of doing
  things' and 'applying each heuristic'. Only in some architectures will
  these things be explicit parts of system design.

I don't have them as explicit parts of the system design; I have
nothing that people would call a cognitive design at the moment. I am
not so interested in thinking at the moment as in building a more
*useful* system (although under some circumstances a thinking system
will be a useful one).

 From my perspective,
  it's analogous to adding special machine instructions for handling
  'Internet browsing' in general-purpose processor, where browser is
  just one of thousands of applications that can run on it, and it would
  be inadequately complex for processor anyway.

I'd agree; I'm just adding a very loose economy. Any actor is allowed
to exist in the economy; I was just giving some examples of potential
ways to separate things. If they don't fit in your system, ignore them
and add what does fit.

  You need to ration resources, but these are anonymous modelling
  resources that don't have inherent 'bicycle-properties' or
  'language-processing-properties'.

So does whatever allows your system to differentiate between bicycle
and non-bicycle somehow manage to not take up resources when not being
used?

 Some of them happen to correlate
  with things we want them to, by virtue of being placed in contact with
  sensory input that can communicate structure of those things.
  Resources are used to build inference structures within the system
  that allow it to model hidden processes, which in turn should allow it
  to achieve its goals.

I'm still not seeing why it should model the right hidden processes.
Stick your system in the real world: which processes (from other
people, the weather, fluid dynamics, itself) should it try to model?
Why do some people have much more elaborate models of these things
than other people?

 If there are high-level resource allocation
  rules to be discovered, these rules will look at goals and formed
  inference structures and determine that certain changes are good for
  overall performance.

What happens if two rules conflict? Which rule wins? What happens if
rules can only be discovered experimentally?

 Discussion of such rules needs at least some
  notions about makeup of inference process and its relation to goals.

I'm not creating rules to determine how resources are distributed.
That would not be a free market economy. I agree that the creation of
the rules will come about when the cognitive system is being designed,
but they would be local to each agent.

  Even worse, goals can be implicit in inference system itself and be
  learned starting from a blank slate,

There is no useful system that is a blank slate. All learning systems
have bias, as you well know, and so have implicit information about
the world.

I would view an economy as having an implicit goal. The closest thing
to an explicit goal for an agent in my economy is "to survive", but
it is in no way hard binding. To survive, credit is needed to purchase
resources (including memory to stay in and processing power to earn
more credit), for which you need to please the 

Re: [agi] The resource allocation problem

2008-04-05 Thread J Storrs Hall, PhD
Note that in the brain, there is a fair extent to which functions are mapped 
to physical areas -- this is why you can find out anything using fMRI, for 
example, and is the source of the famous sensory and motor homunculi
(e.g. http://faculty.etsu.edu/currie/images/homunculus1.JPG).

There's plasticity but it's limited and operates over a timescale of days or 
weeks or more.

The architecture seems to have a huge parallelism at the lower levels, but 
ties into a serial bottleneck at the very top, i.e. conscious, level(s) -- 
hence the need for attentional mechanisms.



On Tuesday 01 April 2008 10:30:13 am, William Pearson wrote:
 The resource allocation problem and why it needs to be solved first
 
 How much memory and processing power should you apply to the following 
things?:
 
 Visual Processing
 Reasoning
 Sound Processing
 Seeing past experiences and how they apply to the current one
 Searching for new ways of doing things
 Applying each heuristic
 
etc...



Re: [agi] The resource allocation problem

2008-04-05 Thread Richard Loosemore

J Storrs Hall, PhD wrote:
Note that in the brain, there is a fair extent to which functions are mapped 
to physical areas -- this is why you can find out anything using fMRI, for 
example, 


This is not correct.  fMRI gives the illusion that functions are mapped 
to specific areas, whereas in fact what is usually being localized is 
the signal picked up by the fMRI scanner, not a 'function' as such.


The problem is analogous to the dialog-joke that starts with some kind 
of noise, then Person A says "What was that?" and Person B responds "A 
noise."


As I mentioned in my previous post, Trevor Harley and I went to the 
trouble of detailing several examples of this in a recent paper.




Richard Loosemore




and is the source of the famous sensory and motor homunculi
(e.g. http://faculty.etsu.edu/currie/images/homunculus1.JPG).

There's plasticity but it's limited and operates over a timescale of days or 
weeks or more.


The architecture seems to have a huge parallelism at the lower levels, but 
ties into a serial bottleneck at the very top, i.e. conscious, level(s) -- 
hence the need for attentional mechanisms.




On Tuesday 01 April 2008 10:30:13 am, William Pearson wrote:

The resource allocation problem and why it needs to be solved first

How much memory and processing power should you apply to the following 

things?:

Visual Processing
Reasoning
Sound Processing
Seeing past experiences and how they apply to the current one
Searching for new ways of doing things
Applying each heuristic


etc...



Re: [agi] The resource allocation problem

2008-04-04 Thread William Pearson
On 01/04/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Tue, Apr 1, 2008 at 6:30 PM, William Pearson [EMAIL PROTECTED] wrote:
   The resource allocation problem and why it needs to be solved first
  
How much memory and processing power should you apply to the following 
 things?:
  
Visual Processing
Reasoning
Sound Processing
Seeing past experiences and how they apply to the current one
Searching for new ways of doing things
Applying each heuristic
  


 This question supposes a specific kind of architecture, where these
  things are in some sense separate from each other.

I am agnostic to how much things are separate. At any particular time
a machine can be doing less or more of each of these things. For
example in humans it is quite common to talk of concentration.

E.g. "I'm sorry I wasn't concentrating on what you said, could you repeat it."
"Stop thinking about the girl, concentrate on the problem at hand."

Do you think this is meaningful?

 If they are but
  aspects of the same process, with modalities integrated parts of
  reasoning, resources can't be rationed on such a high level. Rather
  underlying low-level elements should be globally restricted and
  differentiate to support different high-level processes (so that
  certain portion of them gets mainly devoted to visual processing,
  high-level reasoning, language, etc.).


It boils down to the same thing. If more low-level elements
(neuron-equivalents?) are devoted to a task, it is in general being
given more potential memory, processing power and bandwidth. How is
that decided? In some connectionist systems, I would associate it with
the stability-plasticity problem described here in section 6.

http://www.cns.bu.edu/Profiles/Grossberg/Gro1987CogSci.pdf

The nutshell is: if you have new learning, should it overwrite the
old? If so, which information? If not, please can I have your infinite
memory system.

Assuming you are implementing this on a normal computer you can easily
see that it all boils down to these resources.
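
As a concrete (and entirely toy) illustration of that nutshell, here
is a fixed-capacity store that must decide whether new learning
overwrites the old. The utility scores are placeholders for whatever
importance measure the system uses; they and the eviction rule are my
invention, not Grossberg's mechanism:

```python
# Minimal sketch of the stability-plasticity dilemma on a finite
# machine: a fixed-capacity memory must decide whether new learning
# overwrites the old. Utility scores are placeholder importance
# estimates; a real system would have to compute them somehow.

class BoundedMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}                  # key -> (value, utility)

    def learn(self, key, value, utility):
        if key not in self.items and len(self.items) >= self.capacity:
            # No infinite memory: find the least useful entry...
            victim = min(self.items, key=lambda k: self.items[k][1])
            if self.items[victim][1] >= utility:
                return False             # ...keep it if it beats the newcomer
            del self.items[victim]       # otherwise overwrite it
        self.items[key] = (value, utility)
        return True

m = BoundedMemory(capacity=2)
m.learn("first kiss", "...", utility=0.9)
m.learn("star trek ep. 1", "...", utility=0.3)
m.learn("debate tactic", "...", utility=0.6)   # evicts the Trek episode
```

With finite capacity the overwrite question cannot be dodged; all the
real difficulty is hidden in where the utility numbers come from.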

In the brain not all elements can work at peak effectiveness at the
same time (consider the perils of driving while on the mobile *). So
even if you had devoted the elements, further decisions need to be
made at run time. Different regions of elements become the resources
to be rationed. For example, short-term memory becomes a resource you
have to decide how to use.

Also, oxygen would seem to be a resource in short supply within the brain.

The question remains the same: how should a system choose what to do
or what to be?

  Will

* http://www.bmj.com/cgi/content/abstract/bmj.38537.397512.55v1



[agi] The resource allocation problem

2008-04-01 Thread William Pearson
The resource allocation problem and why it needs to be solved first

How much memory and processing power should you apply to the following things?:

Visual Processing
Reasoning
Sound Processing
Seeing past experiences and how they apply to the current one
Searching for new ways of doing things
Applying each heuristic

Is there one right way of deciding these things when you have limited
resources? At time A you might want more reasoning done (while in a
debate) and at time B more visual processing (while driving).

There is also the long-term memory problem: should you remember your
first kiss or the first Star Trek episode you saw? Which is more
important?

An intelligent system needs to solve this problem for itself, as only
it will know what is important for the problems it faces. That is, it
is a local problem. It also requires resources itself. If resources
are tight, then only very approximate methods of determining how many
resources to spend on each activity can be afforded.

Due to this, the resource management should not be algorithmic, but
free to adapt to the amount of resources at hand. I'm intent on an
economic solution to the problem, where each activity is an economic
actor.

This approach needs to be at the lowest level, because each activity
has to be programmed with the knowledge of how to act in an economic
setting as well as how to perform its job. How much should it pay for
the other activities of the programs around it?
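
A minimal sketch of what such an economy might look like (my own guess
at one possible shape, not a spec: the actor names, the bidding
policy, and the payoff rule are all assumptions):

```python
# A minimal sketch of activities-as-economic-actors: each tick, actors
# bid credit for a fixed pool of CPU slices; winners pay their bids and
# later earn credit back in proportion to judged usefulness. Actor
# names, the bidding policy, and the payoff rule are all illustrative.

class Actor:
    def __init__(self, name, credit):
        self.name = name
        self.credit = credit

    def bid(self):
        # Naive policy: stake a fixed fraction of current credit.
        return 0.25 * self.credit

def allocate(actors, slices):
    """Sell CPU slices to the highest bidders; winners pay their bid."""
    ranked = sorted(actors, key=Actor.bid, reverse=True)
    winners = ranked[:slices]
    for actor in winners:
        actor.credit -= actor.bid()
    return winners

def reward(actor, usefulness):
    # Credit flows back to activities judged useful, so they can
    # afford more resources next time.
    actor.credit += usefulness

actors = [Actor("vision", 100.0), Actor("reasoning", 60.0), Actor("sound", 40.0)]
winners = allocate(actors, slices=2)   # vision and reasoning outbid sound
reward(winners[0], usefulness=10.0)    # e.g. vision spotted something useful
```

Note there is no global rule saying what vision or reasoning deserves;
allocation emerges from credit balances and whatever pays the rewards.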

I'll attempt to write a paper on this, with proper references (Baum,
Mark Miller et al.), but I would be interested in feedback at this
stage,

  Will Pearson



Re: [agi] The resource allocation problem

2008-04-01 Thread Pei Wang
Will,

In NARS I use dynamic resource allocation. See
http://nars.wang.googlepages.com/wang.computation.pdf and
http://www.cogsci.indiana.edu/pub/wang.resource.ps

A similar approach is Hofstadter's parallel terraced scan, see
http://www.cogsci.indiana.edu/parallel.html

Besides the references you mentioned, you may also want to check out
the other work reported at the 1996 AAAI Symposium on Flexible
Computation (http://flexcomp.microsoft.com/flexpr.htm),

and

Stuart Russell and Eric Wefald, Principles of Meta-Reasoning,
http://citeseer.ist.psu.edu/russell91principles.html

plus the psychological literature on attention and memory.

I'd be very interested in your paper.

Pei



On Tue, Apr 1, 2008 at 10:30 AM, William Pearson [EMAIL PROTECTED] wrote:
 The resource allocation problem and why it needs to be solved first

  How much memory and processing power should you apply to the following 
 things?:

  Visual Processing
  Reasoning
  Sound Processing
  Seeing past experiences and how they apply to the current one
  Searching for new ways of doing things
  Applying each heuristic

  Is there one right way of deciding these things when you have limited
  resources? At time A you might want more reasoning done (while in a
  debate) and at time B more visual processing (while driving).

  There is also the long-term memory problem: should you remember your
  first kiss or the first Star Trek episode you saw? Which is more
  important?

  An intelligent system needs to solve this problem for itself, as only
  it will know what is important for the problems it faces. That is, it
  is a local problem. It also requires resources itself. If resources
  are tight, then only very approximate methods of determining how many
  resources to spend on each activity can be afforded.

  Due to this, the resource management should not be algorithmic, but
  free to adapt to the amount of resources at hand. I'm intent on an
  economic solution to the problem, where each activity is an economic
  actor.

  This approach needs to be at the lowest level, because each activity
  has to be programmed with the knowledge of how to act in an economic
  setting as well as how to perform its job. How much should it pay for
  the other activities of the programs around it?

  I'll attempt to write a paper on this, with proper references (Baum,
  Mark Miller et al.), but I would be interested in feedback at this
  stage,

   Will Pearson



Re: [agi] The resource allocation problem

2008-04-01 Thread Mike Tintner
Charles H: Due to this, the resource management should not be algorithmic, but
 free to adapt to the amount of resources at hand. I'm intent on an
 economic solution to the problem, where each activity is an economic
 actor.


The idea of economics is v. interesting & important. I think - and I'm 
confident science will come to think - of humans as psychoeconomies, 
continually having to decide how much effort and time we will continue to 
invest in each activity, both mental and physical. We automatically ask 
whether it's worth investing our resources - worth the likely risks and 
costs in terms of effort and time. (Is it worth it? Can I be 
arsed/bothered? Is there any chance of it working? It'll take forever/no 
time at all.. etc. etc.)


This is a continuous metacognitive level of activity-assessment, and it 
applies to very small sub-activities as well. We continually ask ourselves, 
for example, even in putting together posts like these, whether it is worth 
developing this idea or that, or trying to dig up a reference, or find an 
analogy. We don't just proceed in automatic trains of thought, as AFAIK 
current computer programs do.


Such psychoeconomic, metacognitive resource management is essential for a 
true AGI. For one thing, a true AGI has to be able to drop - and therefore 
decide whether it's worth dropping - any activity at literally any moment - 
in order to attend to something more important that may arise.
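
One crude way to picture that continuous worth-it check (a toy
decision rule with made-up numbers; producing real estimates of payoff
and effort is of course the hard part):

```python
# Toy version of the continuous "is it worth it?" check: at each step,
# keep investing in the current activity only if its expected payoff
# per unit effort beats the best alternative on offer. All numbers
# are illustrative.

def should_continue(current, alternatives):
    """current and alternatives are (expected_payoff, effort) pairs."""
    rate = lambda a: a[0] / a[1]
    best_alt = max(alternatives, key=rate)
    return rate(current) >= rate(best_alt)

# Drafting a post (payoff 5, effort 2) vs. an urgent interruption
# (payoff 9, effort 1): drop the post and attend to the interruption.
print(should_continue((5, 2), [(9, 1), (1, 4)]))   # → False
```

Because the check runs at every step, any activity can be dropped at
any moment when something more valuable per unit effort turns up.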


So I'd be interested to hear more from you here, especially on how your 
management will be other than algorithmic.


