Hi Shane,

> > Do I have to see it as if the value of the nth bit is a
> > (complex) function of all the former bits? Then it makes sense to me.
> > After some length l of the pattern, computation becomes infeasible.
> > But this is not the way I intend my system to handle patterns. It learns
> > the pattern after a lot of repeated occurrences of it (in perception). And
> > then it just stores the whole pattern ;-) No compression there. But since
> > the environment is made out of smaller patterns, the pattern can be
> > formulated in those smaller patterns, and thus save memory space.
>
> This is ok, but it does limit the sorts of things that your system
> is able to do.  I actually suspect that humans do a lot of very
> simple pattern matching like you suggest and in some sense "fake"
> being able to work out complex looking patterns.  It's just that
> we have seen so many patterns in the past and that we are very
> good at doing fast and sometimes slightly abstract pattern matching
> on a huge database of experience.   Nevertheless you need to be a
> little careful because some very "simple" patterns that don't
> repeat in a very explicit way could totally confuse your system:
>
> 000010000200003.....99999000010000200003....

My system sees the '99999' as noise. As you know, I use neural networks to 
filter that out. The '99999' would make the system uncertain as its predictions 
fail; it would begin to forget which pattern it was in and start hypothesizing, 
based on past experience, which pattern it is in now. But when the pattern 
continues in the same way after the '99999', the state it was beginning to 
forget is reactivated again.
Of course neural networks are black boxes; you don't really know how to 
translate their operations into high-level cognitive terms.
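
Very roughly, the behaviour I mean is something like this (the decay and 
recovery rates here are invented just for the illustration):

def update_confidence(confidence, prediction_correct, decay=0.5, recovery=0.2):
    if prediction_correct:
        return min(1.0, confidence + recovery)   # pattern continues: reactivate
    return confidence * decay                    # prediction fails: start to forget

c = 1.0
for correct in [True, True, False, False, False, True, True, True]:
    c = update_confidence(c, correct)            # the three failures are the '99999' burst
print(round(c, 3))   # confidence has largely recovered once the noise passes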
  
What my system stores in the neural networks are not explicit patterns, but 
just causal rules: given some time sequence, e.g. 010010010110010, it can 
calculate the next bit (or actually, a vector of bits) as its prediction. 
So this talk of storing 1, 2, 3 or n patterns is just a convenience and should 
be taken with a grain of salt; it's unnatural and too rigid.
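
To make that "causal rule" idea a bit more concrete, here is a rough Python 
sketch of what such a rule could look like. It is only an illustration: the 
fixed window, the tiny feedforward net and the training loop are assumptions 
for the example, not how my actual networks work.

import numpy as np

WINDOW = 6   # how many past bits the rule looks at (assumption for the example)
HIDDEN = 8   # hidden units (assumption)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (WINDOW, HIDDEN))
W2 = rng.normal(0.0, 0.5, (HIDDEN, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict(window_bits):
    # the "causal rule": map the last WINDOW bits to P(next bit = 1)
    h = np.tanh(window_bits @ W1)
    return sigmoid(h @ W2)

def train(sequence, epochs=200, lr=0.5):
    # fit the rule on one long example sequence of '0'/'1' characters
    global W1, W2
    bits = np.array([int(c) for c in sequence], dtype=float)
    for _ in range(epochs):
        for i in range(WINDOW, len(bits)):
            x, y = bits[i - WINDOW:i], bits[i]
            h = np.tanh(x @ W1)
            p = sigmoid(h @ W2)
            d2 = p - y                       # gradient of the loss at the output
            dh = (W2 @ d2) * (1.0 - h ** 2)  # backpropagated to the hidden layer
            W2 -= lr * np.outer(h, d2)
            W1 -= lr * np.outer(x, dh)

train("010010010110010" * 20)
print(predict(np.array([0, 1, 0, 0, 1, 0], dtype=float)))

The point is just that what gets stored are the weights of such a rule, not 
the sequences it was trained on.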

>
> Your system, if I understand correctly, would not see the pattern
> until it had seen the whole cycle several times.

Then it would begin to see it as a pattern that should be remembered, not just 
as some noise. What counts is repetition of the pattern. It is then likely 
that the pattern will occur again in the future, and that is what my agent 
wants to get a grip on.

> Something like
> 5*100,000*2 = 1,000,000 characters

Do you mean something like 5 repetitions of 100,000 characters, and that then 
repeated twice?

> into the sequence and even then
> it would need to remember 100,000 characters of information. A
> human would see the pattern after just a few characters, with
> perhaps some uncertainty as to what will happen after the 99999.
> The total storage required for the pattern with a human would be
> far less than the 100,000 characters your system would need.

I didn't quite understand what you meant.

But as I said, my system stores rules (in a fuzzy, neural-network way) for how 
to continue a sequence. It does not have to store sequences explicitly. And if 
there are regularities in the sequence, the rules that can build up that 
sequence take less space than the sequence itself.
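
As a toy illustration of that storage argument (this rule is hand-written, not 
learned, and has nothing to do with my networks; it only shows that a short 
rule can reproduce a long regular sequence like your counting example):

def rule(n):
    # regenerate the first n characters of 000010000200003...99999 00001... (no spaces)
    out, i, length = [], 1, 0
    while length < n:
        chunk = str(i).zfill(5)   # 00001, 00002, ..., 99999
        out.append(chunk)
        length += 5
        i = i % 99999 + 1         # wrap around after 99999
    return "".join(out)[:n]

explicit = rule(1_000_000)
print(len(explicit))   # 1000000 characters stored explicitly
# the rule itself is a few lines of "program text", far smaller than the
# million characters it can reproduce on demand
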
(I hope this is something of an answer.) 

>
> > Yes, but in general you don't know the complexity of the simplest
> > solution of the problem in advance. It's more likely that you get to know
> > first what the complexity of the environment is.
>
> In general an agent doesn't know the complexity of its
> environment either.

OK, OK, but I think it's more likely. In the beginning the agent is exposed to 
its environment and doesn't yet know how to solve the problem or how to 
discriminate the relevant phenomena. If it had to estimate the complexity of 
the solution of the problem at that point, the estimate would be based on all 
the phenomena, not just the relevant ones.

>
> > The strategy I'm proposing is: ignore everything that is too complex.
> > Just forget about it and hope you can, otherwise it's just bad luck. Of
> > course you want to do the very best to solve the problem, and that
> > entails that some complex phenomenon that can be handled must not be
> > ignored a priori; it must only be ignored if there is evidence that
> > understanding that phenomenon does not help solving the problem.
> > In order for this strategy to work you need to know what the maximum
> > complexity is an agent can handle, as a function of the resources of the
> > agent: Cmax(R). And it would be very helpful for making design decisions
> > to know Cmax(R) in advance. You can then build in that everything above
> > Cmax(R) should be ignored; 'vette pech' ('tough luck'), as we say in Dutch,
> > if you then
> > are not able to solve the problem.
>
> Why not just do this dynamically?  Try to look at how much of
> the agent's resources are being used for something and how much
> benefit the agent is getting from this.  If something else comes
> along that seems to have a better ratio of benefit to resource
> usage then throw away some of the older stuff to free up resources
> for this new thing.

Yes, I guess I have to do something like this. If the agent notices that it 
cannot understand certain phenomena completely, it should not try to; it should 
just stop and shift attention to other phenomena. And if it notices that the 
same benefit/positive reinforcement signal (or more) can be gotten in an easier 
way, it should prefer that. However, benefit maximisation remains leading; 
efficiency comes next.
I would have liked to have some a priori measure, since that would make 
designing easier, but I guess that is not possible.
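
For what it's worth, here is a rough sketch of what such a dynamic scheme could 
look like; the names, numbers and the greedy benefit/resource rule are only my 
illustration of your suggestion, not part of either of our systems:

from dataclasses import dataclass

@dataclass
class Phenomenon:
    name: str
    benefit: float     # reinforcement expected from modelling it
    resources: float   # memory/computation it costs to model

    @property
    def ratio(self) -> float:
        return self.benefit / self.resources

def consider(kept, new, budget):
    # keep the phenomena with the best benefit/resource ratio that fit the budget
    candidates = sorted(kept + [new], key=lambda p: p.ratio, reverse=True)
    result, used = [], 0.0
    for p in candidates:
        if used + p.resources <= budget:
            result.append(p)
            used += p.resources
    return result

agent, budget = [], 10.0
for p in [Phenomenon("weather", 4.0, 6.0),
          Phenomenon("opponent habits", 5.0, 3.0),
          Phenomenon("white noise", 0.1, 5.0)]:   # too complex for its benefit
    agent = consider(agent, p, budget)
print([p.name for p in agent])   # the low-ratio phenomenon never makes it in

One could also sort on benefit first and use the ratio only as a tie-breaker, 
which would keep benefit maximisation leading, as I want.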

Until we write again,
Arnoud
