Arnoud,

I'm not sure if this makes much sense.  An "ideal" agent is not going
to be a "realistic" agent.  The bigger your computer and the better
your software more complexity your agent will be able to deal with.

By an ideal realistic agent I meant the best software we can make, running on the best hardware we can make.

In which case I think the question is pretty much impossible to answer. Who knows what the best hardware we can make is? Who knows what the best software we can make is?


Should I see it as something where the value of the nth bit is a (complex) function of all the preceding bits? Then it makes sense to me. After some length l of the pattern, computation becomes infeasible.
But this is not the way I intend my system to handle patterns. It learns a pattern after a lot of repeated occurrences of it (in perception), and then it just stores the whole pattern ;-) No compression there. But since the environment is made out of smaller patterns, the pattern can be formulated in terms of those smaller patterns, and thus save memory space.
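
Schematically, something like this (just a rough sketch of the idea, not my actual code; the repetition threshold and the names are only for illustration):

    # Sketch only: a candidate pattern is promoted to memory once it has
    # been perceived often enough, and it is then stored in terms of
    # already-known smaller patterns, which is where the memory saving is.

    REPEAT_THRESHOLD = 5   # illustrative value

    class PatternMemory:
        def __init__(self):
            self.counts = {}   # candidate pattern -> times perceived
            self.known = {}    # stored pattern -> tuple of sub-pattern keys

        def observe(self, candidate, sub_patterns):
            self.counts[candidate] = self.counts.get(candidate, 0) + 1
            if self.counts[candidate] >= REPEAT_THRESHOLD:
                # store the whole pattern, but expressed in the smaller
                # known patterns rather than compressed
                self.known.setdefault(candidate, tuple(sub_patterns))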

This is ok, but it does limit the sorts of things that your system is able to do. I actually suspect that humans do a lot of very simple pattern matching like you suggest and in some sense "fake" being able to work out complex-looking patterns. It's just that we have seen so many patterns in the past, and we are very good at doing fast and sometimes slightly abstract pattern matching over a huge database of experience. Nevertheless, you need to be a little careful, because some very "simple" patterns that don't repeat in a very explicit way could totally confuse your system:

000010000200003.....99999000010000200003....

Your system, if I understand correctly, would not see the pattern
until it had seen the whole cycle several times.  That is something
like 5 * 100,000 * 2 = 1,000,000 characters into the sequence (5
characters per number, 100,000 numbers per cycle, at least 2 cycles),
and even then it would need to remember about 500,000 characters of
information, one full cycle.  A human would see the pattern after
just a few characters, with perhaps some uncertainty as to what will
happen after the 99999.  The total storage the pattern requires of a
human would also be far less than the 500,000 characters your system
would need.
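
To make the storage contrast concrete, here is a rough sketch (my own illustration, in Python): the "human" representation is a tiny generating rule, while a memorising system has to hold a full cycle verbatim.

    # A few lines of program stand in for the whole sequence:
    def counting_sequence():
        while True:                       # the cycle repeats forever
            for n in range(1, 100000):    # 00001, 00002, ..., 99999
                yield f"{n:05d}"

    # A system that memorises the cycle instead needs one full copy of it:
    one_cycle = "".join(f"{n:05d}" for n in range(1, 100000))
    print(len(one_cycle))   # 499995 characters, i.e. roughly 500,000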


Yes, but in general you don't know the complexity of the simplest solution to the problem in advance. It's more likely that you first get to know what the complexity of the environment is.

In general an agent doesn't know the complexity of its environment either.


The strategy I'm proposing is: ignore everything that is too complex. Just forget about it and hope you can; otherwise it's just bad luck. Of course you want to do your very best to solve the problem, and that entails that a complex phenomenon that can be handled must not be ignored a priori; it must only be ignored if there is evidence that understanding that phenomenon does not help solve the problem.
In order for this strategy to work you need to know the maximum complexity an agent can handle, as a function of the agent's resources: Cmax(R). And it would be very helpful for making design decisions to know Cmax(R) in advance. You can then build in that everything above Cmax(R) should be ignored; tough luck ('vette pech', as we say in Dutch) if you then are not able to solve the problem.
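
In rough Python the policy would look like this (the compression-based complexity estimate and the form of Cmax(R) are just stand-ins; the whole point is that we don't really know Cmax(R)):

    import zlib

    def estimate_complexity(data: bytes) -> int:
        # crude stand-in: compressed size as a complexity estimate
        return len(zlib.compress(data))

    def c_max(resources: int) -> int:
        # placeholder for Cmax(R); here simply proportional to resources
        return resources // 10

    def should_ignore(phenomenon: bytes, resources: int,
                      evidence_it_helps: bool) -> bool:
        if estimate_complexity(phenomenon) > c_max(resources):
            return True   # above Cmax(R): tough luck, ignore it
        # below Cmax(R), only ignore the phenomenon if there is evidence
        # that understanding it does not help solve the problem
        return not evidence_it_helps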

Why not just do this dynamically? Look at how much of the agent's resources are being used for something and how much benefit the agent is getting from it. If something else comes along that seems to have a better ratio of benefit to resource usage, then throw away some of the older stuff to free up resources for this new thing.
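
As a sketch of what I mean (the benefit and cost numbers would come from the agent's own bookkeeping; the ones here are stand-ins):

    # Dynamic resource allocation by benefit/cost ratio: evict the items
    # with the worst ratio to make room for a newcomer with a better one.

    def rebalance(store, capacity, new_item):
        """store: list of (name, benefit, cost) tuples."""
        store = sorted(store, key=lambda x: x[1] / x[2])   # worst ratio first
        used = sum(cost for _, _, cost in store)
        name, benefit, cost = new_item
        while store and used + cost > capacity:
            worst = store[0]
            # only evict if the old item is a worse deal than the new one
            if worst[1] / worst[2] >= benefit / cost:
                return store          # the new thing isn't worth it
            store.pop(0)
            used -= worst[2]
        if used + cost <= capacity:
            store.append(new_item)
        return store

    # Example: a full store and a newcomer with a better ratio.
    items = [("old-a", 1.0, 5), ("old-b", 4.0, 5)]
    print(rebalance(items, capacity=10, new_item=("new", 6.0, 5)))
    # -> old-a (ratio 0.2) is evicted in favour of new (ratio 1.2)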

Shane
