Hi Shane,

On Friday 19 September 2003 02:58, Shane Legg wrote:
> arnoud wrote:
> > How large can those constants be? How complex can the environment
> > maximally be for an ideal, but still realistic, AGI agent (thus not a
> > Solomonoff or AIXI agent) to still be successful? Does somebody know how
> > to calculate (and formalise) this?
>
> I'm not sure if this makes much sense.  An "ideal" agent is not going
> to be a "realistic" agent.  The bigger your computer and the better
> your software, the more complexity your agent will be able to deal with.

By an ideal realistic agent I meant the best software we can make, running on
the best hardware we can make.

>
> The only way I could see that it would make sense would be if you
> could come up with an algorithm and prove that it made the best
> possible usage of time and space in terms of achieving its goals.
> Then the constants you are talking about would be set by this
> algorithm and the size of the biggest computer you could get.

Yes, but then the agent has already been built. I think some estimates of the
constants would help me to make design decisions. But if the constants can
only be determined afterwards, they are of no use to me.

>
> > Not even an educated guess?
> >
> > But I think some things can be said:
> > Suppose perception of the environment is just a bit at a time:
> > ...010100010010010111010101010...
> >
> > In the random case: for any sequence of length l the number of possible
> > patterns is 2^l. Completely hopeless, unless the required prediction
> > precision also decreases exponentially with l. But that is not realistic:
> > then you know nothing, but you also want nothing.
>
> Yes, this defines the limiting case for Solomonoff Induction...
>
> > In the logarithmic case: the number of possible patterns of length l
> > increases logarithmically with l: #p < constant * log(l). If the constant
> > is not too high, this environment can be learned easily. There is no need
> > for vagueness.
>
> Not true.  Just because the sequence is very compressible in a
> Kolmogorov sense doesn't imply that it's easy to learn.  For example,
> you could have some sequence where the computation of the n-th
> bit takes n^1000 computation cycles.  There is only one pattern, and
> it's highly compressible as it has a pretty short algorithm; however,
> there is no way you'll ever learn what the pattern is.

Do I have to see it as the value of the n-th bit being a (complex) function
of all the preceding bits? Then it makes sense to me: beyond some pattern
length l, the computation becomes infeasible.
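
A toy sketch of Shane's point (in Python, with the exponent shrunk from
n^1000 to n^3 so the demo actually runs; the generator itself is made up
for illustration): the whole sequence compresses to a few lines of code,
yet predicting bit n by running the generator costs about n^3 steps.

    def nth_bit(n):
        """The n-th bit of a sequence with a very short description but a
        deliberately slow generator: computing bit n takes ~n**3 steps."""
        x = 0
        for _ in range(n ** 3):
            # any deterministic update will do; this is a 64-bit LCG step
            x = (x * 6364136223846793005 + 1442695040888963407) % 2 ** 64
        return x & 1

    # The infinite sequence is highly compressible (this program is its
    # whole description), but learning it by simulation becomes infeasible
    # as n grows.
    print([nth_bit(n) for n in range(1, 8)])
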
But this is not the way I intend my system to handle patterns. It learns a
pattern after a lot of repeated occurrences of it (in perception). And then it
just stores the whole pattern ;-) No compression there. But since the
environment is made out of smaller patterns, the pattern can be formulated in
terms of those smaller patterns, and thus save memory space.
In the logarithmic case: say there are 2 patterns of length 100, then there
are 3 patterns of length 1000. Let's say the 2 patterns of l = 100 are
primitive and are stored bit by bit. The 3 patterns of length 1000, however,
can each be stored using 10 bits: each is a sequence of 10 sub-patterns of
length 100, and each sub-pattern choice takes 1 bit. The 4 patterns of length
10^4 can each be stored using 16 bits (10 choices from 3 patterns, at log2(3)
bits per choice), and so on.
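
As a minimal sketch, the storage arithmetic above in Python (the schedule of
2, 3, 4, ... patterns per decade of length is taken from the example; the
function name is made up here):

    import math

    def bits_per_pattern(num_subpatterns, alphabet_size):
        """Cost of storing one pattern as a string of sub-pattern IDs,
        where each ID selects one of `alphabet_size` known shorter
        patterns."""
        return num_subpatterns * math.log2(alphabet_size)

    print(bits_per_pattern(10, 2))  # length-1000 patterns: 10.0 bits each
    print(bits_per_pattern(10, 3))  # length-10^4 patterns: ~15.8 -> 16 bits
    # compare with 1000 and 10000 bits for storing them bit by bit
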
It isn't really different in the linear case, except that the number of
patterns that can be found in the environment grows linearly with l, and
there is a need for abstraction (i.e. storing classes of sequences; lossy
data compression).

>
> > I suppose the point I'm trying to make is that the complexity of the
> > environment is not everything. It is also important to know how much of
> > the complexity can be ignored.
>
> Yes.  The real measure of how difficult an environment is is not
> the complexity of the environment, but rather the complexity of
> the simplest solution to the problem that you need to solve in
> that environment.

Yes, but in general you don't know the complexity of the simplest solution to
the problem in advance. It's more likely that you first get to know what the
complexity of the environment is.
The strategy I'm proposing is: ignore everything that is too complex. Just
forget about it and hope you can afford to; otherwise it's just bad luck. Of
course you want to do your very best to solve the problem, and that entails
that a complex phenomenon that can be handled must not be ignored a priori; it
must only be ignored if there is evidence that understanding that phenomenon
does not help solve the problem.
In order for this strategy to work you need to know the maximum complexity an
agent can handle, as a function of the resources of the agent: Cmax(R). And it
would be very helpful for making design decisions to know Cmax(R) in advance.
You can then build in that everything above Cmax(R) should be ignored; 'vette
pech' ('tough luck'), as we say in Dutch, if you then are not able to solve
the problem.
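
A minimal sketch of that strategy in Python, assuming a hypothetical budget
function cmax() and using zlib's compressed size as a crude stand-in for the
true (uncomputable) complexity; none of the names below are an existing API:

    import zlib

    def cmax(resources_bytes):
        # Hypothetical budget: the largest pattern description the agent
        # can afford, as some function of its resources R.
        return resources_bytes // 1000

    def worth_keeping(pattern, resources_bytes, helps_goal):
        """Drop anything whose estimated complexity exceeds Cmax(R); below
        the budget, keep a pattern only while there is evidence that it
        helps solve the problem."""
        estimated_complexity = len(zlib.compress(pattern))  # crude proxy
        if estimated_complexity > cmax(resources_bytes):
            return False  # too complex: 'vette pech'
        return helps_goal(pattern)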

>
> Shane
>
> P.S. one of these days I'm going to get around to replying to your
> other emails to me!!  sorry about the delay!

Ah, you're mailing now. I don't want you to sleep badly because of feelings of
guilt ;-)

Bye,
Arnoud
