> How complex may the environment maximally be for an ideal, but still
> realistic, AGI agent (thus not a Solomonoff or AIXI agent) to still be
> successful? Does somebody know how to calculate (and formalise) this?
>
> Bye,
> Arnoud

There are two different questions here, and I'm not sure which one you mean.

Given a set of computational resources R, we can ask either

1)
what is the maximal complexity level C such that, for any environment E of
complexity level C, there is *some* AI system running on R that can predict
events in E reasonably well (i.e. assuming that R is specialized for E)?

or

2)
what is the maximal complexity level C such that, for some single AI system X
running on R, and for *any* environment E of complexity level C, X can predict
events in E reasonably well (i.e. assuming that R is not specialized for E)?
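Spelled out with quantifiers (my paraphrase, writing K(E) for the complexity of E and "X predicts E" loosely for "X predicts events in E reasonably well"), the difference between the two questions is just the quantifier order:

```latex
% Question 1: for each environment, some (possibly specialized) system suffices
\forall E \;\bigl[\, K(E) \le C \;\Rightarrow\; \exists X \,\bigl(X \text{ runs on } R \;\wedge\; X \text{ predicts } E\bigr) \,\bigr]

% Question 2: one fixed system works across all such environments
\exists X \;\bigl[\, X \text{ runs on } R \;\wedge\; \forall E \,\bigl( K(E) \le C \;\Rightarrow\; X \text{ predicts } E \bigr) \,\bigr]
```

The maximal C satisfying the second statement can clearly be no larger than the maximal C satisfying the first.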


The human brain, for instance, is highly specialized to certain environments,
and cannot predict equally intelligently in other environments of comparable
"a priori complexity" to the environments for which it's specialized.

Note that my computational resources R include both space and time
resources.  Unless the time resources are unrealistically ample, this rules
out answering 2) by using an AI system X that does an AIXItl-style search
through all programs that could run on resources (R - epsilon), finding
the optimal one for the given environment E.
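To make the infeasibility concrete, here is a toy sketch (my illustration, not AIXItl itself) of that kind of exhaustive search: enumerate a candidate space of predictor programs, score each against the observed environment history, and keep the best. The candidate makers and the alternating environment below are all hypothetical; the real scheme would enumerate *all* programs up to a length and time bound, which blows up exponentially in program length.

```python
def make_constant(c):
    """Predictor program that always outputs c, ignoring history."""
    return lambda history: c

def make_repeat_last(default=0):
    """Predictor program that echoes the most recent observation."""
    return lambda history: history[-1] if history else default

def make_alternator(start=0):
    """Predictor program that flips the most recent observation."""
    return lambda history: (1 - history[-1]) if history else start

def score(predictor, sequence):
    """Count correct next-step predictions over the whole sequence."""
    return sum(predictor(sequence[:i]) == sequence[i]
               for i in range(len(sequence)))

def best_predictor(candidates, sequence):
    """Exhaustive search: try every candidate, return the top scorer."""
    return max(candidates, key=lambda p: score(p, sequence))

# A tiny "program space" of four candidates -- the real search space
# would be exponential in the allowed program length.
candidates = [make_constant(0), make_constant(1),
              make_repeat_last(), make_alternator(0)]

# Toy environment E: observations alternate 0, 1, 0, 1, ...
env = [i % 2 for i in range(20)]
best = best_predictor(candidates, env)
```

On this environment the alternator wins with a perfect score; the point is only that the search cost is paid per candidate program, which is exactly what realistic time resources rule out.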

Of course, contemporary mathematics does not give us answers to either of
these questions.  Nobody knows....


-- Ben G
