Thanks very much for the info. I found those articles very interesting.
This isn't quite what I had in mind by an information-theoretic
approach, though; I wasn't very specific, my bad. What I'm looking for
is a theory behind the reward function R itself. These approaches
(correct me if I'm wrong) take the r-function as given and work from
there. In real life that is not the case. What I'm looking for is how
the AGI will create that function. Because the AGI is created by
humans, some sort of direction will be given by the humans creating it.
What kind of direction, in mathematical terms, is my question. In other
words, I'm looking for a way to mathematically define how the AGI will
mathematically define its own goals.
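
To put that in symbols (informal notation, loosely following the
reinforcement-learning setup behind AIXI, so treat this as illustrative
rather than exact): the agent and the environment exchange actions a_k
and observation-reward pairs (o_k, r_k), and the agent is scored by
something like

  V = E[ r_1 + r_2 + ... + r_m ]   (expected total reward over a
                                    lifetime of m steps).

All of that machinery takes the reward signal r_k as given. The level
above it is what I'm after: a mathematical account of how the AGI, with
direction from its human designers, arrives at r in the first place.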

Valentina


On 8/23/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> Valentina Poletti <[EMAIL PROTECTED]> wrote:
> > I was wondering why no one had brought up the information-theoretic
> > aspect of this yet.
>
> It has been studied. For example, Hutter proved that the optimal
> strategy of a rational goal-seeking agent in an unknown computable
> environment is AIXI: to guess that the environment is simulated by the
> shortest program consistent with the observations so far [1]. Legg and
> Hutter also propose, as a measure of universal intelligence, the
> expected reward over a Solomonoff distribution of environments [2].
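>
> (Schematically, and glossing over the details in [2]: if V(pi, mu)
> denotes the expected total reward of agent pi in environment mu, the
> universal intelligence of pi is roughly
>
>   Upsilon(pi) = sum over environments mu of 2^(-K(mu)) * V(pi, mu),
>
> where K(mu) is the Kolmogorov complexity of mu, so simpler
> environments carry exponentially more weight.)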
>
> These have profound impacts on AGI design. First, AIXI is (provably)
> not computable, which means there is no easy shortcut to AGI. Second,
> universal intelligence is not computable because it requires testing
> in an infinite number of environments. Since there is no other
> well-accepted test of intelligence above the human level, this casts
> doubt on the main premise of the singularity: that if humans can
> create agents with greater-than-human intelligence, then so can those
> agents.
>
> Prediction is central to intelligence, as I argue in [3]. Legg proved
> in [4] that there is no elegant theory of prediction: predicting all
> environments up to a given level of Kolmogorov complexity requires a
> predictor with at least the same level of complexity. Furthermore,
> above a small level of complexity, such predictors cannot be proven to
> work, because of Gödel incompleteness. Prediction must therefore be an
> experimental science.
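>
> (Informally, the result in [4] is along these lines: any predictor
> that works for every sequence generated by a program of Kolmogorov
> complexity at most n must itself have complexity on the order of n,
> and beyond a modest n one cannot prove, within a fixed formal system,
> that a particular predictor has this property. My paraphrase here is
> loose; see [4] for the exact statements.)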
>
> There is currently no software or mathematical model of
> non-evolutionary recursive self-improvement, even for very restricted
> or simple definitions of intelligence. Without such a model you don't
> have friendly AI; you have accelerated evolution, with AIs competing
> for resources.
>
> References
>
> 1. Hutter, Marcus (2003), "A Gentle Introduction to the Universal
> Algorithmic Agent AIXI", in Artificial General Intelligence,
> B. Goertzel and C. Pennachin, eds., Springer.
> http://www.idsia.ch/~marcus/ai/aixigentle.htm
>
> 2. Legg, Shane, and Marcus Hutter (2006), "A Formal Measure of Machine
> Intelligence", Proc. Annual Machine Learning Conference of Belgium and
> The Netherlands (Benelearn-2006), Ghent, 2006.
> http://www.vetta.org/documents/ui_benelearn.pdf
>
> 3. http://cs.fit.edu/~mmahoney/compression/rationale.html
>
> 4. Legg, Shane (2006), "Is There an Elegant Universal Theory of
> Prediction?", Technical Report IDSIA-12-06, IDSIA / USI-SUPSI, Dalle
> Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno,
> Switzerland. http://www.vetta.org/documents/IDSIA-12-06-1.pdf
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>



-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken


