>These have profound impacts on AGI design. First, AIXI is (provably)
>not computable, which means there is no easy shortcut to AGI. Second,
>universal intelligence is not computable because it requires testing
>in an infinite number of environments. Since there is no other
>well-accepted test of intelligence above the human level, this casts
>doubt on the main premise of the singularity: that if humans can
>create agents with greater-than-human intelligence, then those agents
>can do the same in turn.
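
(For reference, the universal-intelligence measure in question is, if
I am recalling Legg and Hutter's definition correctly,

    \Upsilon(\pi) = \sum_{\mu} 2^{-K(\mu)} V^{\pi}_{\mu}

i.e. a policy pi is scored by its expected total reward V in every
computable environment mu, weighted by 2^{-K(mu)} where K is the
Kolmogorov complexity of the environment. Both ingredients are
uncomputable: there is no algorithm for K, and the sum ranges over
infinitely many environments.)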

I don't know for sure that these statements logically follow from one
another. The brain probably contains a collection of kludges that
approximate intractably hard tasks, much like Wine 1.0 is probably
still mostly stubs.
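
To make "not computable" concrete: a Solomonoff-style predictor has to
enumerate every program and run each one, and some programs never
halt. Here is a toy sketch in Python; the three-symbol program
language, the length cap, and the step cap are my own simplifications,
and those caps are exactly the kind of kludge a physical system would
need.

# Toy version of "guess the shortest program consistent with the
# observations".  Programs are strings over three symbols:
#   '0' emits a 0,  '1' emits a 1,  'J' jumps back to the start.
# Real Solomonoff induction has no step cap and no length cap, which
# is precisely why it is not computable ('J' alone never halts).

from itertools import product

def run(prog, max_steps):
    out, pc = [], 0
    for _ in range(max_steps):
        if pc >= len(prog):        # fell off the end: program halted
            break
        op = prog[pc]
        if op == 'J':
            pc = 0                 # jump back to the start
        else:
            out.append(int(op))    # emit the bit and advance
            pc += 1
    return out

def shortest_consistent(observed, max_len=8, max_steps=200):
    # Enumerate programs shortest-first; return the first one whose
    # output begins with the observed bits.
    for n in range(1, max_len + 1):
        for prog in product('01J', repeat=n):
            out = run(prog, max_steps)
            if out[:len(observed)] == observed:
                return ''.join(prog), out
    return None

obs = [1, 0, 1, 0, 1]
prog, out = shortest_consistent(obs)
print(prog, '-> next bit:', out[len(obs)])   # '10J' -> next bit: 0

On 1,0,1,0,1 it settles on the three-symbol loop '10J' and predicts 0
next, but only because the caps papered over the halting problem; drop
the caps and the search never terminates.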

Higher intelligence bootstrapping itself has already been demonstrated
on Earth, by evolution. Presumably it can happen in a simulation space
as well, right?

Eric B

On 8/23/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
> On Sat, Aug 23, 2008 at 7:00 AM, William Pearson <[EMAIL PROTECTED]>
> wrote:
>> 2008/8/23 Matt Mahoney <[EMAIL PROTECTED]>:
>>> Valentina Poletti <[EMAIL PROTECTED]> wrote:
>>>> I was wondering why no-one had brought up the information-theoretic
>>>> aspect of this yet.
>>>
>>> It has been studied. For example, Hutter proved that the optimal
>>> strategy of a rational goal-seeking agent in an unknown computable
>>> environment is AIXI: to guess that the environment is simulated by
>>> the shortest program consistent with observation so far [1].
>>
>> By my understanding, I would qualify this as "Hutter proved that
>> *one of the* optimal strategies of a rational, error-free,
>> goal-seeking agent, which has no impact on the environment beyond
>> its explicit output, in an unknown computable environment is AIXI:
>> to guess that the environment is simulated by the shortest program
>> consistent with observation so far."
>>  Will Pearson
>
> I think the question of the mathematics (or quasi-mathematics) of
> algorithmic theory would be better studied using a more general
> machine-intelligence approach.  The Hutter-Solomonoff approach of
> Algorithmic Information Theory looks to me like it is too narrow and
> lacks a fundamental ground against which theories can be tested, but
> I don't know for sure because I could never find a sound basis from
> which to study the theory.
>
> I just found a web page about Ray Solomonoff, and it has a couple of
> links to his lectures:
> http://www.idsia.ch/~juergen/ray.html
>
> Jim Bromer

