--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> On Dec 21, 2007 6:56 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > > Still more nonsense:  as I have pointed out before, Hutter's implied
> > > definitions of "agent" and "environment" and "intelligence" are not
> > > connected to real world usages of those terms, because he allows all of
> > > these things to depend on infinities (infinitely capable agents,
> > > infinite numbers of possible universes, etc.).
> > >
> > > If he had used the terms "djshgd", "uioreou" and "astfdl" instead of
> > > "agent", "environment" and "intelligence", his analysis would have been
> > > fine, but he did not.  Having appropriated those terms he did not show
> > > why anyone should believe that his results applied in any way to the
> > > things in the real world that are called "agent" and "environment" and
> > > "intelligence".  As such, his conclusions were bankrupt.
> > >
> > > Having pointed this out for the benefit of others who may have been
> > > overly impressed by the Hutter paper, just because it looked like
> > > impressive maths, I have no interest in discussing this yet again.
> >
> > I suppose you will also dismiss any paper that mentions a Turing
> > machine as irrelevant to computer science because real computers
> > don't have infinite memory.
> >
> 
> Your assertions here do seem to have an interpretation in which they
> are correct, but it has little to nothing to do with practical
> matters.
> 
> For example, if 'intelligence' as defined by some obscure model is
> measured as I(x) = 1 - 1/x, where x depends on the particular design,
> and the model investigates the properties of an Ultimate Intelligence
> with I = 1, it doesn't follow that there is any point in building a
> system with x > 1000 if we already have one with x = 1000, since it
> would provide only a marginal improvement.  You can't get away with a
> qualitative conclusion like "and so, there is always a better
> mousetrap" without some quantitative reasons for it.
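
Taking that toy measure at face value, the diminishing returns are easy to
put numbers on.  A minimal sketch, assuming only your hypothetical
I(x) = 1 - 1/x (it is not any standard definition):

def I(x):
    # Vladimir's toy intelligence measure: approaches 1 as x grows
    return 1.0 - 1.0 / x

for x in (10, 1000, 10**6):
    print(x, I(x))
# 10      0.9
# 1000    0.999
# 1000000 0.999999  <- a thousandfold increase in x buys < 0.1% more I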

The problem here seems to be that we can't agree on a useful definition of
intelligence.  As a practical matter, we are interested in an agent meeting
goals in a specific environment, or a finite set of environments, not in all
possible environments.  For environments of bounded space and time
complexity, Hutter proved there is a computable (although intractable)
solution, AIXItl.  For a set of environments of bounded algorithmic
complexity where the goal is prediction, Legg proved
(http://www.vetta.org/documents/IDSIA-12-06-1.pdf) that a solution again
exists.  So in either case there is one agent that does at least as well as
any other over the finite set of environments, and thus an upper bound on
intelligence by these measures.
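
The finite case is easy to see in miniature.  Here is a toy sketch of why
an upper bound exists -- the agents and environments are stand-ins I made
up, nothing like the real AIXItl construction, which enumerates time- and
length-bounded programs:

# Toy version of the finite-environment argument: with finitely many
# agents and finitely many environments, some agent maximizes total
# reward, so intelligence measured this way has a maximum.

# Each "environment" pays reward 1 iff the action matches its hidden bit.
environments = [lambda a, b=b: 1.0 if a == b else 0.0 for b in (0, 1, 0, 0)]

# A finite pool of "agents": constant policies.
agents = {"always0": lambda: 0, "always1": lambda: 1}

def total_reward(agent):
    return sum(env(agent()) for env in environments)

# The argmax over a finite set always exists: that agent is the bound.
best = max(agents, key=lambda name: total_reward(agents[name]))
print(best, total_reward(agents[best]))  # always0 3.0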

If you prefer the Turing test to a more general test of intelligence, then
superhuman intelligence is not possible by Turing's definition, because he
did not define a test for it.  Humans cannot recognize intelligence superior
to their own.  For example, adult humans easily recognized superior
intelligence when William James Sidis (see
http://en.wikipedia.org/wiki/William_James_Sidis ) was reading newspapers at
18 months and was admitted to Harvard at age 11, but you would not expect
children his own age to have recognized it.  Likewise, when Sidis was an
adult, most people merely thought his behavior was strange, rather than
intelligent, because they did not understand it.

More generally, you cannot test for universal intelligence without
environments of at least the same algorithmic complexity as the agent being
tested, because otherwise (as Legg showed) simpler agents could pass the same
tests.
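
A toy version of that argument, assuming nothing but a fixed finite test
suite (the lookup-table agent below is my own illustration, not Legg's
construction):

# For any *fixed* test suite, an agent that is just the answer key
# passes every test.  Its algorithmic complexity is bounded by that of
# the suite itself, so passing the suite cannot certify an agent more
# complex than the suite.

tests = {"2+2": "4", "next: 1 2 3": "4", "reverse 'ab'": "ba"}

table_agent = tests.get   # the "agent" is only a lookup table

assert all(table_agent(q) == a for q, a in tests.items())
print("table agent passes all", len(tests), "tests")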


-- Matt Mahoney, [EMAIL PROTECTED]
