Interesting, but if I've understood the Universal Intelligence paper correctly, 
it has three major flaws.

1) It seems to assume that intelligence is based on a rational, deterministic 
program - is that right? Adaptive intelligence, I would argue, definitely 
isn't. There is no single rational, correct way to approach the problems 
adaptive intelligence has to deal with. See the investing example below.

2) It assumes that intelligent agents maximise their rewards. Wrong. Except 
in extreme situations, you don't try to maximise your rewards when you invest 
in the stock market - or when you commit to any other course of action.

In the real world, you have to decide how to invest your time, energy and 
resources in tackling problematic decisions (like how to invest in the stock 
market). Those decisions carry rewards, risks and uncertainty. The higher the 
rewards, the higher the risks (not just of failure but of all kinds of 
danger). The lower the rewards, the lower the risks (and the greater the 
security). Agents vary enormously in how adventurous (maximising) or cautious 
(minimising) they are prepared to be - and they vary not just between 
individuals, but in "intra-individual" performance over time.
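
To make the contrast concrete, here's a toy Python sketch (entirely my own 
illustration, with invented numbers - nothing from the paper): two perfectly 
coherent agents face the same investment options and choose differently, 
because one maximises expected reward while the other trades reward off 
against risk.

    # Toy illustration (my own, invented numbers): two coherent agents,
    # same options, different attitudes to risk.
    options = {
        "speculative": {"mean": 0.15, "var": 0.20},  # high reward, high risk
        "index_fund":  {"mean": 0.07, "var": 0.02},
        "bonds":       {"mean": 0.03, "var": 0.01},  # low reward, high security
    }

    def maximiser(opts):
        # picks whatever promises the highest expected reward, risk ignored
        return max(opts, key=lambda name: opts[name]["mean"])

    def cautious(opts, risk_aversion=2.0):
        # mean-variance utility: penalises risk; a higher risk_aversion
        # means a more cautious agent
        return max(opts, key=lambda name: opts[name]["mean"]
                   - risk_aversion * opts[name]["var"])

    print(maximiser(options))  # speculative: 0.15 beats everything
    print(cautious(options))   # index_fund: 0.07 - 2 * 0.02 = 0.03 wins

Neither agent is being irrational; they simply weight risk differently, and a 
single "maximise reward" yardstick can't capture that.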

3) And finally, just to really screw up this search for intelligence 
definitions: any definition will be fundamentally ARBITRARY. There will always 
be conflicting ideals of what intelligent problem-solving involves.

Think of academic essays. Standard grading rubrics usually prefer essays where 
points are made with lengthy reasoning and corroboration in extended sentences. 
They often give comparative examples of lengthier and shorter structures, 
giving the former higher marks. It's perfectly legitimate to argue, though, 
that the shorter examples given are often pithier, more to the point and 
preferable. 

Ditto for stock market investing. One person may use a highly detailed chartist 
method to govern his decisions. Another may use a few simple but flexible rules 
of thumb. Who's to say which is more intelligent, especially if the latter gets 
better results?

This isn't to argue that we shouldn't use measures of intelligence - just that 
we should remember that they aren't and can't be definitive.

  ----- Original Message ----- 
  From: Shane Legg 
  To: [email protected] 
  Sent: Friday, April 27, 2007 6:54 PM
  Subject: Re: [agi] Circular definitions of intelligence



  Kaj,


    (Disclaimer: I do not claim to know the sort of maths that Ben and
    Hutter and others have used in defining intelligence. I'm fully aware 
    that I'm dabbling in areas that I have little education in, and might
    be making a complete fool of myself. Nonetheless...)

  I'm currently writing my PhD thesis, in which, at Hutter's 
  request, I am going to provide what should be an easy-to-understand
  explanation of AIXI and the universal intelligence measure.  Hopefully
  this will help make the subject more understandable to people outside
  the area of complexity theory.  I'll let this list know when it's out.
   


    The intelligence of a system is a function of the number of different
    arbitrary goals (functions that the system maximizes as it changes
    over time) it can carry out and the degree to which it can succeed in
    those different goals (how much it manages to maximize the functions 
    in question) in different environments as compared to other systems.

  This is essentially what Hutter and I do.  We measure the performance
  of the system in each environment (which includes the goal) and 
  then sum these scores up.  The only additional thing is that we weight them
  according to the complexity of each environment.  We use Kolmogorov
  complexity, but you could replace this with another complexity measure
  to get a computable intelligence measure.  See for example the work of 
  Hernandez (which I reference in my papers on this).  Once I've finished
  my thesis, one thing that I plan to do is to write a program to test the
  universal intelligence of agents.
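
  Roughly, in code, the weighted sum looks something like the sketch below
  (a quick illustration only, not an actual implementation; gzip-compressed
  length of an environment's description stands in for the incomputable
  Kolmogorov complexity, i.e. the kind of computable substitute just
  mentioned):

      import zlib

      def complexity(env_description: bytes) -> int:
          # stand-in for K(mu): simpler environments should compress
          # to fewer bytes
          return len(zlib.compress(env_description))

      def universal_intelligence(agent, environments):
          # environments: iterable of (description_bytes, run) pairs, where
          # run(agent) returns the agent's expected total reward in [0, 1]
          # in that environment (the environment includes the goal)
          return sum(2.0 ** -complexity(desc) * run(agent)
                     for desc, run in environments)

  Because of the 2^-complexity weighting, simple environments dominate the
  sum, so an agent can't look intelligent merely by excelling in a few
  exotic, highly complex environments.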




    This would disqualify a thermostat from being an intelligent system,
    since a thermostat only carries out one goal. 

  Not really, it just means that the thermostat has an intelligence of one
  on your scale.  I see no problem with this.  In my opinion the important 
  thing is that an intelligence measure orders things correctly.  For example,
  a thermostat should be more intelligent than a system that does nothing.
  A small machine learning algorithm should be smarter still, a mouse 
  smarter still, and so on...  




    Humans would be
    classified as relatively intelligent, since they can be given a wide 
    variety of goals to achieve. It also has the benefit of assigning
    narrow-AI systems a very low intelligence, which is what we want it to
    do.

  Agreed.
   
  If you want to read about the intelligence measure that I have developed 
  with Hutter, check out the following.  
  A summary set of talk slides:

  http://www.vetta.org/documents/Benelearn-UniversalIntelligence-Talk.pdf 

  Or for a longer paper:

  http://www.vetta.org/documents/ui_benelearn.pdf

  Unfortunately the full-length journal paper (50 pages) is still in review, so 
  I'm not sure when that will come out.  But my PhD thesis will contain this
  material and that should be ready in a few months' time.

  Cheers
  Shane


