>> All rational goal-seeking agents must have a mental state of maximum utility 
>> where any thought or perception would be unpleasant because it would result 
>> in a different state.

I'd love to see you attempt to prove the above statement.

What if there are several states with utility equal to or very close to the 
maximum?  What if the utility of a state decreases the longer you remain in it 
(something that is *very* true of human beings)?  What if novelty raises the 
utility of any new state enough that there will always be states better than 
the current one (since experiencing novelty normally improves fitness through 
learning, etc.)?
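
To make that last point concrete, here is a toy sketch (every name and number 
below is invented purely for illustration) of a utility function with a 
novelty bonus and a boredom term.  With such a utility, no single state stays 
maximal for long:

# Toy illustration only: utility with a novelty bonus and a boredom
# (time-in-state) penalty.  All names and constants are made up.

def utility(state, base_value, visits, time_in_state,
            novelty_bonus=1.0, boredom_rate=0.1):
    """Utility of occupying `state` right now.

    base_value[state]                    -- intrinsic value of the state
    novelty_bonus / (1 + visits[state])  -- bonus for unfamiliar states
    boredom_rate * time_in_state         -- shrinks the longer we stay put
    """
    novelty = novelty_bonus / (1.0 + visits.get(state, 0))
    boredom = boredom_rate * time_in_state
    return base_value[state] + novelty - boredom

# As long as boredom_rate > 0, the utility of staying in any one state
# eventually falls below that of some alternative, so "park forever in a
# single maximum-utility state" is never the optimal policy.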

  ----- Original Message ----- 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Wednesday, August 27, 2008 10:52 AM
  Subject: AGI goals (was Re: Information theoretic approaches to AGI (was Re: 
[agi] The Necessity of Embodiment))


  An AGI will not design its goals. It is up to humans to define the goals of 
an AGI, so that it will do what we want it to do.

  Unfortunately, this is a problem. We may or may not be successful in 
programming the goals of AGI to satisfy human goals. If we are not successful, 
then AGI will be useless at best and dangerous at worst. If we are successful, 
then we are doomed because human goals evolved in a primitive environment to 
maximize reproductive success and not in an environment where advanced 
technology can give us whatever we want. AGI will allow us to connect our 
brains to simulated worlds with magic genies, or worse, allow us to directly 
reprogram our brains to alter our memories, goals, and thought processes. All 
rational goal-seeking agents must have a mental state of maximum utility where 
any thought or perception would be unpleasant because it would result in a 
different state.


  -- Matt Mahoney, [EMAIL PROTECTED]




  ----- Original Message ----
  From: Valentina Poletti <[EMAIL PROTECTED]>
  To: agi@v2.listbox.com
  Sent: Tuesday, August 26, 2008 11:34:56 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  Thanks very much for the info. I found those articles very interesting. 
Actually, though, this is not quite what I had in mind with the term 
information-theoretic approach. I wasn't very specific, my bad. What I am 
looking for is a theory behind the actual R itself. These approaches (correct 
me if I'm wrong) take an R-function for granted and work from that. In real 
life that is not the case, though. What I'm looking for is how the AGI will 
create that function. Because the AGI is created by humans, some sort of 
direction will be given by the humans creating it. What kind of direction, in 
mathematical terms, is my question. In other words, I'm looking for a way to 
mathematically define how the AGI will define its own goals.
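
  To put it concretely (just a rough sketch, and every name below is invented 
for illustration): those approaches assume a loop like the one here, where the 
reward function is handed to the agent from outside.  The step I am asking 
about is the one that would produce reward_fn in the first place.

# Sketch of the usual setup: the reward function R is supplied a priori.
# The Environment/Agent interfaces are hypothetical, for illustration only.

def run_episode(env, agent, reward_fn, steps=100):
    """Standard agent-environment loop with an externally given reward_fn."""
    obs = env.reset()
    for _ in range(steps):
        action = agent.act(obs)
        obs = env.step(action)
        agent.learn(obs, reward_fn(obs))   # R is taken for granted here

# The open question is what mathematics defines reward_fn itself, i.e.
# something like
#     reward_fn = derive_reward(human_intent, environment_model)
# rather than reward_fn being hard-coded by the designers.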

  Valentina

   
  On 8/23/08, Matt Mahoney <[EMAIL PROTECTED]> wrote: 
    Valentina Poletti <[EMAIL PROTECTED]> wrote:
    > I was wondering why no-one had brought up the information-theoretic 
aspect of this yet.

    It has been studied. For example, Hutter proved that the optimal strategy 
of a rational goal seeking agent in an unknown computable environment is AIXI: 
to guess that the environment is simulated by the shortest program consistent 
with observation so far [1]. Legg and Hutter also propose as a measure of 
universal intelligence the expected reward over a Solomonoff distribution of 
environments [2].
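
    In symbols, the measure in [2] is roughly the following, where K(\mu) is 
the Kolmogorov complexity of environment \mu and V^{\pi}_{\mu} is the expected 
total reward of policy \pi in environment \mu:

        \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}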

    These have profound impacts on AGI design. First, AIXI is (provably) not 
computable, which means there is no easy shortcut to AGI. Second, universal 
intelligence is not computable because it requires testing in an infinite 
number of environments. Since there is no other well-accepted test of 
intelligence above the human level, this casts doubt on the main premise of 
the singularity: that if humans can create agents with greater-than-human 
intelligence, then those agents can do the same in turn.

    Prediction is central to intelligence, as I argue in [3]. Legg proved in 
[4] that there is no elegant theory of prediction. Predicting all environments 
up to a given level of Kolmogorov complexity requires a predictor with at 
least the same level of complexity. Furthermore, above a small level of 
complexity, such predictors cannot be proven correct, because of Gödel 
incompleteness. Prediction must therefore be an experimental science.

    There is currently no software or mathematical model of non-evolutionary 
recursive self improvement, even for very restricted or simple definitions of 
intelligence. Without a model you don't have friendly AI; you have accelerated 
evolution with AIs competing for resources.

    References

    1. Hutter, Marcus (2003), "A Gentle Introduction to The Universal 
    Algorithmic Agent AIXI", in Artificial General Intelligence, B. Goertzel 
    and C. Pennachin, eds., Springer. 
    http://www.idsia.ch/~marcus/ai/aixigentle.htm

    2. Legg, Shane, and Marcus Hutter (2006), "A Formal Measure of Machine 
    Intelligence", Proc. Annual Machine Learning Conference of Belgium and 
    The Netherlands (Benelearn-2006), Ghent, 2006. 
    http://www.vetta.org/documents/ui_benelearn.pdf

    3. http://cs.fit.edu/~mmahoney/compression/rationale.html

    4. Legg, Shane (2006), "Is There an Elegant Universal Theory of 
    Prediction?", Technical Report IDSIA-12-06, IDSIA / USI-SUPSI, Dalle Molle 
    Institute for Artificial Intelligence, Galleria 2, 6928 Manno, Switzerland. 
    http://www.vetta.org/documents/IDSIA-12-06-1.pdf

    -- Matt Mahoney, [EMAIL PROTECTED]






  -- 
  A true friend stabs you in the front. - O. Wilde

  Einstein once thought he was wrong; then he discovered he was wrong.

  For every complex problem, there is an answer which is short, simple and 
wrong. - H.L. Mencken 
