Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread John LaMuth
Matt

Below is a sampling of my peer-reviewed conference presentations on my 
background ethical theory ...

This should elevate me above the common crackpot

#

Talks

  - Presentation of a paper at the ISSS 2000 (International Society for Systems 
    Sciences) Conference in Toronto, Canada, on various aspects of the new science 
    of Powerplay Politics.
  - Poster presentation at Toward a Science of Consciousness: TUCSON, April 8-12, 
    2002, Tucson Convention Center, Tucson, Arizona - sponsored by the Center for 
    Consciousness Studies, University of Arizona.
  - John presented a poster at the 8th International Tsukuba Bioethics Conference 
    at Tsukuba, Japan, Feb. 15-17, 2003.
  - John presented his paper, "The Communicational Factors Underlying the Mental 
    Disorders," at the 2006 Annual Conf. of the Western Psychological Association 
    at Palm Springs, CA.

Honors

  - Honors Diploma for Research in Biological Sciences (June 1977) - Univ. of 
    Calif. Irvine.
  - John is a member of the APA and the American Philosophical Association.

Publications

LaMuth, J. E. (1977). The Development of the Forebrain as an Elementary Function 
of the Parameters of Input Specificity and Phylogenetic Age. J. U-grad Rsch: 
Bio. Sci. U. C. Irvine (6): 274-294.

LaMuth, J. E. (2000). A Holistic Model of Ethical Behavior Based Upon a 
Metaperspectival Hierarchy of the Traditional Groupings of Virtue, Values, and 
Ideals. Proceedings of the 44th Annual World Congress of the Int. Society for 
the Systems Sciences, Toronto.

LaMuth, J. E. (2003). Inductive Inference Affective Language Analyzer 
Simulating AI. US Patent # 6,587,846.

LaMuth, J. E. (2004). Behavioral Foundations for the Behaviourome / Mind 
Mapping Project. Proceedings of the Eighth International Tsukuba Bioethics 
Roundtable, Tsukuba, Japan.

LaMuth, J. E. (2005). A Diagnostic Classification of the Emotions: A 
Three-Digit Coding System for Affective Language. Lucerne Valley: Fairhaven.

LaMuth, J. E. (2007). Inductive Inference Affective Language Analyzer 
Simulating Transitional AI. US Patent # 7,236,963.



**

Although I currently have no working model, I am collaborating on a working 
prototype.



I was responding to your challenge for "... an example of a mathematical, 
software, biological, or physical example of RSI, or at least a plausible 
argument that one could be created."



I feel I have proposed a plausible argument, and, considering the great stakes 
involved concerning ethical safeguards for AI, one worthy of critique ...



More on this in the last half of 
www.angelfire.com/rnb/fairhaven/specs.html 


John LaMuth

www.ethicalvalues.com 








  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Monday, August 25, 2008 7:30 AM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  John, I have looked at your patent and various web pages. You list a lot of 
nice sounding ethical terms (honor, love, hope, peace, etc) but give no details 
on how to implement them. You have already admitted that you have no 
experimental results, haven't actually built anything, and have no other 
results such as refereed conference or journal papers describing your system. 
If I am wrong about this, please let me know.


  -- Matt Mahoney, [EMAIL PROTECTED]




  - Original Message 
  From: John LaMuth [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Sunday, August 24, 2008 11:21:30 PM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)



  - Original Message - 
  From: Matt Mahoney [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Sunday, August 24, 2008 2:46 PM
  Subject: Re: Information theoretic approaches to AGI (was Re: [agi] The 
Necessity of Embodiment)


  I have challenged this list as well as the singularity and SL4 lists to come 
up with an example of a mathematical, software, biological, or physical example 
of RSI, or at least a plausible argument that one could be created, and nobody 
has. To qualify, an agent has to modify itself or create a more intelligent 
copy of itself according to an intelligence test chosen by the original. The 
following are not examples of RSI:
   
   1. Evolution of life, including humans.
   2. Emergence of language, culture, writing, communication technology, and 
computers.

   -- Matt Mahoney, [EMAIL PROTECTED]
   
  ###
  *

  Matt

  Where have you been for the last 2 months ??

  I had been talking then about my 2 US Patents for ethical/friendly AI
  along the lines of a recursive simulation targeting 

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Bob Mottram
2008/8/24 Mike Tintner [EMAIL PROTECTED]:
 Just a v. rough, first thought. An essential requirement of  an AGI is
 surely that it must be able to play - so how would you design a play machine
 - a machine that can play around as a child does?




Play may be about characterising the state space.  As an embodied
entity you need to know which areas of the space are relatively
predictable and which are not.  Armed with this knowledge, when
planning an action in the future you can make a reasonable estimate of
the possible range of outcomes or affordances, which may be very useful
in practical situations.

You'll notice that play tends to be directed towards activities with
high novelty.  With enough experience through play an unfamiliar or
novel situation can be decomposed into a set of more predictable
outcomes.  Eventually the novelty wears off because prediction matches
observation, and so the system moves on.  Finding new novel situations
to explore may involve the deliberate introduction of random or risky
(seemingly mal-adaptive) behavior.
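
A minimal sketch of that idea in Python (illustrative only - the toy
environment, the running-error "novelty" score, and the decay constant are my
assumptions, not anything described above): the agent keeps a crude forward
model, prefers actions whose outcomes it still predicts poorly, and loses
interest once prediction matches observation.

import random
from collections import defaultdict

class CuriousAgent:
    """Toy novelty-driven explorer: prefers actions whose outcomes it predicts poorly."""
    def __init__(self, actions):
        self.actions = actions
        self.predictions = {}                   # action -> last predicted outcome
        self.novelty = defaultdict(lambda: 1.0) # action -> running prediction error

    def choose(self):
        # Direct play toward the part of the state space with the highest novelty.
        return max(self.actions, key=lambda a: self.novelty[a])

    def observe(self, action, outcome):
        error = 0.0 if self.predictions.get(action) == outcome else 1.0
        # Novelty decays as prediction starts matching observation.
        self.novelty[action] = 0.9 * self.novelty[action] + 0.1 * error
        self.predictions[action] = outcome

def toy_environment(action):
    # Hypothetical world: "a" is deterministic, "b" stays noisy.
    return 1 if action == "a" else random.choice([0, 1])

agent = CuriousAgent(["a", "b"])
for _ in range(500):
    act = agent.choose()
    agent.observe(act, toy_environment(act))
print(dict(agent.novelty))  # "a" decays toward 0 (boring); "b" stays elevated (still unpredictable)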




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 8:09 AM, Terren Suydam [EMAIL PROTECTED] wrote:
 I know we've gotten a little off-track here from play, but the really
 interesting question I would pose to you non-embodied advocates is:
 how in the world will you motivate your creation?  I suppose that you
 won't. You'll just tell it what to do (specify its goals) and it will do it,
 because it has no autonomy at all. Am I guilty of anthropomorphizing
 if I say autonomy is important to intelligence?


This is fuzzy, mysterious and frustrating. Unless you *functionally*
explain what you mean by autonomy and embodiment, the conversation
degrades to a kind of meaningless philosophy that occupied some smart
people for thousands of years without any results.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Mike Tintner


Bob M: Play may be about characterising the state space.  As an embodied

entity you need to know which areas of the space are relatively
predictable and which are not.  Armed with this knowledge when
planning an action in future you can make a reasonable estimate of the
possible range of outcomes or affordances, which may be very useful in
practical situations. You'll notice that play tends to be directed 
towards activities with

high novelty.  With enough experience through play an unfamiliar or
novel situation can be decomposed into a set of more predictable
outcomes.


What I was particularly interested in asking you is the following: part of 
the condition of being human is that you have to explore not just the 
outside world, but your own body and brain. And in fact it's potentially 
endless, because the degrees of freedom and range of possibilities for both 
are vast. So there is room to never stop exploring and developing your golf 
swing, say, or working out new ways to dredge out well-buried memories and 
integrate them into new structures - for example, we can all develop a 
memory for dialogue, say, or for physical structures (incl. from the past). 
Clearly, play, along with development generally, is a part of 
self- (one's-own-system) exploration.


Now robots too have similarly vast, if not quite so vast, possibilities of 
movement and thought. So in principle it sounds like a good, and perhaps in 
the long term essential, idea to have them play and explore themselves as 
humans do. In principle, it would be a good idea for a pure AGI computer to 
explore its own vast possibilities/ways-of-thinking. Is anyone trying to 
design a self-exploring robot or computer? Does this principle have a name?







Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread David Hart
On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


Interestingly, some views on AI advocate specifically prohibiting
self-awareness and self-exploration as a precaution against the development
of unfriendly AI. In my opinion, these views erroneously transfer familiar
human motives onto 'alien' AGI cognitive architectures - there's a history
of discussing this topic  on SL4 and other places.

I believe however that most approaches to designing AGI (those that do not
specifically prohibit self-aware and self-explorative behaviors) take for
granted, and indeed intentionally promote, self-awareness and
self-exploration at most stages of AGI development. In other words,
efficient and effective recursive self-improvement (RSI) requires
self-awareness and self-exploration. If any term exists to describe a
'self-exploring robot or computer', that term is RSI. Coining a lesser term
for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
suspect that 'RSI' is ultimately a more useful and meaningful term.

-dave





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Mike Tintner

Terren:I know we've gotten a little off-track here from play, but the really

interesting question I would pose to you non-embodied advocates is:
how in the world will you motivate your creation?


Again, I think you're missing the most important aspect of having a body 
(is there a good definition of this? I think roboticists make some kind 
of deal of it). A body IS play, in a broad sense. It is first of all 
continuously *roving* - continuously moving, continuously thinking, 
*whether something is called for or not* (unlike machines, which only act to 
order). Frankly, the idea that a human or animal body and brain are 
programmed in an *extended* way - for a minute of continuous action, say, as 
opposed to short routines/habits tossed together - can't be taken seriously; 
we have a major problem concentrating, following a train of thought or 
sticking to a train of movement, for that long. Our mind is continuously 
going off at tangents. The plus side of that is that we are highly adaptable 
and flexible - very ready to get a new handle on things.


The second, still more important advantage of a body (the part, I think, 
that roboticists stress) is that it incorporates a vast range of 
possibilities which surely *do not have to be laboriously pre-specified* - 
vast ranges of possible movement and thought that can be playfully explored 
as required, rather than explicitly coded for beforehand. Start moving your 
hand around, twiddling your fingers independently and together, and twisting 
the whole unit, every which way. It's never-ending. And a good deal of it 
will be novel. So the basic general principle of learning any new movement, 
presumably, is: have a stab at it - stick your hand out at the object in a 
loosely appropriate shape, and then play around with your grip/handling - 
explore your body's range of possibilities. There's no beforehand.


Ditto, the brain has a vast capacity for ranges of free, *non-pre-specified* 
association - start thinking of - visualising - your screwdriver. Now think 
of similar *shapes*. You should find you can keep going for a good while - a 
stream of new, divergent associations, not convergently or algorithmically 
pre-arranged (as Kauffman insists). The brain is designed for free, 
unprogrammed association in a way that computers clearly haven't been - or 
haven't been to date. It can freely handle and play with ideas as the hand 
can objects.


God/Evolution clearly looked at Matt's bill for an army of programmers to 
develop an AGI, and decided He couldn't afford it - he'd try something 
simpler and more ingenious. Play around first, program routines second, 
develop culture and AI third.


P.S. The whole concept of an unembodied intelligence is a nonsense. There 
is *no such thing*.  The real distinction, presumably, is between embodied 
intelligences that can control their bodies, like humans, and those, like 
computers to date, that can't (or barely). Unembodied intelligences don't 
and *can't* exist.


*Self-control* - being able to control your body - is perhaps the most vital 
dimension of having a body in the sense of the standard debate. Without 
that, you can't understand the distinction between inert matter and life - 
one of the most fundamental early distinctions in understanding the world. 
Without that, I doubt that you can really understand anything.








Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
Note that in this view play has nothing to do with having a body.  An AGI
concerned solely with mathematical theorem proving would also be able to
play...

On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


 About play... I would argue that it emerges in any sufficiently
 generally-intelligent system
 that is faced with goals that are difficult for it ... as a consequence of
 other general cognitive
 processes...

 If an intelligent system has a goal G which is time-consuming or difficult
 to achieve ...

 it may then synthesize another goal G1 which is easier to achieve

 We then have the uncertain syllogism

 Achieving G implies reward
 G1 is similar to G
 |-
 Achieving G1 implies reward

 As links between goal-achievement and reward are to some extent modified by
 uncertain
 inference (or analogous process, implemented e.g. in neural nets), we thus
 have the
 emergence of play ... in cases where G1 is much easier to achieve than G
 ...

 Of course, if working toward G1 is actually good practice for working
 toward G, this may give the intelligent
 system (if it's smart and mature enough to strategize) or evolution impetus
 to create
 additional bias toward the pursuit of G1

 In this view, play is a quite general structural phenomenon ... and the
 play that human kids do with blocks and sticks and so forth is a special
 case, oriented toward ultimate goals G involving physical manipulation

 And the knack in gaining anything from play is in appropriate
 similarity-assessment ... i.e. in measuring similarity between G and G1 in
 such a way that achieving G1 actually teaches things useful for achieving G

 So for any goal-achieving system that has long-term goals which it can't
 currently effectively work directly toward, play may be an effective
 strategy...

 In this view, we don't really need to design an AI system with play in
 mind.  Rather, if it can explicitly or implicitly carry out the above
 inference, concept-creation and subgoaling processes, play should emerge
 from its interaction w/ the world...

 ben g



 On Tue, Aug 26, 2008 at 8:20 AM, David Hart [EMAIL PROTECTED] wrote:

 On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


 Interestingly, some views on AI advocate specifically prohibiting
 self-awareness and self-exploration as a precaution against the development
 of unfriendly AI. In my opinion, these views erroneously transfer familiar
 human motives onto 'alien' AGI cognitive architectures - there's a history
 of discussing this topic  on SL4 and other places.

 I believe however that most approaches to designing AGI (those that do not
 specifically prohibit self-aware and self-explorative behaviors) take for
 granted, and indeed intentionally promote, self-awareness and
 self-exploration at most stages of AGI development. In other words,
 efficient and effective recursive self-improvement (RSI) requires
 self-awareness and self-exploration. If any term exists to describe a
 'self-exploring robot or computer', that term is RSI. Coining a lesser term
 for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
 suspect that 'RSI' is ultimately a more useful and meaningful term.

 -dave




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome  - Dr Samuel Johnson





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
About play... I would argue that it emerges in any sufficiently
generally-intelligent system
that is faced with goals that are difficult for it ... as a consequence of
other general cognitive
processes...

If an intelligent system has a goal G which is time-consuming or difficult
to achieve ...

it may then synthesize another goal G1 which is easier to achieve

We then have the uncertain syllogism

Achieving G implies reward
G1 is similar to G
|-
Achieving G1 implies reward

As links between goal-achievement and reward are to some extent modified by
uncertain
inference (or analogous process, implemented e.g. in neural nets), we thus
have the
emergence of play ... in cases where G1 is much easier to achieve than G
...

Of course, if working toward G1 is actually good practice for working toward
G, this may give the intelligent
system (if it's smart and mature enough to strategize) or evolution impetus
to create
additional bias toward the pursuit of G1

In this view, play is a quite general structural phenomenon ... and the play
that human kids do with blocks and sticks and so forth is a special case,
oriented toward ultimate goals G involving physical manipulation

And the knack in gaining anything from play is in appropriate
similarity-assessment ... i.e. in measuring similarity between G and G1 in
such a way that achieving G1 actually teaches things useful for achieving G

So for any goal-achieving system that has long-term goals which it can't
currently effectively work directly toward, play may be an effective
strategy...

In this view, we don't really need to design an AI system with play in
mind.  Rather, if it can explicitly or implicitly carry out the above
inference, concept-creation and subgoaling processes, play should emerge
from its interaction w/ the world...

ben g
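
A toy numerical rendering of that syllogism (my illustration only - the
reward, difficulty and similarity numbers are made up, and "similarity" is
just a scalar stand-in for the similarity-assessment discussed above): the
expected reward of the surrogate goal G1 is discounted by its similarity to
G, and play wins whenever that discounted reward per unit effort beats
working on G directly.

def direct_value(reward, difficulty):
    """Expected reward per unit effort for pursuing goal G directly."""
    return reward / difficulty

def play_value(reward_G, similarity, difficulty_G1):
    """Uncertain syllogism: achieving G implies reward; G1 is similar to G;
    therefore achieving G1 implies (similarity-discounted) reward."""
    return (reward_G * similarity) / difficulty_G1

# Made-up numbers: G = a hard long-term goal, G1 = an easy, similar surrogate.
reward_G, difficulty_G, difficulty_G1, similarity = 100.0, 1000.0, 5.0, 0.2

if play_value(reward_G, similarity, difficulty_G1) > direct_value(reward_G, difficulty_G):
    print("pursue G1 for a while")   # this is the emergence of play
else:
    print("keep working on G directly")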


On Tue, Aug 26, 2008 at 8:20 AM, David Hart [EMAIL PROTECTED] wrote:

 On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Is anyone trying to design a self-exploring robot or computer? Does this
 principle have a name?


 Interestingly, some views on AI advocate specifically prohibiting
 self-awareness and self-exploration as a precaution against the development
 of unfriendly AI. In my opinion, these views erroneously transfer familiar
 human motives onto 'alien' AGI cognitive architectures - there's a history
 of discussing this topic  on SL4 and other places.

 I believe however that most approaches to designing AGI (those that do not
 specifically prohibit self-aware and self-explorative behaviors) take for
 granted, and indeed intentionally promote, self-awareness and
 self-exploration at most stages of AGI development. In other words,
 efficient and effective recursive self-improvement (RSI) requires
 self-awareness and self-exploration. If any term exists to describe a
 'self-exploring robot or computer', that term is RSI. Coining a lesser term
 for 'self-exploring AI' may be useful in some proto-AGI contexts, but I
 suspect that 'RSI' is ultimately a more useful and meaningful term.

 -dave




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
Examples of the kind of similarity I'm thinking of:

-- The analogy btw chess or go and military strategy

-- The analogy btw roughhousing and actual fighting

In logical terms, these are intensional rather than extensional similarities

ben
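
One crude way to cash that distinction out in code (my own toy formulation -
the instance and property sets below are invented for illustration):
extensional similarity compares the instances falling under two concepts,
intensional similarity compares their properties. Chess and war share
essentially no instances but many properties.

def jaccard(a, b):
    """Overlap between two sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical feature/instance sets, for illustration only.
chess = {
    "instances": {"kasparov_vs_deep_blue", "club_game_17"},
    "properties": {"adversarial", "planning", "deception", "resource_tradeoffs", "board"},
}
war = {
    "instances": {"battle_of_cannae", "operation_overlord"},
    "properties": {"adversarial", "planning", "deception", "resource_tradeoffs", "logistics"},
}

print(jaccard(chess["instances"], war["instances"]))    # extensional similarity: 0.0
print(jaccard(chess["properties"], war["properties"]))  # intensional similarity: 4/6, about 0.67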

On Tue, Aug 26, 2008 at 9:38 AM, Mike Tintner [EMAIL PROTECTED]wrote:

  Ben:If an intelligent system has a goal G which is time-consuming or
 difficult to achieve ...
 it may then synthesize another goal G1 which is easier to achieve
 We then have the uncertain syllogism

 Achieving G implies reward
 G1 is similar to G

 Ben,

 The be-all and end-all here though, I presume is similarity. Is it a
 logic-al concept?  Finding similarities - rough likenesses as opposed to
 rational, precise, logicomathematical commonalities - is actually, I would
 argue, a process of imagination and (though I can't find a ready term)
 physical/embodied improvisation. Hence rational, logical, computing
 approaches have failed to produce any new (in the normal sense of
 surprising)  metaphors or analogies or be creative.

 Maybe you could give an example of what you mean by similarity





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 7:53 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Or take any number of ethical dilemmas, in which it's ok to steal food if it's
 to feed your kids. Or killing ten people to save twenty. etc. How do you 
 define
 Friendliness in these circumstances? Depends on the context.


Friendliness is not object-level output of AI, not individual
decisions that it makes in certain contexts. Friendliness is a
conceptual dynamics that is embodied by AI, underlying any specific
decisions. And likewise Friendliness is derived not from individual
actions of humans, but from underlying dynamics imperfectly
implemented in humans, which in turn doesn't equate with
implementation of humans, but is an aspect of this implementation
which we can roughly refer to.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Russell Wallace
On Tue, Aug 26, 2008 at 2:38 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 The be-all and end-all here though, I presume is similarity. Is it a
 logic-al concept?  Finding similarities - rough likenesses as opposed to
 rational, precise, logicomathematical commonalities - is actually, I would
 argue, a process of imagination and (though I can't find a ready term)
 physical/embodied improvisation. Hence rational, logical, computing
 approaches have failed to produce any new (in the normal sense of
 surprising)  metaphors or analogies or be creative.

 Maybe you could give an example of what you mean by similarity

See AM, Eurisko, Copycat.




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Valentina Poletti
Thanks very much for the info. I found those articles very interesting.
Actually, though, this is not quite what I had in mind with the term
information-theoretic approach. I wasn't very specific, my bad. What I am
looking for is a theory behind the actual R itself. These approaches
(correct me if I'm wrong) take an R-function for granted and work from
that. In real life that is not the case though. What I'm looking for is how
the AGI will create that function. Because the AGI is created by humans,
some sort of direction will be given by the humans creating them. What kind
of direction, in mathematical terms, is my question. In other words I'm
looking for a way to mathematically define how the AGI will mathematically
define its goals.

Valentina
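
One hedged way to state the question in code (purely my own framing, with
placeholder names - none of this comes from the papers Matt cites below): the
standard setting takes the reward function R as given, whereas the open
problem here is the outer mapping that produces R from human-supplied
direction plus the agent's experience.

from typing import Callable, List

State, Action = str, str
RewardFn = Callable[[State, Action], float]

def meta_goal_definition(human_direction: dict, experience: List) -> RewardFn:
    """The object in question: a mathematically specified map from human-given
    direction (plus experience) to a concrete reward function R.
    The body below is a trivial placeholder - the open problem is what belongs here."""
    def R(state: State, action: Action) -> float:
        # e.g. weight actions by human-supplied values, to be refined by experience
        return human_direction.get(action, 0.0)
    return R

# Usage: the resulting R is what AIXI-style frameworks then take as given.
R = meta_goal_definition({"help": 1.0, "harm": -1.0}, experience=[])
print(R("some_state", "help"))  # 1.0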


On 8/23/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Valentina Poletti [EMAIL PROTECTED] wrote:
  I was wondering why no-one had brought up the information-theoretic
 aspect of this yet.

 It has been studied. For example, Hutter proved that the optimal strategy
 of a rational goal seeking agent in an unknown computable environment is
 AIXI: to guess that the environment is simulated by the shortest program
 consistent with observation so far [1]. Legg and Hutter also propose as a
 measure of universal intelligence the expected reward over a Solomonoff
 distribution of environments [2].

 These have profound impacts on AGI design. First, AIXI is (provably) not
 computable, which means there is no easy shortcut to AGI. Second, universal
 intelligence is not computable because it requires testing in an infinite
 number of environments. Since there is no other well accepted test of
 intelligence above human level, it casts doubt on the main premise of the
 singularity: that if humans can create agents with greater than human
 intelligence, then so can they.

 Prediction is central to intelligence, as I argue in [3]. Legg proved in
 [4] that there is no elegant theory of prediction. Predicting all
 environments up to a given level of Kolmogorov complexity requires a
 predictor with at least the same level of complexity. Furthermore, above a
 small level of complexity, such predictors cannot be proven because of Godel
 incompleteness. Prediction must therefore be an experimental science.

 There is currently no software or mathematical model of non-evolutionary
 recursive self improvement, even for very restricted or simple definitions
 of intelligence. Without a model you don't have friendly AI; you have
 accelerated evolution with AIs competing for resources.

 References

 1. Hutter, Marcus (2003), A Gentle Introduction to The Universal
 Algorithmic Agent {AIXI},
 in Artificial General Intelligence, B. Goertzel and C. Pennachin eds.,
 Springer. http://www.idsia.ch/~marcus/ai/aixigentle.htm

 2. Legg, Shane, and Marcus Hutter (2006),
 A Formal Measure of Machine Intelligence, Proc. Annual machine
 learning conference of Belgium and The Netherlands (Benelearn-2006).
 Ghent, 2006.  http://www.vetta.org/documents/ui_benelearn.pdf

 3. http://cs.fit.edu/~mmahoney/compression/rationale.html

 4. Legg, Shane, (2006), Is There an Elegant Universal Theory of
 Prediction?,
 Technical Report IDSIA-12-06, IDSIA / USI-SUPSI,
 Dalle Molle Institute for Artificial Intelligence, Galleria 2, 6928 Manno,
 Switzerland.
 http://www.vetta.org/documents/IDSIA-12-06-1.pdf

 -- Matt Mahoney, [EMAIL PROTECTED]






-- 
A true friend stabs you in the front. - O. Wilde

Einstein once thought he was wrong; then he discovered he was wrong.

For every complex problem, there is an answer which is short, simple and
wrong. - H.L. Mencken





Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Terren Suydam

That's a fair criticism. I did explain what I mean by embodiment in a previous 
post, and what I mean by autonomy in the article of mine I referenced. But I do 
recognize that in both cases there is still some ambiguity, so I will withdraw 
the question until I can formulate it in more concise terms. 

Terren

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 
 This is fuzzy, mysterious and frustrating. Unless you
 *functionally*
 explain what you mean by autonomy and embodiment, the
 conversation
 degrades to a kind of meaningless philosophy that occupied
 some smart
 people for thousands of years without any results.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

Are you saying Friendliness is not context-dependent?  I guess I'm struggling 
to understand what a conceptual dynamics would mean that isn't dependent on 
context. The AGI has to act, and at the end of the day, its actions are our 
only true measure of its Friendliness. So I'm not sure what it could mean to 
say that Friendliness isn't expressed in individual decisions.

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 
 Friendliness is not object-level output of AI, not
 individual
 decisions that it makes in certain contexts. Friendliness
 is a
 conceptual dynamics that is embodied by AI, underlying any
 specific
 decisions. And likewise Friendliness is derived not from
 individual
 actions of humans, but from underlying dynamics imperfectly
 implemented in humans, which in turn doesn't equate
 with
 implementation of humans, but is an aspect of this
 implementation
 which we can roughly refer to.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Vladimir Nesov
On Mon, Aug 25, 2008 at 11:09 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 --- On Sun, 8/24/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Sun, Aug 24, 2008 at 5:51 PM, Terren Suydam
 What is the point of building general intelligence if all
 it does is
 takes the future from us and wastes it on whatever happens
 to act as
 its goal?

 Indeed. Personally, I have no desire to build anything smarter
 than humans. That's a deal with the devil, so to speak, and one
 I believe most ordinary folks would be afraid to endorse, especially
 if they were made aware of the risks. The Singularity is not an
 inevitability, if we demand approaches that are safe in principle.
 And self-modifying approaches are not safe, assuming that
 they could work.

But what is safe, and how do we improve safety? This is a complex goal
for a complex environment, and naturally any solution to this goal is
going to be very intelligent. Arbitrary intelligence is not safe
(fatal, really), but what is safe is also intelligent.


 I'm all for limiting the intelligence of our creations before they ever get
 to the point that they can build their own or modify themselves. I'm against
 self-modifying approaches, largely because I don't believe it's possible
 to constrain their actions in the way Eliezer hopes. Iterative, recursive
 processes are generally emergent and unpredictable (the interesting ones,
 anyway). Not sure what kind of guarantees you could make for such systems
 in light of such emergent unpredictability.

There is no law that makes large computations less lawful than small
computations, if it is in the nature of computation to preserve
certain invariants. A computation that multiplies two huge numbers
isn't inherently more unpredictable than computation that multiplies
two small numbers. If device A is worse than device B at carrying out
action X, device A is worse for the job, period. The fact that you
call device A more intelligent than B is irrelevant. Being a more
complicated computation is a consequence, not the cause, of being
*better* at carrying out the task. You don't build *a* more
intelligent machine, hope that it will be better, but find out that
it's actually very good at being fatal. Instead, you build a machine
that will be better, and as a side effect it turns out to be more
intelligent, or more complicated.

Likewise, self-modification is not an end in itself, but a means to
implement the complexity and efficiency required for better
performance. The complexity that gets accumulated this way is not
accidental; it doesn't make the AI less reliable, because it's being
implemented precisely for the purpose of making the AI better, and if it's
expected to make it worse, then it's not done. You have an intuitive
expectation that making Z will make the AI uncontrollable, which will lead
to a bad outcome, and so you point out that a design that suggests
doing Z will turn out badly. But the answer is that the AI itself will check
whether Z is expected to lead to a good outcome before making a
decision to implement Z.
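
A minimal sketch of that gating step (my own illustration, with a scalar
"expected outcome" score and a fixed threshold standing in for the AI's real
evaluation - nothing here is from the text above):

def expected_outcome(modification, world_model) -> float:
    """Stub evaluation of a candidate self-modification Z under current values/model.
    A real system would run actual inference here."""
    return world_model.get(modification, -1.0)  # unknown changes default to 'bad'

def maybe_self_modify(modification, world_model, threshold=0.0):
    # The gate: only implement Z if it is expected to lead to a good outcome.
    if expected_outcome(modification, world_model) > threshold:
        return "implement: " + modification
    return "reject: " + modification

world_model = {"add faster planner": 0.8, "remove value checks": -0.9}
print(maybe_self_modify("add faster planner", world_model))   # implemented
print(maybe_self_modify("remove value checks", world_model))  # rejected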


 I don't deny the possibility of disaster. But my stance is, if the only 
 approach
 you have to mitigate disaster is being able to control the AI itself, well, 
 the
 game is over before you even start it. It seems profoundly naive to me that
 anyone could, even in principle, guarantee a super-intelligent AI to 
 renormalize,
 in whatever sense that means. Then you have the difference between theory
 and practice... just forget it. Why would anyone want to gamble on that?


This remark makes my earlier note - that the field of AI actually did
accomplish something over the last 50 years - not so minor. Again you make
an argument from ignorance: I do not know how to do it, nobody knows how to
do it, therefore it cannot be done. Argue from knowledge, not from
ignorance. If you know the path, follow it, describe it. If you know
that the path has a certain property, show it. If you know that a
class of algorithms doesn't find a path, say that these algorithms
won't give the answer. But if you are lost, if your map is blank,
don't assert that the territory is blank also, for you don't know.


 (answering to the article)

 Intelligence was created by a blind idiot evolutionary
 process that
 has no foresight and no intelligence. Of course it can be
 designed.
 Intelligence is all that evolution is, but immensely
 faster, better
 and flexible.

 In certain domains, this is true (and AI has historically been about
 limiting research to those domains). But intelligence, as we know it,
 is limited in ways that evolution is not. Intelligence is limited to reasoning
 about causality, a causality we structure by modeling the world around us
 in such a way that we can predict it. Models, however, are not perfect.
 Evolution does not suffer from this limitation, because as you say, it has
 no intelligence. Whatever works, works.

Human intelligence is limited, and indeed this argument might be
valid, for example chimps are somewhat intelligent, immensely

Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Mike Tintner
Valentina: In other words I'm looking for a way to mathematically define how the 
AGI will mathematically define its goals.

Holy Non-Existent Grail? Has any new branch of logic or mathematics ever been 
logically or mathematically (axiomatically) derivable from any old one? E.g. 
topology, Riemannian geometry, complexity theory, fractals, free-form 
deformation, etc.




Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Terren Suydam

I don't think it's necessary to be self-aware to do self-modifications. 
Self-awareness implies that the entity has a model of the world that separates 
self from other, but this kind of distinction is not necessary to do 
self-modifications. It could act on itself without the awareness that it was 
acting on itself.  (Goedelian machines would qualify, imo).

The reverse is true, as well. Humans are self aware but we cannot improve 
ourselves in the dangerous ways we talk about with the hard-takeoff scenarios 
of the Singularity. We ought to be worried about self-modifying agents, yes, 
but self-aware agents that can't modify themselves are much less worrying. 
They're all around us.

--- On Tue, 8/26/08, David Hart [EMAIL PROTECTED] wrote:
From: David Hart [EMAIL PROTECTED]
Subject: Re: [agi] How Would You Design a Play Machine?
To: agi@v2.listbox.com
Date: Tuesday, August 26, 2008, 8:20 AM

On 8/26/08, Mike Tintner [EMAIL PROTECTED] wrote:
Is anyone trying to design a self-exploring robot or computer? Does this 
principle have a name?
Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures - there's a history of 
discussing this topic  on SL4 and other places.


I believe however that most approaches to designing AGI (those that do not 
specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.


-dave





  

  


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 8:05 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 Are you saying Friendliness is not context-dependent?  I guess I'm
 struggling to understand what a conceptual dynamics would mean
 that isn't dependent on context. The AGI has to act, and at the end of the
 day, its actions are our only true measure of its Friendliness. So I'm not
 sure what it could mean to say that Friendliness isn't expressed in
 individual decisions.

It is expressed in individual decisions, but it isn't these decisions
themselves. If a decision is context-dependent, it doesn't translate
into Friendliness being context-dependent (what would it even mean?).
Friendliness is an algorithm implemented in a calculator (or an
algorithm for assembling a calculator), it is not the digits that show
on its display depending on what buttons were pressed. On the other
hand, the initial implementation of Friendliness leads to very
different dynamics, depending on what sort of morality it is referred
to (see http://www.overcomingbias.com/2008/08/mirrors-and-pai.html ).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

If Friendliness is an algorithm, it ought to be a simple matter to express what 
the goal of the algorithm is. How would you define Friendliness, Vlad?

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 It is expressed in individual decisions, but it isn't
 these decisions
 themselves. If a decision is context-dependent, it
 doesn't translate
 into Friendliness being context-dependent (what would it
 even mean?).
 Friendliness is an algorithm implemented in a calculator
 (or an
 algorithm for assembling a calculator), it is not the
 digits that show
 on its display depending on what buttons were pressed. On
 the other
 hand, the initial implementation of Friendliness leads to
 very
 different dynamics, depending on what sort of morality it
 is referred
 to (see
 http://www.overcomingbias.com/2008/08/mirrors-and-pai.html
 ).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 8:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 If Friendliness is an algorithm, it ought to be a simple matter to express
 what the goal of the algorithm is. How would you define Friendliness, Vlad?


The algorithm doesn't need to be simple. An actual Friendly AI that has
started to incorporate properties of human morality into itself is a very
complex algorithm, and so is human morality itself. The original
implementation of Friendly AI won't be too complex though; it'll only
need to refer to the complexity outside in the right way, so that it'll
converge on a dynamic with the right properties. Figuring out what this
original algorithm needs to be, not counting the technical
difficulties of implementing it, is very tricky though. You start from
the question "what is the right thing to do?" applied in the context
of unlimited optimization power, and work on extracting a technical
answer, surfacing the layers of hidden machinery that underlie this
question when *you* think about it, translating the question into a
piece of engineering that answers it - and this is Friendly AI.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

I didn't say the algorithm needs to be simple, I said the goal of the algorithm 
ought to be simple. What are you trying to compute? 

Your answer is, "what is the right thing to do?"

The obvious next question is, what does "the right thing" mean?  The only way 
that the answer to that is not context-dependent is if there's such a thing as 
objective morality, something you've already dismissed by referring to the 
"there are no universally compelling arguments" post on the Overcoming Bias 
blog.

You have to concede here that Friendliness is not objective. Therefore, it 
cannot be expressed formally. It can only be approximated, with error. 


--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 From: Vladimir Nesov [EMAIL PROTECTED]
 Subject: Re: [agi] The Necessity of Embodiment
 To: agi@v2.listbox.com
 Date: Tuesday, August 26, 2008, 1:21 PM
 On Tue, Aug 26, 2008 at 8:54 PM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  If Friendliness is an algorithm, it ought to be a
 simple matter to express
  what the goal of the algorithm is. How would you
 define Friendliness, Vlad?
 
 
 Algorithm doesn't need to be simple. The actual
 Friendly AI that
 started to incorporate properties of human morality in it
 is a very
 complex algorithm, and so is the human morality itself.
 Original
 implementation of Friendly AI won't be too complex
 though, it'll only
 need to refer to the complexity outside in a right way, so
 that it'll
 converge on dynamic with the right properties. Figuring out
 what this
 original algorithm needs to be, not to count the technical
 difficulties of implementing it, is very tricky though. You
 start from
 the question what is the right thing to do?
 applied in the context
 of unlimited optimization power, and work on extracting a
 technical
 answer, surfacing the layers of hidden machinery that
 underlie this
 question when *you* think about it, translating the
 question into a
 piece of engineering that answers it, and this is Friendly
 AI.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


  




Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 9:54 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 I didn't say the algorithm needs to be simple, I said the goal of
 the algorithm ought to be simple. What are you trying to compute?

 Your answer is, what is the right thing to do?

 The obvious next question is, what does the right thing mean?

This is a part where you begin answering that question.


 The only way that the answer to that is not context-dependent is
 if there's such a thing as objective morality, something you've already
 dismissed by referring to the there are no universally compelling
 arguments post on the Overcoming Bias blog.

 You have to concede here that Friendliness is not objective.
 Therefore, it cannot be expressed formally. It can only be approximated,
 with error.

The question itself doesn't exist in a vacuum. When *you*, as a human,
ask it, there is a very specific meaning associated with it. You don't
search for the meaning that the utterance would call up in a
mind-in-general; you search for the meaning that *you* give to it. Or, to
make it more reliable, for the meaning given by the idealized
dynamics implemented in you (
http://www.overcomingbias.com/2008/08/computations.html ).

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Abram Demski
Mike,

The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.

--Abram Demski
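
For concreteness, the standard encodings behind that claim, in LaTeX
(textbook material, nothing specific to this thread):

\[ (a,b) := \{\{a\},\{a,b\}\} \quad \text{(Kuratowski ordered pair)} \]
\[ 0 := \varnothing, \qquad n+1 := n \cup \{n\} \quad \text{(von Neumann naturals)} \]
\[ \text{a topology on } X \text{ is a family } \tau \subseteq \mathcal{P}(X)
   \text{ closed under arbitrary unions and finite intersections} \]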

On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Valentina:In other words I'm looking for a way to mathematically define how
 the AGI will mathematically define its goals.

 Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
 been logically or mathematically (axiomatically) derivable from any old
 one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
 free-form deformation  etc etc
 




Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Mike Tintner

Abram,

Thanks for the reply. This is presumably after the fact - can set theory 
predict new branches? Which branch of maths was set theory derivable from? I 
suspect that's rather like trying to derive any numeral system from a 
previous one. Or like trying to derive any programming language from a 
previous one - or any system of logical notation from a previous one.



Mike,

The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.

--Abram Demski

On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner [EMAIL PROTECTED] 
wrote:
Valentina:In other words I'm looking for a way to mathematically define 
how

the AGI will mathematically define its goals.

Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
been logically or mathematically (axiomatically) derivable from any old
one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
free-form deformation  etc etc












[agi] Re: Information t..PS

2008-08-26 Thread Mike Tintner

Abram,

I suspect what it comes down to - I'm tossing this out off-the-cuff - is 
that each new branch of maths involves new rules, new operations on numbers 
and figures, and new ways of relating the numbers and figures to real 
objects and sometimes new signs, period. And they aren't predictable or 
derivable from previous ones. Set theory is ultimately a v. useful 
convention, not an absolute necessity?


Perhaps this overlaps with our previous discussion, which could perhaps be 
reduced to: is there a universal learning program - an AGI that can learn 
any skill? That can perhaps be formalised as: is there a program that can 
learn any program - a set of rules for learning any set of rules? I doubt 
it. Especially if, as we see with the relatively simple logic discussions on 
this forum, people can't agree on which rules/conventions/systems to apply, 
i.e. there are no definitive rules.


All this can perhaps be formalised neatly, near geometrically. (I'm still 
groping, you understand.) If we think of a screen of pixels - can all the 
visual games or branches of maths or art that can be expressed on that 
screen - mazes/maze-running/2D geometry/3D geometry/Riemannian geometry/
abstract art/chess/go, etc. - be united under, or derived from, a common set 
of metarules?


It should be fairly easy :) for an up-and-coming maths star like you to 
prove the obvious - that it isn't possible. Kauffman was looking for 
something like this. It's equivalent, it seems to me, to proving that you 
cannot derive any stage of evolution of matter or life from the previous 
one - that the world is fundamentally creative - that there are always new 
ways and new rules to join up the dots.



Mike,

The answer here is a yes. Many new branches of mathematics have arisen
since the formalization of set theory, but most of them can be
interpreted as special branches of set theory. Moreover,
mathematicians often find this to be actually useful, not merely a
curiosity.

--Abram Demski

On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner [EMAIL PROTECTED] 
wrote:
Valentina:In other words I'm looking for a way to mathematically define 
how

the AGI will mathematically define its goals.

Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
been logically or mathematically (axiomatically) derivable from any old
one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
free-form deformation  etc etc












Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Valentina Poletti
Vlad, Terren and all,

by reading your interesting discussion, this saying popped in my mind..
admittedly it has little to do with AGI but you might get the point anyhow:

An old lady used to walk down a street everyday, and on a tree by that
street a bird sang beautifully; the sound made her happy and cheerful, and
she was very thankful for that. One day she decided to catch the bird and
place it into a cage, so she could always have it singing for her.
Unfortunately for her, the bird got sad in the cage and stopped singing...
thus taking away her cheer as well.

Well, the story has a different purpose, but one can see a moral that
connects to this argument. Control is an illusion. It takes away the very
nature of what we are trying to control.

My point is that by following Eliezer's approach we might never get an AGI.
Intelligence, as I defined it, is the ability to reach goals, and those goals
(as Terren pointed out) must somehow have to do with self-preservation if the
system itself is not at equilibrium.
Not long ago, I was implementing the model for a bio-physics research in New
York based on non-linear systems far from equilibrium and this became pretty
evident to me. If any system is kept far from equilibrium, it has the
tendency to form self-preserving entities, given enough time.

I also have a nice definition of friendliness: firstly note that we have
goals both as individuals and as a species (these goals are inherent in the
species and come from self-preservation -see above). The goals of the
individuals and species must somehow match, i.e. not be in significant
conflict, lest the individual be considered a criminal, harmful. By
friendliness I mean creating an AGI which will follow the goals of the
human species rather than its own goals.

Valentina





Re: Information theoretic approaches to AGI (was Re: [agi] The Necessity of Embodiment)

2008-08-26 Thread Abram Demski
Mike,

That may be the case, but I do not think it is relevant to Valentina's
point. How can we mathematically define how an AGI might
mathematically define its own goals? Well, that question assumes 3
things:

-An AGI defines its own goals
-In doing so, it phrases them in mathematical language
-It is possible to mathematically define the way in which it does this

I think you are questioning assumptions 2 and 3? If so, I do not think
that the theory needs to be able to do what you are saying it cannot:
it does not need to be able to generate new branches of mathematics
from itself before-the-fact. Rather, its ability to generate new
branches (or, in our case, goals) can and should depend on the
information coming in from the environment.

Whether such a logic really exists, though, is a different question.
Before we can choose which goals we should pick, we need some criterion
by which to judge them; but it seems like such a criterion is already a
goal. So, I could cook up any method of choosing goals that sounded
OK, and claim that it was the solution to Valentina's problem, because
Valentina's problem is not yet well-defined.

The closest thing to a solution would be to purposefully give an AGI a
complex, probabilistically-defined, and often-conflicting goal system
with many diverse types of pleasure, like humans have.

On Tue, Aug 26, 2008 at 2:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram,

 Thanks for reply. This is presumably after the fact -  can set theory
 predict new branches? Which branch of maths was set theory derivable from? I
 suspect that's rather like trying to derive any numeral system from a
 previous one. Or like trying to derive any programming language from a
 previous one- or any system of logical notation from a previous one.

 Mike,

 The answer here is a yes. Many new branches of mathematics have arisen
 since the formalization of set theory, but most of them can be
 interpreted as special branches of set theory. Moreover,
 mathematicians often find this to be actually useful, not merely a
 curiosity.

 --Abram Demski

 On Tue, Aug 26, 2008 at 12:32 PM, Mike Tintner [EMAIL PROTECTED]
 wrote:

 Valentina:In other words I'm looking for a way to mathematically define
 how
 the AGI will mathematically define its goals.

 Holy Non-Existent Grail? Has  any new branch of logic or mathematics ever
 been logically or mathematically (axiomatically) derivable from any old
 one?  e.g. topology,  Riemannian geometry, complexity theory, fractals,
 free-form deformation  etc etc
 


Re: [agi] Re: Information t..PS

2008-08-26 Thread Abram Demski
On Tue, Aug 26, 2008 at 3:10 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Abram,

 I suspect what it comes down to - I'm tossing this out off-the-cuff - is
 that each new branch of maths involves new rules, new operations on numbers
 and figures, and new ways of relating the numbers and figures to real
 objects and sometimes new signs, period. And they aren't predictable or
 derivable from previous ones. Set theory is ultimately a v. useful
 convention, not an absolute necessity?

I think this is true in one sense and false in another, so I'll try to
be careful to distinguish. Mathematicians could have come up with
everything they have come up with w/o the aid of set theory-- it might
have been harder, but not impossible. (To take a specific example, it
seems like they would have to reconstruct the idea of infinite
ordinals in so many cases that it is unrealistic to suppose they
wouldn't notice the similarity and construct a general theory; but we
shall suppose it anyway for argument's sake.) However, in another
sense, it may be impossible: namely, it would not seem possible for
mathematicians to have done much math at all if set theory wasn't in
there somewhere, that is, if mathematicians thought in a way that did
not admit sets as a coherent concept. I am not claiming that set
theory is the logic of thought, but I do think it is to be
distinguished from things like 2d geometry that mathematicians
investigated essentially because of the sense-modalities at hand.


 Perhaps this overlaps with our previous discussion, which could perhaps be
 reduced to - is there a universal learning program - an AGI that can learn
 any skill? That perhaps can be formalised as - is there a program that can
 learn any program - a set of rules for learning any set of rules? I doubt
 it. Especially  if as we see with the relatively simple logic discussions on
 this forum, people can't agree on which rules/conventions/systems to apply,
 i.e. there are no definitive rules.

3 days ago Matt Mahoney referenced a paper by Shane Legg, supposedly
formally proving this point:

http://www.vetta.org/documents/IDSIA-12-06-1.pdf

I read it, and must say that I disagree with the interpretations
provided for the theorems. Specifically, one conclusion is that
because programs of high Kolmogorov complexity are required if we want
to guarantee the ability to learn sequences of comparably high
Kolmogorov complexity, AI needs to be an experimental science. So,
Shane Legg is assuming that highly complex programs are difficult to
invent. But there is an easy counterexample to this, which also
addresses your above point:

We are given T, the amount of computation time the algorithm is allowed
between sensory inputs. Sensory inputs can ideally be thought of as
coming in at the rate of 1 bit per T cpu cycles (fitting with the
framework in the paper, which has data come in 1 bit at a time),
although in practice it would probably come in batches. Each time
period T:
--add the new input to a memory of all data that's come in so far
--Treat the memory as the output of a computer program in some
specific language. Run the program backwards, inferring everything
that can be inferred about its structure. A zero or one can only be
printed by particular basic print statements. It is impossible to know
for certain where conditional statements are, where loops are, and so
on, but at least the space of possibilities is well defined (since we
know which programming language we've chosen). Every time a choice
like this occurs, we split the simulation, so that we will quickly be
running a very large number of programs backwards.
--Whenever we get a complete program from this process, we need to run
it forwards (again, simulating it in parallel with everything else
that is going on). We record what it predicts as the NEXT data, along
with the program's length (because we will be treating shorter
programs as better models, and trusting what they tell us more
strongly than we trust longer programs).
--Because there are so many things going on at once, this will run
VERY slowly; however, we will simply terminate the process at time T
and take the best prediction we have at that point. (If we hadn't
gotten any yet, let's just say we predict 0.)
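
(A toy sketch, for concreteness - illustrative only. The hypothesis space
below is just repeating bit patterns, with pattern length standing in for
program length and budget_seconds standing in for T; the backwards-execution
machinery is omitted entirely, so this is my simplification of the loop
above, not the construction itself.)

import itertools
import time

# Toy stand-in for the procedure described above: enumerate "models"
# (here, repeating bit patterns), keep those consistent with the history,
# prefer the shortest one, and cut the search off at the deadline.

def consistent(pattern, history):
    """True if repeating `pattern` reproduces the observed bit history."""
    return all(bit == pattern[i % len(pattern)] for i, bit in enumerate(history))

def predict_next(history, budget_seconds):
    deadline = time.time() + budget_seconds
    best = None  # (predicted bit, pattern) for the shortest consistent pattern
    for length in itertools.count(1):
        for pattern in itertools.product((0, 1), repeat=length):
            if time.time() > deadline:            # hard stop at time T
                return best[0] if best else 0     # predict 0 if nothing found yet
            if consistent(pattern, history) and (best is None or length < len(best[1])):
                best = (pattern[len(history) % length], pattern)
        if best is not None:                       # shortest consistent model found
            return best[0]

if __name__ == "__main__":
    observed = [0, 1, 1, 0, 1, 1, 0, 1]
    print(predict_next(observed, budget_seconds=0.5))   # prints 1 (pattern 0,1,1)

The only point of the sketch is the shape of the procedure: enumerate
candidate models, prefer the shortest consistent one, and stop at time T.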

A more sophisticated version of that alg was presented at the AGI
conference in this paper:

http://www.agiri.org/docs/ComputationalApproximation.pdf

The algorithm will be able to learn any program, if given enough time!

NOW, why did Shane Legg's paper say that such a thing was impossible?
Well, in the formalism of the paper, the above algorithm cheated: it
isn't an algorithm at all! Fun, huh?

The reason is because I parameterized it in terms of that number T.
So, technically, it is a class of algorithms; we get a specific
algorithm by choosing a T-value. If we choose a very large T-value,
the algorithm could be very complex, in terms of Kolmogorov
complexity. However, it will not be complex to humans, since it will
just be another 

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread John LaMuth
- Original Message - 
  From: Ben Goertzel 
  To: agi@v2.listbox.com 
  Sent: Tuesday, August 26, 2008 6:49 AM
  Subject: Re: [agi] How Would You Design a Play Machine?


  Examples of the kind of similarity I'm thinking of:

  -- The analogy btw chess or go and military strategy

  -- The analogy btw roughhousing and actual fighting

  In logical terms, these are intensional rather than extensional similarities

  ben

  ###

  ***

  Ben

  You have rightfully nailed this issue down, as one is serious and the other 
is not to be taken this way (a meta-order perspective)...

  The same goes for humor and comedy -- the meta-message being don't take me 
seriously

  That is why I segregated analogical humor separately (from routine 
seriousness) in my 2nd patent 7236963
  www.emotionchip.net 

  This specialized meta-order-type of disqualification is built directly into 
the schematics ...

  You are correct -- it all hinges on intentions...

  John LaMuth

  www.ethicalvalues.com 









Re: [agi] The constraint of Friendly AI

2008-08-26 Thread Vladimir Nesov
On Tue, Aug 26, 2008 at 11:13 PM, Valentina Poletti [EMAIL PROTECTED] wrote:
 Vlad, Terren and all,

 by reading your interesting discussion, this saying popped in my mind..
 admittedly it has little to do with AGI but you might get the point anyhow:

 An old lady used to walk down a street everyday, and on a tree by that
 street a bird sang beautifully, the sound made her happy and cheerful and
 she was very thankful for that. One day she decided to catch the bird and
 place it into a cage, so she could always have it singing for her.
 Unfortunately for her, the bird got sad in the cage and stopped singing...
 thus taking away her cheer as well.

 Well, the story has a different purpose, but one can see a moral that
 connects to this argument. Control is an illusion. It takes away the very
 nature of what we are trying to control.

Then you are doing something wrong. The natural word "control" biases
how you think about this issue, creating associations with caged
birds, imprisonment, shattered potential and stupid mechanical robots.
Think instead of determination, lawfulness and rational decisions.

You do not see yourself as being controlled, as being limited in your
ability to e.g. eat human babies. Control embodied in you that
prevents you from doing that doesn't take away your human nature; on
the contrary, it is a part of human nature. What is genetically
determined in humans is not there to constrain us; it is not
inflexible and fixed as opposed to the general ability of intelligence.
Instead, it is what enables us to be flexible and generally
intelligent, to see what is good. We are determined and controlled by
our nature, but we don't want to escape it, instead we want to improve
on it from within (see
http://www.overcomingbias.com/2008/07/rebelling-withi.html ). For more
about how freedom comes as lawfulness, in intricate and open-ended
forms, see Tooby and Cosmides The psychological foundations of
culture ( http://folk.uio.no/rickyh/papers/TheAdaptedMind.htm ).

Needing to know what we are doing is *necessary* to avoid ruin. You
can't create a Friendly AI with a 60% chance of success, and you can't create
an FAI with a 1% chance of success, because it is too hard to know how likely
it is to work. If you can create an FAI and know that it has a 1% chance
to work, you understand FAI well enough to make one that is almost
guaranteed to work. And if you don't understand it well enough to say
that it has that 1% chance of succeeding, how do you know that it's
not in fact a lottery ticket that you have no hope of winning? The
question of Friendly AI has some amount of complexity, and unless you
know what you are doing, you will be confronted by this complexity
playing against you, exponentially reducing your chances. You can't
hope to hit a narrow target being blindfolded and boasting that you
have a chance. Even when you see the target, you probably won't be
ready and will need to continue working on your skill instead.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/




Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

It doesn't matter what I do with the question. It only matters what an AGI does 
with it. 

I'm challenging you to demonstrate how Friendliness could possibly be specified 
in the formal manner that is required to *guarantee* that an AI whose goals 
derive from that specification would actually do the right thing.

If you can't guarantee Friendliness, then self-modifying approaches to AGI 
should just be abandoned. Do we agree on that?

Terren

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 The question itself doesn't exist in a vacuum. When *you*, as a human, ask
 it, there is a very specific meaning associated with it. You don't search
 for the meaning that the utterance would call in a mind-in-general, you
 search for meaning that *you* give to it. Or, to make it more reliable, for
 the meaning given by the idealized dynamics implemented in you (
 http://www.overcomingbias.com/2008/08/computations.html ).
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 http://causalityrelay.wordpress.com/
 
 


Re: [agi] The Necessity of Embodiment

2008-08-26 Thread Terren Suydam

--- On Tue, 8/26/08, Vladimir Nesov [EMAIL PROTECTED] wrote:
 But what is safe, and how to improve safety? This is a
 complex goal
 for complex environment, and naturally any solution to this
 goal is
 going to be very intelligent. Arbitrary intelligence is not
 safe
 (fatal, really), but what is safe is also intelligent.

Look, the bottom line is that even if you could somehow build a self-modifying 
AI that was provably Friendly, some evil hacker could come along and modify the 
code. One way or another, we have to treat all smarter-than-us intelligences as 
inherently risky.  

So "safe", for me, refers instead to the process of creating the intelligence. 
Can we stop it? Can we understand it?  Can we limit its scope, its power?  With 
simulated intelligences, the answer to all of the above is yes. Pinning your 
hopes of safe AI on the Friendliness of the AI is the mother of all gambles, 
one that in a well-informed democratic process would surely not be undertaken.

 There is no law that makes large computations less lawful
 than small
 computations, if it is in the nature of computation to
 preserve
 certain invariants. A computation that multiplies two huge
 numbers
 isn't inherently more unpredictable than computation
 that multiplies
 two small numbers. 

I'm not talking about straight-forward, linear computation. Since we're talking 
about self-modification, the computation is necessarily recursive and 
iterative. Recursive computation can easily lead to chaos (as in chaos theory, 
not disorder).

The archetypical example of this is the simple equation from population 
dynamics, y=rx(1-x), which is recursively applied for each time interval. For 
values of r greater than some threshold, the behavior is chaotic and thus 
unpredictable, which is a surprising result for such a simple equation.
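
(For concreteness, a few lines of Python - the r value and the size of the
perturbation are my own illustrative choices, not anything from the message
above.)

# The logistic map x_next = r * x * (1 - x) with r = 3.9, a value in its
# chaotic regime. Two starting points differing by one part in a billion
# are followed side by side; the gap between them grows by many orders of
# magnitude, which is the practical sense in which the iteration is
# unpredictable.

def trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000)
b = trajectory(0.400000001)   # perturbed by one part in a billion
for n in (0, 10, 25, 50):
    print(n, abs(a[n] - b[n]))   # the gap grows with n

By the fiftieth iteration the two trajectories typically bear no resemblance
to each other, even though every individual step is a trivial computation.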

I'm making a rather broad analogy here by comparing the above example to a 
self-modifying AGI, but the principle holds. An AGI with present goal system G 
computes the Friendliness of a modification M, based on G. It decides to go 
ahead with the modification. This next iteration results in goal system G'. And 
so on, performing Friendliness computations against the resulting goal systems. 
In what sense could one guarantee that this process would not lead to chaos?  
I'm not sure you could even guarantee it would continue self-modifying.

 You have intuitive
 expectation that making Z will make AI uncontrollable,
 which will lead
 to a bad outcome, and so you point out that this design
 that suggests
 doing Z will turn out bad. But the answer is that AI itself
 will check
 whether Z is expected to lead to a good outcome before
 making a
 decision to implement Z.

As has been pointed out before, by others, the goal system can drift as the 
modifications are applied. The question once again is, in what *objective 
sense* can the AI validate that its Friendliness algorithm corresponds to what 
humans actually consider to be Friendly?  What does it compare *against*?
 
 This remark makes my note that the field of AI actually did
 something
 for the last 50 years not that minor. Again you make an
 argument from
 ignorance: I do not know how to do it, nobody knows how to
 do it,
 therefore it can not be done. Argue from knowledge, not
 from
 ignorance. If you know the path, follow it, describe it. If
 you know
 that the path has a certain property, show it. If you know
 that a
 class of algorithms doesn't find a path, say that these
 algorithms
 won't give the answer. But if you are lost, if your map
 is blank,
 don't assert that the territory is blank also, for you
 don't know.

You can do better than that, I hope. I'm not saying it can't be done just 
because I don't know how to do it. I'm giving you epistemological objections 
for why Friendliness can't be specified. It's an argument from principle. If 
those objections are valid, the fanciest algorithm in the world won't solve the 
problem (assuming finite resources, of course). Address those objections first 
before you pick on my ignorance about Friendliness algorithms.
 
 Causal models are not perfect, you say. But perfection is causal,
 physical laws are the most causal phenomenon. All the causal rules that
 we employ in our approximate models of environment are not strictly
 causal, they have exceptions. Evolution has the advantage of optimizing
 with the whole flow of environment, but evolution doesn't have any model
 of this environment, the counterpart of human models in evolution is
 absent. What it has is a simple regularity in the environment, natural
 selection. With all the imperfections, human models of environment are
 immensely more precise than this regularity that relies on natural
 repetition of context. Evolution doesn't have a perfect model, it has an
 exceedingly simplistic model, so simple in fact that it managed to
 *emerge* by chance. Humans with their admittedly limited intelligence,
 on the other hand, already manage to
 

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Brad Paulsen

Mike,

So you feel that my disagreement with your proposal is sad?  That's quite 
an ego you have there, my friend.  You asked for input and you got it.  The 
fact that you didn't like my input doesn't make me or the effort I spent 
composing it sad.  I haven't read all of the replies to your post yet, 
but judging by the index listing in my e-mail client, it has already 
drained a considerable amount of time and intellectual energy from the 
members of this list.  You want sad?  That's sad.


Nice try at ignoring the substance of what I wrote while continuing to 
advance your own views.  I did NOT say THINKING about your idea, or any idea 
for that matter, was a waste of time.  Indeed, the second sentence of my 
reply contained the following ...(unless [studying human play is] being 
done purely for research purposes).  I did think about your idea.  I 
concluded what it proposes (not the idea itself) is, in fact, a waste of 
time for people who want to design and build a working AGI before 
mid-century.  I'm sure some list members will agree with you.  I'm also 
sure some will agree with me.  But, most will have their own views on this 
issue.  That's the way it works.


The AGI I (and many others) have in mind will be to human intelligence what 
an airplane is to a bird.  For many of the same reasons airplanes don't 
play like birds do, my AGI won't play (or create) like humans do.  And, 
just as the airplane flies BETTER THAN the bird (for human purposes), my 
AGI will create BETTER THAN any human (for human purposes).


You wrote, [Play] is generally acknowledged by psychologists to be an 
essential dimension of creativity - which is the goal of AGI.


Wrong.  ONE of the goals (not THE goal) of AGI is *inspired* by human 
creativity.  Indeed, I am counting on the creativity of the first 
generation of AGIs to help humans build (or keep humans away from building) 
the second generation of AGIs.  But... neither generation has to (and, 
IMHO, shouldn't) have human-style creativity.


In fact, I suggest we not use the word creativity when discussing 
AGI-type knowledge synthesis because that is a term that has been applied 
solely to human-style intelligence.  Perhaps, idea mining would be a 
better way to describe what I think about when I think about AGI-style 
creativity.  Knowledge synthesis also works for me and has a greater 
syllable count.  Either phrase fits the mechanism I have in mind for an AGI 
that works with MASSIVE quantities of data, using well-studied and 
established data mining techniques, to discover important (to humans and, 
eventually, AGIs themselves) associations.  It would have been impossible 
to build this type of idea mining capability into an AI before the mid 
1990's (before the Internet went public).  It's possible now.  Indeed, 
Google is encouraging it by publishing an open source REST (if memory 
serves) API to the Googleverse.  No human intelligence would be capable of 
doing such data mining without the aid of a computer and, even then, it's 
not easy for the human intellect (associations between massive amounts of 
data are often, themselves, still quite massive - ask the CIA or the NSA or 
Google).
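
(One toy reading of that kind of idea mining, sketched in Python purely for
illustration - the documents, terms and the lift score are my own choices,
not anything Brad describes.)

from collections import Counter
from itertools import combinations

# Toy "idea mining": find pairs of terms that co-occur across documents more
# often than chance. Documents here are just sets of terms; a real pipeline
# would extract them from crawled text at a vastly larger scale.
documents = [
    {"graphene", "battery", "anode"},
    {"graphene", "battery", "electrolyte"},
    {"battery", "anode", "lithium"},
    {"graphene", "sensor"},
]

term_counts = Counter()
pair_counts = Counter()
for doc in documents:
    term_counts.update(doc)
    pair_counts.update(combinations(sorted(doc), 2))

n = len(documents)
for (a, b), c in pair_counts.most_common():
    if c < 2:
        continue  # ignore pairs seen only once
    # lift > 1 means the pair co-occurs more often than independence predicts
    lift = (c / n) / ((term_counts[a] / n) * (term_counts[b] / n))
    print("%s + %s: %d co-occurrences, lift %.2f" % (a, b, c, lift))

Real systems would work over millions of documents and far richer features,
but the shape of the computation - count, normalize, rank - is the same.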


Certainly play is ...fundamental to the human mind-and-body  My point 
was simply that this should have little or no interest to those of us 
attempting to build a working, non-human-style AGI.  We can discuss it all 
we like (however, I don't intend to continue doing so after this reply -- 
I've stated my case).  Such discussion may be worthwhile (if only to show 
up its inherent wrongness) but spending any time attempting to design or 
build an AGI containing a simulation of human-style play (or creativity) is 
not.  There are only so many minutes in a day and only so many days in a 
life.  The human-style (Turing test) approach to AI has been tried.  It 
failed (not in every respect, of course, but the Loebner Prizes - the $25K 
and $100K prizes - established in 1990 remain unclaimed).   I don't intend 
to spend one more minute or hour of my life trying to win the Loebner Prize.


The enormous amount of intellectual energy spent (largely wasted), from the 
mid 1950's to the end of the 1980's, trying to create a human-like AI is a 
true tragedy.  But, perhaps, even more tragic is that unquestioningly 
holding up Turing's imitation game as the gold standard of AI created 
what we call in the commercial software industry a reference problem.  To 
get new clients to buy your software, you need a good reference from 
former/current clients.  Anyone who has attempted to get funding for an AGI 
project since the mid-1990s will attest that the (unintentional but 
nevertheless real) damage caused by Turing and his followers continues to 
have a very real, negative effect on the field of AI/AGI.  I have done, and 
will continue to do, my best to see that this same mistake is not repeated 
in this century's quest to build a beneficial (to humanity) AGI. 
Unfortunately, we 

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Brad Paulsen

Charles,

By now you've probably read my reply to Tintner's reply.  I think that 
probably says it all (and them some!).


What you say holds IFF you are planing on building an airplane that flies 
just like a bird.  In other words, if you are planning on building a 
human-like AGI (that could, say, pass the Turing test).  My position is, 
and has been for decades, that attempting to pass the Turing test (or win 
either of the two, one-time-only, Loebner Prizes) is a waste of precious 
time and intellectual resources.


Thought experiments?  No problem.  Discussing ideas?  No problem. 
Human-like AGI?  Big problem.


Cheers,
Brad

Charles Hixson wrote:
Play is a form of strategy testing in an environment that doesn't 
severely penalize failures.  As such, every AGI will necessarily spend a 
lot of time playing.


If you have some other particular definition, then perhaps I could 
understand your response if you were to define the term.


OTOH, if this is interpreted as being a machine that doesn't do anything 
BUT play (using my supplied definition), then your response has some 
merit, but even that can be very useful.  Almost all of mathematics, 
e.g., is derived out of such play.


I have a strong suspicion that machines that don't have a play mode 
can never proceed past the reptilian level of mentation.  (Here I'm 
talking about thought processes that are mediated via the reptile 
brain in entities like mammals.  Actual reptiles may have some more 
advanced faculties of which I'm unaware.  (Note that, e.g., shrews don't 
have much play capability, but they have SOME.)



Brad Paulsen wrote:
Mike Tintner wrote: ...how would you design a play machine - a 
machine that can play around as a child does?


I wouldn't.  IMHO that's just another waste of time and effort (unless 
it's being done purely for research purposes).  It's a diversion of 
intellectual and financial resources that those serious about building 
an AGI any time in this century cannot afford.  I firmly believe if we 
had not set ourselves the goal of developing human-style intelligence 
(embodied or not) fifty years ago, we would already have a working, 
non-embodied AGI.


Turing was wrong (or at least he was wrongly interpreted).  Those who 
extended his imitation test to humanoid, embodied AI were even more 
wrong.  We *do not need embodiment* to be able to build a powerful AGI 
that can be of immense utility to humanity while also surpassing human 
intelligence in many ways.  To be sure, we want that AGI to be 
empathetic with human intelligence, but we do not need to make it 
equivalent (i.e., just like us).


I don't want to give the impression that a non-Turing intelligence 
will be easy to design and build.  It will probably require at least 
another twenty years of two steps forward, one step back effort.  
So, if we are going to develop a non-human-like, non-embodied AGI 
within the first quarter of this century, we are going to have to 
just say no to Turing and start to use human intelligence as an 
inspiration, not a destination.


Cheers,

Brad



Mike Tintner wrote:
Just a v. rough, first thought. An essential requirement of  an AGI 
is surely that it must be able to play - so how would you design a 
play machine - a machine that can play around as a child does?


You can rewrite the brief as you choose, but my first thoughts are - 
it should be able to play with

a) bricks
b)plasticine
c) handkerchiefs/ shawls
d) toys [whose function it doesn't know]
and
e) draw.

Something that should be soon obvious is that a robot will be vastly 
more flexible than a computer, but if you want to do it all on 
computer, fine.


How will it play - manipulate things every which way?
What will be the criteria of learning - of having done something 
interesting?

How do infants, IOW, play?









