Re: [agi] AGI Alife

2010-08-06 Thread rob levy
Interesting article:
http://www.newscientist.com/article/mg20727723.700-artificial-life-forms-evolve-basic-intelligence.html?page=1

On Sun, Aug 1, 2010 at 3:13 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:

 Ian Parker wrote

  I would like your
  opinion on *proofs* which involve an unproven hypothesis,

 I've no elaborated opinion on that.








Re: [agi] The Math Behind Creativity

2010-07-26 Thread rob levy
That's interesting, and I think I agree mostly, at least abstractly.  So
this is really just a high-level comment on how to approach creativity,
correct?  I guess the title "Mathematics of Creativity" is what confused
me.  None of this suggests any real mathematical or computational
perspective that will tell us something new or useful (or creative?) about
creativity, right?  Or am I missing something?

On Sun, Jul 25, 2010 at 9:42 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I wasn't trying for a detailed model of creative thinking with
 explanatory power -  merely one dimension (and indeed a foundation) of it.

 In contrast to rational, deterministically programmed computers and robots
 wh. can only operate in closed spaces externally, (artificial environments)
 and only think in closed spaces internally,  human (real AGI) agents are
 designed to operate in the open world externally, (real world environments)
 and to think in open worlds internally.

 IOW when you think about any creative problem, like what am I going to do
 tonight? or let me write a post in reply to MT - you *don't* have a nice
 neat space/frame of options lined up as per a computer program, which your
 brain systematically checks through. You have an open world of associations
 - associated with varying degrees of power - wh. you have to search, or
 since AI has corrupted that word, perhaps we should say quest through in
 haphazard, nonsystematic fashion. You have to *explore* your brain for ideas
 - and it is a risky business, wh. (with more difficult problems) may draw a
 blank.

 (Nor BTW does your brain set up a space for solving creative problems -
 as was vaguely mooted in a recent discussion with Ben. Closed spaces are
 strictly for rational problems).

 IMO though this contrast of narrow AI/rationality as thinking in closed
 spaces vs AGI/creativity as thinking in open worlds is a very powerful
 one.

 Re your examples, I don't think Koestler or Fauconnier are talking of
 defined or closed spaces.  The latter is v. vague about the nature of
 his spaces. I think they're rather like the formulae for creativity that
 our folk culture often talks about. V. loosely. They aren't used in the
 strict senses the terms have in rationality - logic/maths/programming.

 Note that Calvin's/Piaget's idea of consciousness as designed for when you
 don't know what to do accords with my idea of creative thinking as
 effectively starting from a blank page rather than a ready space of
 options, and going on to explore a world of associations for ideas.

 P.S. I should have stressed that the open world of the brain is
 **multidomain**, indeed **open-domain**, by contrast with the spaces of
 programs wh. are closed, uni-domain. When you search for what am I going to
 do..?  your brain can go through an endless world of domains - movies, call
 a friend, watch TV, browse the net, meal, go for walk, play a sport, ask
 s.o. for novel ideas, spend time with my kid ... and on and on.

 The space thinking of rationality is superefficient but rigid and useless
 for AGI. The open world of the human, creative mind is highly inefficient
 by comparison but superflexible and the only way to do AGI.




  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 26, 2010 1:06 AM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] The Math Behind Creativity

On Sun, Jul 25, 2010 at 5:05 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I think it's v. useful - although I was really extending his idea.

 Correct me - but almost no matter what you guys do (or anyone in AI does),
 you think in terms of spaces, or frames. Spaces of options. Whether you're
 doing logic, maths, or programs, spaces in one form or other are
 fundamental.

 But you won't find anyone - or show me to the contrary - applying spaces
 to creative problems (or AGI problems).



 I guess we may somehow be familiar with different and non-overlapping
 literature, but it seems to me that most or at least many approaches to
 modeling creativity involve a notion of spaces of some kind.  I won't make a
 case to back that up, but I will list a few examples: Koestler's bisociation
 is spatial; D. T. Campbell, the Fogels, Finke et al., and William Calvin's
 evolutionary notions of creativity involve a behavioral or conceptual fitness
 landscape; Gilles Fauconnier & Mark Turner's theory of conceptual blending
 on mental spaces; etc.

 The idea of the website you posted is very lacking in any kind of
 explanatory power, in my opinion.  To me, any theory of creativity should be
 able to show how a system is able to generate novel and good results.
 Creativity is more than just "outside what is known, created, or working."
 That is a description of novelty, with no suggestion of the why or
 how of generating novelty.  Creativity also requires the semantic potential
 to reflect on and direct the focusing-in on the stream of playful novelty
 toward that which is desired or considered good.


Re: [agi] The Math Behind Creativity

2010-07-25 Thread rob levy
Not sure how that is useful, or even how it relates to creativity if
considered as an informal description?

On Sun, Jul 25, 2010 at 10:15 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I came across this, thinking it was going to be an example of maths
 fantasy, but actually it has a rather nice idea about the mathematics of
 creativity.

 
 The Math Behind Creativity http://www.alwayscreative.com/math/

 By Chuck Scott http://www.alwayscreative.com/author/admin/ on June 15,
 2010

 The Science of Creativity is based on the following mathematical formula
 for Creativity:

 C = ∞ - πR²

 In other words, Creativity is equal to infinity minus the area of a defined
 circle of what’s working.

 Note: πR² is the geometric formula for calculating the area of a circle,
 where π is 3.142 rounded to the nearest thousandth, and R is a circle’s
 radius (the length from a circle’s center to its edge).



 **

 Simply, it's saying - that for every problem, and ultimately that's not
 just creative but rational problems, there's a definable space of options -
 the spaces you guys work with in your programs - wh. may work, if the
 problem is rational, but normally don't if it's creative. And beyond that
 space is the undefined space of creativity, wh. encompasses the entire world
 in an infinity of combinations. (Or all the fabulous multiverse[s] of Ben's
 mind).  Creative ideas - and that can be small everyday ideas as well as
 large cultural ones - can come from anywhere in, and any combinations of,
 the entire world (incl butterflies in Brazil and caterpillars in Katmandu -
 QED I just drew that last phrase off the cuff from that vast world).
 Creative thinking - and that incl. the thinking of all humans from children
 on - is "what in the world?" thinking - that can and does draw upon the
 infinite resources of the world. "What in the world is he on about?" "Where
 in the world will I find s.o. who..?" "What in the world could be of help
 here?"

 And that is another way of highlighting the absurdity of current approaches
 to AGI - that would seek to encompass the entire world of creative
 ideas/options in the infinitesimal spaces/options of programs.









Re: [agi] The Math Behind Creativity

2010-07-25 Thread rob levy
On Sun, Jul 25, 2010 at 5:05 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  I think it's v. useful - although I was really extending his idea.

 Correct me - but almost no matter what you guys do (or anyone in AI does),
 you think in terms of spaces, or frames. Spaces of options. Whether you're
 doing logic, maths, or programs, spaces in one form or other are
 fundamental.

 But you won't find anyone - or show me to the contrary - applying spaces to
 creative problems (or AGI problems).



I guess we may somehow be familiar with different and non-overlapping
literature, but it seems to me that most or at least many approaches to
modeling creativity involve a notion of spaces of some kind.  I won't make a
case to back that up, but I will list a few examples: Koestler's bisociation
is spatial; D. T. Campbell, the Fogels, Finke et al., and William Calvin's
evolutionary notions of creativity involve a behavioral or conceptual fitness
landscape; Gilles Fauconnier & Mark Turner's theory of conceptual blending
on mental spaces; etc.

The idea of the website you posted is very lacking in any kind of
explanatory power, in my opinion.  To me, any theory of creativity should be
able to show how a system is able to generate novel and good results.
Creativity is more than just "outside what is known, created, or working."
That is a description of novelty, with no suggestion of the why or
how of generating novelty.  Creativity also requires the semantic potential
to reflect on and direct the focusing-in on the stream of playful novelty
toward that which is desired or considered good.

I would disagree that creativity is outside the established/known.  A better
characterization would be that it resides on the complex boundary of the
novel and the established, which is what makes it interesting instead of just
a copy, or just total gobbledygook randomness.





Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread rob levy
A child AGI should be expected to need help learning how to solve many
problems, and even be told what the steps are.  But at some point it needs
to have developed general problem-solving skills.  But I feel like this is
all stating the obvious.

On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Mike, I think we all agree that we should not have to tell an AGI the steps
 to solving problems. It should learn and figure it out, like the way that
 people figure it out.







Re: [agi] Of definitions and tests of AGI

2010-07-21 Thread rob levy
I completely agree with this characterization; I was just pointing out the
importance of already-existing generally intelligent entities in providing
scaffolding for the system's learning and meta-learning processes.

On Wed, Jul 21, 2010 at 12:25 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Infants *start* with general learning skills - they have to extensively
 discover for themselves how to do most things - control head, reach out,
 turn over, sit up, crawl, walk - and also have to work out perceptually what
 the objects they see are, and what they do... and what sounds are, and how
 they form words, and how those words relate to objects - and how language
 works

 it is this capacity to keep discovering ways of doing things, that is a
 major motivation in their continually learning new activities - continually
 seeking novelty, and getting bored with too repetitive activities

 obviously an AGI needs some help.. but at the mo. all projects get *full*
 help/ *complete* instructions - IOW are merely dressed up versions of narrow
 AI

 no one AFAIK is dealing with the issue of how do you produce a true
 goalseeking agent who *can* discover things for itself?  - an agent, that
 like humans and animals, can *find* its way to its goals generally, as well
 as to learning new activities, on its own initiative  - rather than by
 following instructions.  (The full instruction method only works in
 artificial, controlled environments and can't possibly work in the real,
 uncontrollable world - where future conditions are highly unpredictable,
 even by the sagest instructor). [Ben BTW strikes me as merely gesturing at
 all this].

 There really can't be any serious argument about this - humans and animals
 clearly learn all their activities with v. limited and largely general
 rather than step-by-step instructions.

 You may want to argue there is an underlying general program that
 effectively specifies every step they must take (good luck) - but with
 respect to all their specialist/particular activities - think having a
 conversation, sex, writing a post, an essay, fantasying, shopping, browsing
 the net, reading a newspaper - etc etc. - you got and get v. little
 step-by-step instruction about these and all your other activities

 So AGIs require a fundamentally and massively different paradigm of
 instruction from the programmed, comprehensive, step-by-step paradigm of
 narrow AI.

 [The rock wall/toybox tests BTW are AGI activities, where it is
 *impossible* to give full instructions, or produce a formula, whatever you
 may want to do].

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Wednesday, July 21, 2010 3:56 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 A child AGI should be expected to need help learning how to solve many
 problems, and even be told what the steps are.  But at some point it needs
 to have developed general problem-solving skills.  But I feel like this is
 all stating the obvious.

 On Tue, Jul 20, 2010 at 11:32 PM, Matt Mahoney matmaho...@yahoo.com wrote:

   Mike, I think we all agree that we should not have to tell an AGI the
 steps to solving problems. It should learn and figure it out, like the way
 that people figure it out.








Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy


 However, I see that there are no valid definitions of AGI that explain what
 AGI is generally , and why these tests are indeed AGI. Google - there are v.
 few defs. of AGI or Strong AI, period.



I like Fogel's idea that intelligence is the ability to solve the problem
of how to solve problems in new and changing environments.  I don't think
Fogel's method accomplishes this, but the goal he expresses seems to be the
goal of AGI as I understand it.

Rob





Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
Well, solving ANY problem is a little too strong.  This is AGI, not AGH
(artificial godhead), though AGH could be an unintended consequence ;).  So
I would rephrase solving any problem as being able to come up with
reasonable approaches and strategies to any problem (just as humans are able
to do).

On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Whaddya mean by solve the problem of how to solve problems? Develop a
 universal approach to solving any problem? Or find a method of solving a
 class of problems? Or what?

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 1:26 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI


 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally , and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.



 I like Fogel's idea that intelligence is the ability to solve the problem
 of how to solve problems in new and changing environments.  I don't think
 Fogel's method accomplishes this, but the goal he expresses seems to be the
 goal of AGI as I understand it.

 Rob






Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
Fogel originally used the phrase to argue that evolutionary computation
makes sense as a cognitive architecture for a general-purpose AI problem
solver.

On Mon, Jul 19, 2010 at 11:45 AM, rob levy r.p.l...@gmail.com wrote:

 Well, solving ANY problem is a little too strong.  This is AGI, not AGH
 (artificial godhead), though AGH could be an unintended consequence ;).  So
 I would rephrase solving any problem as being able to come up with
 reasonable approaches and strategies to any problem (just as humans are able
 to do).


 On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:

  Whaddya mean by solve the problem of how to solve problems? Develop a
 universal approach to solving any problem? Or find a method of solving a
 class of problems? Or what?

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 1:26 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI


 However, I see that there are no valid definitions of AGI that explain
 what AGI is generally , and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.



 I like Fogel's idea that intelligence is the ability to solve the problem
 of how to solve problems in new and changing environments.  I don't think
 Fogel's method accomplishes this, but the goal he expresses seems to be the
 goal of AGI as I understand it.

 Rob








Re: [agi] Of definitions and tests of AGI

2010-07-19 Thread rob levy
 And are you happy with:

 AGI is about devising *one-off* methods of problem-solving (that only apply
 to the individual problem, and cannot be re-used - at

least not in their totality)



Yes exactly, isn't that what people do?  Also, I think that being able to
recognize where past solutions can be generalized and where past solutions
can be varied and reused is a detail of how intelligence works that is
likely to be universal.



 vs

 narrow AI is about applying pre-existing *general* methods of
 problem-solving (applicable to whole classes of problems)?



  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 4:45 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI

 Well, solving ANY problem is a little too strong.  This is AGI, not AGH
 (artificial godhead), though AGH could be an unintended consequence ;).  So
 I would rephrase solving any problem as being able to come up with
 reasonable approaches and strategies to any problem (just as humans are able
 to do).

 On Mon, Jul 19, 2010 at 11:32 AM, Mike Tintner 
 tint...@blueyonder.co.uk wrote:

  Whaddya mean by solve the problem of how to solve problems? Develop a
 universal approach to solving any problem? Or find a method of solving a
 class of problems? Or what?

  *From:* rob levy r.p.l...@gmail.com
 *Sent:* Monday, July 19, 2010 1:26 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Of definitions and tests of AGI


  However, I see that there are no valid definitions of AGI that explain
 what AGI is generally , and why these tests are indeed AGI. Google - there
 are v. few defs. of AGI or Strong AI, period.



 I like Fogel's idea that intelligence is the ability to solve the problem
 of how to solve problems in new and changing environments.  I don't think
 Fogel's method accomplishes this, but the goal he expresses seems to be the
 goal of AGI as I understand it.

 Rob






Re: [agi] Hutter - A fundamental misdirection?

2010-06-29 Thread rob levy
On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Rob,

 I just LOVE opaque postings, because they identify people who see things
 differently than I do. I'm not sure what you are saying here, so I'll make
 some random responses to exhibit my ignorance and elicit more explanation.


I think based on what you wrote, you understood (mostly) what I was trying
to get across.  So I'm glad it was at least quasi-intelligible. :)


  It sounds like this is a finer measure than the dimensionality that I
 was referencing. However, I don't see how to reduce anything as quantized as
 dimensionality into finer measures. Can you say some more about this?


I was just referencing Gardenfors' research program of "conceptual spaces"
(I was intentionally vague about committing to this fully, though, because I
don't necessarily think this is the whole answer).  Page 2 of this article
summarizes it pretty succinctly:
www.geog.ucsb.edu/.../ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
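
As a rough illustration of the kind of computation a conceptual-spaces account implies (a minimal sketch, not Gardenfors' actual formalism; the quality dimensions, weights, and fruit concepts below are invented purely for illustration):

```python
import math

# A concept is modeled as a point in a space of quality dimensions.
def distance(x, y, weights):
    """Weighted Euclidean distance between two points in a conceptual space."""
    return math.sqrt(sum(w * (x[d] - y[d]) ** 2 for d, w in weights.items()))

def similarity(x, y, weights, c=1.0):
    """Similarity decays exponentially with distance, as in Gardenfors-style models."""
    return math.exp(-c * distance(x, y, weights))

# Hypothetical fruit concepts on three made-up quality dimensions.
apple   = {"hue": 0.10, "size": 0.3, "sweetness": 0.6}
cherry  = {"hue": 0.05, "size": 0.1, "sweetness": 0.8}
banana  = {"hue": 0.70, "size": 0.5, "sweetness": 0.7}
weights = {"hue": 1.0, "size": 0.5, "sweetness": 1.0}

print(similarity(apple, cherry, weights))   # larger: the two lie close in the space
print(similarity(apple, banana, weights))   # smaller: farther apart
```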



 However, different people's brains, even the brains of identical twins,
 have DIFFERENT mappings. This would seem to mandate experience-formed
 topology.



Yes definitely.


 Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


 I agree.


 though I wonder what it actually would be in terms of neurons, (and if
 that matters).


 I don't see any route to the answer except via neurons.


I agree this is true of natural intelligence, though maybe in modeling, the
neural level can be shortcut to the topo map level without recourse to
neural computation (use some more straightforward computation like matrix
algebra instead).

Rob
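
A minimal sketch of what shortcutting the neural level with matrix algebra might look like: a toy self-organizing (topographic) map trained with plain numpy array operations instead of simulated neurons. The grid size, learning rate, and random data are arbitrary placeholders, not a claim about how cortical maps actually form.

```python
import numpy as np

rng = np.random.default_rng(0)
grid_w, grid_h, dim = 8, 8, 3              # an 8x8 map over 3-D inputs (arbitrary sizes)
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], dtype=float)

def train(data, epochs=20, lr=0.5, sigma=2.0):
    """Classic self-organizing-map update written as matrix algebra, not spiking neurons."""
    global weights
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)      # squared grid distances
            h = np.exp(-d2 / (2 * sigma ** 2))[:, None]         # neighborhood kernel
            weights += lr * h * (x - weights)                    # pull neighbors toward x

data = rng.random((200, dim))   # stand-in for sensory "experiences"
train(data)
# After training, nearby units respond to similar inputs: an experience-formed topographic map.
```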





Re: [agi] Hutter - A fundamental misdirection?

2010-06-29 Thread rob levy
Sorry, the link I included was invalid, this is what I meant:

http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf

On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:

 On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Rob,

 I just LOVE opaque postings, because they identify people who see things
 differently than I do. I'm not sure what you are saying here, so I'll make
 some random responses to exhibit my ignorance and elicit more explanation.


 I think based on what you wrote, you understood (mostly) what I was trying
 to get across.  So I'm glad it was at least quasi-intelligible. :)


  It sounds like this is a finer measure than the dimensionality that I
 was referencing. However, I don't see how to reduce anything as quantized as
 dimensionality into finer measures. Can you say some more about this?


 I was just referencing Gardenfors' research program of "conceptual spaces"
 (I was intentionally vague about committing to this fully, though, because I
 don't necessarily think this is the whole answer).  Page 2 of this article
 summarizes it pretty succinctly:
 www.geog.ucsb.edu/.../ICSC_2009_AdamsRaubal_Camera-FINAL.pdf



 However, different people's brains, even the brains of identical twins,
 have DIFFERENT mappings. This would seem to mandate experience-formed
 topology.



 Yes definitely.


  Since these conceptual spaces that structure sensorimotor
 expectation/prediction (including in higher order embodied exploration of
 concepts I think) are multidimensional spaces, it seems likely that some
 kind of neural computation over these spaces must occur,


 I agree.


 though I wonder what it actually would be in terms of neurons, (and if
 that matters).


 I don't see any route to the answer except via neurons.


 I agree this is true of natural intelligence, though maybe in modeling, the
 neural level can be shortcut to the topo map level without recourse to
 neural computation (use some more straightforward computation like matrix
 algebra instead).

 Rob






Re: [agi] Hutter - A fundamental misdirection?

2010-06-28 Thread rob levy
In order to have perceptual/conceptual similarity, it might make sense that
there is a distance metric over a conceptual-space mapping (a la Gardenfors or
something like this theory)  underlying how the experience of reasoning
through is carried out.  This has the advantage of being motivated by
neuroscience findings (which are seldom convincing, but in this case it is
basic solid neuroscience research) that there are topographic maps in the
brain.  Since these conceptual spaces that structure sensorimotor
expectation/prediction (including in higher order embodied exploration of
concepts I think) are multidimensional spaces, it seems likely that some
kind of neural computation over these spaces must occur, though I wonder
what it actually would be in terms of neurons, (and if that matters).

But that is different from what would be considered quantitative reasoning,
because from the phenomenological perspective the person is training
sensorimotor expectations by perceiving and doing.  And creative conceptual
shifts (or recognition of novel perceptual categories) can also be explained
by this feedback between trained topographic maps and embodied interaction
with the environment, experienced at the ecological level as sensorimotor
expectations driven by neural maps. Sensorimotor expectation is the basis
of the dynamics of perception and conceptualization.


On Sun, Jun 27, 2010 at 7:24 PM, Ben Goertzel b...@goertzel.org wrote:



 On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 Ben,

 On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  I know what dimensional analysis is, but it would be great if you could
 give an example of how it's useful for everyday commonsense reasoning such
 as, say, a service robot might need to do to figure out how to clean a
 house...


 How much detergent will it need to clean the floors? Hmmm, we need to know
 ounces. We have the length and width of the floor, and the bottle says to
 use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
 oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
 multiply all three numbers together to get ounces. This WITHOUT
 understanding things like surface area, utilization, etc.
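
A minimal sketch of the dimensional bookkeeping being described here, assuming nothing beyond the quantities in the example (the floor dimensions are made-up numbers; the point is only that multiplying the three quantities is the combination whose units reduce to ounces):

```python
class Quantity:
    """A numeric value plus unit exponents, e.g. oz/m^2 -> {'oz': 1, 'm': -2}."""
    def __init__(self, value, units):
        self.value = value
        self.units = dict(units)

    def __mul__(self, other):
        units = dict(self.units)
        for u, p in other.units.items():
            units[u] = units.get(u, 0) + p          # exponents add under multiplication
        return Quantity(self.value * other.value,
                        {u: p for u, p in units.items() if p != 0})

    def __repr__(self):
        return f"{self.value} {self.units or 'dimensionless'}"

# Hypothetical floor, standing in for the length and width in the example.
length = Quantity(5.0, {"m": 1})             # meters
width  = Quantity(4.0, {"m": 1})             # meters
dosage = Quantity(1.0, {"oz": 1, "m": -2})   # 1 oz per square meter, from the bottle

print(length * width * dosage)   # 20.0 {'oz': 1}: only this product leaves pure ounces
```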



 I think that the El Salvadorean maids who come to clean my house
 occasionally, solve this problem without any dimensional analysis or any
 quantitative reasoning at all...

 Probably they solve it based on nearest-neighbor matching against past
 experiences cleaning other dirty floors with water in similarly sized and
 shaped buckets...

 -- ben g
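
For contrast, a minimal sketch of the nearest-neighbor alternative described above: retrieve the most similar remembered floor and reuse the amount that worked there. The stored cases and features are invented for illustration.

```python
# Each remembered case: (floor length in m, floor width in m) -> detergent (oz) that worked.
past_cases = [
    ((4.0, 3.0), 12.0),
    ((6.0, 5.0), 31.0),
    ((5.0, 4.0), 19.5),
]

def estimate_detergent(length, width):
    """1-nearest-neighbor: borrow the outcome of the most similar past experience."""
    def dist(case):
        (l, w), _ = case
        return (l - length) ** 2 + (w - width) ** 2
    _, amount = min(past_cases, key=dist)
    return amount

print(estimate_detergent(5.2, 3.9))   # 19.5 oz, taken from the closest remembered floor
```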








Re: [agi] Questions for an AGI

2010-06-27 Thread rob levy
I definitely agree, however we lack a convincing model or plan of any sort
for the construction of systems demonstrating subjectivity, and it seems
plausible that subjectivity is functionally necessary for general
intelligence. Therefore it is reasonable to consider symbiosis as both a
safe design and potentially the only possible design (at least at first),
depending on how creative and resourceful we get in cog sci/ AGI in coming
years.

On Sun, Jun 27, 2010 at 1:13 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 This is wishful thinking. Wishful thinking is dangerous. How about instead
 of hoping that AGI won't destroy the world, you study the problem and come
 up with a safe design.


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, June 26, 2010 1:14:22 PM
 *Subject:* Re: [agi] Questions for an AGI

  why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique
 way of experiencing reality and there is no reason to not take advantage of
 that precious opportunity to create astonishment or bliss. If anything is
 important in the universe, it's ensuring positive experiences for all areas
 in which it is conscious, and I think it will realize that. And with the
 resources available in the solar system alone, I don't think we will be much
 of a burden.


 I like that idea.  Another reason might be that we won't crack the problem
 of autonomous general intelligence, but the singularity will proceed
 regardless as a symbiotic relationship between life and AI.  That would be
 beneficial to us as a form of intelligence expansion, and beneficial to the
 artificial entity as a way of being alive and having an experience of the
 world.






Re: [agi] Questions for an AGI

2010-06-26 Thread rob levy

 why should AGIs give a damn about us?


 I like to think that they will give a damn because humans have a unique way
of experiencing reality and there is no reason to not take advantage of that
precious opportunity to create astonishment or bliss. If anything is
important in the universe, it's ensuring positive experiences for all areas
in which it is conscious, and I think it will realize that. And with the
resources available in the solar system alone, I don't think we will be much
of a burden.


I like that idea.  Another reason might be that we won't crack the problem
of autonomous general intelligence, but the singularity will proceed
regardless as a symbiotic relationship between life and AI.  That would be
beneficial to us as a form of intelligence expansion, and beneficial to the
artificial entity as a way of being alive and having an experience of the
world.





Re: [agi] The problem with AGI per Sloman

2010-06-25 Thread rob levy
 But there is some other kind of problem.  We should have figured it out by
 now.  I believe that there must be some fundamental computational problem
 that is standing as the major obstacle to contemporary AGI.  Without solving
 that problem we are going to have to wade through years of incremental
 advances.  I believe that the most likely basis of the problem is efficient
  logical satisfiability.  It makes the most sense given the nature of the
 computer and the nature of the best theories of mind.



I think there must be a computational or physical/computational problem we
have yet to clearly identify that goes along with an objection certain
philosophers like Chalmers have made about neural correlates, roughly: why
should one level of analysis or type of structure (eg neurons, brain
regions, dynamically synchronized ensembles of neurons,  or even the
organism-environment system), have this magic property of consciousness?

Since to me at least it seems obvious that the ecological level is the
relevant level of analysis at which to find the meaning relevant to
biological organisms, my sense is that we can reduce the above problem to a
question about meaning/significance, that is: what is it about a system that
makes it unified/integrated such that its relationship to other things
constitutes  a landscape of relevant meaning to the system as a whole.

I think that an explanation of meaning-to-a-system is either the same as an
explanation of first-hand subjectivity, or is closely tied to it. If
subjectivity turns out to be part of a physical problem and not a purely
computational one, then we probably won't solve the above-posed problem
without such a physical explanation being clarified (not necessarily
explained, though, just as we don't know what electricity really is, for
example).

All computer software and situated robots that have ever been made are
composed of actions or expressions that are meaningful to people, but
software or robots have never been created that can refer to their own
actions in a way that demonstrates skillful knowledge indicating that they
are organized in a truly semantic way, as opposed to a merely programmatic
way.





[agi] Fwd: AGI question

2010-06-21 Thread rob levy
Hi

I'm new to this list, but I've been thinking about consciousness, cognition
and AI for about half of my life (I'm 32 years old).  As is probably the
case for many of us here, my interests began with direct recognition of the
depth and wonder of varieties of phenomenological experiences-- and
attempting to comprehend how these constellations of significance fit in
with a larger picture of what we can reliably know about the natural world.


I am secondarily motivated by the fact that (considerations of morality or
amorality aside) AGI is inevitable, though it is far from being a foregone
conclusion that powerful general thinking machines will have a first-hand
subjective relationship to a world, as living creatures do-- and therefore
it is vital that we do as well as possible in understanding what makes
systems conscious.  A zombie machine intelligence singularity is something
I would refer to rather as a holocaust, even if no one were directly
killed, assuming these entities could ultimately prevail over the previous
forms of life on our planet.

I'm sure I'm not the only one on this list who sees a behavioral/ecological
level of analysis as the most likely correct level at which to study
perception and cognition, and perception as being a kind of active
relationship between an organism and an environment.  Having thoroughly
convinced myself of a non-dualist, embodied, externalist perspective on
cognition, I turn to the nature of life itself (and possibly even physics
but maybe that level will not be necessary) to make sense of the nature of
subjectivity.  I like Bohm's or Bateson's panpsychism about systems as
wholes, and significance as informational distinctions (which it would be
natural to understand as being the basis of subjective experience), but this
is descriptive rather than explanatory.

I am not a biologist, but I am increasingly interested in finding answers to
what it is about living organisms that gives them a unity such that
something is something to the system as a whole.  The line of
investigation that theoretical biologists like Robert Rosen and other
NLDS/chaos people have pursued is interesting, but I am unfamiliar with
related work that might have made more progress on the system-level
properties that give life its characteristic unity and system-level
responsiveness.  To me, this seems the most likely candidate for a paradigm
shift that would produce AGI.  In contrast I'm not particularly convinced
that modeling a brain is a good way to get AGI, although I'd guess we could
learn a few more things about the coordination of complex behavior if we
could really understand them.

Another way to put this is that obviously evolutionary computation would be
more than just boring hill-climbing if we knew what an organism even IS
(perhaps in a more precise computational sense). If we can know what an
organism is then it should be (maybe) trivial to model concepts,
consciousness, and high level semantics to the umpteenth degree, or at least
this would be a major hurdle, I think.

Even assuming a solution to the problem posed above, there is still plenty
of room for other-minds skepticism about non-living entities implemented on
questionably foreign mediums, but there would be a lot more reason to sleep
well knowing that the science/technology is leading in a direction in which
questions about subjectivity could be meaningfully investigated.

Rob





Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread rob levy
(I'm a little late in this conversation.  I tried to send this message the
other day but I had my list membership configured wrong. -Rob)

-- Forwarded message --
From: rob levy r.p.l...@gmail.com
Date: Sun, Jun 20, 2010 at 5:48 PM
Subject: Re: [agi] An alternative plan to discover self-organization theory
To: agi@v2.listbox.com


On a related note, what is everyone's opinion on why evolutionary algorithms
are such a miserable failure as creative machines, despite their successes
in narrow optimization problems?

I don't want to conflate the possibly separable problems of biological
development and evolution, though they are interrelated.  There are various
approaches to evolutionary theory such as Lima de Faria's evolution without
selection ideas and Reid's evolution by natural experiment that suggest
natural selection is not  all it's cracked up to be, and that the step of
generating, (mutating, combining, ) is where the more interesting
stuff happens.  Most of the alternatives to Neodarwinian Synthesis I have
seen are based in dynamic models of emergence in complex systems. The upshot
is, you don't get creativity for free, you actually still need to solve a
problem that is as hard as AGI in order to get creativity for free.

So, you would need to solve the AGI-hard problem of evolution and
development of life, in order to then solve AGI itself (reminds me of the
old SNL sketch: first, get a million dollars...).  Also, my hunch is that
there is quite a bit of overlap between the solutions to the two problems.

Rob

Disclaimer: I'm discussing things above that I'm not and don't claim to be
an expert in, but from what I have seen so far on this list, that should be
alright.  AGI is by its nature very multidisciplinary which necessitates
often being breadth-first, and therefore shallow in some areas.


On Sun, Jun 20, 2010 at 2:06 AM, Steve Richfield
 steve.richfi...@gmail.com wrote:

 No, I haven't been smokin' any wacky tobacy. Instead, I was having a long
  talk with my son Eddie, about self-organization theory. This is *his* proposal:

 He suggested that I construct a simple NN that couldn't work without self
 organizing, and make dozens/hundreds of different neuron and synapse
 operational characteristics selectable ala genetic programming, put it on
 the fastest computer I could get my hands on, turn it loose trying arbitrary
 combinations of characteristics, and see what the winning combination
 turns out to be. Then, armed with that knowledge, refine the genetic
 characteristics and do it again, and iterate until it *efficiently* self
 organizes. This might go on for months, but self-organization theory might
 just emerge from such an effort. I had a bunch of objections to his
 approach, e.g.

 Q.  What if it needs something REALLY strange to work?
 A.  Who better than you to come up with a long list of really strange
 functionality?

 Q.  There are at least hundreds of bits in the genome.
 A.  Try combinations in pseudo-random order, with each bit getting asserted
 in ~half of the tests. If/when you stumble onto a combination that sort of
 works, switch to varying the bits one-at-a-time, and iterate in this way
 until the best combination is found.

 Q.  Where are we if this just burns electricity for a few months and finds
 nothing?
 A.  Print out the best combination, break out the wacky tobacy, and come up
 with even better/crazier parameters to test.
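
A minimal sketch of the two-phase search described in the Q&A above: try pseudo-random bit combinations with each bit asserted in roughly half the trials, and once something sort of works, switch to varying one bit at a time. The genome length, trial budget, and score() stub are placeholders; a real run would plug in the actual NN evaluation.

```python
import random

GENOME_BITS = 200          # placeholder for the hundreds of option bits
random.seed(0)

def score(genome):
    """Stub: in the real experiment this would build the NN variant and score
    how well it self-organizes. Here it is a meaningless toy objective."""
    return sum(genome[::3]) - 0.5 * sum(genome[1::3])

def random_phase(trials=1000):
    """Phase 1: arbitrary combinations, each bit asserted in ~half of the tests."""
    best, best_s = None, float("-inf")
    for _ in range(trials):
        g = [random.random() < 0.5 for _ in range(GENOME_BITS)]
        s = score(g)
        if s > best_s:
            best, best_s = g, s
    return best, best_s

def refine_phase(genome, best_s):
    """Phase 2: vary bits one at a time, keeping any flip that improves the score."""
    improved = True
    while improved:
        improved = False
        for i in range(GENOME_BITS):
            genome[i] = not genome[i]
            s = score(genome)
            if s > best_s:
                best_s, improved = s, True
            else:
                genome[i] = not genome[i]   # revert the flip
    return genome, best_s

g, s = random_phase()
g, s = refine_phase(g, s)
print("best score found:", s)
```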

 I have never written a line of genetic programming, but I know that others
 here have. Perhaps you could bring some rationality to this discussion?

 What would be a simple NN that needs self-organization? Maybe a small
 pot of neurons that could only work if they were organized into layers,
 e.g. a simple 64-neuron system that would work as a 4x4x4-layer visual
 recognition system, given the input that I fed it?

 Any thoughts on how to score partial successes?

 Has anyone tried anything like this in the past?

 Is anyone here crazy enough to want to help with such an effort?

 This Monte Carlo approach might just be simple enough to work, and simple
 enough that it just HAS to be tried.

 All thoughts, stones, and rotten fruit will be gratefully appreciated.

 Thanks in advance.

 Steve







Re: [agi] An alternative plan to discover self-organization theory

2010-06-21 Thread rob levy
Matt,

I'm not sure I buy that argument for the simple reason that we have massive
cheap processing now and pretty good knowledge of the initial conditions of
life on our planet (if we are going literal here and not EC in the
abstract), but it's definitely a possible answer.  Perhaps not enough people
have attempted to run evolutionary computation experiments at these massive
scales either.

Rob

On Mon, Jun 21, 2010 at 12:59 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 rob levy wrote:
  On a related note, what is everyone's opinion on why evolutionary
 algorithms are such a miserable failure as creative machines, despite
 their successes in narrow optimization problems?

 Lack of computing power. How much computation would you need to simulate
 the 3 billion years of evolution that created human intelligence?


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* rob levy r.p.l...@gmail.com
 *To:* agi agi@v2.listbox.com
 *Sent:* Mon, June 21, 2010 11:56:53 AM

 *Subject:* Re: [agi] An alternative plan to discover self-organization
 theory

 (I'm a little late in this conversation.  I tried to send this message the
 other day but I had my list membership configured wrong. -Rob)

 -- Forwarded message --
 From: rob levy r.p.l...@gmail.com
 Date: Sun, Jun 20, 2010 at 5:48 PM
 Subject: Re: [agi] An alternative plan to discover self-organization theory
 To: agi@v2.listbox.com


 On a related note, what is everyone's opinion on why evolutionary
 algorithms are such a miserable failure as creative machines, despite
 their successes in narrow optimization problems?

 I don't want to conflate the possibly separable problems of biological
 development and evolution, though they are interrelated.  There are various
 approaches to evolutionary theory, such as Lima de Faria's "evolution without
 selection" ideas and Reid's "evolution by natural experiment," that suggest
 natural selection is not all it's cracked up to be, and that the generating
 step (mutating, combining, ...) is where the more interesting
 stuff happens.  Most of the alternatives to Neodarwinian Synthesis I have
 seen are based in dynamic models of emergence in complex systems. The upshot
 is, you don't get creativity for free, you actually still need to solve a
 problem that is as hard as AGI in order to get creativity for free.

 So, you would need to solve the AGI-hard problem of evolution and
 development of life, in order to then solve AGI itself (reminds me of the
 old SNL sketch: first, get a million dollars...).  Also, my hunch is that
 there is quite a bit of overlap between the solutions to the two problems.

 Rob

 Disclaimer: I'm discussing things above that I'm not and don't claim to be
 an expert in, but from what I have seen so far on this list, that should be
 alright.  AGI is by its nature very multidisciplinary which necessitates
 often being breadth-first, and therefore shallow in some areas.


 On Sun, Jun 20, 2010 at 2:06 AM, Steve Richfield 
 steve.richfi...@gmail.com wrote:

 No, I haven't been smokin' any wacky tobacy. Instead, I was having a long
 talk with my son Eddie, about self-organization theory. This is 
  *his* proposal:

 He suggested that I construct a simple NN that couldn't work without
 self organizing, and make dozens/hundreds of different neuron and synapse
 operational characteristics selectable ala genetic programming, put it on
 the fastest computer I could get my hands on, turn it loose trying arbitrary
 combinations of characteristics, and see what the winning combination
 turns out to be. Then, armed with that knowledge, refine the genetic
 characteristics and do it again, and iterate until it *efficiently* self
 organizes. This might go on for months, but self-organization theory might
 just emerge from such an effort. I had a bunch of objections to his
 approach, e.g.

 Q.  What if it needs something REALLY strange to work?
 A.  Who better than you to come up with a long list of really strange
 functionality?

 Q.  There are at least hundreds of bits in the genome.
 A.  Try combinations in pseudo-random order, with each bit getting
 asserted in ~half of the tests. If/when you stumble onto a combination that
 sort of works, switch to varying the bits one-at-a-time, and iterate in this
 way until the best combination is found.

 Q.  Where are we if this just burns electricity for a few months and finds
 nothing?
 A.  Print out the best combination, break out the wacky tobacy, and come
 up with even better/crazier parameters to test.

 I have never written a line of genetic programming, but I know that others
 here have. Perhaps you could bring some rationality to this discussion?

 What would be a simple NN that needs self-organization? Maybe a small
 pot of neurons that could only work if they were organized into layers,
 e.g. a simple 64-neuron system that would work as a 4x4x4-layer visual
 recognition system, given the input that I fed it?

 Any thoughts