Re: [agi] AGI Alife
I spent a while back in the 90s trying to make AGI and alife converge, before establishing to my satisfaction that the approach is a dead end: we will never have anywhere near enough computing power to make alife evolve significant intelligence (the only known success took 4 billion years on a planet-sized nanocomputer network, after all), even if we could set up just the right selection pressures, which we can't.

On Tue, Jul 27, 2010 at 4:23 AM, Linas Vepstas linasveps...@gmail.com wrote:

I saw the following post from Antonio Alberti, on the LinkedIn discussion group "ALife and AGI":

Dear group participants. The relation between AGI and ALife greatly interests me. However, too few recent works try to relate them. For example, many papers presented at AGI-09 (http://agi-conf.org/2009/) are about program-learning algorithms (combining evolutionary learning and analytical learning). At AGI 2010, virtual pets were presented by Ben Goertzel and are also another topic of this forum. There are other approaches in AGI that use some digital evolutionary approach. For me this is a clear clue that the two are related in some instance. By ALife I mean the life-as-it-could-be approach: not simulating life, but using a digital environment to evolve digital organisms through digital evolution (faster than the natural kind -- see http://www.hplusmagazine.com/articles/science/stephen-hawking-%E2%80%9Chumans-have-entered-new-stage-evolution%E2%80%9D).

So, I would like to propose some discussion topics regarding ALife and AGI:

1) What is the role of Digital Evolution (and ALife) in the AGI context?
2) Is it possible that some aspects of AGI could self-emerge from the digital evolution of intelligent autonomous agents?
3) Is there any research group trying to converge both approaches?

Best Regards.

My reply was below:

For your question 3), I have no idea. For question 1), I can't say I've ever heard of anyone talk about this.
For question 2), I imagine the answer is yes, although the boundaries between what's Alife and what's program learning (for example) may be blurry.

So imagine, for example, a population of many different species of neurons (or should I call them automata? or maybe virtual ants?). Most of the individuals have only a few friends (a narrow social circle); the friendship relationship can be viewed as an axon-dendrite connection. These friendships are semi-stable: they evolve over time, and the type and quality of information exchanged in a friendship also varies. Is a social network of friends able to solve complex problems? The answer is seemingly yes, if the individuals are digital models of neurons. (To carry the analogy further: different species of individuals would be analogous to different types of neurons, e.g. Purkinje cells vs. pyramidal cells vs. granule cells vs. motor neurons. Individuals from one species may tend to be very gregarious, while those from other species might be generally xenophobic, etc.)

I have no clue whether anyone has ever explored genetic algorithms or related alife algorithms combined with the individuals being embedded in a social network (with actual information exchange between friends). No clue as to how natural/artificial selection should work. Do anti-social individuals have a possibly redeeming role w.r.t. the organism as a whole? Do selection pressures on individuals (weak individuals are culled) destroy social networks? Do such networks automatically evolve altruism, because a working social network with weak, altruistically-supported individuals is better than a shredded, dysfunctional social network consisting of only strong individuals? Dunno. Seems like there could be many, many interesting questions.

I'd be curious about the answers to Antonio's questions ...
--linas

---
agi Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com
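Linas' thought experiment can be made concrete with a toy simulation. The sketch below is purely illustrative, and every name in it (`Agent`, `rewire`, `cull_and_clone`, the coordination "fitness") is my own invention, not anything from the thread: agents with semi-stable friendships exchange signals, and selection replaces the weakest individual with a mutated clone of the strongest, so one can watch whether culling shreds the network or the signals still converge.

```python
import random

random.seed(42)

class Agent:
    """One individual; friendships are directed links to other agents."""
    def __init__(self, idx, sociability):
        self.idx = idx
        self.sociability = sociability  # how many friends it tends to keep
        self.friends = set()            # indices of befriended agents
        self.signal = random.random()   # the "information" it holds
        self.fitness = 0.0

def make_population(n=30):
    return [Agent(i, random.randint(1, 5)) for i in range(n)]

def rewire(pop):
    """Friendships are semi-stable: occasionally drop one, refill to taste."""
    for a in pop:
        if a.friends and random.random() < 0.2:
            a.friends.discard(random.choice(sorted(a.friends)))
        while len(a.friends) < a.sociability:
            b = random.randrange(len(pop))
            if b != a.idx:
                a.friends.add(b)

def exchange(pop):
    """Agents average their signal with friends' signals; fitness rewards
    agreement with the population mean (a stand-in for coordination)."""
    mean = sum(a.signal for a in pop) / len(pop)
    for a in pop:
        if a.friends:
            neigh = sum(pop[b].signal for b in a.friends) / len(a.friends)
            a.signal = 0.5 * a.signal + 0.5 * neigh
        a.fitness = -abs(a.signal - mean)

def cull_and_clone(pop):
    """Selection on individuals: the weakest agent is replaced in place by a
    mutated copy of the strongest. Does culling destroy the social network?"""
    ranked = sorted(pop, key=lambda a: a.fitness)
    weak, strong = ranked[0], ranked[-1]
    weak.sociability = max(1, strong.sociability + random.choice([-1, 0, 1]))
    weak.signal = strong.signal + random.gauss(0, 0.05)
    weak.friends.clear()  # the clone must rebuild its social circle

def spread(pop):
    signals = [a.signal for a in pop]
    return max(signals) - min(signals)

pop = make_population()
before = spread(pop)
for _ in range(50):
    rewire(pop)
    exchange(pop)
    cull_and_clone(pop)
after = spread(pop)
print(f"signal spread: {before:.3f} -> {after:.3f}")
```

In this toy setting the spread of signals contracts over the generations, i.e. the network keeps coordinating despite the per-individual culling; whether anything similar holds in a serious alife model is exactly the open question of the post.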
Re: [agi] AGI Alife
Linas Vepstas wrote:

First, my answers to Antonio:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

The nearest I can come up with is Goertzel's virtual preschool idea, where the environment is given and the proto-AGI learns within it. It's certainly possible to place such a proto-AGI into an evolving environment. I'm not sure how helpful this is, since then we also need to make sense of the evolving environment in order to assess what the agent does. But that's far from the synthetic-life approach, where environment and agents are usually not that predefined. And of those synthetic approaches I know about, they're mostly concerned with replicating natural evolution, adaptation, self-organization and so on. Some look into the emergence and evolution of cooperation, but that's often very low-level and more interested in general properties; far from AGI.

2) Is it possible that some aspects of AGI could self-emerge from the digital evolution of intelligent autonomous agents?

I guess it's possible. But I guess one won't come up with a mechanism that works in an AGI system, but rather with interesting properties of an AGI system. Most intelligent agents are faked, not really cognitive or so. In a simulation you see how agents develop/select strategies and what works in an (evolutionary) environment. Like (wild idea now) the ability to assign parts of an agent's cognitive capacity to memory or processing depending on the environmental context (more memory in unchanging and more processing in changing environments). Those properties could be integrated later as a detail of a bigger framework.

3) Is there any research group trying to converge both approaches?

My best ad-hoc idea is to scan through last year's alife conference program, look for papers that are promising, contact the authors and ask whether they are into AGI or know people who are.
http://www.ecal2009.org/documents/ECAL2009_program.pdf

One of the topics was artificial consciousness, and I saw several papers going in this direction, often indirectly, like the Swarm Cognition and Artificial Life paper on p. 34 or the first poster on p. 47.

Now to Linas' part: "Seems like there could be many many interesting questions."

Many of these are specialized issues that are researched in alife, but more so in social simulation. The Journal of Artificial Societies and Social Simulation (http://jasss.soc.surrey.ac.uk/JASSS.html) is a good starting point if anyone is interested.

cu Jan
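Jan's "wild idea" of splitting cognitive capacity between memory and processing depending on the environment can be tried out in a few lines. This is a hypothetical toy model of mine, not anything from the post: `environment` generates a noisy world whose volatility sets how fast it changes, `run_agent` scores a fixed memory/processing split on a prediction task, and a crude selection picks the fittest split per world.

```python
import random

random.seed(0)

def environment(volatility, length=200):
    """A random walk observed through noise; volatility sets how fast
    the world changes (0.0 = unchanging world, only observation noise)."""
    x, trace = 0.0, []
    for _ in range(length):
        x += random.gauss(0, volatility)
        trace.append(x + random.gauss(0, 0.1))
    return trace

def run_agent(memory_share, env):
    """Score a fixed memory/processing split on an environment trace.
    memory_share of the capacity goes to remembering the long-run average;
    the rest goes to "processing" (tracking the latest observation).
    The agent predicts the next observation; lower total error is better."""
    history_mean, latest, error = 0.0, 0.0, 0.0
    for t, x in enumerate(env):
        prediction = memory_share * history_mean + (1 - memory_share) * latest
        error += abs(prediction - x)
        history_mean += (x - history_mean) / (t + 1)  # running mean (memory)
        latest = x                                    # newest value (processing)
    return -error

stable = environment(volatility=0.0)
changing = environment(volatility=1.0)

# Crude "selection": try eleven fixed splits and keep the fittest per world.
best_stable = max((m / 10 for m in range(11)), key=lambda m: run_agent(m, stable))
best_changing = max((m / 10 for m in range(11)), key=lambda m: run_agent(m, changing))
print(f"best memory share: stable={best_stable}, changing={best_changing}")
```

As Jan conjectures, the selected allocation flips with the context: memory-heavy agents win in the unchanging world, processing-heavy agents in the volatile one.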
Re: [agi] AGI Alife
I did take a look at the journal. There is one question I have with regard to the assumptions.

Mathematically, the number of prisoners in the Prisoner's Dilemma cooperating or not reflects the prevalence of cooperators or non-cooperators present. Evolution *should* tend to von Neumann's zero-sum condition. This is an example of calculus solving a problem far more neatly and elegantly than GAs, which should only be used where there is no good or obvious calculus solution. This is my first observation.

My second observation is about societal punishment eliminating freeloaders. The fact of the matter is that *freeloading* is less of a problem in advanced societies than misplaced unselfishness. The 9/11 hijackers performed the most unselfish and unfreeloading of acts. I hope I am not accused of glorifying terrorism!

How fundamental an issue is this? It is fundamental in that simulations seem: 1) to be better done by calculus, and 2) not to be useful in simulating the things we are interested in. Neither of these is necessarily the case. We could in fact simulate opinion formation by social interaction; there would be no clear-cut calculus outcome there.

The third observation is that Google is itself a GA: it uses popular appeal in its page-ranking system. This is relevant to Matt's ideas. You can, for example, string programs or other entities together. Of course, to do this association one needs natural language. You will also need NL in setting up and describing any process of opinion formation. This is the great unsolved problem. In fact, any system not based on NL, but based on an analogue response, is describable by calculus.

- Ian Parker

On 27 July 2010 14:00, Jan Klauck jkla...@uni-osnabrueck.de wrote:

Seems like there could be many many interesting questions.

Many of these are specialized issues that are researched in alife but more in social simulation.
The Journal of Artificial Societies and Social Simulation http://jasss.soc.surrey.ac.uk/JASSS.html is a good starting point if anyone is interested. cu Jan
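Ian's first observation, that the evolutionary fate of cooperators in the Prisoner's Dilemma yields to calculus rather than to GAs, matches the standard replicator-dynamics treatment. A minimal sketch (textbook payoff ordering T > R > P > S; the particular numbers are my choice): since a defector always outscores a cooperator, the replicator equation drives the cooperator fraction to zero, and a few lines of numerical integration confirm it with no GA in sight.

```python
# Replicator dynamics for the two-strategy Prisoner's Dilemma.
# Row-player payoffs: C/C -> R, C/D -> S, D/C -> T, D/D -> P, with T > R > P > S.
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def xdot(x):
    """Replicator equation for x, the fraction of cooperators."""
    fc = R * x + S * (1 - x)        # expected payoff of a cooperator
    fd = T * x + P * (1 - x)        # expected payoff of a defector
    return x * (1 - x) * (fc - fd)  # here fc - fd = -(1 + x) < 0 always

# Forward-Euler integration from a 90% cooperative start; no GA required.
x, dt = 0.9, 0.01
for _ in range(10_000):
    x += dt * xdot(x)
print(f"cooperator fraction after t = 100: {x:.6f}")
```

The sign of fc - fd settles the question analytically before any integration: cooperation is driven out in the well-mixed, unrepeated game, which is exactly the zero-sum-tending outcome Ian describes.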
Re: [agi] AGI Alife
I think I should say that for a problem to be suitable for GAs, the space in which it is embedded has to be non-linear; otherwise we have an easy calculus solution. A fair number of such systems are described at http://www.springerlink.com/content/h46r77k291rn/?p=bfaf36a87f704d5cbcb66429f9c8a808pi=0

- Ian Parker
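Conversely, a multimodal objective shows where the GA side of Ian's dichotomy earns its keep. The sketch below is illustrative (the objective, operators and parameters are my own choices, not taken from the linked paper): a tiny GA with truncation selection, arithmetic crossover and Gaussian mutation locates the global maximum of x*sin(x) on [0, 10], a non-linear function where naive gradient ascent can stall on a local peak.

```python
import math
import random

random.seed(1)

def f(x):
    """Multimodal objective on [0, 10]: global maximum near x ~ 7.98,
    with a lower local peak near x ~ 2.03 to trap hill-climbers."""
    return x * math.sin(x)

def ga(pop_size=40, generations=60):
    pop = [random.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)
        parents = pop[: pop_size // 2]       # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2              # arithmetic crossover
            child += random.gauss(0, 0.3)    # Gaussian mutation
            children.append(min(10.0, max(0.0, child)))
        pop = parents + children             # parents survive unchanged
    return max(pop, key=f)

best = ga()
print(f"best x = {best:.3f}, f(best) = {f(best):.3f}")
```

Because the top half of the population survives each generation unchanged, the best-so-far never degrades, and the population settles on the global basin rather than the deceptive local one; on a smooth unimodal objective the same machinery would indeed be wasted effort compared with a calculus solution.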
Re: [agi] AGI Alife
Evolving AGI via an Alife approach would be possible, but would likely take many orders of magnitude more resources than engineering AGI... I worked on Alife years ago and became frustrated that the artificial biology and artificial chemistry one uses is never as fecund as the real thing. We don't understand which aspects of biology and chemistry are really important for the evolution of complex structures. So, approaching AGI via Alife just replaces one complex set of confusions with another ;-) ... I think that releasing some well-engineered AGI systems into an Alife-type environment, and letting them advance and evolve further, would be an awesome experiment, though ;)

-- Ben G

On Mon, Jul 26, 2010 at 11:23 PM, Linas Vepstas linasveps...@gmail.com wrote: [...]

--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

"I admit that two times two makes four is an excellent thing, but if we are to give everything its due, two times two makes five is sometimes a very charming thing too." -- Fyodor Dostoevsky
[agi] Clues to the Mind: Learning Ability
http://www.facebook.com/video/video.php?v=287151911466

See how much the parrot can learn! Does that mean that the parrot has intelligence? Would this parrot pass the Turing test? There must be a learning center in the brain which is much lower than the higher cognitive functions like imagination and thought.

cheers,
Deepak