[agi] singularity humor

2006-07-13 Thread Eric Baum
Top ten signs the singularity has arrived
http://www.deanesmay.com/posts/1152629462.shtml



Re: [agi] Soar vs Novamente

2006-07-13 Thread James Ratcliff
Just some quick comments.  It appears to me that perhaps the primary topic in question is an ability to generalize or abstract knowledge to varieties of situations.  I would say that for the most part Soar is very good at *representing* and *using* composable (and therefore generalized) knowledge representations, but it is not so far Soar's strong suit to *create* such knowledge representations.  There has been a bit of research in the past to get Soar to do inductive learning, and those efforts have currently shifted a bit to "stepping outside" the standard Soar model and integrating in capabilities for reinforcement learning and episodic learning.  However, these efforts are in early stages.  For the most part when we want nice generalized knowledge in Soar (which is often, when we are trying to build robust cognitive models or intelligent agents), we engineer the abstractions and knowledge representations directly into the system.

One strength of Soar (in my opinion) is that it encourages "composable" knowledge representations that can rapidly "assemble themselves" (again with the proper hard-coded engineering) into wide varieties of actions or solutions to problems.  So for example, rather than having 1000 different schemas for opening different kinds of doors, or one monolithic high-level schema, the typical approach in Soar would be to engineer independently the various small steps that can compose into a variety of door-opening schemas, and then layer on top of those low-level actions a hierarchy of potential situations (or partial situations) in which the various steps would be appropriate to execute.  Done "correctly", this can lead to a robust reasoning system that can easily switch its behavior as the environment changes.

However, there is a big caveat here.  Although I claim (and believe) that Soar encourages the development of such robust models, it does not *require* you to represent your knowledge that way.  It is certainly easy to build brittle systems in Soar, containing knowledge that is not abstracted well.  An engineer has to do the work of finding the right abstractions, which it sounds to me is where some of the focus is in Novamente.  Once you have some reasonable abstractions, though, Soar provides a good engine for representing the knowledge in modular and efficient ways.

Randy Jones

Ben Goertzel [EMAIL PROTECTED] wrote:

 One of the key ideas underlying the NM design is to fully integrate
 the top-down (logical problem solving and reasoning) based approach
 with the bottom-up (unsupervised, reinforcement-learning-based
 statistical pattern recognition) based approach.

 SOAR basically lies firmly in the former camp...

 -- Ben

 On 7/12/06, Yan King Yin <[EMAIL PROTECTED]> wrote:

   (From a former Soar researcher)
   [...]
   Generally, the bottom-up pattern based systems do better at noisy
   pattern recognition problems (perception problems like recognizing
   letters in scanned OCR text or building complex perception-action
   graphs where the decisions are largely probabilistic like playing
   backgammon or assigning labels to chemical molecules).  Top-down
   reasoning systems like Soar generally do better at higher level
   reasoning problems.  Selecting the correct formation and movements
   for a squad of troops when clearing a building, or receiving English
   instructions from a human operator to guide a robot through a
   burning building.
   [...]
   Doug

  From what I read, Soar also deals with (or has provisos to deal with)
  sensory processing, otherwise it wouldn't be the "unified cognitive
  architecture" as Allen Newell has intended it to be.  The difference in
  emphasis between Novamente on perceptual learning and Soar on top-down
  reasoning, may be real but ideally it should not be accepted prima facie.
  IMO these 2 emphases should be integrated seamlessly.

  YKY

Thank You
James Ratcliff
http://falazar.com
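As a rough illustration of the composable-schema idea described above (small, independently engineered steps plus a layer that selects which steps fit the current situation, rather than one schema per door type), here is a minimal sketch. It is not Soar code; all names, predicates, and situations are invented for illustration.

# Hypothetical sketch of "composable" action steps plus a situation layer,
# in the spirit of the Soar-style engineering described above.
# None of this is Soar code; names and conditions are invented.

from typing import Callable, Dict, List

class Step:
    """A small, independently engineered action step."""
    def __init__(self, name: str, applies: Callable[[dict], bool]):
        self.name = name
        self.applies = applies  # predicate over the current situation

# Low-level steps that many door-opening behaviors can share.
STEPS: List[Step] = [
    Step("grasp_handle", lambda s: s.get("handle") in ("knob", "lever")),
    Step("turn_knob",    lambda s: s.get("handle") == "knob"),
    Step("press_lever",  lambda s: s.get("handle") == "lever"),
    Step("push_door",    lambda s: s.get("swings") == "away"),
    Step("pull_door",    lambda s: s.get("swings") == "toward"),
    Step("slide_door",   lambda s: s.get("swings") == "sideways"),
]

def compose_plan(situation: Dict[str, str]) -> List[str]:
    """Assemble a door-opening plan from whichever steps match the situation."""
    return [step.name for step in STEPS if step.applies(situation)]

# Two different doors, one shared set of steps: the plan "assembles itself".
print(compose_plan({"handle": "knob",  "swings": "away"}))      # ['grasp_handle', 'turn_knob', 'push_door']
print(compose_plan({"handle": "lever", "swings": "sideways"}))  # ['grasp_handle', 'press_lever', 'slide_door']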


Re: [agi] Computing Intelligence? How too? ................. ping

2006-07-13 Thread James Ratcliff
Danny,
  I just read an interesting article that goes through a more formal proof of intelligence, "A formal measure of Machine Intelligence", which I may have gotten from a link from this group or another.

http://www.vetta.org/documents/ui_benelearn.pdf

James Ratcliff

[EMAIL PROTECTED] wrote:

 Does anyone know where I might find information on Fitness algorithms?
 How does one determine the level of Intelligence of any given AI System
 configuration?

 What are the best methods of computing algorithm fitness or intelligence?

 Dan Goe

 From : Samantha Atkins <[EMAIL PROTECTED]>
 To : agi@v2.listbox.com
 Subject : [agi] ping
 Date : Wed, 5 Jul 2006 12:47:58 -0700

  No mail seen since 6/30.  Testing.

Thank You
James Ratcliff
http://falazar.com
	


Thanks James.. [agi] Computing Intelligence? How too? ................. ping

2006-07-13 Thread DGoe
James
Many thanks for the link on Computing Intelligence. 
Dan Goe

From : James Ratcliff [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Computing Intelligence? How too? . 
ping 
Date : Thu, 13 Jul 2006 07:44:26 -0700 (PDT)
 Danny,
   I just read an interesting article that goes through a more formal 
proof of intelligence 
   A formal measure of Machine Intelligence
 which I may have gotten from a link from this group or another.
 
 http://www.vetta.org/documents/ui_benelearn.pdf
 
 James Ratcliff
 
 
 [EMAIL PROTECTED] wrote: 
 Does anyone know where I might find information on Fitness algorithms?  
 How does one determine the level of Intelligence of any given AI System 
 configuration? 
 
 What are the best methods of computing algorithm fitness or 
intelligence? 
 
 Dan Goe
 
 
 From : Samantha Atkins 
 To : agi@v2.listbox.com
 Subject : [agi] ping
 Date : Wed, 5 Jul 2006 12:47:58 -0700
  No mail seen since 6/30.  Testing.
  
 
 
 
 Thank You
 James Ratcliff
 http://falazar.com
   





Re: [agi] Soar vs Novamente

2006-07-13 Thread Pei Wang

Soar, like other cognitive architectures (such as ACT-R), is not
designed to directly deal with domain problems. Instead, it is a
high-level platform on which a program can be built for a specific
problem.

On the contrary, Novamente, like other AGI systems (such as NARS),
is designed to directly deal with domain problems. To work well,
usually it needs to be trained with domain-specific knowledge, but
such a training process is fundamentally different from a
programming process.

To me, many other differences, such as the role of learning, follow
from the above difference between "program to work" and "learn to
work".

The current issue of AI Magazine
(http://www.aaai.org/Library/Magazine/vol27.php#Summer) is highly
relevant to this discussion. In particular, the articles by Langley,
Cassimatis, and Jones & Wray provide good introductions and discussions
about cognitive architectures.

Pei


On 7/13/06, Ben Goertzel [EMAIL PROTECTED] wrote:

Thanks, Randy.  This is very well put.

Yes, one of the key things missing in rule and logic based AI systems
like SOAR is the learning of new representations to match new
situations and problems.

Interestingly, this is also one of the key things missing in
evolutionary learning as conventionally implemented.  My colleague
Moshe Looks has been working on a modified approach to evolutionary
learning that involves automatically learning new representations for
new problems; it is called MOSES and is being written for integration
into Novamente as well as for standalone use.  Some information on
MOSES is here if you're curious:

http://metacog.org/doc.html

-- Ben

On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:
 Just some quick comments. It appears to me that perhaps the primary
 topic in question is an ability to generalize or abstract knowledge to
 varieties of situations. I would say that for the most part Soar is
 very good at *representing* and *using* composable (and therefore
 generalized) knowledge representations, but it is not so far Soar's
 strong suit to *create* such knowledge representations. There has been
 a bit of research in the past to get Soar to do inductive learning, and
 those efforts have currently shifted a bit to stepping outside the
 standard Soar model and integrating in capabilities for reinforcement
 learning and episodic learning. However, these efforts are in early
 stages. For the most part when we want nice generalized knowledge in
 Soar (which is often, when we are trying to build robust cognitive
 models or intelligent agents), we engineer the abstractions and
 knowledge representations
  directly into the system.

 One strength of Soar (in my opinion) is that it encourages composable
 knowledge representations that can rapidly assemble themselves (again
 with the proper hard-coded engineering) into wide varieties of actions
 or solutions to problems. So for example, rather than having 1000
 different schemas for opening different kinds of doors, or one
 monolithic high-level schema, the typical approach in Soar would be to
 engineer independently the various small steps that can compose into a
 variety of door-opening schemas, and then layer on top of those
 low-level actions a hierarchy of potential situations (or partial
 situations) in which the various steps would be appropriate to execute.
  Done correctly, this can lead to a robust reasoning system that can
 easily switch its behavior as the environment changes.

 However, there is a big caveat here. Although I claim (and believe)
 that Soar
  encourages the development of such robust models, it does not
 *require* you to represent your knowledge that way. It is certainly
 easy to build brittle systems in Soar, containing knowledge that is not
 abstracted well. An engineer has to do the work of finding the right
 abstractions, which it sounds to me like where some of the focus is in
 Novamente. Once you have some reasonable abstractions, though, Soar
 provides a good engine for representing the knowledge in modular and
 efficient ways.

 Randy Jones


 Ben Goertzel [EMAIL PROTECTED] wrote:

  One of the key ideas underlying the NM design is to fully integrate
 the top-down (logical problem solving and reasoning) based approach
 with the bottom-up (unsupervised, reinforcement-learning-based
 statistical pattern recognition) based approach.

 SOAR basically lies firmly in the former camp...

 -- Ben


 On 7/12/06, Yan King Yin wrote:
 
   (From a former Soar researcher)
   [...]
   Generally, the bottom-up pattern based systems do better at noisy
 pattern
  recognition problems (perception problems like recognizing letters in
  scanned OCR text or building complex perception-action graphs where the
  decisions are largely probabilistic like playing backgammon or assigning
  labels to chemical molecules). Top-down reasoning systems like Soar
  generally do better at higher level reasoning problems. Selecting the
  correct formation and movements for a squad of troops when clearing a

Re: [agi] Processing speed for core intelligence in human brain

2006-07-13 Thread Mark Waser



My personal guesstimate is that what are commonly considered the higher
order cognitive functions use way less than 1% of the total power estimated
for the brain (and also, that the brain does them very inefficiently, so a
better implementation would use even less power).

On the other hand, I also believe that *everything* "higher order" in the
brain runs on top of and requires a massive amount of parallel
pattern-matching power -- and that this is going to be the final limiting
factor on reproducing human-level intelligence (and that we are probably at
least ten years from the necessary processing power and architecture and
algorithms for this).
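As a back-of-envelope illustration of the "way less than 1%" guess above: the whole-brain figure used below (~10^16 operations per second) is a commonly quoted rough estimate, not a number from this thread, and the 1% fraction is simply the guess carried through.

# Back-of-envelope arithmetic for the "less than 1% of the brain" guess above.
# The 1e16 ops/s whole-brain figure is a commonly quoted rough estimate
# (assumption), and the 1% fraction is simply the guess carried through.

whole_brain_ops_per_sec = 1e16   # assumed rough estimate for the whole brain
higher_order_fraction   = 0.01   # "way less than 1%" -- upper bound of the guess

higher_order_ops = whole_brain_ops_per_sec * higher_order_fraction
print(f"Higher-order budget: {higher_order_ops:.0e} ops/s")            # 1e+14 ops/s

# Compare with a hypothetical commodity machine (~1e11 ops/s, assumed).
machine_ops = 1e11
print(f"Roughly {higher_order_ops / machine_ops:.0f} such machines")   # ~1000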

- Original Message -
From: Joshua Fox
To: agi@v2.listbox.com
Sent: Thursday, July 13, 2006 1:55 PM
Subject: [agi] Processing speed for core intelligence in human brain

 Greetings, I am new to the list. I hope that the following question adds
 something of value.

 Estimates for the total processing speed of intelligence in the human
 brain are often used as crude guides to understanding the timeline towards
 human-equivalent intelligence.

 Would someone venture to guesstimate -- even within a couple of orders of
 magnitude -- the total processing speed of higher order cognitive
 functions, in contrast to lower-order functions like sensing and
 actuation. (Use any definition of "higher" and "lower" order which seems
 reasonable to you.) I appreciate the problems with estimating
 human-equivalent intelligence based on raw speed, and I recognize that
 tightly integrated lower-order functionality may be essential to full
 general intelligence.

 Nonetheless, it would be fascinating to learn, e.g., that the "core" of
 human intelligence use only 1% of the total power estimated for the brain.
 That would suggest that if lower order functions can be "outsourced" to
 the many projects now working on them, and offloaded at runtime to remote
 systems, then human-order raw power may be closer than we thought.

 Joshua
  



Re: [agi] Processing speed for core intelligence in human brain

2006-07-13 Thread Richard Loosemore

Joshua Fox wrote:
Greetings, I am new to the list. I hope that the following question adds 
something of value.


Estimates for the total processing speed of intelligence in the human 
brain are often used as crude guides to understanding the timeline 
towards human-equivalent intelligence.


Would someone venture to guesstimate -- even within a couple of orders 
of magnitude -- the total processing speed of higher order cognitive 
functions, in contrast to lower-order functions like sensing and 
actuation. (Use any definition of higher and lower order which seems 
reasonable to you.)


I appreciate the problems with estimating human-equivalent intelligence 
based on raw speed, and I recognize that tightly integrated lower-order 
functionality may be essential to full general intelligence.
 
Nonetheless, it would be fascinating to learn, e.g., that the core of 
human intelligence use only 1% of the total power estimated for the 
brain. That would suggest that /if/ lower order functions can be 
outsourced to the many projects now working on them, and  offloaded at 
runtime to remote systems, then human-order raw power may be closer than 
we thought.
 
Joshua



Joshua,

I recently addressed a similar issue on the SL4 list, so here is an 
expanded version of my calculation for what I think is involved in 
higher order processing.  My thoughts were geared towards estimating 
when the hardware would be available.  (Answer: yesterday.)


1) Quick Introduction

The basis for these calculations is the idea that the human cognitive 
system does all of its real work by keeping a set of elements 
simultaneously active and allowing them to constrain one another. 
Simple enough idea.  Basis of neural nets, actors, etc.


Then, starting with this idea, I use the fact that the brain is 
organized into cortical columns, and I would (cautiously) hypothesize 
that these could be implementing a grid of cells on which these elements 
can live, when they are active.  This allows us to start talking about 
possible numbers for the simultaneously active elements and their 
operating timescale.


Finally, notice that a good chunk of the cortical column real estate is 
probably devoted to visual processing.  Now, some of this would not just 
be doing data driven processing (which would come under the heading of 
peripheral work, which we want to keep out of the calculation) but 
interactive processing that includes top-down constraints.  Difficult to 
say how much of this visual processing really counts as higher order 
thought, but my guess would be that some fraction of it is not.


2) The Calculation Itself

Approximate number of cortical columns:  1,000,000.  If each of these is 
hosting a single concept, but they are providing a facility for moving 
the concept from one column to the next in real time, to allow concepts 
to make transient connections to near neighbors, then most of them may 
be just available for liquidity purposes (imagine a chinese puzzle on a 
large scale... more empty blocks means more potential for the blocks to 
move around, and hence greater liquidity).  So, number of simultaneously 
active processes will be much less than 1,000,000.


My use of the cortical column idea is really just meant as an upper 
bound:  I am not committed to this interpretation of what the columns 
are doing.


Second datum to use:  the sensorium (the sum total of what is actively 
involved in our current representation of the state of the world and the 
content of our abstract thoughts) is likely to contain much less than 
1,000,000 simultaneously active concepts.  Why?  Mostly because the 
contents of a good sized encyclopaedia would involve less than a million 
concepts, and we barely have enough words in our language for that many 
distinct, nameable concepts.  It is hard to believe that we keep 
anything like that many concepts active at once.


Using the above two factors, we could hazard a guess at perhaps as few 
as 10,000 simultaneously active high-level concepts, not a million.  My 
gut feeling is that this is a conservative estimate (i.e. too high).
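Purely to put a throughput number on the estimate above, here is one way to turn it into operations per second. Only the 10,000 simultaneously active elements come from the message; the fan-out, per-interaction cost, and update rate are assumed placeholders, not values given here.

# Rough throughput implied by the estimate of ~10,000 simultaneously active
# high-level elements constraining one another.  Only the 10,000 figure comes
# from the message; neighbours, cost, and update rate are assumptions.

active_elements     = 10_000   # conservative estimate from the text above
neighbours_per_elem = 100      # assumed fan-out for the relaxation/constraint step
ops_per_interaction = 100      # assumed cost of one pairwise constraint update
updates_per_second  = 100      # assumed, roughly a fast cortical timescale

ops_per_sec = active_elements * neighbours_per_elem * ops_per_interaction * updates_per_second
print(f"~{ops_per_sec:.1e} ops/s")   # ~1.0e+10 ops/s, i.e. within reach of commodity hardware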


Further suppose that the function of concepts, when active, is to engage 
in relatively simple interactions with neighbors in order to carry out 
multiple simultaneous relaxation along several dimensions.  When the 
concepts are not active they have to go through different sorts of 
calculations (debriefing after an episode of being used), and when they 
are being activated they have to (effectively) travel from their home 
column to where they are needed.  Considering these other computations 
together we notice that the cortical column may implement multiple 
functions that do not need to be simultaneously active.


Now, all of the above functions are consistent with the complexity and 
layout of the columns.  Notice that what is actually being computed is 
relatively simple, but because of the nature of the column wiring the 
functions take a good deal of wiring to 

Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Shane Legg
James,

Currently I'm writing a much longer paper (about 40 pages) on intelligence
measurement.  A draft version of this will be ready in about a month which
I hope to circulate around a bit for comments and criticism.  There is also
another guy who has recently come to my attention who is doing very
similar stuff.  He has a 50 page paper on formal measures of machine
intelligence that should be coming out in coming months.

I'll make a post here when either of these papers becomes available.

Shane



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Shane Legg
On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
 Shane,
 Do you mean Warren Smith?

Yes.

Shane



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread James Ratcliff
Shane,
  Thanks, I would appreciate that greatly.

On the topic of measuring intelligence, what do you think about the actual structure of comparison of some of today's AI systems?  I would like to see someone come up with and get support for a general, fairly widespread set of tests for general AI other than the Turing test.  I have recently been working with some testing stuff with the KM from UT.  It and two other systems took and passed an AP exam for chemistry, which, though limited, is an impressive feat itself.

James Ratcliff

Shane Legg [EMAIL PROTECTED] wrote:

 James,

 Currently I'm writing a much longer paper (about 40 pages) on intelligence
 measurement.  A draft version of this will be ready in about a month which
 I hope to circulate around a bit for comments and criticism.  There is also
 another guy who has recently come to my attention who is doing very
 similar stuff.  He has a 50 page paper on formal measures of machine
 intelligence that should be coming out in coming months.

 I'll make a post here when either of these papers becomes available.

 Shane

Thank You
James Ratcliff
http://falazar.com


Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Ben Goertzel

I think that public learning/training of an AGI would be a terrible disaster...

Look at what happened with OpenMind and MindPixel...  These projects
allowed the public to upload knowledge into them, which resulted in a
lot of knowledge of the general nature "Jennifer Lopez got a nice
butt", etc.

Jason Hutchens once showed me two versions of his statistical learning
based conversation system, MegaHal.  One was trained by him, the other
by random web-surfers.  The former displayed some occasional apparent
intelligence, the latter constantly spewed amusing but eventually
boring junk about penises and such.

I had the idea once to teach an AI system in Lojban, and then let
random Lojban speakers over the Web interact with it to teach it.
This might work, because the barrier to entry is so high.  Anyone who
has bothered to learn Lojban is probably a serious nerd and wouldn't
feel like filling the AI's mind with a bunch of junk.  Of course, I
haven't bothered to learn Lojban well yet, though ;-( ...

-- Ben

On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:



Ben Goertzel [EMAIL PROTECTED] wrote:

   While AIXI is all a bit pie in the sky, mathematical philosophy if
you
  like,
  I think the above does however highlight something of practical
importance:
  Even if your AI is incomputably super powerful, like AIXI, the training
and
  education of the AI is still really important. Very few people spend
time
  thinking about how to teach and train a baby AI. I think this is a
greatly
  ignored aspect of AI.

 Agree, but there is a reason: before a baby AI is actually built,
 not to much can be said about its education. For example, assume both
 AIXI and NARS are successfully built, they will need to be educated in
 quite different ways (though there will be some similarity), given the
 different design. I'll worry about education after the details of the
 system are relatively stable.

Pei,

I think you are right that the process of education and mental
development is going to be different for different types of AGI
systems.

However, I don't think it has to be dramatically different for each
very specific AGI design. And I don't think one has to wait till one
has a working AGI to put serious analysis into its psychological
development and instruction.

In the context of Novamente, I have put a lot of thought into how
mental development should occur for AGI systems that are

-- heavily based on uncertain inference
-- embodied in a real or simulated world where they get to interact
with other agents

Novamente falls into this category, but so do other AGI designs.

A few of my and Stephan Bugaj's thoughts on this are described here:

http://www.agiri.org/forum/index.php?showtopic=158

and here:

http://www.novamente.net/engine/

(see Stage of Cognitive Development...)

I have a whole lot of informal notes written down on AGI Developmental
Psychology, extending the general ideas in this presentation/paper,
and will probably write them up as a manuscript one day...

-- Ben


Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread James Ratcliff
Ben,
  Yes, but OpenMind did get quite a bit of usable information into it as well, and mainly they learned a lot about the process.  I believe, and they are looking at as well, different ways of grading the participants themselves, so the obviously juvenile ones could be graded down and out of the system.
  Likewise the processes themselves could be graded as to functionality and correctness, with the ability of a user to look at multiple task processes like "Pick up the Ball" and vote on ones that are more functional.

At the very least, I would like to open it up to a number of people, and that would speed along the creation of many processes faster than I alone could ever do.

James Ratcliff

Ben Goertzel [EMAIL PROTECTED] wrote:

 I think that public learning/training of an AGI would be a terrible disaster...

 Look at what happened with OpenMind and MindPixel...  These projects
 allowed the public to upload knowledge into them, which resulted in a
 lot of knowledge of the general nature "Jennifer Lopez got a nice
 butt", etc.

 Jason Hutchens once showed me two versions of his statistical learning
 based conversation system, MegaHal.  One was trained by him, the other
 by random web-surfers.  The former displayed some occasional apparent
 intelligence, the latter constantly spewed amusing but eventually
 boring junk about penises and such.

 I had the idea once to teach an AI system in Lojban, and then let
 random Lojban speakers over the Web interact with it to teach it.
 This might work, because the barrier to entry is so high.  Anyone who
 has bothered to learn Lojban is probably a serious nerd and wouldn't
 feel like filling the AI's mind with a bunch of junk.  Of course, I
 haven't bothered to learn Lojban well yet, though ;-( ...

 -- Ben
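A minimal sketch of the contributor-grading and process-voting scheme James describes above (grading participants down, and voting on competing task processes like "Pick up the Ball"). The weighting rule and numbers are invented for illustration and do not describe OpenMind, MindPixel, or any existing system.

# Hypothetical sketch of reputation-weighted voting over contributed "task
# processes", in the spirit of the grading idea above.  Scoring rule and
# thresholds are invented, not taken from any real system.

from collections import defaultdict

contributor_reputation = defaultdict(lambda: 1.0)  # everyone starts at neutral weight
contributor_reputation["mallory"] = 0.2            # previously graded down

def record_vote(scores, process_id, voter, vote):
    """vote is +1 (functional) or -1 (junk); weight it by the voter's reputation."""
    scores[process_id] += vote * contributor_reputation[voter]

def update_reputation(author, accepted):
    """Grade contributors up or down as their submissions are accepted or rejected."""
    contributor_reputation[author] *= 1.1 if accepted else 0.5

# Example: two candidate processes for the same task, voted on by three users.
scores = defaultdict(float)
for voter, vote in [("alice", +1), ("bob", +1), ("mallory", -1)]:
    record_vote(scores, "pick_up_ball_v1", voter, vote)
record_vote(scores, "pick_up_ball_v2", "mallory", +1)

best = max(scores, key=scores.get)
print(f"{best}: {scores[best]:.2f}")                 # pick_up_ball_v1: 1.80
update_reputation("author_of_v2", accepted=False)    # graded down toward removal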

Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Ben Goertzel

I agree that using the Net to recruit a team of volunteer AGI
teachers would be a good idea.

But opening the process up to random web-surfers is, IMO, asking for trouble...!

-- Ben

On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:

Ben,
  Yes, but OpenMind did get quite a bit of usable information into it as
well, and mainly they learned a lot about the process.  I believe, and they
are looking at as well, different ways of grading the participants
themselves, so the obviously juvenile ones could be graded down and out of
the system.
  Likewise the processes themselves could be graded as to functionality and
correctness, with the ability of a user to look at multiple task processes
like "Pick up the Ball" and vote on ones that are more functional.

At the very least, I would like to open it up to a number of people, and
that would speed along the creation of many processes faster than I alone
could ever do.

James Ratcliff


Ben Goertzel [EMAIL PROTECTED] wrote:

 I think that public learning/training of an AGI would be a terrible
disaster...

Look at what happened with OpenMind and MindPixel...  These projects
allowed the public to upload knowledge into them, which resulted in a
lot of knowledge of the general nature "Jennifer Lopez got a nice
butt", etc.

Jason Hutchens once showed me two versions of his statistical learning
based conversation system, MegaHal. One was trained by him, the other
by random web-surfers. The former displayed some occasional apparent
intelligence, the latter constantly spewed amusing but eventually
boring junk about penises and such.

I had the idea once to teach an AI system in Lojban, and then let
random Lojban speakers over the Web interact with it to teach it.
This might work, because the barrier to entry is so high. Anyone who
has bothered to learn Lojban is probably a serious nerd and wouldn't
feel like filling the AI's mind with a bunch of junk. Of course, I
haven't bothered to learn Lojban well yet, though ;-( ...

-- Ben

On 7/13/06, James Ratcliff wrote:



 Ben Goertzel wrote:

   While AIXI is all a bit pie in the sky, mathematical philosophy if
 you
   like,
   I think the above does however highlight something of practical
 importance:
   Even if your AI is incomputably super powerful, like AIXI, the
training
 and
   education of the AI is still really important. Very few people spend
 time
   thinking about how to teach and train a baby AI. I think this is a
 greatly
   ignored aspect of AI.
 
  Agree, but there is a reason: before a baby AI is actually built,
  not to much can be said about its education. For example, assume both
  AIXI and NARS are successfully built, they will need to be educated in
  quite different ways (though there will be some similarity), given the
  different design. I'll worry about education after the details of the
  system are relatively stable.


 Pei,

 I think you are right that the process of education and mental
 development is going to be different for different types of AGI
 systems.

 However, I don't think it has to be dramatically different for each
 very specific AGI design. And I don't think one has to wait till one
 has a working AGI to put serious analysis into its psychological
 development and instruction.

 In the context of Novamente, I have put a lot of thought into how
 mental development should occur for AGI systems that are

 -- heavily based on uncertain inference
 -- embodied in a real or simulated world where they get to interact
 with other agents

 Novamente falls into this category, but so do other AGI designs.

 A few of my and Stephan Bugaj's thoughts on this are described here:

 http://www.agiri.org/forum/index.php?showtopic=158

 and here:

 http://www.novamente.net/engine/

 (see Stage of Cognitive Development...)

 I have a whole lot of informal notes written down on AGI Developmental
 Psychology, extending the general ideas in this presentation/paper,
 and will probably write them up as a manuscript one day...

 -- Ben


Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Pei Wang

Ben,

Though Piaget is my favorite psychologist, I don't think his theory on
Developmental Psychology applies to AI to the extent you suggested.
One major reason is: in a human baby, the mental learning process in
the mind and the biological developing process in the brain happen
together, while in AI the former will occur within a mostly fixed
hardware system. Also, an AI system doesn't have to first develop
capabilities responsible for the survival of a human baby.

As a result, for example, Novamente can do some abstract inference (a
formal stage activity) before being able to recognize complicated
patterns (an infantile stage activity).

Of course, certain general principles of education will remain, such
as to teach simple topics before difficult ones, to combine
lectures with questions and exercises, to explain abstract materials
with concrete examples, and so on, but I don't think we can get into
too much detail with confidence.

As for AIXI, since its input comes from a finite perception space
and a real-number reward space, its output is selected from a fixed
action space, and for a given history (past input and output) there
is a fixed (though unknown) probability for each possible input to
occur, the best training strategy will be very different from the case
of Novamente, which is not based on such assumptions.
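For concreteness, the bare interaction loop Pei is referring to for AIXI-style formulations looks roughly like the sketch below: a fixed finite action set, a finite perception space, and a scalar reward each cycle. Only the interface is meaningful here; the agent shown is a trivial random stand-in, since AIXI's actual policy is incomputable, and the environment dynamics are invented for illustration.

# Schematic of the agent-environment interaction described above for
# AIXI-style setups: fixed action set, finite perception space, scalar
# reward per cycle.  The agent is a trivial stand-in (random policy).

import random

ACTIONS = ["left", "right", "stay"]   # fixed, finite action space
PERCEPTS = ["low", "mid", "high"]     # finite perception space

def environment(history, action):
    """Unknown-but-fixed environment: returns (percept, reward) given the history."""
    percept = random.choice(PERCEPTS)            # stand-in dynamics, assumed
    reward = 1.0 if action == "right" else 0.0   # stand-in reward rule, assumed
    return percept, reward

def agent(history):
    """Stand-in policy; AIXI would instead weight all computable environment models."""
    return random.choice(ACTIONS)

history, total_reward = [], 0.0
for cycle in range(10):
    action = agent(history)
    percept, reward = environment(history, action)
    history.append((action, percept, reward))
    total_reward += reward

print(f"total reward over 10 cycles: {total_reward}")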

Given the different research goals and assumptions about the
interaction between the system and the environment, different AGI
systems will have very different training/educating strategies, which
are similar to each other only in a very vague sense. Furthermore,
since all the systems are far from mature, any design change will
require corresponding change in training. On the contrary, we cannot
decide a training process first, then design the system accordingly.
For these reasons, I'd rather not spend too much time on training
now, though I fully agree that it will become a major issue in the
future.

Pei


On 7/13/06, Ben Goertzel [EMAIL PROTECTED] wrote:

Pei,

That is actually not correct...

I would teach a baby AIXI about the same way I would teach a baby
Novamente, but I assume the former would learn a lot faster... so the
various stages of instruction would be passed through a lot more
quickly

Furthermore, I expect that the same cognitive structures that would
develop within a Novamente during its learning process, would also
develop within an AIXI during its learning process -- though in the
AIXI these cognitive structures would exist within the currently
active program being used to choose behaviors (due to its being
chosen as optimal during AIXI's program space search).

Please note that both AIXI and Novamente are explicitly based on
uncertain probabilistic inference, so that in spite of the significant
differences between the two (e.g. the latter can run on feasible
computational infrastructure, and is much more complicated due to the
need to fulfill this requirement), there is also a significant
commonality.

-- Ben

On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
 Ben,

 For example, I guess most of your ideas about how to train Novamente
 cannot be applied to AIXI.  ;-)

 Pei

  Pei,
 
  I think you are right that the process of education and mental
  development is going to be different for different types of AGI
  systems.
 
  However, I don't think it has to be dramatically different for each
  very specific AGI design.  And I don't think one has to wait till one
  has a working AGI to put serious analysis into its psychological
  development and instruction.
 
  In the context of Novamente, I have put a lot of thought into how
  mental development should occur for AGI systems that are
 
  -- heavily based on uncertain inference
  -- embodied in a real or simulated world where they get to interact
  with other agents
 
  Novamente falls into this category, but so do other AGI designs.
 
  A few of my and Stephan Bugaj's thoughts on this are described here:
 
  http://www.agiri.org/forum/index.php?showtopic=158
 
  and here:
 
  http://www.novamente.net/engine/
 
  (see Stage of Cognitive Development...)
 
  I have a whole lot of informal notes written down on AGI Developmental
  Psychology, extending the general ideas in this presentation/paper,
  and will probably write them up as a manuscript one day...
 
  -- Ben
 


Re: [agi] singularity humor

2006-07-13 Thread Eliezer S. Yudkowsky

I think this one was the granddaddy:

http://yudkowsky.net/humor/signs-singularity.txt

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
