Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-25 Thread Eugen Leitl
On Tue, Jul 25, 2006 at 11:23:54AM +0200, Shane Legg wrote:

> When measuring the intelligence of a human or other animal you
> have to use an appropriate test -- clearly cats can't solve linguistic

Cats and people share common capabilities, which can be tested
for by the same test. A human or a dog fetching a stick is
very much the same thing.

> problems and even if they could they can't use a pen to write down
> their answer.  Thus intelligence tests need to take into account the

Clearly, behaviour evaluation to assess task completion applies to
any system in any environment. In most environments a human observer
would evaluate very well, especially if it's an interactive
learning and/or reward/punishment scenario requiring communication.

> environment that the agent needs to deal with, the ways in which it
> can interact with its environment, and also what types of cognitive
> abilities might reasonably be expected.  However it seems unlikely
> that AIs will be restricted to having senses, cognitive abilities or
> environments that are like those of humans or other animals.  As

AIs are built to solve tasks. Calling human sensory capabilities
"restricted" in comparison to an AI's gives reason for some serious
amusement. There are a very few domains where AIs excel in perception
(sniffing packets, operating in multidimensional spaces and similar),
but those systems are not AGIs. They're very brittle, domain-specific
problem solvers.

> such the ways in which we measure intelligence, and indeed our
> whole notion of what intelligence is, needs to be expanded to
> accommodate this.

Once AGIs perform as well as animal or human subjects in task
completion, you won't have to worry about defining intelligence
metrics. You'd be too busy trying to stay alive.

-- 
Eugen* Leitl <leitl> http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-25 Thread Ben Goertzel

Hmmm...

About the measurement of general intelligence in AGIs ...

I would tend to advocate a "vectorial intelligence" approach

I tend to think that quantitatively or otherwise precisely defining
and measuring general intelligence -- as a single number -- is a bit
of a conceptual and pragmatic dead end.

Certainly, it is useful to (quantitatively or qualitatively) evaluate
the performance of an AGI on various tasks in various domains ...

But combining task performance scores into a single overall
"intelligence" metric can be done in so many different ways that it
becomes a largely arbitrary exercise, IMO.

I would place a bit more faith in a multiple intelligences approach,
wherein cognitive-focus-specific intelligences are defined precisely
and measured, but one doesn't focus on combining them into a single
score.

For instance, one might measure: pure-mathematics intelligence,
applied-mathematics intelligence, music-composition intelligence,
rhetoric intelligence, ethical intelligence, etc.

Defining focus-specific intelligences like this in a precise and
measurable way seems difficult but probably tractable.  The value of
combining such measures into an overall general intelligence measure
seems dubious.

One might also define a domain-transcending intelligence, measured
by supplying a system with tasks involving learning how to solve
problems in totally new areas it has never seen before.  This would be
very hard to measure but perhaps not impossible.

However, in my view, this domain-transcending intelligence -- though
perhaps the most critical part of general intelligence -- should
still be considered as one among many components of general
intelligence, together with a variety of focus-specific intelligences
as defined above.  Domain-transcending intelligence is just one
component of the multiple-intelligence vector.
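
To make the vector idea concrete, here is a minimal sketch in Python
(mine, purely illustrative -- the component names and scores are
assumptions, not a proposed standard):

  # A "vectorial intelligence" report: focus-specific scores are kept
  # separate rather than collapsed into one number.
  from dataclasses import dataclass, field

  @dataclass
  class IntelligenceVector:
      scores: dict[str, float] = field(default_factory=dict)

      def set(self, component: str, score: float) -> None:
          self.scores[component] = score

      def report(self) -> str:
          # Deliberately no aggregate: any weighting into a single
          # "general intelligence" number would be largely arbitrary.
          return "\n".join(f"{name}: {score:.2f}"
                           for name, score in sorted(self.scores.items()))

  v = IntelligenceVector()
  v.set("pure-mathematics", 0.71)       # illustrative values
  v.set("music-composition", 0.12)
  v.set("domain-transcending", 0.33)    # one component among many
  print(v.report())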

-- Ben



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-25 Thread Shane Legg
On 7/25/06, Ben Goertzel [EMAIL PROTECTED] wrote:

> Hmmm...
>
> About the measurement of general intelligence in AGIs ...
>
> I would tend to advocate a "vectorial intelligence" approach

I'm not against a vector approach. Naturally every intelligent
system will have domains in which it is stronger than others.
Knowing what these are is useful and important. A single number
can't capture this.

> One might also define a domain-transcending intelligence, measured
> by supplying a system with tasks involving learning how to solve
> problems in totally new areas it has never seen before. This would be
> very hard to measure but perhaps not impossible.
>
> However, in my view, this domain-transcending intelligence -- though
> perhaps the most critical part of general intelligence -- should

I think this "most critical part", as you put it, is what's missing in
a lot of AI systems. It's why people look at a machine that can
solve difficult calculus problems in the blink of an eye and say
that it's not really intelligent.

This is the reason I think there's value in having an overall general
measure of intelligence --- to highlight the need to put the G back
into AI.

Shane



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-24 Thread Eugen Leitl
On Sat, Jul 22, 2006 at 07:48:10PM +0200, Shane Legg wrote:

> After some months looking around for tests of intelligence for
> machines what I found

Why would machines need a different test of intelligence than
people or animals? Stick them into the Skinner box, make
them solve mazes, make them find food and collaborate with
others in task-solving, etc.

The nice thing is that people already build virtual environments
where machines and people can interact; they just call them
games for some strange reason.
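
The point that one behavioural harness can score any subject is easy
to make concrete. A toy sketch in Python (mine; the interface and maze
encoding are illustrative assumptions): any subject -- a program via
an API, or a human or animal via an instrumented setup -- just has to
map an observation to an action.

  # Toy maze test: scores any subject exposing act(observation) -> action.
  from typing import Protocol

  class Subject(Protocol):
      def act(self, observation: str) -> str: ...

  def run_maze(subject: Subject, maze: list[str],
               start: tuple[int, int], goal: tuple[int, int],
               max_steps: int = 100) -> int:
      """Steps taken to reach the goal; max_steps counts as failure."""
      moves = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
      pos = start
      for step in range(max_steps):
          if pos == goal:
              return step
          a = subject.act(f"pos={pos} goal={goal}")
          dr, dc = moves.get(a, (0, 0))
          r, c = pos[0] + dr, pos[1] + dc
          # '#' cells are walls; anything else is walkable.
          if 0 <= r < len(maze) and 0 <= c < len(maze[0]) and maze[r][c] != "#":
              pos = (r, c)
      return max_steps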

> was... not very much.  Few people have proposed tests of intelligence
> for machines, and other than the Turing test, none of these tests
> have been developed or used much.
>
> Naturally I'd like universal intelligence, which Hutter and I have
> formulated, to lead to a practical test that is widely used.  However,
> making the test practical poses a number of problems, the most
> significant of which, I think, is the sensitivity that universal
> intelligence has to the choice of reference universal Turing machine.
> Maybe, with more insights, this problem can be, if not solved, at
> least dealt with in a reasonably acceptable way?
>
> Shane
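
For readers without the papers at hand, the measure under discussion
is, roughly (a sketch in the Legg/Hutter notation; see their papers
for the precise definitions):

  \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Here \pi is the agent, E the class of computable environments,
V^{\pi}_{\mu} the expected total reward \pi earns in environment \mu,
and K(\mu) the Kolmogorov complexity of \mu relative to a reference
universal Turing machine U. The invariance theorem bounds the effect
of changing U by an additive constant in K, but for a practical test
that constant can be large enough to reorder the agents being
compared -- which is the sensitivity problem mentioned above.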

-- 
Eugen* Leitl <leitl> http://leitl.org
__
ICBM: 48.07100, 11.36820    http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Shane Legg
James,

Currently I'm writing a much longer paper (about 40 pages) on intelligence
measurement. A draft version of this will be ready in about a month, which
I hope to circulate around a bit for comments and criticism. There is also
another guy who has recently come to my attention who is doing very
similar stuff. He has a 50-page paper on formal measures of machine
intelligence that should be coming out in the coming months.

I'll make a post here when either of these papers becomes available.

Shane



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Shane Legg
On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
> Shane,
>
> Do you mean Warren Smith?

Yes.

Shane



Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread James Ratcliff
Shane,

Thanks, I would appreciate that greatly.

On the topic of measuring intelligence, what do you think about the
actual structure of comparison of some of today's AI systems? I would
like to see someone come up with, and get support for, a fairly
widespread set of tests for general AI other than the Turing test.

I have recently been working with some testing stuff with the KM from
UT. It and two other systems took and passed an AP exam for chemistry,
which, though limited, is an impressive feat in itself.

James Ratcliff

Shane Legg [EMAIL PROTECTED] wrote:

> James,
>
> Currently I'm writing a much longer paper (about 40 pages) on
> intelligence measurement. A draft version of this will be ready in
> about a month, which I hope to circulate around a bit for comments
> and criticism. [...]

Thank You
James Ratcliff
http://falazar.com


Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Ben Goertzel

I think that public learning/training of an AGI would be a terrible disaster...

Look at what happened with OpenMind and MindPixel ...  These projects
allowed the public to upload knowledge into them, which resulted in a
lot of knowledge of the general nature "Jennifer Lopez got a nice
butt", etc.

Jason Hutchens once showed me two versions of his statistical-learning-based
conversation system, MegaHAL.  One was trained by him, the other
by random web-surfers.  The former displayed some occasional apparent
intelligence; the latter constantly spewed amusing but eventually
boring junk about penises and such.

I had the idea once to teach an AI system in Lojban, and then let
random Lojban speakers over the Web interact with it to teach it.
This might work, because the barrier to entry is so high.  Anyone who
has bothered to learn Lojban is probably a serious nerd and wouldn't
feel like filling the AI's mind with a bunch of junk.  Of course, I
haven't bothered to learn Lojban well yet, though ;-( ...

-- Ben

On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:



> Ben Goertzel [EMAIL PROTECTED] wrote:
>
>    While AIXI is all a bit pie in the sky, "mathematical philosophy"
>    if you like, I think the above does however highlight something of
>    practical importance: Even if your AI is incomputably super
>    powerful, like AIXI, the training and education of the AI is still
>    really important. Very few people spend time thinking about how to
>    teach and train a baby AI. I think this is a greatly ignored
>    aspect of AI.
>
>  Agree, but there is a reason: before a "baby AI" is actually built,
>  not too much can be said about its education. For example, assume
>  both AIXI and NARS are successfully built; they will need to be
>  educated in quite different ways (though there will be some
>  similarity), given the different designs. I'll worry about education
>  after the details of the system are relatively stable.

> Pei,
>
> I think you are right that the process of education and mental
> development is going to be different for different types of AGI
> systems.
>
> However, I don't think it has to be dramatically different for each
> very specific AGI design. And I don't think one has to wait till one
> has a working AGI to put serious analysis into its psychological
> development and instruction.
>
> In the context of Novamente, I have put a lot of thought into how
> mental development should occur for AGI systems that are
>
> -- heavily based on uncertain inference
> -- embodied in a real or simulated world where they get to interact
>    with other agents
>
> Novamente falls into this category, but so do other AGI designs.
>
> A few of my and Stephan Bugaj's thoughts on this are described here:
>
> http://www.agiri.org/forum/index.php?showtopic=158
>
> and here:
>
> http://www.novamente.net/engine/
>
> (see "Stage of Cognitive Development...")
>
> I have a whole lot of informal notes written down on AGI Developmental
> Psychology, extending the general ideas in this presentation/paper,
> and will probably write them up as a manuscript one day...
>
> -- Ben


Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread James Ratcliff
Ben,

Yes, but OpenMind did get quite a bit of usable information into it as
well, and mainly they learned a lot about the process. I believe they
are also looking at different ways of grading the participants
themselves, so that the obviously juvenile ones could be graded down
and out of the system.

Likewise the processes themselves could be graded as to functionality
and correctness, with the ability of a user to look at multiple task
processes like "Pick up the Ball" and vote on ones that are more
functional.

At the very least, I would like to open it up to a number of people,
and that would speed along the creation of many processes faster than I
alone could ever do.

James Ratcliff

Ben Goertzel [EMAIL PROTECTED] wrote:

> I think that public learning/training of an AGI would be a terrible
> disaster...
>
> [...]
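
The grading idea lends itself to a very simple mechanism. A sketch in
Python (my reading of it, purely illustrative -- the vote scale and
threshold are assumptions):

  # Vote-based grading: contributions are voted on, contributors inherit
  # the average grade of their contributions, and contributions from
  # consistently down-voted contributors are filtered out.
  from collections import defaultdict

  votes = defaultdict(list)   # contribution id -> votes in [-1.0, 1.0]
  author = {}                 # contribution id -> contributor name

  def grade(contribution: str) -> float:
      vs = votes[contribution]
      return sum(vs) / len(vs) if vs else 0.0

  def contributor_grade(name: str) -> float:
      graded = [grade(c) for c, a in author.items() if a == name]
      return sum(graded) / len(graded) if graded else 0.0

  def accepted(contribution: str, threshold: float = -0.2) -> bool:
      # "Graded down and out of the system" once an author's average
      # grade falls below the threshold.
      return contributor_grade(author[contribution]) >= threshold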

Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Ben Goertzel

I agree that using the Net to recruit a team of volunteer "AGI
teachers" would be a good idea.

But opening the process up to random web-surfers is, IMO, asking for trouble...!

-- Ben

On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:

> Ben,
>
> Yes, but OpenMind did get quite a bit of usable information into it
> as well, and mainly they learned a lot about the process. I believe
> they are also looking at different ways of grading the participants
> themselves, so that the obviously juvenile ones could be graded down
> and out of the system.
>
> Likewise the processes themselves could be graded as to functionality
> and correctness, with the ability of a user to look at multiple task
> processes like "Pick up the Ball" and vote on ones that are more
> functional.
>
> At the very least, I would like to open it up to a number of people,
> and that would speed along the creation of many processes faster than
> I alone could ever do.
>
> James Ratcliff


> Ben Goertzel [EMAIL PROTECTED] wrote:
>
> > I think that public learning/training of an AGI would be a terrible
> > disaster...
> >
> > [...]

Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-13 Thread Pei Wang

Ben,

Though Piaget is my favorite psychologist, I don't think his theory on
Developmental Psychology applies to AI to the extent you suggested.
One major reason is that in a human baby the mental learning process
in the mind and the biological developmental process in the brain
happen together, while in an AI the former will occur within a mostly
fixed hardware system. Also, an AI system doesn't have to first
develop the capabilities responsible for the survival of a human baby.

As a result, for example, Novamente can do some abstract inference (a
"formal stage" activity) before being able to recognize complicated
patterns (an "infantile stage" activity).

Of course, certain general principles of education will remain, such
as to teach simple topics before difficult ones, to combine
lectures with questions and exercises, to explain abstract materials
with concrete examples, and so on, but I don't think we can get into
much detail with confidence.

As for AIXI, since its input comes from a finite perception space
and a real-number reward space, its output is selected from a fixed
action space, and for a given history (past input and output) there
is a fixed (though unknown) probability for each possible input to
occur, the best training strategy will be very different from the case
of Novamente, which is not based on such assumptions.
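
That description corresponds to the standard agent-environment loop
assumed in the AIXI setting. A schematic rendering in Python (mine;
the type names and signatures are illustrative, not from any AIXI
implementation):

  # Finite observation space, real-valued reward, fixed action set, and
  # an environment whose next percept depends (stochastically) on the
  # entire interaction history.
  from typing import Callable

  Action = str
  Percept = tuple[str, float]              # (observation, reward)
  History = list[tuple[Action, Percept]]

  def interact(agent: Callable[[History], Action],
               env: Callable[[History, Action], Percept],
               steps: int) -> float:
      history: History = []
      total_reward = 0.0
      for _ in range(steps):
          a = agent(history)           # action from the fixed action space
          obs, r = env(history, a)     # percept distribution fixed given history
          total_reward += r
          history.append((a, (obs, r)))
      return total_reward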

Given the different research goals and assumptions about the
interaction between the system and the environment, different AGI
systems will have very different training/educating strategies, which
are similar to each other only in a very vague sense. Furthermore,
since all the systems are far from mature, any design change will
require a corresponding change in training. Conversely, we cannot
decide on a training process first and then design the system
accordingly. For these reasons, I'd rather not spend too much time on
training now, though I fully agree that it will become a major issue
in the future.

Pei


On 7/13/06, Ben Goertzel [EMAIL PROTECTED] wrote:

> Pei,
>
> That is actually not correct...
>
> I would teach a baby AIXI about the same way I would teach a baby
> Novamente, but I assume the former would learn a lot faster... so the
> various stages of instruction would be passed through a lot more
> quickly
>
> Furthermore, I expect that the same cognitive structures that would
> develop within a Novamente during its learning process would also
> develop within an AIXI during its learning process -- though in the
> AIXI these cognitive structures would exist within the currently
> active program being used to choose behaviors (due to its being
> chosen as optimal during AIXI's program-space search).
>
> Please note that both AIXI and Novamente are explicitly based on
> uncertain probabilistic inference, so that in spite of the significant
> differences between the two (e.g. the latter can run on feasible
> computational infrastructure, and is much more complicated due to the
> need to fulfill this requirement), there is also a significant
> commonality.
>
> -- Ben

> On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
> > Ben,
> >
> > For example, I guess most of your ideas about how to train Novamente
> > cannot be applied to AIXI.  ;-)
> >
> > Pei

> > > Pei,
> > >
> > > I think you are right that the process of education and mental
> > > development is going to be different for different types of AGI
> > > systems.
> > >
> > > [...]





Re: [agi] Flow charts? Source Code? .. Computing Intelligence? How too? ................. ping

2006-07-06 Thread William Pearson

On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote:

> On 7/6/06, William Pearson [EMAIL PROTECTED] wrote:
>
> > How would you define the sorts of tasks humans are designed to
> > carry out? I can't see an easy way of categorising all the problems
> > individual humans have shown their worth at, such as key-hole
> > surgery, fighter piloting, cryptography and quantum physics.
>
> Well, there are two timescales involved, that of the species and that
> of the individual. The short answer to the first question is: survival
> in Stone Age tribes on the plains of Africa. That this produced an
> entity that can do all the things on your list invokes something
> between wonder and existential paranoia depending on one's mood and
> predilections.


Wonder for me. This long-timescale viewpoint is useful because it
tells us that there will be lots of programming in humans that is not
useful for a robot/computer that has to act and survive in the real
world. For example, blindly copying a baby human neural net to an
electronic robot wouldn't be smart: it wouldn't have the inherent
fear/knowledge it would need to stay away from water.


> (The absence of any steps of the Great Filter between the Tertiary
> and the Cold War is a common assumption - but it is only an
> assumption. But I digress.)
>
> On the individual timescale we're programmable general purpose
> problem solvers:


This is an interesting term. If we could define what it means
precisely, we would be a long way towards building a useful system.
What do you think is the closest system humanity has created to a
pgpps? A generic PC almost fulfils the description: programmable,
generic and, if given the right software to start with, able to solve
problems. But I am guessing it is missing something. As someone
interested in RL systems I would say an overarching goal system for
guiding programmability, but I would be interested to know what you
think.
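
One way to read "an overarching goal system for guiding
programmability" is as a selector that keeps whichever loaded program
earns the most reward. A toy sketch in Python (my reading, not Will's
design; everything here is an illustrative assumption):

  # A PC-like host runs interchangeable programs; an overarching reward
  # signal decides which program keeps control.
  from typing import Callable

  Program = Callable[[str], str]           # task -> attempted solution

  def goal_system(programs: list[Program],
                  reward: Callable[[str, str], float],
                  trials: list[str]) -> Program:
      def total(p: Program) -> float:
          return sum(reward(t, p(t)) for t in trials)
      # Hand control to the program that best serves the goal.
      return max(programs, key=total)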


> We're good at learning from our environment, but that only gets you
> so far; by itself it won't let you do any of the above things because
> you'll be dead before you get the hang of them.


So this whittles AIXI and similar formalisms out of the pool of
possible candidates for being a pgpps.


> However, our environment also contains other people, and we can do
> any of the above by learning the solutions other people worked out.


Agreed. I definitely think this is where a lot of work needs to be
done. There are a variety of different methods by which we can learn
from others: copying others, getting instruction, or even just knowing
something is possible, which can enable you to get to the same end
point without exact copying, e.g. building an atom bomb.

 Will Pearson
