[agi] The Singularity

2006-12-05 Thread John Scanlon
Alright, I have to say this.

I don't believe that the singularity is near, or that it will even occur.  I am 
working very hard at developing real artificial general intelligence, but from 
what I know, it will not come quickly.  It will be slow and incremental.  The 
idea that very soon we can create a system that can understand its own code and 
start programming itself is ludicrous.

Any arguments?



Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:



Alright, I have to say this.

I don't believe that the singularity is near, or that it will even occur.  I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.

Any arguments?
 


Have you read Ray Kurzweil? He doesn't just make things up. There are
plenty of reasons to believe in the Singularity.  Other than disaster
theories there really is no negative evidence I've ever come across.

"real artificial intelligence"

.u'i (amusement)  A little bit of an oxymoron there.  It also seems to
imply that there is "fake artificial intelligence" .u'e (wonder).  Of course,
if you could define "fake artificial intelligence", then you would define
what "real artificial intelligence" is.

Once you define what "real artificial intelligence" means, or at least
what symptoms you would be willing to accept as satisfying it (the Turing
test), the goal becomes concrete.

If it's the Turing test you're after, as am I, then language is the
key (I like stating the obvious; please humour me).

Once we have established the goal -- a conversation between yourself and the
computer in the language of your choice -- we look at the options we have
available: natural languages and artificial languages.  Natural languages
tend to be pretty ambiguous: hard to parse, hard to code for.  You can do it
if you are a masochist; I don't mind .ui (happiness).

Many, if not most, artificial languages suffer from similar if not the same
kinds of ambiguity, though because they are constructed they can, by
definition, only have as many exceptions as were designed in.

There is a promising subset of artificial languages: logical
languages.  Logical languages adhere to some form of logic (usually
predicate logic) and are a relatively new phenomenon (the first paper on
Loglan dates to 1955; all logical languages I'm aware of are derivatives).

The problem with Loglan is that it is proprietary, which brings us to
Lojban.  Lojban will probably not be the final solution either, as there
is still some ambiguity in the lujvo (compound words).

I am currently working on a Lojban-Prolog hybrid language.

In predicate logic (as in logical languages), each sentence has a
predicate (a function, e.g. KLAma).  Each predicate takes
arguments (SUMti).

If you type a logical sentence to an interpreter, it can perform
different actions depending on the kind of sentence.

Imperative statement: mu'a (for example) ko FANva zo VALsi
  meaning: be the translator of the word VALsi

This isn't really enough information for you or me to give a reply with
any certainty, as we don't know the language to translate from or the
language to translate to, which brings us to:

Questions: mu'a .i FANva zo VALsi ma ma
meaning: translation of the word VALsi into what language, from what language?
(.e'o (request) make an effort to look at the Lojban.  I know it's hard,
but it's essential for conveying the simplicity with which you can
make well-articulated, unambiguous statements in Lojban that are easy
to parse and interpret.)

To this question the user could reply: la.ENGlic. la.LOJban.
meaning: that which is named ENGlic, that which is named LOJban.

If the computer has the information about the translation, it will
return it.  If not, it will ask the user to fill in the blank by asking
another question (mu'a .i FANva fu ma).

There are almost 1300 root words (GISmu) in Lojban, along with several hundred
CMAvo.  For my implementation of the language I will probably remove a
large number of these, as they are not necessary (mu'a SOFto, which means
Soviet) and should really go into name (CMEne) space (mu'a la.SOviet.).

The point being that there is a very finite number of functions
that have to be coded in order to allow the computer to
interpret and act upon anything said to it (Lojban is already
more expressive than a good number of natural languages).

How is this all going to be programmed?

Declarative statements: mu'a FANva zo VALsi la.ENGlic. la.LOJban.
zoi.gy. word .gy.
meaning: the translation of the word VALsi into ENGlic from LOJban is word.

Now the computer knows this fact (held in a Prolog database until
there is a compiler for a speakable logical language).
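(A minimal illustrative sketch of this interpreter loop, in Python rather than the Prolog actually being used -- the function names and the tiny fact table below are my own assumptions for the example, not part of the parser described above:)

    # Toy sketch of the behaviour described above: a declarative FANva
    # statement stores a fact; a question either answers from the facts
    # or asks the user to fill in the missing place.
    facts = {}  # (word, target_language, source_language) -> translation

    def declare(word, target, source, translation):
        """Declarative: the translation of word into target from source is ..."""
        facts[(word, target, source)] = translation

    def question(word, target=None, source=None):
        """Question: answer if known, otherwise ask for whatever is missing."""
        if target is None or source is None:
            return "into what language, from what language?  (ma ma)"
        if (word, target, source) not in facts:
            return ".i FANva fu ma   (please supply the translation)"
        return facts[(word, target, source)]

    # mu'a FANva zo VALsi la.ENGlic. la.LOJban. zoi.gy. word .gy.
    declare("VALsi", "ENGlic", "LOJban", "word")
    print(question("VALsi"))                      # asks for the missing languages
    print(question("VALsi", "ENGlic", "LOJban"))  # -> word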

I will create a version of the interpreter in the Lojban-Prolog hybrid
language (I have a more or less finished Lojban parser written in Prolog,
and am now working on the Lojban-Prolog hybrid language).

Yes, I know I've dragged this out very far, but it was necessary for me
to reply to:


The idea that very soon we can create a system that can understand its own code

Such as the one described above.


and start programming itself is ludicrous.



Depends on what you see as the goal of programming. If 

Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK

On 12/4/06, Mark Waser  wrote:


Explaining our actions is the reflective part of our minds evaluating the
reflexive part of our mind.  The reflexive part of our minds, though,
operates analogously to a machine running on compiled code with the
compilation of code being largely *not* under the control of our conscious
mind (though some degree of this *can* be changed by our conscious minds).
The more we can correctly interpret and affect/program the reflexive part of
our mind with the reflective part, the more intelligent we are.  And,
translating this back to the machine realm circles back to my initial point,
the better the machine can explain its reasoning and use its explanation
to improve its future actions, the more intelligent the machine is (or, in
reverse, no explanation = no intelligence).



Your reasoning is getting surreal.

As Ben tried to explain to you, 'explaining our actions' is our
consciousness dreaming up excuses for what we want to do anyway.  Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

An AGI will have to cope with this mess. Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.


BillK



Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:


I don't believe that the singularity is near, or that it will even occur.  I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.


First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:
<grin>
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
</grin>

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben



Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mike Dougherty

On 12/5/06, BillK [EMAIL PROTECTED] wrote:


Your reasoning is getting surreal.

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

"What's the point?" -- I think that's an even better question than defining
degrees of local rationality (good) vs. irrationality (bad).  The whole notion
of arbitrarily defining subjective terms as good or better or bad seems
foolish.

If we're going to talk about evolutionary psychology as a motivator for
actions, and attribute reactions to stimuli or environmental pressures, then
it seems egocentric to apply labels like "rational" to any of the
observations.

Within the scope of these discussions, we put ourselves in a superior
non-human point of view where we can discuss the human decisions like
animals in a zoo.  For some threads it is useful to approach the subject
that way.  For most it illustrates a particular trait of the biased
selection of those humans who participate in this list.

hmm...  just an observation...



Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser

Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).


Sure.  Absolutely.  I'm perfectly willing to contend that it takes 
intelligence to come up with excuses and that more intelligent people can 
come up with more and better excuses.  Do you really want to contend the 
opposite?



You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time.


You're reading something into my statements that I certainly don't mean to 
be there.  Humans behave irrationally a lot of the time.  I consider this 
fact a defect or shortcoming in their intelligence (or make-up).  Just 
because humans have a shortcoming doesn't mean that another intelligence 
will necessarily have the same shortcoming.



Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.


Yup.  Humans are not as intelligent as they could be.  Generally, they place 
way too much weight on near-term effect and not enough weight on long-term 
effects.  Actually, though, I'm not sure whether you classify that as 
intelligence or wisdom.  For many bright people, they *do* know all of what 
you're saying and they still go ahead.  This is certainly some form of 
defect, I'm not sure where you'd classify it though.



Human decisions and activities are mostly emotional and irrational.


I think that this depends upon the person.  For the majority of humans, 
maybe -- but I'm not willing to accept this as applying to each individual 
human that their decisions and activities are mostly emotional and 
irrational.  I believe that there are some humans where this is not the 
case.



That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.


Yup, we've evolved to be at least minimally functional though not optimal.


An AGI will have to cope with this mess.


Yes, so far I'm in total agreement with everything you've said . . . .


Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.


. . . until now where you make an unsupported blanket statement that doesn't 
appear to me at all related to any of the above (and which may be entirely 
accurate or inaccurate based upon what you mean by ruthless -- but I believe 
that it would take a very contorted definition of ruthless to make it 
accurate -- though inhuman should obviously be accurate).


Part of the problem is that 'rationality' is a very emotion-laden term with 
a very slippery meaning.  Is doing something because you really, really want 
to despite the fact that it most probably will have bad consequences really 
irrational?  It's not a wise choice but irrational is a very strong term . . 
. . (and, as I pointed out previously, such a decision *is* rationally made 
if you have bad weighting in your algorithm -- which is effectively what 
humans have -- or not, since it apparently has been evolutionarily selected 
for).
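(To make the point about bad weighting concrete, a minimal Python sketch -- the options, payoffs, and weights below are invented purely for illustration, not drawn from any actual model.  The same decision procedure is perfectly "rational" relative to its weights; the disastrous choice wins only because near-term effect is weighted too heavily:)

    # Illustrative only: the payoffs and weights are made up.  Each option has
    # a near-term payoff and a long-term payoff; the chooser picks the option
    # with the highest weighted sum, so it is rational relative to its weights.
    options = {
        "tempting-but-disastrous": {"near": 10.0, "long": -50.0},
        "boring-but-sound":        {"near":  2.0, "long":  20.0},
    }

    def choose(w_near, w_long):
        def score(o):
            return w_near * o["near"] + w_long * o["long"]
        return max(options, key=lambda name: score(options[name]))

    print(choose(w_near=5.0, w_long=0.1))  # heavy near-term weight -> 'tempting-but-disastrous'
    print(choose(w_near=1.0, w_long=1.0))  # balanced weights       -> 'boring-but-sound'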


And logic isn't necessarily so iron if the AGI has built-in biases for 
conversation and relationships (both of which are rationally derivable from 
its own self-interest).


I think that you've been watching too much Star Trek where logic and 
rationality are the opposite of emotion.  That just isn't the case.  Emotion 
can be (and is most often noted when it is) contrary to logic and 
rationality -- but it is equally likely to be congruent with them (and even 
more so in well-balanced and happy individuals).





Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
Talk about fortuitous timing . . . . here's a link on Marvin Minsky's latest 
about emotions and rational thought

http://www.boston.com/news/globe/health_science/articles/2006/12/04/minsky_talks_about_life_love_in_the_age_of_artificial_intelligence/

The most relevant line to our conversation is: "Called The Emotion Machine, it
argues that, contrary to popular conception, emotions aren't distinct from
rational thought; rather, they are simply another way of thinking, one that
computers could perform."


Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff


BillK [EMAIL PROTECTED] wrote: On 12/4/06, Mark Waser  wrote:

 Explaining our actions is the reflective part of our minds evaluating the
 reflexive part of our mind.  The reflexive part of our minds, though,
 operates analogously to a machine running on compiled code with the
 compilation of code being largely *not* under the control of our conscious
 mind (though some degree of this *can* be changed by our conscious minds).
 The more we can correctly interpret and affect/program the reflexive part of
 our mind with the reflective part, the more intelligent we are.  And,
 translating this back to the machine realm circles back to my initial point,
 the better the machine can explain its reasoning and use its explanation
 to improve its future actions, the more intelligent the machine is (or, in
 reverse, no explanation = no intelligence).


Your reasoning is getting surreal.

As Ben tried to explain to you, 'explaining our actions' is our
consciousness dreaming up excuses for what we want to do anyway.  Are
you saying that the more excuses we can think up, the more intelligent
we are? (Actually there might be something in that!).

You seem to have a real difficulty in admitting that humans behave
irrationally for a lot (most?) of the time. Don't you read newspapers?
You can redefine rationality if you like to say that all the crazy
people are behaving rationally within their limited scope, but what's
the point? Just admit their behaviour is not rational.

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.

Human decisions and activities are mostly emotional and irrational.
That's the way life is. Because life is uncertain and unpredictable,
human decisions are based on best guesses, gambles and basic
subconscious desires.

An AGI will have to cope with this mess. Basing an AGI on iron logic
and 'rationality' alone will lead to what we call 'inhuman'
ruthlessness.


BillK

You just rationalized the reasons for human choice in your above argument 
yourself :}
MOST humans act rationally MOST of the time.  They may not make 'good' 
decisions, but they are rational ones.  If you decide to sleep with your best 
friend's wife, you do so because you are attracted to her and you want her, and 
you rationalize that you will probably not get caught.  You have stated the 
reasons, and you move ahead with that plan.
  Vague stuff you can't rationalize easily is why you like the appearance of 
someone's face, or why you like this flavor of ice cream.  Those are hard to 
rationalize, but much of our behaviour is easier.
  Now about building a rational vs non-rational AGI, how would you go about 
modeling a non-rational part of it?  Short of a random number generator?

  For the most part we Do want a rational AGI, and it DOES need to explain 
itself.  One of the first tasks of AGI will be to replace all of the current 
expert systems in fields like medicine.  
  For these it is not merely good enough to say (as a doctor AGI), I think he 
has this cancer, and you should treat him with this strange procedure.  There 
must be an accounting that it can present to other doctors and say: yes, I 
noticed a correlation between these factors that led me to believe this, with 
this certainty.  An early AI must also prove its merit by explaining what it 
is doing to build up a level of trust.
   Further, it is important in another fashion, in that we can turn around and 
use these smart AIs to further train other doctors or specialists with the 
AGI's explanations.
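(A hedged Python sketch of the kind of accounting meant here -- the rules, factor names, and certainty values are invented for the illustration, not taken from any real medical system:)

    # Toy diagnostic rules that always return a conclusion *with* the factors
    # and certainty behind it (factor names and certainties are invented).
    RULES = [
        ({"factor_a", "factor_b"}, "condition_x", 0.85),
        ({"factor_c"},             "condition_y", 0.60),
    ]

    def diagnose(findings):
        explanations = []
        for required, conclusion, certainty in RULES:
            if required <= findings:  # all required factors were observed
                explanations.append(
                    "I believe %s (certainty %.0f%%) because I noticed the "
                    "correlation of %s." % (conclusion, certainty * 100, sorted(required)))
        return explanations or ["No conclusion: no known correlation matched."]

    for line in diagnose({"factor_a", "factor_b"}):
        print(line)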

Now for some tasks it will not be able to do this, or not within a small amount 
of data and explanations.  The level that it is able to generalize this 
information will reflect its usefulness and possibly intelligence.

In the Halo experiment for the Chemistry AP, they were graded not only on 
correct answers but also on their explanations of how they got to those answers.
Some of the explanations were short, concise, and well reasoned; some of them, 
though, went down to a very basic level of detail and lasted for a couple of 
pages.

If you are flying to Austin and asking an AGI to plan your route, and it 
chooses an airline that sounds dodgy and that you have never heard of, mainly 
because it was cheap or for some other reason, you definitely want to know why it 
chose that, and to tell it not to weight that feature as highly.
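(A minimal Python sketch of that kind of adjustable feature weighting -- the airlines, features, and weights are invented for the example:)

    # Illustrative route scorer: it reports *why* it picked a flight, and the
    # user can turn down the weight of a feature such as cheapness.
    flights = [
        {"airline": "DodgyAir",     "price": 90,  "reliability": 0.55},
        {"airline": "KnownCarrier", "price": 220, "reliability": 0.95},
    ]

    def pick(weights):
        def score(f):
            return (weights["cheapness"] * (1000 - f["price"]) / 1000.0
                    + weights["reliability"] * f["reliability"])
        best = max(flights, key=score)
        return "chose %s (score %.2f) with weights %s" % (
            best["airline"], score(best), weights)

    print(pick({"cheapness": 5.0, "reliability": 1.0}))  # cheapness dominates -> DodgyAir
    print(pick({"cheapness": 1.0, "reliability": 5.0}))  # price down-weighted -> KnownCarrier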
  For many decisions I believe a small feature set is required, with the larger 

Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore

John Scanlon wrote:

Alright, I have to say this.
 
I don't believe that the singularity is near, or that it will even 
occur.  I am working very hard at developing real artificial general 
intelligence, but from what I know, it will not come quickly.  It will 
be slow and incremental.  The idea that very soon we can create a system 
that can understand its own code and start programming itself is ludicrous.
 
Any arguments?


Back in 17th century Europe, people stood at the end of a long period of 
history (basically, all of previous history) during which curious humans 
had tried to understand how the world worked, but had largely failed to 
make substantial progress.


They had been suffering from an attitude problem:  there was something 
about their entire way of approaching the knowledge-discovery process 
that was wrong.  We now characterize their fault as being the lack of an 
objective scientific method.


Then, all of a sudden, people got it.

Once it started happening, it spread like wildfire.  Then it went into 
overdrive when Isaac Newton cross-bred the new attitude with a vigorous 
dose of mathematical invention.


My point?  That you can keep banging the rocks together for a very long 
time and feel like you are just getting nowhere, but then all of a 
sudden you can do something as simple as change your attitude or your 
methodology slightly, and wham!, everything starts happening at once.


For what it is worth, I do not buy most of Kurzweil's arguments about 
the general progress of the technology curves.


I don't believe in that argument for the singularity at all, I believe 
that it will happen for a specific technological reason.


I think that there is something wrong with the attitude we have been 
adopting toward AI research, which is comparable to the attitude problem 
that divided the pre- and post-Enlightenment periods.


I have summarized a part of this argument in the paper that I wrote for 
the first AGIRI workshop.  The argument in that paper can be summarized 
as:  the first 30 years of AI was all about scruffy engineering, then 
the second 20 years of AI was all about neat mathematics, but because 
of the complex systems problem neither of these approaches would be 
expected to work, and what we need instead is a new attitude that is 
neither engineering nor math, but science. [This paper is due to be 
published in the AGIRI proceedings next year, but if anyone wants to 
contact me I will be able to send a not-for-circulation copy].


However, there is another, more broad-ranging way to look at the present 
situation, and that is that we have three research communities who do 
not communicate with one another:  AI Programmers, Cognitive Scientists 
(or Cognitive Psychologists) and Software Engineers.  What we need is a 
new science that merges these areas in a way that is NOT a lowest common 
denominator kind of merge.  We need people who truly understand all of 
them, not cross-travelling experts who mostly reside in one and (with 
the best will in the world) think they know enough about the others.


This merging of the fields has never happened before.  More importantly, 
the specific technical issue related to the complex systems problem (the 
need for science, rather than engineering or math) has also never been 
fully appreciated before.


Everything I say in this post may be wrong, but one thing is for sure: 
this new approach/attitude has not been tried before, so the 
consequences of taking it seriously and trying it are lying out there in 
the future, completely unknown.


I believe that this is something we just don't get yet.  When we do, I 
think we will start to see the last fifty years of AI research as 
equivalent to the era before 1665.  I think that AI will start to take 
off at breathtaking speed once the new attitude finally clicks.


The one thing that stops it from happening is the ego problem.  Too many 
people with too much invested in the supremacy they have within their 
own domain.  Frankly, I think it might only start to happen if we can 
take some people fresh out of high school and get them through a 
completely new curriculum, then get 'em through their Ph.D.s before they 
realise that all of the existing communities are going to treat them 
like lepers because they refuse to play the game. ;-)  But that would 
only take six years.


After we get it, in other words, *that* is when the singularity starts 
to happen.


If, on the other hand, all we have is the present approach to AI then I 
tend to agree with you John:  ludicrous.





Richard Loosemore




Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

If, on the other hand, all we have is the present approach to AI then I
tend to agree with you John:  ludicrous.




Richard Loosemore


IMO it is not sensible to speak of "the present approach to AI" ...

There are a lot of approaches out there... not an orthodoxy by any means...

-- Ben G



Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
 Now about building a rational vs non-rational AGI, how would you go about 
 modeling a non-rational part of it?  Short of a random number generator?

Why would you want to build a non-rational AGI?  It seems like a *really* bad 
idea.  I think I'm missing your point here.

 For the most part we Do want a rational AGI, and it DOES need to explain 
 itself.  One of the first tasks of AGI will be to replace all of the current 
 expert systems in fields like medicine.  

Yep.  That's my argument and you expand it well.

 Now for some tasks it will not be able to do this, or not within a small 
 amount of data and explanations.  The level that it is able to generalize 
 this information will reflect its usefulness and possibly intelligence.

Yep.  You're saying exactly what I'm thinking.

  For many decisions I believe a small feature set is required, with the 
 larger possible features being so lowly weighted as to not have much impact.

This is where Ben and I are sort of having a debate.  I agree with him that the 
brain may well be using the larger number since it is massively parallel and it 
therefore can.  I think that we differ on whether or not the larger is required 
for AGI (Me = No, Ben = Yes) -- which reminds me . . . 

Hey Ben, if the larger number IS required for AGI, how do you intend to do this 
in a computationally feasible way in a non-massively-parallel system?






Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
Mark Waser [EMAIL PROTECTED] wrote:  Are
 you saying that the more excuses we can think up, the more intelligent
 we are? (Actually there might be something in that!).

Sure.  Absolutely.  I'm perfectly willing to contend that it takes 
intelligence to come up with excuses and that more intelligent people can 
come up with more and better excuses.  Do you really want to contend the 
opposite?

 You seem to have a real difficulty in admitting that humans behave
 irrationally for a lot (most?) of the time.

You're reading something into my statements that I certainly don't mean to 
be there.  Humans behave irrationally a lot of the time.  I consider this 
fact a defect or shortcoming in their intelligence (or make-up).  Just 
because humans have a shortcoming doesn't mean that another intelligence 
will necessarily have the same shortcoming.

 Every time someone (subconsciously) decides to do something, their
 brain presents a list of reasons to go ahead. The reasons against are
 ignored, or weighted down to be less preferred. This applies to
 everything from deciding to get a new job to deciding to sleep with
 your best friend's wife. Sometimes a case arises when you really,
 really want to do something that you *know* is going to end in
 disaster, ruined lives, ruined career, etc. and it is impossible to
 think of good reasons to proceed. But you still go ahead anyway,
 saying that maybe it won't be so bad, maybe nobody will find out, it's
 not all my fault anyway, and so on.

Yup.  Humans are not as intelligent as they could be.  Generally, they place 
way too much weight on near-term effect and not enough weight on long-term 
effects.  Actually, though, I'm not sure whether you classify that as 
intelligence or wisdom.  For many bright people, they *do* know all of what 
you're saying and they still go ahead.  This is certainly some form of 
defect, I'm not sure where you'd classify it though.

 Human decisions and activities are mostly emotional and irrational.

I think that this depends upon the person.  For the majority of humans, 
maybe -- but I'm not willing to accept this as applying to each individual 
human that their decisions and activities are mostly emotional and 
irrational.  I believe that there are some humans where this is not the 
case.

 That's the way life is. Because life is uncertain and unpredictable,
 human decisions are based on best guesses, gambles and basic
 subconscious desires.

Yup, we've evolved to be at least minimally functional though not optimal.

 An AGI will have to cope with this mess.

Yes, so far I'm in total agreement with everything you've said . . . .

 Basing an AGI on iron logic
 and 'rationality' alone will lead to what we call 'inhuman'
 ruthlessness.

. . . until now where you make an unsupported blanket statement that doesn't 
appear to me at all related to any of the above (and which may be entirely 
accurate or inaccurate based upon what you mean by ruthless -- but I believe 
that it would take a very contorted definition of ruthless to make it 
accurate -- though inhuman should obviously be accurate).

Part of the problem is that 'rationality' is a very emotion-laden term with 
a very slippery meaning.  Is doing something because you really, really want 
to despite the fact that it most probably will have bad consequences really 
irrational?  It's not a wise choice but irrational is a very strong term . . 
. . (and, as I pointed out previously, such a decision *is* rationally made 
if you have bad weighting in your algorithm -- which is effectively what 
humans have -- or not, since it apparently has been evolutionarily selected 
for).

And logic isn't necessarily so iron if the AGI has built-in biases for 
conversation and relationships (both of which are rationally derivable from 
it's own self-interest).

I think that you've been watching too much Star Trek where logic and 
rationality are the opposite of emotion.  That just isn't the case.  Emotion 
can be (and is most often noted when it is) contrary to logic and 
rationality -- but it is equally likely to be congruent with them (and even 
more so in well-balanced and happy individuals).



You have hinted around it, but I would go one step further and say that Emotion 
is NOT contrary to logic -- in any way, really; they can't be compared like that.  
Logic even 'uses' emotion as input.  The decisions we make are based on rules 
and facts we know, and on our emotions, but are still made logically.
  What emotions often contradict is our actual ability to make good decisions / 
plans. 
  If we do something stupid because of our anger or emotions, there is still a 
causal, logical explanation for it.
  So humans and AGIs may be irrational, but hopefully not illogical.  If an AGI 
is illogical, that implies it made its decision without any logical reasoning, 
so possibly at random.  An AGI will need some level of randomness, but not for 
general things.

James Ratcliff



Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Mark Waser
 You have hinted around it, but I would go one step further and say that 
 Emotion is NOT contrary to logic.

:-) I thought that my last statement, that emotion is equally likely to be 
congruent with logic and reason, was a lot more than a hint (unless "congruent" 
doesn't mean "not contrary" like I think/thought it did  :-)

I liked your distinction between illogical and irrational -- though I'm not 
sure that others would agree with your using irrational that way.

Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread James Ratcliff
Yes, I could not find a decent definition of irrational at first; amending my 
statements now...

Using the Wiki basis below: "the term is used to describe thinking and actions 
which are, or appear to be, less useful or logical than the rational 
alternatives."

I would remove the 'logical' portion of this, because of the examples given 
below: emotions, fads, stock markets.
These decisions are all made using logic, with emotions contributing to a 
choice, or a choice being made because we see others wearing the same clothes, 
or based on our (possibly incorrect) beliefs about what the stock market may do.
  The other possibility is to actually use the knowledge incorrectly.  If I 
have all the rules about a stock that would point to it going down, but I still 
purchase and believe it will go up, I am using the logic incorrectly.

  So possibly irrationality could be amended to be something like: basing a 
decision on faulty information, or incorrectly using logic to arrive at a 
choice.

So for my AGI application, I would indeed then model the irrationality in the 
form of emotions, fads, etc., as logical components, and it would implicitly be 
irrational because it could have faulty information.  And incorrectly using the 
logic it has would only happen if there was an error.
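(A tiny Python sketch of that distinction, using the stock example above -- the belief values and the buy/hold rule are assumptions made up for illustration: the logic is applied correctly in both cases, and the irrational-looking outcome comes only from the faulty information:)

    # Same decision rule applied to two belief bases: sound logic over faulty
    # information looks irrational, not illogical (illustrative only).
    def decide(beliefs):
        # Rule: buy only if we believe the stock will go up.
        return "buy" if beliefs["stock_will_go_up"] else "hold"

    faulty_beliefs  = {"stock_will_go_up": True}   # contradicted by the evidence
    correct_beliefs = {"stock_will_go_up": False}  # what the rules about the stock indicate

    print(decide(faulty_beliefs))   # 'buy'  -- irrational given the evidence, but not illogical
    print(decide(correct_beliefs))  # 'hold' -- same logic, better information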

James
Theories of irrational behavior include:
 
   people's actual interests differ from what they believe to be their interests
This is still logical though, just based on beliefs that are wrong about one's 
actual interests.


From Wiki: http://en.wikipedia.org/wiki/Irrationality
Irrationality is talking or acting without regard of rationality. Usually 
pejorative, the term is used to describe thinking and actions which are, or 
appear to be, less useful or logical than the rational alternatives. These 
actions tend to be regarded as emotion-driven. There is a clear tendency to 
view our own thoughts, words, and actions as rational and to see those who 
disagree as irrational.
 Types of behavior which are often described as irrational include:
 
   fads and fashions
   crowd behavior
   offense or anger at a situation that has not yet occurred
   unrealistic expectations
   falling victim to confidence tricks
   belief in the supernatural without evidence
   stock-market bubbles
   irrationality caused by mental illness, such as obsessive-compulsive 
disorder, major depressive disorder, and paranoia.


Re: [agi] The Singularity

2006-12-05 Thread Hank Conn

Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?

It has been my experience that one's expectations about the future of
AI/the Singularity are directly dependent upon one's understanding/design of AGI
and intelligence in general.






Re: Marvin and The Emotion Machine [WAS Re: [agi] A question on the symbol-system hypothesis]

2006-12-05 Thread BillK

On 12/5/06, Richard Loosemore wrote:


There are so few people who speak up against the conventional attitude
to the [rational AI/irrational humans] idea, it is such a relief to hear
any of them speak out.

I don't know yet if I buy everything Minsky says, but I know I agree
with the spirit of it.

Minsky and Hofstadter are the two AI thinkers I most respect.




The customer reviews on Amazon are rather critical of Minsky's new book.
They seem to be complaining that the book is more of a general
discussion rather than providing detailed specifications for building
an AI engine.  :)
http://www.amazon.com/gp/product/customer-reviews/0743276639/ref=cm_cr_dp_pt/102-3984994-3498561?ie=UTF8n=283155s=books


The good news is that Minsky appears to be making the book available
online at present on his web site. *Download quick!*

http://web.media.mit.edu/~minsky/
See under publications, chapters 1 to 9.
The Emotion Machine, 9/6/2006 (chapters 1 2 3 4 5 6 7 8 9)


I like very much Minsky's summing up from the end of the book:


-
All of these kinds of inventiveness, combined with our unique
expressiveness, have empowered our communities to deal with huge
classes of new situations. The previous chapters discussed many
aspects of what gives people so much resourcefulness:

We have multiple ways to describe many things—and can quickly switch
among those different perspectives.
We make memory-records of what we've done—so that later we can reflect on them.
We learn multiple ways to think so that when one of them fails, we can
switch to another.
We split hard problems into smaller parts, and use goal-trees, plans,
and context stacks to help us keep making progress.
We develop ways to control our minds with all sorts of incentives,
threats, and bribes.
We have many different ways to learn and can also learn new ways to learn.
We can often postpone a dangerous action and imagine, instead, what
its outcome might be in some Virtual World.


Our language and culture accumulates vast stores of ideas that were
discovered by our ancestors. We represent these in multiple realms,
with metaphors interconnecting them.

Most every process in the brain is linked to some other processes. So,
while any particular process may have some deficiencies, there will
frequently be other parts that can intervene to compensate.

Nevertheless, our minds still have bugs. For, as our human brains
evolved, each seeming improvement also exposed us to the dangers of
making new types of mistakes. Thus, at present, our wonderful powers
to make abstractions also cause us to construct generalizations that
are too broad, fail to deal with exceptions to rules, accumulate
useless or incorrect information, and to believe things because our
imprimers do. We also make superstitious credit assignments, in which
we confuse real things with ones that we merely imagine; then we become
obsessed with unachievable goals, and set out on unbalanced, fanatical
searches and quests. Some persons become so unwilling to acknowledge a
serious failure or a great loss that they try to relive their lives of
the past. Also, of course, many people suffer from mental disorders
that range from minor incapacities to dangerous states of dismal
depression or mania.

We cannot expect our species to evolve ways to escape from all such
bugs because, as every engineer knows, most
every change in a large complex system will introduce yet other
mistakes that won't show up till the system moves to a different
environment. Furthermore, we also face an additional problem: each
human brain differs from the next because, first, it is built by pairs
of inherited genes, each chosen by chance from one of its parent's
such pairs. Then, during the early development of each brain, many
other smaller details depend on other, small accidental events. An
engineer might wonder how such machines could possibly work, in spite
of so many possible variations.

To explain how such large systems could function reliably, quite a few
thinkers have suggested that our brains must be based on some
not-yet-understood 'holistic' principles, according to which every
fragment of process or knowledge is 'distributed' (in some unknown
global way) so that the system still could function well in spite of
the loss of any part of it because such systems act as though they
were more than the sums of all their parts. However, the arguments
in this book suggest that we do not need to look for any such magical
tricks—because we have so many ways to accomplish each job that we can
tolerate the failure of many particular parts, simply by switching to
using alternative ones. (In other words, we function well because we
can perform with far less than the sum of all of our parts.)

Furthermore, it makes sense to suppose that many of the parts of our
brains are involved with helping to correct or suppress the effects of
defects and bugs in other parts. This means that we will find it hard
to 

Re: [agi] The Singularity

2006-12-05 Thread Charles D Hixson

Ben Goertzel wrote:

...
According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben
I do, however, have some questions about it being a hard takeoff.  That 
depends largely on

1) how efficient the program is, and
2) what computer resources are available.

To me it seems quite plausible that an AGI might start out as slightly 
less intelligent than a normal person, or even considerably less 
intelligent, with the limitation being due to the available computer 
time.  Naturally, this would change fairly rapidly over time, but not 
exponentially so, or at least not super-exponentially so.


If, however, the singularity is delayed because the programs aren't 
ready, or are too inefficient, then we might see a true hard-takeoff.  
In that case by the time the program was ready, the computer resources 
that it needs would already be plentifully available.   This isn't 
impossible, if the program comes into existence in a few decades, but if 
the program comes into existence within the current decade, then there 
would be a soft-takeoff.  If it comes into existence within the next 
half-decade then I would expect the original AGI to be sub-normal, due 
to lack of available resources.


Naturally all of this is dependent on many different things.  If Vista 
really does require as much of an immense retooling to more powerful 
computers as some predict, then programs that aren't dependent on Vista 
will have more resources available, as computer designs are forced to be 
faster and more capacious.  (Wasn't Intel promising 50 cores on a single 
chip in a decade?  If each of those cores is as capable as a current 
single core, then it will take far fewer computers netted together to 
pool the same computing capacity...for those programs so structured as 
to use the capacity.)


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, Richard Loosemore [EMAIL PROTECTED] wrote:

Ben Goertzel wrote:
 If, on the other hand, all we have is the present approach to AI then I
 tend to agree with you John:  ludicrous.




 Richard Loosemore

 IMO it is not sensible to speak of "the present approach to AI"

 There are a lot of approaches out there... not an orthodoxy by any means...

I'm aware of the different approaches, and of how very, very different
they are from one another.

But by contrast with the approach I am advocating, they all look like
orthodoxy.  There is a *big* difference between the two sets of ideas.


In that context, and only in that context, it makes sense to talk about
"the present approach to AI."



Richard Loosemore.



Is there anywhere I could find a list and description of these
different kinds of AI?.a'u(interest) I'm sure I could learn a lot as
I'm rather new to the field.  I'm in
Second year undergrad,
Majoring in Cognitive Sciences,
Specializing in Artificial Intelligence,
York University, Toronto, Canada.

So I think such a list would be very beneficial for beginners like me
.ui(happiness)
ki'e(thanks) in advance.

--
ta'o(by the way)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson

BillK wrote:

...

Every time someone (subconsciously) decides to do something, their
brain presents a list of reasons to go ahead. The reasons against are
ignored, or weighted down to be less preferred. This applies to
everything from deciding to get a new job to deciding to sleep with
your best friend's wife. Sometimes a case arises when you really,
really want to do something that you *know* is going to end in
disaster, ruined lives, ruined career, etc. and it is impossible to
think of good reasons to proceed. But you still go ahead anyway,
saying that maybe it won't be so bad, maybe nobody will find out, it's
not all my fault anyway, and so on.
...

BillK
I think you've got a time inversion here.  The list of reasons to go 
ahead is frequently, or even usually, created AFTER the action has been 
done.  If the list is being created BEFORE the decision, the list of 
reasons not to go ahead isn't ignored.  Both lists are weighed, a 
decision is made, and AFTER the decision is made the reasons decided 
against have their weights reduced.  If, OTOH, the decision is made 
BEFORE the list of reasons is created, then the list doesn't *get* 
created until one starts trying to justify the action, and for 
justification obviously reasons not to have done the thing are 
useless...except as a layer of whitewash to prove that all 
eventualities were considered.


For most decisions one never bothers to verbalize why it was, or was 
not, done.



P.S.:  "...and AFTER the decision is made the reasons decided against 
have their weights reduced.  ...":  This is to reinforce a consistent 
self-image.  If, eventually, the decision turns out to have been the 
wrong one, then this must be revoked, and the alternative list 
reinforced.  At which point one's self-image changes and one says things 
like "I don't know WHY I would have done that," because the modified 
self-image would not have decided in that way.
P.P.S:  THIS IS FABULATION.  I'm explaining what I think happens, but I 
have no actual evidence of the truth of my assertions.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Pei Wang

See http://www.agiri.org/forum/index.php?showtopic=44 and
http://www.cis.temple.edu/~pwang/203-AI/Lecture/AGI.htm

Pei

On 12/5/06, Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote:


Is there anywhere I could find a list and description of these
different kinds of AI?.a'u(interest) I'm sure I could learn a lot as
I'm rather new to the field.  I'm in
Second year undergrad,
Majoring in Cognitive Sciences,
Specializing in Artificial Intelligence,
York University, Toronto, Canada.

So I think such a list would be very beneficial for beginners like me
.ui(happiness)
ki'e(thanks) in advance.

--
ta'o(by the way)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread BillK

On 12/5/06, Charles D Hixson wrote:

BillK wrote:
 ...

 Every time someone (subconsciously) decides to do something, their
 brain presents a list of reasons to go ahead. The reasons against are
 ignored, or weighted down to be less preferred. This applies to
 everything from deciding to get a new job to deciding to sleep with
 your best friend's wife. Sometimes a case arises when you really,
 really want to do something that you *know* is going to end in
 disaster, ruined lives, ruined career, etc. and it is impossible to
 think of good reasons to proceed. But you still go ahead anyway,
 saying that maybe it won't be so bad, maybe nobody will find out, it's
 not all my fault anyway, and so on.
 ...

 BillK
I think you've got a time inversion here.  The list of reasons to go
ahead is frequently, or even usually, created AFTER the action has been
done.  If the list is being created BEFORE the decision, the list of
reasons not to go ahead isn't ignored.  Both lists are weighed, a
decision is made, and AFTER the decision is made the reasons decided
against have their weights reduced.  If, OTOH, the decision is made
BEFORE the list of reasons is created, then the list doesn't *get*
created until one starts trying to justify the action, and for
justification obviously reasons not to have done the thing are
useless...except as a layer of whitewash to prove that all
eventualities were considered.

For most decisions one never bothers to verbalize why it was, or was
not, done.



No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh various reasons is just a human conceit. This
feature was necessary in early evolution for survival. When danger
threatened, immediate action was required. Flee or fight!  No time to
consider options with the new-fangled consciousness brain mechanism
that evolution was developing.

With the luxury of having plenty of time to reason about decisions,
our consciousness can now play its reasoning games to justify what
subconsciously has already been decided.

NOTE: This is probably an exaggeration / simplification. ;)


BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Charles D Hixson

BillK wrote:

On 12/5/06, Charles D Hixson wrote:

BillK wrote:
 ...
 


No time inversion intended. What I intended to say was that most
(all?) decisions are made subconsciously before the conscious mind
starts its reason / excuse generation process. The conscious mind
pretending to weigh various reasons is just a human conceit. This
feature was necessary in early evolution for survival. When danger
threatened, immediate action was required. Flee or fight!  No time to
consider options with the new-fangled consciousness brain mechanism
that evolution was developing.

With the luxury of having plenty of time to reason about decisions,
our consciousness can now play its reasoning games to justify what
subconsciously has already been decided.

NOTE: This is probably an exaggeration / simplification. ;)


BillK
I would say that all decisions are made subconsciously, but that the 
conscious mind can focus attention onto various parts of the problem and 
possibly affect the weighings of the factors.


I would also make a distinction between the conscious mind and the 
verbalized elements, which are merely the story that the conscious mind 
is telling.  (And assert that ALL of the stories that we tell ourselves 
are human conceits, i.e., abstractions of parts deemed significant out 
of a much more complex underlying process.)


I've started reading What is Thought by Eric Baum.  So far I'm only 
into the second chapter, but it seems quite promising.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it and how fast?]

2006-12-05 Thread Matt Mahoney

--- Eric Baum [EMAIL PROTECTED] wrote:

 
 Matt --- Hank Conn [EMAIL PROTECTED] wrote:
 
  On 12/1/06, Matt Mahoney [EMAIL PROTECTED] wrote:  The goals
  of humanity, like all other species, was determined by 
  evolution.   It is to propagate the species.
  
  
  That's not the goal of humanity. That's the goal of the evolution
  of humanity, which has been defunct for a while.
 
 Matt We have slowed evolution through medical advances, birth control
 Matt and genetic engineering, but I don't think we have stopped it
 Matt completely yet.
 
 I don't know what reason there is to think we have slowed
 evolution, rather than speeded it up.
 
 I would hazard to guess, for example, that since the discovery of 
 birth control, we have been selecting very rapidly for people who 
 choose to have more babies. In fact, I suspect this is one reason
 why the US (which became rich before most of the rest of the world)
 has a higher birth rate than Europe.

Yes, but actually most of the population increase in the U.S. is from
immigration.  Population is growing the fastest in the poorest countries,
especially Africa.

 Likewise, I expect medical advances in childbirth etc are selecting
 very rapidly for multiple births (which once upon a time often killed 
 off mother and child.) I expect this, rather than or in addition to
 the effects of fertility drugs, is the reason for the rise in 
 multiple births.

The main effect of medical advances is to keep children alive who would
otherwise have died from genetic weaknesses, allowing these weaknesses to be
propagated.

Genetic engineering has not yet had much effect on human evolution, as it has
in agriculture.  We have the technology to greatly speed up human evolution,
but it is suppressed for ethical reasons.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Matt Mahoney

--- John Scanlon [EMAIL PROTECTED] wrote:

 Alright, I have to say this.
 
 I don't believe that the singularity is near, or that it will even occur.  I
 am working very hard at developing real artificial general intelligence, but
 from what I know, it will not come quickly.  It will be slow and
 incremental.  The idea that very soon we can create a system that can
 understand its own code and start programming itself is ludicrous.
 
 Any arguments?

Not very soon, maybe 10 or 20 years.  General programming skills will first
require an adult-level language model and intelligence, something that could
pass the Turing test.

Currently we can write program-writing programs only in very restricted
environments with simple, well defined goals (e.g. genetic algorithms).  This
is not sufficient for recursive self improvement.  The AGI will first need to
be at the intellectual level of the humans who built it.  This means
sufficient skills to do research, and to write programs from ambiguous natural
language specifications and have enough world knowledge to figure out what
the customer really wanted.
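
For concreteness, here is a minimal sketch of that kind of restricted
program-writing: a toy genetic algorithm that evolves a small arithmetic
expression toward a fixed, fully specified target function.  The expression
grammar, fitness function, and parameters are illustrative assumptions only,
not a description of any real AGI component.

import random

OPS = ['+', '-', '*']

def random_expr(depth=2):
    # Build a random expression over x and small integer constants.
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', '1', '2', '3'])
    return '(%s%s%s)' % (random_expr(depth - 1), random.choice(OPS), random_expr(depth - 1))

def fitness(expr, target=lambda x: x * x + 1):
    # Lower is better: squared error against the target, plus a small size penalty.
    try:
        err = sum((eval(expr, {'x': x}) - target(x)) ** 2 for x in range(-5, 6))
    except Exception:
        return float('inf')
    return err + 0.01 * len(expr)

def mutate(expr):
    # Either graft a fresh random subtree onto the expression, or replace it outright.
    if random.random() < 0.5:
        return '(%s%s%s)' % (expr, random.choice(OPS), random_expr(1))
    return random_expr()

def evolve(generations=200, pop_size=50):
    pop = [random_expr() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]   # truncation selection
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return min(pop, key=fitness)

print(evolve())   # e.g. something close to ((x*x)+1)

The goal here is completely defined by a numeric fitness score.  Nothing like
this setup exists for the open-ended goal of "improve your own source code,"
which is the point being made above.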


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] A question on the symbol-system hypothesis

2006-12-05 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  My point is that when AGI is built, you will have to trust its answers
 based
  on the correctness of the learning algorithms, and not by examining the
  internal data or tracing the reasoning.
 
 Agreed...
 
 I believe this is the fundamental
  flaw of all AI systems based on structured knowledge representations, such
 as
  first order logic, frames, connectionist systems, term logic, rule based
  systems, and so on.
 
 I have a few points in response to this:
 
 1) Just because a system is based on logic (in whatever sense you
 want to interpret that phrase) doesn't mean its reasoning can in
 practice be traced by humans.  As I noted in recent posts,
 probabilistic logic systems will regularly draw conclusions based on
 synthesizing (say) tens of thousands or more weak conclusions into one
 moderately strong one.  Tracing this kind of inference trail in detail
 is pretty tough for any human, pragmatically speaking...
 
 2) IMO the dichotomy between logic based and statistical AI
 systems is fairly bogus.  The dichotomy serves to separate extremes on
 either side, but my point is that when a statistical AI system becomes
 really serious it becomes effectively logic-based, and when a
 logic-based AI system becomes really serious it becomes effectively
 statistical ;-)

I see your point that there is no sharp boundary between structured knowledge
and statistical approaches.  What I mean is that the normal software
engineering practice of breaking down a hard problem into components with well
defined interfaces does not work for AGI.  We usually try things like:

input text -- parser -- semantic extraction -- inference engine -- output
text.

The fallacy is believing that the intermediate representation would be more
comprehensible than the input or output.  That isn't possible because of the
huge amount of data.  In a toy system you might have 100 facts that you can
compress down to a diagram that fits on a sheet of paper.  In reality you
might have a gigabyte of text that you can compress down to 10^9 bits. 
Whatever form this takes can't be more comprehensible than the input or output
text.

I think it is actually liberating to remove the requirement for transparency
that was typical of GOFAI.  For example, your knowledge representation could
still be any of the existing forms but it could also be a huge matrix with
billions of elements.  But it will require a different approach to build, not
so much engineering, but more of an experimental science, where you test
different learning algorithms at the inputs and outputs only.
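
To make the contrast concrete, a minimal sketch (toy stand-ins only; none of
these functions correspond to a real system): on one side the familiar
engineered pipeline with inspectable intermediate stages, on the other an
opaque model that is judged purely by its behaviour at the input/output
boundary.

from typing import Callable, Dict, List, Tuple

def pipeline(text: str,
             parse: Callable[[str], List[str]],
             extract: Callable[[List[str]], Dict],
             infer: Callable[[Dict], str]) -> str:
    # input text -> parser -> semantic extraction -> inference engine -> output text
    return infer(extract(parse(text)))

def evaluate(model: Callable[[str], str], test_pairs: List[Tuple[str, str]]) -> float:
    # Score a black-box model only by its input/output behaviour;
    # nothing inside the model is ever inspected.
    return sum(model(q) == a for q, a in test_pairs) / len(test_pairs)

# Toy usage (hypothetical stand-ins for each stage):
print(pipeline('is the sky blue',
               parse=str.split,
               extract=lambda toks: {'topic': toks[-1]},
               infer=lambda sem: 'yes' if sem['topic'] == 'blue' else 'unknown'))
opaque = lambda q: 'yes' if 'sky' in q else 'unknown'
print(evaluate(opaque, [('is the sky blue', 'yes'), ('is grass red', 'no')]))

The first style presumes the intermediate representations are small enough to
inspect; the argument above is that at gigabyte-of-text scale they will not
be, so the second style of testing is what remains.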


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Your message appeared at first to be rambling and incoherent, but I see that 
that's probably because English is a second language for you.  But that's 
not a problem if your ideas are solid.


Yes, there is fake artificial intelligence out there, systems that are 
proposed to be intelligent but aren't and can't be because they are dead 
ends.  A big example of this is Cyc.  And there are others.


The Turing test is a bad test for AI.  The reasons for this have already 
been brought up on this mailing list.  I could go into the criticisms 
myself, but there are other people here who have already spoken well on the 
subject.


And yes, language is an essential part of any intelligent system.  But there 
is another part you haven't mentioned -- the actual intelligence that 
can understand and manipulate language.  Intelligence is not just parsing 
and logic.  It is imagination and visualization that relates words to their 
referents in the real world.


What is your idea of how this imagination and visualization that relates 
language to phenomena in the real world can be engineered in software in 
such a way that the singularity will be brought about?



Andrii (lOkadin) Zvorygin wrote:


On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:


Alright, I have to say this.

I don't believe that the singularity is near, or that it will even occur. 
I
am working very hard at developing real artificial general intelligence, 
but

from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.

Any arguments?
 


Have you read Ray Kurzweil? He doesn't just make things up. There are
plenty of reasons to believe in the Singularity.  Other than disaster
theories there really is no negative evidence I've ever come across.

real artificial intelligence

.u'i(amusement) A little bit of an oxymoron there.  It also seems to
imply there is fake artificial intelligence.u'e(wonder). Of course
if you could define fake artificial intelligence then you define
what real artificial intelligence is.

Once you define what real artificial intelligence means, or at least
what symptoms you would be willing to satisfy for (Turing test).

If it's the Turing test you're after as am I, then language is the
key(I like stating the obvious please humour me).

Once we established the goal -- a discussion between yourself and the
computer in the language of choice.

We look at the options that we have available: natural languages;
artificial languages. Natural languages tend to be pretty ambiguous
hard to parse, hard to code for -- you can do it if you are a
masochist I don't mind .ui(happiness).

Many/Most artificial languages suffer from similar if not the same
kind of ambiguity, though because they are created they by definition
can only have as many exceptions as were designed in.

There is a promising subset of artificial languages: logical
languages.  Logical languages adhere to some form of logic(usually
predicate) and are a relatively new phenomenon(1955 first paper on
Loglan. All logical languages I'm aware of are derivatives).

Problem with Loglan is that it is proprietary, so that brings us to
Lojban. Lojban will probably not be the final solution either as there
is still some ambiguity in the lujvo (compound words).

A Lojban-Prolog hybrid language is currently being worked on by myself.

In predicate logic(as with logical languages) each sentence has a
predicate(function .i.e. KLAma). Each predicate takes
arguments(SUMti).

If you are to type a logical sentence to an interpreter depending on
the kind of sentence it can perform different actions.

Imperative statement: mu'a(for example) ko FANva zo VALsi
  meaning: be the translator of word VALsi

This isn't really enough information for you or I to give a reply with
any certainty as we don't know the language to translate from and the
language to translate to, which brings us to.

Questions: mu'a  .i FANva zo VALsi ma ma
meaning: translation of word VALsi into what language from what language?
(.e'o(request) make an effort to look at the Lojban, I know it's hard
but it's essential for conveying the simplicity with which you can
make well articulated unambiguous statements in Lojban that are easy
to parse and interpret.)

To this question the user could reply: la.ENGlic. la.LOJban.
meaning: That which is named ENGlic That which is named LOJban.

If the computer has the information about the translation it will
return it. If not it will ask the user to fill in the blank by asking
another question (mu'a .iFANva fuma)
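
To illustrate the interaction being described, here is a rough sketch only:
the place names 'text', 'to', 'from', 'result' and the stored fact are
simplifications assumed for illustration, not the actual Lojban place
structure or the real Lojban-Prolog hybrid.

# Each selbri (predicate) has named places (sumti); 'ma' marks a place being
# asked about.  The interpreter answers from stored facts when it can, and
# otherwise asks the user to fill in the blanks.
FACTS = [
    {'selbri': 'FANva', 'text': 'VALsi', 'to': 'ENGlic', 'from': 'LOJban', 'result': 'word'},
]

def interpret(selbri, sumti):
    asked = [place for place, value in sumti.items() if value == 'ma']
    if not asked:
        FACTS.append(dict(sumti, selbri=selbri))  # plain statement: remember it
        return 'stored'
    for fact in FACTS:  # try to answer the question from what is already known
        if fact['selbri'] == selbri and all(
                value == 'ma' or fact.get(place) == value
                for place, value in sumti.items()):
            return {place: fact[place] for place in asked}
    return 'question: please supply ' + ' and '.join(asked)

# mu'a ".i FANva zo VALsi ma ma" -- into what language, from what language?
print(interpret('FANva', {'text': 'VALsi', 'to': 'ma', 'from': 'ma'}))
# mu'a ".i FANva fu ma" -- and what is the resulting word?
print(interpret('FANva', {'text': 'VALsi', 'result': 'ma'}))

If no stored fact matched, the same call would come back as a question, which
is the fill-in-the-blank behaviour described above.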

There are almost 1300 root words (GISmu) in Lojban, with several hundred
CMAvo.  For my implementation of the language I will probably remove a
large number of these, as they are not necessary (mu'a SOFto, which means
Soviet) and should really go into name (CMEne) space (mu'a la.SOviet.)

The point 

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
I'm a little bit familiar with Piaget, and I'm guessing that the formal 
stage of development is something on the level of a four-year-old child. 
If we could create an AI system with the intelligence of a four-year-old 
child, then we would have a huge breakthrough, far beyond anything done so 
far in a computer.  And we would be approaching a possible singularity. 
It's just that I see no evidence anywhere of this kind of breakthrough, or 
anything close to it.


My ideas are certainly inadequate in themselves at the present time.  My 
Gnoljinn project is just about at the point where I can start writing the 
code for the intelligence engine.  The architecture is in place, the 
interface language, Jinnteera, is being parsed, images are being sent into 
the Gnoljinn server (along with linguistic statements) and are being 
pre-processed.  The development of the intelligence engine will take time, a 
lot of coding, experimentation, and re-coding, until I get it right.  It's 
all experimental, and will take time.


I see a singularity, if it occurs at all, to be at least a hundred years 
out.  I know you have a much shorter time frame.  But what is it about 
Novamente that will allow it in a few years time to comprehend its own 
computer code and intelligently re-write it (especially a system as complex 
as Novamente)?  The artificial intelligence problem is much more difficult 
than most people imagine it to be.



Ben Goertzel wrote:


John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:


I don't believe that the singularity is near, or that it will even occur. 
I
am working very hard at developing real artificial general intelligence, 
but

from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.


First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:
grin
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
/grin

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

I see a singularity, if it occurs at all, to be at least a hundred years
out.


To use Kurzweil's language, you're not thinking in exponential time  ;-)


The artificial intelligence problem is much more difficult
than most people imagine it to be.


Most people have close to zero basis to even think about the topic
in a useful way.

And most professional, academic or industry AI folks are more
pessimistic than you are.


 But what is it about
Novamente that will allow it in a few years time to comprehend its own
computer code and intelligently re-write it (especially a system as complex
as Novamente)?


I'm not going to try to summarize the key ideas underlying Novamente
in an email.  I have been asked to write a nontechnical overview of
the NM approach to AGI for a popular website, and may find time for it
later this month... if so, I'll post a link to this list.

Obviously, I think I have solved some fundamental issues related to
implementing general cognition on contemporary computers.  I believe
the cognitive mechanisms designed for NM will be adequate to lead to
the emergence within the system of the key emergent structures of mind
(self, will, focused awareness), and from these key emergent
structures comes the capability for ever-increasing intelligence.

Specific timing estimates for NM are hard to come by -- especially
because of funding vagaries (currently progress is steady but slow for
this reason), and because of the general difficulty of estimating the
rate of progress of any large-scale software project ... not to mention
various research uncertainties.  But 100 years is way off.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Hank,

Do you have a personal understanding/design of AGI and intelligence in 
general that predicts a soon-to-come singularity?  Do you have theories or a 
design for an AGI?

John



Hank Conn wrote:

  It has been my experience that one's expectations on the future of 
AI/Singularity are directly dependent upon one's understanding/design of AGI and 
intelligence in general.
   
  On 12/5/06, Ben Goertzel [EMAIL PROTECTED] wrote: 
John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote: 

 I don't believe that the singularity is near, or that it will even occur. 
 I
 am working very hard at developing real artificial general intelligence, 
but
 from what I know, it will not come quickly.  It will be slow and 
 incremental.  The idea that very soon we can create a system that can
 understand its own code and start programming itself is ludicrous.

First, since my birthday is just a few days off, I'll permit myself an 
obnoxious reply:
grin
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
/grin

Seriously: I agree that progress toward AGI will be incremental, but 
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial 
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303