Re: [agi] The Singularity

2006-12-06 Thread John Scanlon
Hank - do you have any theories or AGI designs?



Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin

  My aim is not a General AI; currently it's MJZT.  General AI just
seems to be a side effect .u'i(amusement). Nodes in a JZT communicate
through language (in whatever form it may take), and automation of
that communication occurs. After a certain point a typical JZT
"automation" would be able to have a conversation with an ordinary
human, and the human would have trouble seeing the JZT as an inferior
entity (revised Turing test).



I'd like to note that, as a believer in Determinism, I see no real
difference between the "automation" and the "real person", so
technically everything is an automation -- including yourself and all
those around you.

pe'i(I opine) that this universe does not exist independently and so
is interconnected with other universes.  Meaning we may not have to
suffer the fate of our universe, and could live on even after it has
ended its life cycle by uploading ourselves to outside universes. This
will only be achievable in a post-Singularity world, as before then we
wouldn't have the technological capacity to do so.

koJMIveko (be alive by your own standards)

--
ta'o(by the way)  We With You Network at: http://lokiworld.org .i(and)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)



Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:

Your message appeared at first to be rambling and incoherent, but I see that
that's probably because English is a second language for you.  But that's
not a problem if your ideas are solid.


English is my second language. My first language is Russian, but I've
lived in Canada for just over 13 years -- I don't speak Russian on a
day-to-day basis.  Lojban I have only known about since last spring.
Currently I use Lojban on a day-to-day basis. Perhaps Lojban is
changing the way in which I think, and the changes are expressing
themselves in my English. I admit I like using attitudinals
.ui(happiness).


And yes, language is an essential part of any intelligent system.  But
there is another part you haven't mentioned -- the actual intelligence
that can understand and manipulate language.  Intelligence is not just
parsing and logic.  It is imagination and visualization that relates
words to their referents in the real world.

What is your idea of how this imagination and visualization that relates
language to phenomena in the real world can be engineered in software


If you mean "how will pattern recognition work in the
visual/auditory/sense system of the AI":
- I don't need cameras for keyboard input, OCR, or voice recognition
can handle other forms of language input.
- Cameras and detecting "real" things isn't really my goal. I just
want to increase productivity through automation of the things people
do.
- There are lots of people interested in graphics and pattern
recognition. They can always extend the system. The design goal is
really to make an easily extendable sustainable scalable complex
computer/network that takes care of itself.

If you mean something else, you will need to elaborate for me to
reply, as I'm having trouble understanding what else it could mean.


in such a way that the singularity will be brought about?

I believe in hard determinism, implying that anything you or I do is
leading to the Singularity -- if it is meant to be.

The point at which it should start growing very fast is shortly after
there are over 150 developers/users on a "social augmentation
network".

MULno JIKca seZENbaTCAna
(Complete Social Augmentation Network): sa'u(simply speaking), it is a
network that automates social activities such as fact/state exchange,
so that creative endeavours, for entertainment, can be the sole
occupation of the users (all or most other processes having been
automated). Mind-altering tools are definitely going to be very
popular in such a world.

  My aim is not a General AI; currently it's MJZT.  General AI just
seems to be a side effect .u'i(amusement). Nodes in a JZT communicate
through language (in whatever form it may take), and automation of
that communication occurs. After a certain point a typical JZT
"automation" would be able to have a conversation with an ordinary
human, and the human would have trouble seeing the JZT as an inferior
entity (revised Turing test).




Andrii (lOkadin) Zvorygin wrote:

> On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:
>>
>> Alright, I have to say this.
>>
>> I don't believe that the singularity is near, or that it will even occur.
>> I
>> am working very hard at developing real artificial general intelligence,
>> but
>> from what I know, it will not come quickly.  It will be slow and
>> incremental.  The idea that very soon we can create a system that can
>> understand its own code and start programming itself is ludicrous.
>>
>> Any arguments?
>>  
>
> Have you read Ray Kurzweil? He doesn't just make things up. There are
> plenty of reasons to believe in the Singularity.  Other than disaster
> theories there really is no negative evidence I've ever come across.
>
> "real artificial intelligence"
>
> .u'i(amusement) A little bit of an oxymoron there.  It also seems to
> imply there is "fake artificial intelligence".u'e(wonder). Of course
> if you could define "fake artificial intelligence" then you define
> what "real artificial intelligence" is.
>
> Once you define what "real artificial intelligence" means, or at least
> what symptoms you would be willing to accept as satisfying it (e.g.
> the Turing test), we can proceed.
>
> If it's the Turing test you're after, as am I, then language is the
> key (I like stating the obvious, please humour me).
>
> So we establish the goal: a discussion between yourself and the
> computer in the language of choice.
>
> We look at the options that we have available: natural languages and
> artificial languages. Natural languages tend to be pretty ambiguous,
> hard to parse, and hard to code for -- you can do it if you are a
> masochist, I don't mind .ui(happiness).
>
> Many if not most artificial languages suffer from a similar if not
> the same kind of ambiguity, though because they are designed they can
> by definition only have as many exceptions as were designed in.
>
> There is a promising subset of artificial languages: logical
> languages.  Logical languages adhere to some form of logic (usually
> predicate logic) and are a relatively new phenomenon (the first paper
> on Loglan appeared in 1955; all logical languages I'm aware of are
> derivatives).

Re: Re: [agi] The Singularity

2006-12-05 Thread John Scanlon

Alright, one last message for the night.

I don't actually consider myself to be pessimistic about AI.  I believe that 
strong AI can and will (barring some global catastrophe) be developed.  It's 
the wrong-headed approaches throughout the history of AI that have hobbled 
the whole enterprise.  The 1970s have been called the AI winter, but I think 
we're in the biggest AI winter right now.



Ben Goertzel wrote:



I see a singularity, if it occurs at all, to be at least a hundred years
out.


To use Kurzweil's language, you're not thinking in "exponential time"  ;-)


The artificial intelligence problem is much more difficult
than most people imagine it to be.


"Most people" have close to zero basis to even think about the topic
in a useful way.

And most professional, academic or industry AI folks are more
pessimistic than you are.


 But what is it about
Novamente that will allow it in a few years time to comprehend its own
computer code and intelligently re-write it (especially a system as 
complex

as Novamente)?


I'm not going to try to summarize the key ideas underlying Novamente
in an email.  I have been asked to write a nontechnical overview of
the NM approach to AGI for a popular website, and may find time for it
later this month... if so, I'll post a link to this list.

Obviously, I think I have solved some fundamental issues related to
implementing general cognition on contemporary computers.  I believe
the cognitive mechanisms designed for NM will be adequate to lead to
the emergence within the system of the key emergent structures of mind
(self, will, focused awareness), and from these key emergent
structures comes the capability for ever-increasing intelligence.

Specific timing estimates for NM are hard to come by -- especially
because of funding vagaries (currently progress is steady but slow for
this reason), and because of the general difficulty of estimating the
rate of progress of any large-scale software project ... not to mention
various research uncertainties.  But 100 years is way off.

-- Ben



Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Hank,

Do you have a personal "understanding/design of AGI and intelligence in 
general" that predicts a soon-to-come singularity?  Do you have theories or a 
design for an AGI?

John



Hank Conn wrote:

  It has been my experience that one's expectations on the future of 
AI/Singularity are directly dependent upon one's understanding/design of AGI 
and intelligence in general.
   
  On 12/5/06, Ben Goertzel <[EMAIL PROTECTED]> wrote: 
John,

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote: 
>
> I don't believe that the singularity is near, or that it will even occur. 
 I
> am working very hard at developing real artificial general intelligence, 
but
> from what I know, it will not come quickly.  It will be slow and 
> incremental.  The idea that very soon we can create a system that can
> understand its own code and start programming itself is ludicrous.

First, since my birthday is just a few days off, I'll permit myself an 
obnoxious reply:

Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?


Seriously: I agree that progress toward AGI will be incremental, but 
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter 
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself** ..."

According to my understanding of the Novamente design and artificial 
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
"formal stage" of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the "human child like" intuition of the AGI system will
be able to synergize with its "computer like" ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben



Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

I see a singularity, if it occurs at all, to be at least a hundred years
out.


To use Kurzweil's language, you're not thinking in "exponential time"  ;-)


The artificial intelligence problem is much more difficult
than most people imagine it to be.


"Most people" have close to zero basis to even think about the topic
in a useful way.

And most professional, academic or industry AI folks are more
pessimistic than you are.


 But what is it about
Novamente that will allow it in a few years time to comprehend its own
computer code and intelligently re-write it (especially a system as complex
as Novamente)?


I'm not going to try to summarize the key ideas underlying Novamente
in an email.  I have been asked to write a nontechnical overview of
the NM approach to AGI for a popular website, and may find time for it
later this month... if so, I'll post a link to this list.

Obviously, I think I have solved some fundamental issues related to
implementing general cognition on contemporary computers.  I believe
the cognitive mechanisms designed for NM will be adequate to lead to
the emergence within the system of the key emergent structures of mind
(self, will, focused awareness), and from these key emergent
structures comes the capability for ever-increasing intelligence.

Specific timing estimates for NM are hard to come by -- especially
because of funding vagaries (currently progress is steady but slow for
this reason), and because of the general difficulty of estimating the
rate of progress of any large-scale software project ... not to mention
various research uncertainties.  But 100 years is way off.

-- Ben



Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
I'm a little bit familiar with Piaget, and I'm guessing that the "formal 
stage of development" is something on the level of a four-year-old child. 
If we could create an AI system with the intelligence of a four-year-old 
child, then we would have a huge breakthrough, far beyond anything done so 
far in a computer.  And we would be approaching a possible singularity. 
It's just that I see no evidence anywhere of this kind of breakthrough, or 
anything close to it.


My ideas are certainly inadequate in themselves at the present time.  My 
Gnoljinn project is just about at the point where I can start writing the 
code for the intelligence engine.  The architecture is in place, the 
interface language, Jinnteera, is being parsed, and images are being sent 
into the Gnoljinn server (along with linguistic statements) and 
pre-processed.  The development of the intelligence engine will take time: a 
lot of coding, experimentation, and re-coding, until I get it right.  It's 
all experimental.


I see a singularity, if it occurs at all, to be at least a hundred years 
out.  I know you have a much shorter time frame.  But what is it about 
Novamente that will allow it in a few years time to comprehend its own 
computer code and intelligently re-write it (especially a system as complex 
as Novamente)?  The artificial intelligence problem is much more difficult 
than most people imagine it to be.



Ben Goertzel wrote:


John,

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:


I don't believe that the singularity is near, or that it will even occur. 
I
am working very hard at developing real artificial general intelligence, 
but

from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.


First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:

Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?


Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself** ..."

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
"formal stage" of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the "human child like" intuition of the AGI system will
be able to synergize with its "computer like" ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben



Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Your message appeared at first to be rambling and incoherent, but I see that 
that's probably because English is a second language for you.  But that's 
not a problem if your ideas are solid.


Yes, there is "fake artificial intelligence" out there, systems that are 
proposed to be intelligent but aren't and can't be because they are dead 
ends.  A big example of this is Cyc.  And there are others.


The Turing test is a bad test for AI.  The reasons for this have already 
been brought up on this mailing list.  I could go into the criticisms 
myself, but there are other people here who have already spoken well on the 
subject.


And yes, language is an essential part of any intelligent system.  But 
there is another part you haven't mentioned -- the actual intelligence that 
can understand and manipulate language.  Intelligence is not just parsing 
and logic.  It is imagination and visualization that relates words to their 
referents in the real world.


What is your idea of how this imagination and visualization that relates 
language to phenomena in the real world can be engineered in software in 
such a way that the singularity will be brought about?



Andrii (lOkadin) Zvorygin wrote:


On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:


Alright, I have to say this.

I don't believe that the singularity is near, or that it will even occur. 
I
am working very hard at developing real artificial general intelligence, 
but

from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.

Any arguments?
 


Have you read Ray Kurzweil? He doesn't just make things up. There are
plenty of reasons to believe in the Singularity.  Other than disaster
theories there really is no negative evidence I've ever come across.

"real artificial intelligence"

.u'i(amusement) A little bit of an oxymoron there.  It also seems to
imply there is "fake artificial intelligence".u'e(wonder). Of course
if you could define "fake artificial intelligence" then you define
what "real artificial intelligence" is.

Once you define what "real artificial intelligence" means, or at least
what symptoms you would be willing to accept as satisfying it (e.g. the
Turing test), we can proceed.

If it's the Turing test you're after, as am I, then language is the
key (I like stating the obvious, please humour me).

So we establish the goal: a discussion between yourself and the
computer in the language of choice.

We look at the options that we have available: natural languages and
artificial languages. Natural languages tend to be pretty ambiguous,
hard to parse, and hard to code for -- you can do it if you are a
masochist, I don't mind .ui(happiness).

Many if not most artificial languages suffer from a similar if not the
same kind of ambiguity, though because they are designed they can by
definition only have as many exceptions as were designed in.

There is a promising subset of artificial languages: logical
languages.  Logical languages adhere to some form of logic (usually
predicate logic) and are a relatively new phenomenon (the first paper
on Loglan appeared in 1955; all logical languages I'm aware of are
derivatives).

The problem with Loglan is that it is proprietary, which brings us to
Lojban. Lojban will probably not be the final solution either, as there
is still some ambiguity in lujvo (compound words).

I am currently working on a Lojban-Prolog hybrid language.

In predicate logic (as in logical languages) each "sentence" has a
predicate (a function, e.g. KLAma). Each predicate takes arguments
(SUMti).

If you type a logical sentence to an interpreter, it can perform
different actions depending on the kind of sentence.

Imperative statement: mu'a(for example) ko FANva zo VALsi
  meaning: be the translator of word VALsi

This isn't really enough information for you or me to give a reply
with any certainty, as we don't know the language to translate from or
the language to translate to, which brings us to:

Questions: mu'a  .i FANva zo VALsi ma ma
meaning: translation of word VALsi into what language, from what language?
(.e'o(request) make an effort to look at the Lojban; I know it's hard,
but it's essential for conveying the simplicity with which you can
make well-articulated, unambiguous statements in Lojban that are easy
to parse and interpret.)

To this question the user could reply: la.ENGlic. la.LOJban.
meaning: That which is named ENGlic That which is named LOJban.

If the computer has the information about the translation it will
return it. If not it will ask the user to fill in the blank by asking
another question (mu'a .iFANva fuma)

There are almost 1300 root words (GISmu) in Lojban, with several
hundred CMAvo.  For my implementation of the language I will probably
remove a large number of these, as they are not necessary (mu'a SOFto,
which means Soviet) and should really go into name (CMEne) space (mu'a
la.SOviet.)

Re: [agi] The Singularity

2006-12-05 Thread Matt Mahoney

--- John Scanlon <[EMAIL PROTECTED]> wrote:

> Alright, I have to say this.
> 
> I don't believe that the singularity is near, or that it will even occur.  I
> am working very hard at developing real artificial general intelligence, but
> from what I know, it will not come quickly.  It will be slow and
> incremental.  The idea that very soon we can create a system that can
> understand its own code and start programming itself is ludicrous.
> 
> Any arguments?

Not very soon, maybe 10 or 20 years.  General programming skills will first
require an adult-level language model and intelligence, something that could
pass the Turing test.

Currently we can write program-writing programs only in very restricted
environments with simple, well defined goals (e.g. genetic algorithms).  This
is not sufficient for recursive self-improvement.  The AGI will first need to
be at the intellectual level of the humans who built it.  This means
sufficient skill to do research, to write programs from ambiguous natural
language specifications, and to have enough world knowledge to figure out
what the customer really wanted.
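
[Editor's note: to make the "restricted environments" point concrete,
here is a minimal sketch of a program-writing program -- my own
illustration under stated assumptions, not Matt's method; the
predicates prim/1, apply_prog/3 and synthesize/2 are invented for this
example. It enumerates compositions of three list primitives until one
fits every input/output example, written in Prolog since that is the
language already under discussion in this thread.]

  % The "environment" is deliberately tiny: a program is a list of
  % primitive names, and the well-defined goal is a fixed example set.
  prim(reverse).
  prim(sort).
  prim(msort).

  % apply_prog(+Prog, +In, -Out): run a program by threading the list
  % through each primitive in turn.
  apply_prog([], X, X).
  apply_prog([P|Ps], In, Out) :-
      Step =.. [P, In, Mid],
      call(Step),
      apply_prog(Ps, Mid, Out).

  % synthesize(+Examples, -Prog): iterative deepening over program
  % length; succeeds with the first program consistent with every
  % In-Out pair.
  synthesize(Examples, Prog) :-
      length(Prog, _),
      maplist(prim, Prog),
      forall(member(In-Out, Examples), apply_prog(Prog, In, Out)).

  % ?- synthesize([[3,1,2]-[3,2,1], [2,5]-[5,2]], P).
  % P = [sort, reverse]

Anything beyond toy goals like this blows up combinatorially, which is
exactly the point about why such tricks fall short of recursive
self-improvement.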


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] The Singularity

2006-12-05 Thread Pei Wang

See http://www.agiri.org/forum/index.php?showtopic=44 and
http://www.cis.temple.edu/~pwang/203-AI/Lecture/AGI.htm

Pei

On 12/5/06, Andrii (lOkadin) Zvorygin <[EMAIL PROTECTED]> wrote:


Is there anywhere I could find a list and description of these
different kinds of AI? .a'u(interest) I'm sure I could learn a lot, as
I'm rather new to the field.  I'm in
Second year undergrad,
Majoring in Cognitive Sciences,
Specializing in Artificial Intelligence,
York University, Toronto, Canada.

So I think such a list would be very beneficial for beginners like me
.ui(happiness)
ki'e(thanks) in advance.

--
ta'o(by the way)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)



Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:
>> If, on the other hand, all we have is the present approach to AI then I
>> tend to agree with you John:  ludicrous.
>>
>>
>>
>>
>> Richard Loosemore
>
> IMO it is not sensible to speak of "the present approach to AI"
>
> There are a lot of approaches out there... not an orthodoxy by any means...

I'm aware of the different approaches, and of how very, very different
they are from one another.

But by contrast with the approach I am advocating, they all look like
"orthodoxy".  There is a *big* difference between the two sets of ideas.


In that context, and only in that context, it makes sense to talk about
"the present approach to AI".



Richard Loosemore.



Is there anywhere I could find a list and description of these
different kinds of AI? .a'u(interest) I'm sure I could learn a lot, as
I'm rather new to the field.  I'm in
Second year undergrad,
Majoring in Cognitive Sciences,
Specializing in Artificial Intelligence,
York University, Toronto, Canada.

So I think such a list would be very beneficial for beginners like me
.ui(happiness)
ki'e(thanks) in advance.

--
ta'o(by the way)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)



Re: [agi] The Singularity

2006-12-05 Thread Charles D Hixson

Ben Goertzel wrote:

...
According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
"formal stage" of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the "human child like" intuition of the AGI system will
be able to synergize with its "computer like" ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben
I do, however, have some questions about it being a "hard takeoff".  That 
depends largely on

1) how efficient the program is, and
2) what computer resources are available.

To me it seems quite plausible that an AGI might start out as slightly 
less intelligent than a normal person, or even considerably less 
intelligent, with the limitation being due to the available computer 
time.  Naturally, this would change fairly rapidly over time, but not 
exponentially so, or at least not super-exponentially so.


If, however, the singularity is delayed because the programs aren't 
ready, or are too inefficient, then we might see a true "hard-takeoff".  
In that case by the time the program was ready, the computer resources 
that it needs would already be plentifully available.   This isn't 
impossible, if the program comes into existence in a few decades, but if 
the program comes into existence within the current decade, then there 
would be a soft-takeoff.  If it comes into existence within the next 
half-decade then I would expect the original AGI to be "sub-normal", due 
to lack of available resources.


Naturally all of this is dependent on many different things.  If Vista 
really does require as immense a retooling to more powerful computers as 
some predict, then programs that aren't dependent on Vista will have more 
resources available, as computer designs are forced to become faster and 
more capacious.  (Wasn't Intel promising 50 cores on a single chip in a 
decade?  If each of those cores is as capable as a current single core, 
then it will take far fewer computers netted together to pool the same 
computing capacity... for those programs so structured as to use the 
capacity.)




Re: [agi] The Singularity

2006-12-05 Thread Hank Conn

"Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?"

It has been my experience that one's expectations on the future of
AI/Singularity are directly dependent upon one's understanding/design of AGI
and intelligence in general.

On 12/5/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:


John,

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:
>
> I don't believe that the singularity is near, or that it will even
occur.  I
> am working very hard at developing real artificial general intelligence,
but
> from what I know, it will not come quickly.  It will be slow and
> incremental.  The idea that very soon we can create a system that can
> understand its own code and start programming itself is ludicrous.

First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:

Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?


Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself** ..."

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
"formal stage" of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the "human child like" intuition of the AGI system will
be able to synergize with its "computer like" ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben



Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore

Ben Goertzel wrote:

If, on the other hand, all we have is the present approach to AI then I
tend to agree with you John:  ludicrous.




Richard Loosemore


IMO it is not sensible to speak of "the present approach to AI"

There are a lot of approaches out there... not an orthodoxy by any means...


I'm aware of the different approaches, and of how very, very different 
they are from one another.


But by contrast with the approach I am advocating, they all look like 
"orthodoxy".  There is a *big* difference between the two sets of ideas.


In that context, and only in that context, it makes sense to talk about 
"the present approach to AI".




Richard Loosemore.



Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

If, on the other hand, all we have is the present approach to AI then I
tend to agree with you John:  ludicrous.




Richard Loosemore


IMO it is not sensible to speak of "the present approach to AI"

There are a lot of approaches out there... not an orthodoxy by any means...

-- Ben G



Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore

John Scanlon wrote:

Alright, I have to say this.
 
I don't believe that the singularity is near, or that it will even 
occur.  I am working very hard at developing real artificial general 
intelligence, but from what I know, it will not come quickly.  It will 
be slow and incremental.  The idea that very soon we can create a system 
that can understand its own code and start programming itself is ludicrous.
 
Any arguments?


Back in 17th century Europe, people stood at the end of a long period of 
history (basically, all of previous history) during which curious humans 
had tried to understand how the world worked, but had largely failed to 
make substantial progress.


They had been suffering from an attitude problem:  there was something 
about their entire way of approaching the knowledge-discovery process 
that was wrong.  We now characterize their fault as being the lack of an 
"objective scientific method".


Then, all of a sudden, people got it.

Once it started happening, it spread like wildfire.  Then it went into 
overdrive when Isaac Newton cross-bred the new attitude with a vigorous 
dose of mathematical invention.


My point?  That you can keep banging the rocks together for a very long 
time and feel like you are just getting nowhere, but then all of a 
sudden you can do something as simple as change your attitude or your 
methodology slightly, and wham!, everything starts happening at once.


For what it is worth, I do not buy most of Kurzweil's arguments about 
the general progress of the technology curves.


I don't believe in that argument for the singularity at all, I believe 
that it will happen for a specific technological reason.


I think that there is something wrong with the "attitude" we have been 
adopting toward AI research, which is comparable to the attitude problem 
that divided the pre- and post-Enlightenment periods.


I have summarized a part of this argument in the paper that I wrote for 
the first AGIRI workshop.  The argument in that paper can be summarized 
as:  the first 30 years of AI was all about "scruffy" engineering, then 
the second 20 years of AI was all about "neat" mathematics, but because 
of the complex systems problem neither of these approaches would be 
expected to work, and what we need instead is a new attitude that is 
neither engineering nor math, but science. [This paper is due to be 
published in the AGIRI proceedings next year, but if anyone wants to 
contact me I will be able to send a not-for-circulation copy].


However, there is another, more broad-ranging way to look at the present 
situation, and that is that we have three research communities who do 
not communicate with one another:  AI Programmers, Cognitive Scientists 
(or Cognitive Psychologists) and Software Engineers.  What we need is a 
new science that merges these areas in a way that is NOT a lowest common 
denominator kind of merge.  We need people who truly understand all of 
them, not cross-travelling experts who mostly reside in one and (with 
the best will in the world) think they know enough about the others.


This merging of the fields has never happened before.  More importantly, 
the specific technical issue related to the complex systems problem (the 
need for science, rather than engineering or math) has also never been 
fully appreciated before.


Everything I say in this post may be wrong, but one thing is for sure: 
this new approach/attitude has not been tried before, so the 
consequences of taking it seriously and trying it are lying out there in 
the future, completely unknown.


I believe that this is something we just don't get yet.  When we do, I 
think we will start to see the last fifty years of AI research as 
equivalent to the era before 1665.  I think that AI will start to take 
off at breathtaking speed once the new attitude finally clicks.


The one thing that stops it from happening is the ego problem.  Too many 
people with too much invested in the supremacy they have within their 
own domain.  Frankly, I think it might only start to happen if we can 
take some people fresh out of high school and get them through a 
completely new curriculum, then get 'em through their Ph.D.s before they 
realise that all of the existing communities are going to treat them 
like lepers because they refuse to play the game. ;-)  But that would 
only take six years.


After we get it, in other words, *that* is when the singularity starts 
to happen.


If, on the other hand, all we have is the present approach to AI then I 
tend to agree with you John:  ludicrous.





Richard Loosemore




Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

John,

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:


I don't believe that the singularity is near, or that it will even occur.  I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.


First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:

Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?


Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself** ..."

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
"formal stage" of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the "human child like" intuition of the AGI system will
be able to synergize with its "computer like" ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben



Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, John Scanlon <[EMAIL PROTECTED]> wrote:



Alright, I have to say this.

I don't believe that the singularity is near, or that it will even occur.  I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.

Any arguments?
 


Have you read Ray Kurzweil? He doesn't just make things up. There are
plenty of reasons to believe in the Singularity.  Other than disaster
theories there really is no negative evidence I've ever come across.

"real artificial intelligence"

.u'i(amusement) A little bit of an oxymoron there.  It also seems to
imply there is "fake artificial intelligence".u'e(wonder). Of course
if you could define "fake artificial intelligence" then you define
what "real artificial intelligence" is.

Once you define what "real artificial intelligence" means, or at least
what symptoms you would be willing to accept as satisfying it (e.g. the
Turing test), we can proceed.

If it's the Turing test you're after, as am I, then language is the
key (I like stating the obvious, please humour me).

So we establish the goal: a discussion between yourself and the
computer in the language of choice.

We look at the options that we have available: natural languages and
artificial languages. Natural languages tend to be pretty ambiguous,
hard to parse, and hard to code for -- you can do it if you are a
masochist, I don't mind .ui(happiness).

Many if not most artificial languages suffer from a similar if not the
same kind of ambiguity, though because they are designed they can by
definition only have as many exceptions as were designed in.

There is a promising subset of artificial languages: logical
languages.  Logical languages adhere to some form of logic (usually
predicate logic) and are a relatively new phenomenon (the first paper
on Loglan appeared in 1955; all logical languages I'm aware of are
derivatives).

The problem with Loglan is that it is proprietary, which brings us to
Lojban. Lojban will probably not be the final solution either, as there
is still some ambiguity in lujvo (compound words).

I am currently working on a Lojban-Prolog hybrid language.

In predicate logic (as in logical languages) each "sentence" has a
predicate (a function, e.g. KLAma). Each predicate takes arguments
(SUMti).

If you type a logical sentence to an interpreter, it can perform
different actions depending on the kind of sentence.

Imperative statement: mu'a(for example) ko FANva zo VALsi
  meaning: be the translator of word VALsi

This isn't really enough information for you or me to give a reply
with any certainty, as we don't know the language to translate from or
the language to translate to, which brings us to:

Questions: mu'a  .i FANva zo VALsi ma ma
meaning: translation of word VALsi into what language, from what language?
(.e'o(request) make an effort to look at the Lojban; I know it's hard,
but it's essential for conveying the simplicity with which you can
make well-articulated, unambiguous statements in Lojban that are easy
to parse and interpret.)

To this question the user could reply: la.ENGlic. la.LOJban.
meaning: That which is named ENGlic That which is named LOJban.

If the computer has the information about the translation it will
return it. If not it will ask the user to fill in the blank by asking
another question (mu'a .iFANva fuma)

There are almost 1300 root words (GISmu) in Lojban, with several
hundred CMAvo.  For my implementation of the language I will probably
remove a large number of these, as they are not necessary (mu'a SOFto,
which means Soviet) and should really go into name (CMEne) space (mu'a
la.SOviet.)

The point being that there is a very finite number of functions that
have to be coded in order to allow the computer to interpret and act
upon anything said to it (Lojban is already more expressive than a
large number of natural languages).

How is this all going to be programmed?

Declarative statements: mu'a FANva zo VALsi la.ENGlic. la.LOJban.
zoi.gy. word .gy.
meaning: the translation of word VALsi to ENGlic from LOJban is "word".

Now the computer knows this fact (held in a Prolog database until there
is a compiler for a logical, speakable language).
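
[Editor's note: as a rough sketch of how that database could look -- a
hedged illustration only; the predicate fanva/4 and the atom names are
invented for this example, not part of any released system -- the
sentence types above map onto Prolog naturally.]

  :- dynamic fanva/4.

  % The declarative ".i FANva zo VALsi la.ENGlic. la.LOJban.
  % zoi.gy. word .gy." becomes a stored fact:
  % fanva(Word, TargetLang, SourceLang, Translation).
  fanva(valsi, english, lojban, word).

  % The question ".i FANva zo VALsi ma ma" leaves two places unbound,
  % exactly like the two "ma":
  % ?- fanva(valsi, Target, Source, Translation).
  % Target = english, Source = lojban, Translation = word.

  % If no fact matches, the interpreter asks the user to fill in the
  % blanks with another question, as described above, and remembers
  % the answer:
  translate(Word, Target, Source, Trans) :-
      (   fanva(Word, Target, Source, Trans)
      ->  true
      ;   format("FANva ~w ma ma?~n", [Word]),
          read(Target-Source-Trans),
          assertz(fanva(Word, Target, Source, Trans))
      ).

The unbound-variable reading of "ma" is what makes the mapping from
Lojban questions to Prolog queries so direct.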

I will create a version of the interpreter in the Lojban-Prolog hybrid
language (the Lojban parser written in Prolog is more or less finished;
I am now working on the Lojban-Prolog hybrid language itself).

Yes, I know I've dragged this out very far, but it was necessary for me
to reply to:


The idea that very soon we can create a system that can understand its own code

Such as the one described above.


and start programming itself is ludicrous.



Depends on what you see as the goal of "p

Re: [agi] the Singularity Summit and regulation of AI

2006-05-11 Thread Bill Hibbard
Thank you for your responses.

Jeff, I have taken your suggestion and sent a couple of
questions to the Summit. My concern is motivated by
noticing that the Summit includes speakers who have
been very clear about their opposition to regulating
AI, but none, as far as I am aware, who have advocated
it (except Bill McKibben, who wants a total ban).

Ben, I was surprised not to see you, or several other
frequent AGI contributors, among the speakers.

Eliezer, glad to hear that you tried to get Bill Joy.
But like Bill McKibben, he favors a total ban on AI,
nanotechnology and genetic engineering. James Hughes,
and others such as myself, want the benefits of these
technologies but to regulate them to avoid potential
catastrophes.

Hopefully some of the non-speaking participants at
the Summit will express the point of view in favor
of proceeding with AI but regulating it.

Bill
http://www.ssec.wisc.edu/~billh/g/Singularity_Notes.html



Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Jeff Medina

Ben is pretty spot on here.  There are many possible approaches and
views that will not be covered; there simply isn't enough time.  I
can't speak for the speakers, nor for the extent to which any one of
them will focus his or her time on regulation.  But please note that
the Summit has an open invitation for questions from the public
(sss.stanford.edu, lower left-hand column):

What's Your Question?
"Would you like to participate as more than an audience member? A
selection of questions submitted will be answered at the summit. You
can address your question generally or to a specific participant. Let
us know what you want answered and whether we may use your name."

I encourage anyone with concerns about regulation who would like to
increase the chance of this topic being mentioned to submit them in
question form to [EMAIL PROTECTED]  (Questions on
other topics are of course also welcome.)  Some questions will be
answered at the summit, and others may be answered afterwards on the
site.

And regardless of what side of the various issues you come down on, I
thank you for your interest in and concern for the safety and
prosperity of our shared future.

Best,
--
Jeff Medina
http://www.painfullyclear.com/

Associate Director
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

Relationships & Community Fellow
Institute for Ethics & Emerging Technologies
http://www.ieet.org/

School of Philosophy, Birkbeck, University of London
http://www.bbk.ac.uk/phil/



Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Russell Wallace
On 5/10/06, Bill Hibbard <[EMAIL PROTECTED]> wrote:
The Singularity Summit should include all points of view, including
advocates for regulation of intelligent machines. It will weaken the
Summit to exclude this point of view.

Then it would be better if the Summit were not held at all. Nanotech,
AGI, etc. advanced enough that constructive discussion of regulations
would be possible, even if one agreed with such regulation in
principle, are still a very long way from even being on the horizon;
talk of the Singularity right now is wildly premature as anything other
than inspiring science fiction; and blindly slapping on regulations at
this point increases the probability that humanity will simply die
without ever getting near the Singularity.

Will the Summit include that point of view?




Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Ben Goertzel

On 5/10/06, Bill Hibbard <[EMAIL PROTECTED]> wrote:

I am concerned that the Singularity Summit will not include
any speaker advocating government regulation of intelligent
machines. The purpose of this message is not to convince you
of the need for such regulation, but just to say that the
Summit should include someone speaking in favor of it.

...


The Singularity Summit should include all points of
view, including advocates for regulation of intelligent
machines. It will weaken the Summit to exclude this
point of view.


In fairness to the organizers, I would note that it is a brief event
and all possible points of view cannot possibly be represented within
such a brief period of time.

As an aside, I certainly would have liked to be invited to speak
regarding the implication of AGI for the Singularity, but I understand
that they simply had a very small number of speaking slots: it's a
one-day conference.

I agree that if they have a series of similar events, then in time one
of them should include someone advocating government regulation of
intelligent machines, as this is a meaningful viewpoint deserving to
be heard.   I don't agree that this issue is so high-priority that
leaving it out of this initial one-day event is a big problem...

-- Ben G



Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Mark Walker


- Original Message -
From: "Bill Hibbard"
Subject: [agi] the Singularity Summit and regulation of AI




I am concerned that the Singularity Summit will not include
any speaker advocating government regulation of intelligent
machines. The purpose of this message is not to convince you
of the need for such regulation, but just to say that the
Summit should include someone speaking in favor of it. Note
that, to be effective, regulation should be linked to a
widespread public movement like the environmental and
consumer safety movements. Intelligent weapons could be
regulated by treaties similar to those for nuclear, chemical
and biological weapons.

The obvious choice to advocate this position would be James
Hughes, and it is puzzling that he is not included among the
speakers.



Bill Hibbard is another obvious choice.

Cheers,
Mark


Dr. Mark Walker
Department of Philosophy
University Hall 310
McMaster University
1280 Main Street West
Hamilton, Ontario, L8S 4K1
Canada
