Re: [agi] Re: [singularity] Help get the SIAI video on Digg Videos right now

2007-05-21 Thread Bob Mottram

The SIAI videos which are up on Google so far look OK.  I didn't know
that they were actually trying to *build* an AI, as opposed to just
raising the generally relevant issues.

To the layman this will just look like bunk, since many of these
issues aren't yet within the popular zeitgeist.  If I asked around in
my neighbourhood I bet few people would know what the singularity
was, and if they did they would probably refer to it as a physics
concept.

Even amongst the computer science academics whom I've met over the
years, most believe that human-like intelligence in machines is many
decades or centuries away.  Some think it's impossible even in
principle.  I've been listening to the Talking Robots podcast for as
long as it's been going, and when questioned most academics working
within the robotics field give an extremely conservative view of what
the state of the art will be like 20 years from now.  Most just
predict a "more of the same" kind of scenario (i.e. robots will remain
mostly in universities or factories, no domestic/utility robots beyond
Roomba-type machines, etc.).



On 21/05/07, Bruce Klein [EMAIL PROTECTED] wrote:


 The SIAI video made it to the video homepage of Digg:
http://www.digg.com/videos

 Message from Tyler:

 It took a small miracle (it's very difficult to pull this off), but we are
there.

 There's a good chance that the video will be buried (taken off the main
page) because of naysayers, since  it's now being seen by a broader range of
people, many of whom will never have heard about SIAI or the notion of the
singularity, and will thus consider all of this...a little funny.

 So, we really need a lot more Diggs, 5-star ratings, positive comments, and
blog posts/links to ensure the video stays on the main page of Digg Videos,
so that thousands of people on Monday and throughout the coming week can
learn about the Singularity Institute, and begin the process of learning
more about transhumanist topics.

http://www.digg.com/videos/educational/Singularity_Institute_for_Artificial_Intelligence

 Thank you to everyone who helped us so far. Let's keep pushing this
momentum we've created.

 With best wishes,

 --
 Tyler Emerson | Executive Director
 Singularity Institute for Artificial Intelligence
 P.O. Box 50182, Palo Alto, CA 94303 USA
 650-353-6063 | [EMAIL PROTECTED] | singinst.org


 Bruce Klein wrote:
The Singularity Institute is making a big push right now to get our
 new video on the front page of Digg Videos. Doing so will ensure the
 video is seen by thousands of new people.

 Please Digg the following as soon as you get this email:

http://www.digg.com/videos/educational/Singularity_Institute_for_Artificial_Intelligence

 In order to maximize exposure, we need 200 people to click "Digg it"
 within the initial few hours. The video was submitted at 5PM Pacific /
 8PM Eastern. Please be one of the 200 we need.

 Please also consider adding a positive comment about SIAI at the Digg URL.

 Thanks so much,
 Bruce



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936


Re: [agi] Re: [singularity] Why do you think your AGI design will work?

2007-04-25 Thread James Ratcliff

Well, I intentionally **didn't** suggest just passing the exams. 

My version of the University of Phoenix test requires some
real-time human social interaction as well -- some classes require
participation in discussions online...

Also, some writing of essays is required, not just exams... 

This really gets back to the Turing test then.
Just not the *slightly twisted* Turing test where junk is input... I think
the heart of the Turing test -- holding a conversation as a human would -- is
fine, and the real-time social interaction would exercise this as well.

For below:
  Given a single goal and environment as below, and a measure of the complexity
or hardness of achieving it, could you not generate a list of such
goal/environment pairs with a range of complexity levels, and then grade an
AGI against the list?
A sample test could be: of 1000 such tasks, test the AGI on 100 different ones,
and see how well it does.
  We could then determine what things are common to many of the cases, and
which actual use cases are the most important, from there as well.
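
As a minimal sketch of the kind of grading harness this suggests -- assuming
hypothetical task objects with a complexity rating and a pass/fail run()
check, none of which come from the original post:

    import random

    def grade_agi(agi, tasks, sample_size=100, seed=0):
        """Score an AGI on a random sample of tasks, weighted by complexity."""
        rng = random.Random(seed)
        sample = rng.sample(tasks, sample_size)
        earned = sum(t.complexity for t in sample if t.run(agi))  # solved tasks
        possible = sum(t.complexity for t in sample)
        return earned / possible  # fraction of complexity-weighted credit

Harder tasks count for more, so two AGIs can be compared on the same sampled
battery even if neither solves everything.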

I am kind of stuck (as you were, I guess, when considering the 3D avatars) on
what the exact usage of the AGI should be.  I know we all want intelligence, but
what exactly it is supposed to do, other than either a limited single task
(compression) or the overwhelming goal of Everything, is eluding me.


Benjamin Goertzel [EMAIL PROTECTED] wrote: 
Well, in my 1993 book The Structure of Intelligence I defined intelligence as

"The ability to achieve complex goals in complex environments."

I followed this up with a mathematical definition of complexity grounded in 
algorithmic information theory (roughly: the complexity of X is the amount of
pattern immanent in X or emergent between X and other Y's in its environment).

This was closely related to what Hutter and Legg did last year, in a more
rigorous paper that gave an algorithmic-information-theory-based definition
of intelligence.
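
For reference, the universal intelligence measure from that Legg and Hutter
paper scores an agent pi by its expected performance over all computable
environments, weighted by simplicity (the formula below is theirs, in their
notation):

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Here E is the set of computable environments, K(mu) is the Kolmogorov
complexity of environment mu, and V^pi_mu is the expected total reward agent
pi obtains in mu -- so performance in simpler environments carries more
weight.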

Having put some time into this sort of definitional work, I then moved on
to more interesting things, like figuring out how to actually make an
intelligent software system given feasible computational resources.

The catch with the above definition is that a truly general intelligence is
possible only w/ infinitely many computational resources.  So, different
AGIs may be able to achieve different sorts of complex goals in different
sorts of complex environments.  And if an AGI is sufficiently different
from us humans, we may not even be able to comprehend the complexity of
the goals or environments that are most relevant to it.

So, there is a general theory of what AGI is, it's just not very useful.

To make it pragmatic one has to specify some particular classes of goals and
environments.  For example:

goal = getting good grades 
environment = online universities

Then, to connect this kind of pragmatic definition with the mathematical
definition, one would have to prove the complexity of the goal (getting good
grades) and the environment (online universities) based on some relevant
computational model.  But the latter seems very tedious and boring work...

And IMO, all this does not move us very far toward AGI, though it may help
us avoid some conceptual pitfalls that we might otherwise have fallen into...

-- Ben G
On 4/24/07, Mike Tintner [EMAIL PROTECTED] wrote:

Hi,

I strongly disagree - there is a need to provide a definition of AGI - not
necessarily the right or optimal definition, but one that poses concrete
challenges and focusses the mind - even if it's only a starting-point. The
reason the Turing Test has been such a successful/popular idea is that it
focusses the mind.

(BTW I immediately noticed your lack of a good definition on going through
your site and papers, and it immediately raised doubts in my mind. In
general, the more or less focussed your definition/mission statement, I
would argue, the more or less seriously people will tend to take you.)

Ironically, I was just trying to take Marvin Minsky to task for this on
another forum. I suddenly realised that although he has been talking about
the problem of AGI for decades, he has only waved at it, and not really
engaged with it. He talks about how having different ways of thinking about
a problem, like the human mind does, is important for AGI - and that's
certainly one central problem/goal - but he doesn't really focus it.

Here's my first crack at a definition - very crude - offered strictly in
brainstorming mode - but I think it does focus a couple of AGI challenges
at least - and fits with some of the stuff you say.

AN AGI MACHINE - a truly adaptive, truly learning machine - is one that
will be able to:

1) conduct a set of goal-seeking activities

- where it starts with only a rough, incomplete idea of how to reach its
goals,

- i.e. knows only some of the steps it must take, some of the rules that
govern those steps

- and can find its 

Re: [agi] AGI != Singularity

2007-04-16 Thread Bob Mottram

I mostly agree with this.  There is a set of jobs which most creatures need
to be able to perform: finding food, navigating through space, avoiding
harm, practising skills, predicting near-future events, reproducing and so
on.  Intelligence could be said to be a function of this combined set of
job skills.  Necessarily, under finite resource constraints, there will be
trade-offs between jobs; otherwise you just have an expert system.

I think, as Ben said previously, it's better to talk about systems being
"human-like" (i.e. having some qualitative similarity with the types of
jobs which humans do) rather than being at "human level".



On 16/04/07, Russell Wallace [EMAIL PROTECTED] wrote:


Furthermore, even if you postulate AGI0 that could create AGI1 unaided in
a vacuum, there remains the fact that AGI0 won't be in a vacuum, nor if it
were would it have any motive for creating AGI1, nor any reason to prefer
one bit stream rather than another as a design for AGI1. There is after all
no such function as:

float intelligence(program p)

There is, however, a family of functions (albeit incomputable in the
general case):

float intelligence(program p, job j)

In other words, intelligence is useful - and can be said to even exist -
only in the context of the jobs the putatively intelligent agent is doing.
And jobs are supplied by the real world - which is run by humans. Even in
the absence of technical issues about the shape of capabilities, this alone
would suffice to require humans to stay in the loop.
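
To spell out the distinction in code -- a sketch only, with hypothetical
types; nothing beyond the two signatures above comes from the post:

    from typing import Callable

    # A "job" is a task posed by the real world: a scoring function that
    # says how well a given program performed on it.
    Job = Callable[[str], float]  # program source -> score on this job

    def intelligence(program: str, job: Job) -> float:
        """Job-relative intelligence: well-defined, though incomputable
        in the general case."""
        return job(program)

    # Any attempt at a job-free intelligence(program) has to smuggle in
    # a set of jobs and weights, e.g. an average over jobs humans care about:
    def intelligence_over_jobs(program: str, jobs: dict[Job, float]) -> float:
        return sum(w * j(program) for j, w in jobs.items()) / sum(jobs.values())

The second function makes the point concrete: the choice of jobs and weights
is external to the program, and in practice it is supplied by humans.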

The point of all this isn't to pour cold water on people's ideas, it's to
point out that we will make more progress if we stop thinking of AGI as a
human child. It's a completely different kind of thing, and more akin to
existing software in that it must function as an extension of, rather than
replacement for, the human mind. That means we have to understand it in
order to continue improving it - black box methods have to be confined to
isolated modules. It means user interface will continue to be of central
importance, just as it is today. It means the Lamarckian evolutionary path
of AGI will have to be based, just as current software is, on increased
usefulness to humans at each step.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=fabd7936

Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:

Your message appeared at first to be rambling and incoherent, but I see that
that's probably because English is a second language for you.  But that's
not a problem if your ideas are solid.


English is my second language. My first language is Russian, but I've
lived in Canada for just over 13 years -- I don't speak Russian on a
day-to-day basis.  Lojban I have only known about since last spring.
Currently I use Lojban on a day-to-day basis. Perhaps Lojban is
changing the way in which I think, and the changes are expressing
themselves in my English. I admit I like using attitudinals
.ui(happiness).


And yes, language is an essential part of any intelligent system.  But
there is another part you haven't mentioned -- the actual intelligence that
can understand and manipulate language.  Intelligence is not just parsing
and logic.  It is imagination and visualization that relates words to their
referents in the real world.

What is your idea of how this imagination and visualization that relates
language to phenomena in the real world can be engineered in software


If you mean how will pattern recognition work in the
visual/auditory/sense system of the AI:
- I don't need cameras: keyboard input, OCR, or voice recognition
can handle other forms of language input.
- Cameras and detecting real things isn't really my goal. I just
want to increase productivity through automation of the things people
do.
- There are lots of people interested in graphics and pattern
recognition. They can always extend the system. The design goal is
really to make an easily extendable, sustainable, scalable complex
computer/network that takes care of itself.

If you mean something else you will need to elaborate for me to reply
as I'm having trouble understanding what it can mean.


in such a way that the singularity will be brought about?

I believe in hard determinism, implying that anything you or I do is leading
to the Singularity -- if it is meant to be.

The point at which it should start growing very fast is shortly after
there are over 150 developers/users on a social augmentation
network.

MULno JIKca seZENbaTCAna
Complete Social Augmentation Network: sa'u(simply speaking), a
network that allows for the automation of social activities, such as
fact/state exchange, so that creative endeavours can be the sole
occupation of the users (all/most other processes having been
automated) for entertainment. Mind-altering tools are definitely going
to be very popular in such a world.

 My aim is not a General AI; currently it's MJZT.  General AI just
seems to be a side effect .u'i(amusement). Nodes in a JZT communicate
through language (in whatever form it may take), and automation occurs of
the communication. After a certain point a typical JZT automation
would be able to have a conversation with an ordinary human, and the
human will have trouble seeing the JZT as an inferior entity (a revised
Turing test).




Andrii (lOkadin) Zvorygin wrote:

 On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:

 Alright, I have to say this.

 I don't believe that the singularity is near, or that it will even occur.
 I am working very hard at developing real artificial general intelligence,
 but from what I know, it will not come quickly.  It will be slow and
 incremental.  The idea that very soon we can create a system that can
 understand its own code and start programming itself is ludicrous.

 Any arguments?
  

 Have you read Ray Kurzweil? He doesn't just make things up. There are
 plenty of reasons to believe in the Singularity.  Other than disaster
 theories there really is no negative evidence I've ever come across.

 "real artificial intelligence"

 .u'i(amusement) A little bit of an oxymoron there.  It also seems to
 imply there is "fake artificial intelligence" .u'e(wonder). Of course,
 if you could define "fake artificial intelligence" then you would have
 defined what "real artificial intelligence" is.

 Once you define what "real artificial intelligence" means, you can at
 least say what symptoms you would be willing to accept as satisfying it
 (a Turing test).

 If it's the Turing test you're after, as am I, then language is the
 key (I like stating the obvious, please humour me).

 Once we have established the goal -- a discussion between yourself and
 the computer in a language of choice -- we look at the options we have
 available: natural languages and artificial languages. Natural languages
 tend to be pretty ambiguous: hard to parse, hard to code for -- you can
 do it if you are a masochist, I don't mind .ui(happiness).

 Many/Most artificial languages suffer from similar if not the same
 kind of ambiguity, though because they are created they by definition
 can only have as many exceptions as were designed in.

 There is a promising subset of artificial languages: logical
 languages.  Logical languages adhere to some form of logic (usually
 predicate logic) and are a relatively new 

Re: [agi] The Singularity

2006-12-06 Thread Andrii (lOkadin) Zvorygin

  My aim is not a General AI; currently it's MJZT.  General AI just
seems to be a side effect .u'i(amusement). Nodes in a JZT communicate
through language (in whatever form it may take), and automation occurs of
the communication. After a certain point a typical JZT automation
would be able to have a conversation with an ordinary human, and the
human will have trouble seeing the JZT as an inferior entity (a revised
Turing test).



I'd like to note that, as a believer in Determinism, there is no real
difference between the automation and the real person, so
technically everything is an automation, including yourself and all
those around you.

pe'i(I opine) that this universe does not exist independently and so
is interconnected to other universes.  Meaning we may not have to
suffer the fate of our universe, and may live even after it has ended its
life cycle by uploading ourselves to outside universes. This will only
be achievable in a post-Singularity world, as we wouldn't have the
technological capacity to do so before then.

koJMIveko (be alive by your own standards)

--
ta'o(by the way)  We With You Network at: http://lokiworld.org .i(and)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-06 Thread John Scanlon
Hank - do you have any theories or AGI designs?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:



Alright, I have to say this.

I don't believe that the singularity is near, or that it will even occur.  I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.

Any arguments?
 


Have you read Ray Kurzweil? He doesn't just make things up. There are
plenty of reasons to believe in the Singularity.  Other than disaster
theories there really is no negative evidence I've ever come across.

"real artificial intelligence"

.u'i(amusement) A little bit of an oxymoron there.  It also seems to
imply there is "fake artificial intelligence" .u'e(wonder). Of course,
if you could define "fake artificial intelligence" then you would have
defined what "real artificial intelligence" is.

Once you define what "real artificial intelligence" means, you can at
least say what symptoms you would be willing to accept as satisfying it
(a Turing test).

If it's the Turing test you're after, as am I, then language is the
key (I like stating the obvious, please humour me).

Once we have established the goal -- a discussion between yourself and
the computer in a language of choice -- we look at the options we have
available: natural languages and artificial languages. Natural languages
tend to be pretty ambiguous: hard to parse, hard to code for -- you can
do it if you are a masochist, I don't mind .ui(happiness).

Many/Most artificial languages suffer from similar if not the same
kind of ambiguity, though because they are created they by definition
can only have as many exceptions as were designed in.

There is a promising subset of artificial languages: logical
languages.  Logical languages adhere to some form of logic (usually
predicate logic) and are a relatively new phenomenon (the first paper
on Loglan appeared in 1955; all logical languages I'm aware of are
derivatives).

The problem with Loglan is that it is proprietary, which brings us to
Lojban. Lojban will probably not be the final solution either, as there
is still some ambiguity in the lujvo (compound words).

I am currently working on a Lojban-Prolog hybrid language myself.

In predicate logic (as with logical languages) each sentence has a
predicate (a function, e.g. KLAma). Each predicate takes
arguments (SUMti).

If you type a logical sentence into an interpreter, it can perform
different actions depending on the kind of sentence.

Imperative statement: mu'a(for example) ko FANva zo VALsi
  meaning: be the translator of word VALsi

This isn't really enough information for you or me to give a reply with
any certainty, as we don't know the language to translate from or the
language to translate to, which brings us to:

Questions: mu'a  .i FANva zo VALsi ma ma
meaning: translation of word VALsi into what language, from what language?
(.e'o(request) make an effort to look at the Lojban. I know it's hard,
but it's essential for conveying the simplicity with which you can
make well-articulated, unambiguous statements in Lojban that are easy
to parse and interpret.)

To this question the user could reply: la.ENGlic. la.LOJban.
meaning: that which is named ENGlic, that which is named LOJban.

If the computer has the information about the translation, it will
return it. If not, it will ask the user to fill in the blank by asking
another question (mu'a .iFANva fuma).

There are almost 1300 root words (GISmu) in Lojban, with several hundred
CMAvo.  For my implementation of the language I will probably remove a
large number of these, as they are not necessary (mu'a SOFto, which means
Soviet) and should really go into name (CMEne) space (mu'a la.SOviet.)

The point being that there is only a finite number of functions that
have to be coded in order for the computer to be able to interpret and
act upon anything said to it (Lojban is already more expressive than a
large number of natural languages).

How is this all going to be programmed?

Declarative statements: mu'a FANva zo VALsi la.ENGlic. la.LOJban.
zoi.gy. word .gy.
meaning: the translation of word VALsi into ENGlic from LOJban is "word".

Now the computer knows this fact (held in a Prolog database until
there is a logical-speakable language compiler).

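A toy sketch of this fact store and its fill-in-the-blank behaviour, in
Python rather than the Prolog actually used (the fanva relation and argument
order are taken from the examples above; everything else is illustrative):

    # Facts: fanva(word, target_language, source_language, translation)
    facts = set()

    def assert_fact(word, to_lang, from_lang, translation):
        """Declarative statement: store the fact, like a Prolog assert."""
        facts.add((word, to_lang, from_lang, translation))

    def query(word, to_lang, from_lang):
        """Question: return the translation if known, else None."""
        for w, t, f, tr in facts:
            if (w, t, f) == (word, to_lang, from_lang):
                return tr
        return None  # the interpreter would reply with another question (ma)

    assert_fact("VALsi", "ENGlic", "LOJban", "word")
    print(query("VALsi", "ENGlic", "LOJban"))  # -> word
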
I will create a version of the interpreter in the Lojban-Prolog hybrid
language (I have a more or less finished Lojban parser written in
Prolog, and am now working on the Lojban-Prolog hybrid language).

Yes, I know I've dragged this out very far, but it was necessary for me
to reply to:

"The idea that very soon we can create a system that can understand its own
code"

Such as the one described above.

"and start programming itself is ludicrous."



Depends on what you see as the goal of programming. If 

Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:


I don't believe that the singularity is near, or that it will even occur.  I
am working very hard at developing real artificial general intelligence, but
from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.


First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:
<grin>
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
</grin>

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Richard Loosemore

John Scanlon wrote:

Alright, I have to say this.
 
I don't believe that the singularity is near, or that it will even 
occur.  I am working very hard at developing real artificial general 
intelligence, but from what I know, it will not come quickly.  It will 
be slow and incremental.  The idea that very soon we can create a system 
that can understand its own code and start programming itself is ludicrous.
 
Any arguments?


Back in 17th century Europe, people stood at the end of a long period of 
history (basically, all of previous history) during which curious humans 
had tried to understand how the world worked, but had largely failed to 
make substantial progress.


They had been suffering from an attitude problem:  there was something 
about their entire way of approaching the knowledge-discovery process 
that was wrong.  We now characterize their fault as being the lack of an 
objective scientific method.


Then, all of a sudden, people got it.

Once it started happening, it spread like wildfire.  Then it went into 
overdrive when Isaac Newton cross-bred the new attitude with a vigorous 
dose of mathematical invention.


My point?  That you can keep banging the rocks together for a very long 
time and feel like you are just getting nowhere, but then all of a 
sudden you can do something as simple as change your attitude or your 
methodology slightly, and wham!, everything starts happening at once.


For what it is worth, I do not buy most of Kurzweil's arguments about 
the general progress of the technology curves.


I don't believe in that argument for the singularity at all; I believe
that it will happen for a specific technological reason.


I think that there is something wrong with the attitude we have been 
adopting toward AI research, which is comparable to the attitude problem 
that divided the pre- and post-Enlightenment periods.


I have summarized a part of this argument in the paper that I wrote for 
the first AGIRI workshop.  The argument in that paper can be summarized 
as:  the first 30 years of AI was all about "scruffy" engineering, then 
the second 20 years of AI was all about "neat" mathematics, but because 
of the complex systems problem neither of these approaches would be 
expected to work, and what we need instead is a new attitude that is 
neither engineering nor math, but science. [This paper is due to be 
published in the AGIRI proceedings next year, but if anyone wants to 
contact me I will be able to send a not-for-circulation copy].


However, there is another, more broad-ranging way to look at the present 
situation, and that is that we have three research communities who do 
not communicate with one another:  AI Programmers, Cognitive Scientists 
(or Cognitive Psychologists) and Software Engineers.  What we need is a 
new science that merges these areas in a way that is NOT a lowest common 
denominator kind of merge.  We need people who truly understand all of 
them, not cross-travelling experts who mostly reside in one and (with 
the best will in the world) think they know enough about the others.


This merging of the fields has never happened before.  More importantly, 
the specific technical issue related to the complex systems problem (the 
need for science, rather than engineering or math) has also never been 
fully appreciated before.


Everything I say in this post may be wrong, but one thing is for sure: 
this new approach/attitude has not been tried before, so the 
consequences of taking it seriously and trying it are lying out there in 
the future, completely unknown.


I believe that this is something we just don't get yet.  When we do, I 
think we will start to see the last fifty years of AI research as 
equivalent to the era before 1665.  I think that AI will start to take 
off at breathtaking speed once the new attitude finally clicks.


The one thing that stops it from happening is the ego problem.  Too many 
people with too much invested in the supremacy they have within their 
own domain.  Frankly, I think it might only start to happen if we can 
take some people fresh out of high school and get them through a 
completely new curriculum, then get 'em through their Ph.D.s before they 
realise that all of the existing communities are going to treat them 
like lepers because they refuse to play the game. ;-)  But that would 
only take six years.


After we get it, in other words, *that* is when the singularity starts 
to happen.


If, on the other hand, all we have is the present approach to AI then I 
tend to agree with you John:  ludicrous.





Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

If, on the other hand, all we have is the present approach to AI then I
tend to agree with you John:  ludicrous.




Richard Loosemore


IMO it is not sensible to speak of "the present approach to AI"

There are a lot of approaches out there... not an orthodoxy by any means...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Hank Conn

Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?

It has been my experience that one's expectations on the future of
AI/Singularity are directly dependent upon one's understanding/design of AGI
and intelligence in general.

On 12/5/06, Ben Goertzel [EMAIL PROTECTED] wrote:


John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:

 I don't believe that the singularity is near, or that it will even
 occur.  I am working very hard at developing real artificial general
 intelligence, but from what I know, it will not come quickly.  It will
 be slow and incremental.  The idea that very soon we can create a
 system that can understand its own code and start programming itself
 is ludicrous.

First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:
<grin>
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
</grin>

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Charles D Hixson

Ben Goertzel wrote:

...
According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben
I do, however, have some question about it being a hard takeoff.  That 
depends largely on

1) how efficient the program is, and
2) what computer resources are available.

To me it seems quite plausible that an AGI might start out as slightly 
less intelligent than a normal person, or even considerably less 
intelligent, with the limitation being due to the available computer 
time.  Naturally, this would change fairly rapidly over time, but not 
exponentially so, or at least not super-exponentially so.


If, however, the singularity is delayed because the programs aren't 
ready, or are too inefficient, then we might see a true hard takeoff.  
In that case, by the time the program was ready, the computer resources 
that it needs would already be plentifully available.   This isn't 
impossible, if the program comes into existence in a few decades; but if 
the program comes into existence within the current decade, then there 
would be a soft takeoff.  If it comes into existence within the next 
half-decade, then I would expect the original AGI to be sub-normal, due 
to lack of available resources.


Naturally all of this is dependent on many different things.  If Vista 
really does require as much of an immense retooling to more powerful 
computers as some predict, then programs that aren't dependent on Vista 
will have more resources available, as computer designs are forced to be 
faster and more capacious.  (Wasn't Intel promising 50 cores on a single 
chip in a decade?  If each of those cores is as capable as a current 
single core, then it will take far fewer computers netted together to 
pool the same computing capacity...for those programs so structured as 
to use the capacity.)


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Andrii (lOkadin) Zvorygin

On 12/5/06, Richard Loosemore [EMAIL PROTECTED] wrote:

Ben Goertzel wrote:
 If, on the other hand, all we have is the present approach to AI then I
 tend to agree with you John:  ludicrous.




 Richard Loosemore

 IMO it is not sensible to speak of "the present approach to AI"

 There are a lot of approaches out there... not an orthodoxy by any means...

I'm aware of the different approaches, and of how very, very different
they are from one another.

But by contrast with the approach I am advocating, they all look like
orthodoxy.  There is a *big* difference between the two sets of ideas.


In that context, and only in that context, it makes sense to talk about
"the present approach to AI".



Richard Loosemore.



Is there anywhere I could find a list and description of these
different kinds of AI? .a'u(interest) I'm sure I could learn a lot, as
I'm rather new to the field.  I'm in
Second-year undergrad,
Majoring in Cognitive Sciences,
Specializing in Artificial Intelligence,
York University, Toronto, Canada.

So I think such a list would be very beneficial for beginners like me
.ui(happiness)
ki'e(thanks) in advance.

--
ta'o(by the way)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Pei Wang

See http://www.agiri.org/forum/index.php?showtopic=44 and
http://www.cis.temple.edu/~pwang/203-AI/Lecture/AGI.htm

Pei

On 12/5/06, Andrii (lOkadin) Zvorygin [EMAIL PROTECTED] wrote:


Is there anywhere I could find a list and description of these
different kinds of AI? .a'u(interest) I'm sure I could learn a lot, as
I'm rather new to the field.  I'm in
Second-year undergrad,
Majoring in Cognitive Sciences,
Specializing in Artificial Intelligence,
York University, Toronto, Canada.

So I think such a list would be very beneficial for beginners like me
.ui(happiness)
ki'e(thanks) in advance.

--
ta'o(by the way)
more on Lojban: http://lojban.org
mu'oimi'e lOkadin (Over, my name is lOkadin)




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread Matt Mahoney

--- John Scanlon [EMAIL PROTECTED] wrote:

 Alright, I have to say this.
 
 I don't believe that the singularity is near, or that it will even occur.  I
 am working very hard at developing real artificial general intelligence, but
 from what I know, it will not come quickly.  It will be slow and
 incremental.  The idea that very soon we can create a system that can
 understand its own code and start programming itself is ludicrous.
 
 Any arguments?

Not very soon, maybe 10 or 20 years.  General programming skills will first
require an adult-level language model and intelligence, something that could
pass the Turing test.

Currently we can write program-writing programs only in very restricted
environments with simple, well-defined goals (e.g. genetic algorithms).  This
is not sufficient for recursive self-improvement.  The AGI will first need to
be at the intellectual level of the humans who built it.  This means
sufficient skills to do research, to write programs from ambiguous natural
language specifications, and to have enough world knowledge to figure out what
the customer really wanted.
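
A minimal example of the restricted setting meant here: a genetic algorithm
evolving a bit string toward a fixed, fully specified target.  The goal fits
in one line, which is exactly why this is nothing like writing programs from
ambiguous specifications (all names below are illustrative):

    import random

    TARGET = [1] * 32  # the entire "specification": simple and well defined

    def fitness(genome):
        return sum(g == t for g, t in zip(genome, TARGET))

    def evolve(pop_size=50, generations=100, mutation_rate=0.02, seed=0):
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # truncation selection
            pop = []
            for _ in range(pop_size):
                a, b = rng.sample(parents, 2)
                cut = rng.randrange(len(TARGET))
                child = a[:cut] + b[cut:]           # one-point crossover
                pop.append([g ^ (rng.random() < mutation_rate) for g in child])
        return max(pop, key=fitness)

    print(fitness(evolve()), "of", len(TARGET))

The search works only because the fitness function is total, cheap, and
unambiguous -- none of which holds for "what the customer really wanted."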


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Your message appeared at first to be rambling and incoherent, but I see that 
that's probably because English is a second language for you.  But that's 
not a problem if your ideas are solid.


Yes, there is fake artificial intelligence out there: systems that are 
purported to be intelligent but aren't and can't be, because they are dead 
ends.  A big example of this is Cyc.  And there are others.


The Turing test is a bad test for AI.  The reasons for this have already 
been brought up on this mailing list.  I could go into the criticisms 
myself, but there are other people here who have already spoken well on the 
subject.


And yes, language is an essential part of any intelligent system.  But 
there is another part you haven't mentioned -- the actual intelligence that 
can understand and manipulate language.  Intelligence is not just parsing 
and logic.  It is imagination and visualization that relates words to their 
referents in the real world.


What is your idea of how this imagination and visualization that relates 
language to phenomena in the real world can be engineered in software in 
such a way that the singularity will be brought about?



Andrii (lOkadin) Zvorygin wrote:


On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:


Alright, I have to say this.

I don't believe that the singularity is near, or that it will even occur.
I am working very hard at developing real artificial general intelligence,
but from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.

Any arguments?
 


Have you read Ray Kurzweil? He doesn't just make things up. There are
plenty of reasons to believe in the Singularity.  Other than disaster
theories there really is no negative evidence I've ever come across.

"real artificial intelligence"

.u'i(amusement) A little bit of an oxymoron there.  It also seems to
imply there is "fake artificial intelligence" .u'e(wonder). Of course,
if you could define "fake artificial intelligence" then you would have
defined what "real artificial intelligence" is.

Once you define what "real artificial intelligence" means, you can at
least say what symptoms you would be willing to accept as satisfying it
(a Turing test).

If it's the Turing test you're after, as am I, then language is the
key (I like stating the obvious, please humour me).

Once we have established the goal -- a discussion between yourself and
the computer in a language of choice -- we look at the options we have
available: natural languages and artificial languages. Natural languages
tend to be pretty ambiguous: hard to parse, hard to code for -- you can
do it if you are a masochist, I don't mind .ui(happiness).

Many/Most artificial languages suffer from similar if not the same
kind of ambiguity, though because they are created they by definition
can only have as many exceptions as were designed in.

There is a promising subset of artificial languages: logical
languages.  Logical languages adhere to some form of logic (usually
predicate logic) and are a relatively new phenomenon (the first paper
on Loglan appeared in 1955; all logical languages I'm aware of are
derivatives).

The problem with Loglan is that it is proprietary, which brings us to
Lojban. Lojban will probably not be the final solution either, as there
is still some ambiguity in the lujvo (compound words).

I am currently working on a Lojban-Prolog hybrid language myself.

In predicate logic (as with logical languages) each sentence has a
predicate (a function, e.g. KLAma). Each predicate takes
arguments (SUMti).

If you type a logical sentence into an interpreter, it can perform
different actions depending on the kind of sentence.

Imperative statement: mu'a(for example) ko FANva zo VALsi
  meaning: be the translator of word VALsi

This isn't really enough information for you or me to give a reply with
any certainty, as we don't know the language to translate from or the
language to translate to, which brings us to:

Questions: mu'a  .i FANva zo VALsi ma ma
meaning: translation of word VALsi into what language, from what language?
(.e'o(request) make an effort to look at the Lojban. I know it's hard,
but it's essential for conveying the simplicity with which you can
make well-articulated, unambiguous statements in Lojban that are easy
to parse and interpret.)

To this question the user could reply: la.ENGlic. la.LOJban.
meaning: that which is named ENGlic, that which is named LOJban.

If the computer has the information about the translation, it will
return it. If not, it will ask the user to fill in the blank by asking
another question (mu'a .iFANva fuma).

There are almost 1300 root words (GISmu) in Lojban, with several hundred
CMAvo.  For my implementation of the language I will probably remove a
large number of these, as they are not necessary (mu'a SOFto, which means
Soviet) and should really go into name (CMEne) space (mu'a la.SOviet.)

The point 

Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
I'm a little bit familiar with Piaget, and I'm guessing that the formal 
stage of development is something on the level of a four-year-old child. 
If we could create an AI system with the intelligence of a four-year-old 
child, then we would have a huge breakthrough, far beyond anything done so 
far in a computer.  And we would be approaching a possible singularity. 
It's just that I see no evidence anywhere of this kind of breakthrough, or 
anything close to it.


My ideas are certainly inadequate in themselves at the present time.  My 
Gnoljinn project is just about at the point where I can start writing the 
code for the intelligence engine.  The architecture is in place, the 
interface language, Jinnteera, is being parsed, images are being sent into 
the Gnoljinn server (along with linguistic statements) and are being 
pre-processed.  The development of the intelligence engine will take time, a 
lot of coding, experimentation, and re-coding, until I get it right.  It's 
all experimental, and will take time.


I see a singularity, if it occurs at all, to be at least a hundred years 
out.  I know you have a much shorter time frame.  But what is it about 
Novamente that will allow it in a few years' time to comprehend its own 
computer code and intelligently re-write it (especially a system as complex 
as Novamente)?  The artificial intelligence problem is much more difficult 
than most people imagine it to be.



Ben Goertzel wrote:


John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote:


I don't believe that the singularity is near, or that it will even occur.
I am working very hard at developing real artificial general intelligence,
but from what I know, it will not come quickly.  It will be slow and
incremental.  The idea that very soon we can create a system that can
understand its own code and start programming itself is ludicrous.


First, since my birthday is just a few days off, I'll permit myself an
obnoxious reply:
<grin>
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
</grin>

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] The Singularity

2006-12-05 Thread Ben Goertzel

I see a singularity, if it occurs at all, to be at least a hundred years
out.


To use Kurzweil's language, you're not "thinking in exponential time"  ;-)


The artificial intelligence problem is much more difficult
than most people imagine it to be.


Most people have close to zero basis to even think about the topic
in a useful way.

And most professional, academic or industry AI folks are more
pessimistic than you are.


 But what is it about
Novamente that will allow it in a few years' time to comprehend its own
computer code and intelligently re-write it (especially a system as complex
as Novamente)?


I'm not going to try to summarize the key ideas underlying Novamente
in an email.  I have been asked to write a nontechnical overview of
the NM approach to AGI for a popular website, and may find time for it
later this month... if so, I'll post a link to this list.

Obviously, I think I have solved some fundamental issues related to
implementing general cognition on contemporary computers.  I believe
the cognitive mechanisms designed for NM will be adequate to lead to
the emergence within the system of the key emergent structures of mind
(self, will, focused awareness), and from these key emergent
structures comes the capability for ever-increasing intelligence.

Specific timing estimates for NM are hard to come by -- especially
because of funding vagaries (currently progress is steady but slow for
this reason), and because of the general difficulty of estimating the
rate of progress of any large-scale software project ... not to mention
various research uncertainties.  But 100 years is way off.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] The Singularity

2006-12-05 Thread John Scanlon
Hank,

Do you have a personal understanding/design of AGI and intelligence in 
general that predicts a soon-to-come singularity?  Do you have theories or a 
design for an AGI?

John



Hank Conn wrote:

  It has been my experience that one's expectations on the future of 
AI/Singularity are directly dependent upon one's understanding/design of AGI and 
intelligence in general.
   
  On 12/5/06, Ben Goertzel [EMAIL PROTECTED] wrote: 
John,

On 12/5/06, John Scanlon [EMAIL PROTECTED] wrote: 

 I don't believe that the singularity is near, or that it will even occur.
 I am working very hard at developing real artificial general intelligence,
 but from what I know, it will not come quickly.  It will be slow and
 incremental.  The idea that very soon we can create a system that can
 understand its own code and start programming itself is ludicrous.

First, since my birthday is just a few days off, I'll permit myself an 
obnoxious reply:
<grin>
Ummm... perhaps your skepticism has more to do with the inadequacies
of **your own** AGI design than with the limitations of AGI designs in
general?
</grin>

Seriously: I agree that progress toward AGI will be incremental, but
the question is how long each increment will take.  My bet is that
progress will seem slow for a while -- and then, all of a sudden,
it'll seem shockingly fast.  Not necessarily "hard takeoff in 5
minutes" fast, but at least "Wow, this system is getting a lot smarter
every single week -- I've lost my urge to go on vacation" fast ...
leading up to the phase of "Suddenly the hard takeoff is a topic for
discussion **with the AI system itself**" ...

According to my understanding of the Novamente design and artificial 
developmental psychology, the breakthrough from slow to fast
incremental progress will occur when the AGI system reaches Piaget's
formal stage of development:

http://www.agiri.org/wiki/index.php/Formal_Stage

At this point, the human-child-like intuition of the AGI system will
be able to synergize with its computer-like ability to do formal
syntactic analysis, and some really interesting stuff will start to
happen (deviating pretty far from our experience with human cognitive
development).

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-29 Thread Ben Goertzel

Hi,


There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it:  I am proposing a general
*class* of architectures for an AI-with-motivational-system.  I am not
saying that this is a specific instance (with all the details nailed
down) of that architecture, but an entire class, an approach.

However, as I explain in detail below, most of your criticisms are that
there MIGHT be instances of that architecture that do not work.


No.   I don't see why there will be any instances of your architecture
that do work (in the sense of providing guaranteeable Friendliness
under conditions of radical, intelligence-increasing
self-modification).

And you have not given any sort of rigorous argument that such
instances will exist

Just some very hand-wavy, intuitive suggestions, centering on the
notion that (to paraphrase) "because there are a lot of constraints, a
miracle happens"  ;-)

I don't find your intuitive suggestions foolish or anything, just
highly sketchy and unconvincing.

I would say the same about Eliezer's attempt to make a Friendly AI
architecture in his old, now-repudiated-by-him essay "Creating a
Friendly AI".  A lot in CFAI seemed plausible to me, and the intuitive
arguments were more fully fleshed out than yours in your email
(naturally, because it was an article, not an email) ... but in the
end I felt unconvinced, and Eliezer eventually came to agree with me
(though not on the best approach to fixing the problems)...


  In a radically self-improving AGI built according to your
  architecture, the set of constraints would constantly be increasing in
  number and complexity ... in a pattern based on stimuli from the
  environment as well as internal stimuli ... and it seems to me you
  have no way to guarantee based on the smaller **initial** set of
  constraints, that the eventual larger set of constraints is going to
  preserve Friendliness or any other criterion.

On the contrary, this is a system that grows by adding new ideas whose
motivational status must be consistent with ALL of the previous ones, and
the longer the system is allowed to develop, the deeper the new ideas
are constrained by the sum total of what has gone before.


This does not sound realistic.  Within realistic computational
constraints, I don't see how an AI system is going to verify that each
of its new ideas is consistent with all of its previous ideas.

This is a specific issue that has required attention within the
Novamente system.  In Novamente, each new idea is specifically NOT
required to be verified for consistency against all previous ideas
existing in the system, because this would make the process of
knowledge acquisition computationally intractable.  Rather, it is
checked for consistency against those other pieces of knowledge with
which it directly interacts.  If an inconsistency is noticed, in
real-time, during the course of thought, then it is resolved
(sometimes by a biased random decision, if there is not enough
evidence to choose between two inconsistent alternatives; or
sometimes, if the matter is important enough, by explicitly
maintaining two inconsistent perspectives in the system, with separate
labels, and an instruction to pay attention to resolving the
inconsistency as more evidence comes in.)
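
A sketch of the difference between the two policies, using a toy knowledge
base (the graph structure and the conflict test are illustrative, not
Novamente's actual representation):

    # Each item asserts a (subject, value) pair; two items conflict when
    # they assign different values to the same subject.
    class KB:
        def __init__(self):
            self.items = []        # all assertions
            self.by_subject = {}   # subject -> values already asserted

        def add(self, subject, value):
            # Local check: compare only against directly interacting items,
            # i.e. those sharing the subject -- O(degree), not O(n).
            conflicts = [v for v in self.by_subject.get(subject, []) if v != value]
            if conflicts:
                self.resolve(subject, value, conflicts)
            self.items.append((subject, value))
            self.by_subject.setdefault(subject, []).append(value)

        def resolve(self, subject, value, conflicts):
            # Placeholder for the strategies described above: a biased random
            # choice, or keeping both views labelled for later resolution.
            pass

The global alternative -- checking each new item against every existing
item -- is the O(n^2) policy criticized just below.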

The kind of distributed system you are describing seems NOT to solve
the computational problem of verifying the consistency of each new
knowledge item with each other knowledge item.



Thus:  if the system has grown up and acquired a huge number of examples
and ideas about what constitutes good behavior according to its internal
system of values, then any new ideas about new values must, because of
the way the system is designed, prove themselves by being compared
against all of the old ones.


If each idea must be compared against all other ideas, then cognition
has order n^2 where n is the number of ideas.  This is not workable.
Some heuristic shortcuts must be used to decrease the number of
comparisons, and such heuristics introduce the possibility of error...


And I said "ridiculously small chance" advisedly:  if 10,000 previous
constraints apply to each new motivational idea, and if 9,900 of them
say 'Hey, this is inconsistent with what I think is a good thing to do',
then it doesn't have a snowball's chance in hell of getting accepted.
THIS is the deep potential well I keep referring to.


The problem, as I said, is posing a set of constraints that is both
loose enough to allow innovative new behaviors, and tight enough to
prevent the wrong behaviors...


I maintain that we can, during early experimental work, understand the
structure of the motivational system well enough to get it up to a
threshold of acceptably friendly behavior, and that beyond that point
its stability will be self-reinforcing, for the above reasons.


Well, I hope so ;-)

I don't rule out the possibility, but I don't feel you've argued for
it convincingly, 

Re: [agi] the Singularity Summit and regulation of AI

2006-05-11 Thread Bill Hibbard
Thank you for your responses.

Jeff, I have taken your suggestion and sent a couple
questions to the Summit. My concern is motivated by
noticing that the Summit includes speakers who have
been very clear about their opposition to regulating
AI, but none who I am aware of who have advocated it
(except Bill McKibben, who wants a total ban).

Ben, I was surprised not to see you, or several other
frequent AGI contributors, among the speakers.

Eliezer, glad to hear that you tried to get Bill Joy.
But like Bill McKibben, he favors a total ban on AI,
nanotechnology and genetic engineering. James Hughes,
and others such as myself, want the benefits of these
technologies but to regulate them to avoid potential
catastrophes.

Hopefully some of the non-speaking participants at
the Summit will express the point of view in favor
of proceeding with AI but regulating it.

Bill
http://www.ssec.wisc.edu/~billh/g/Singularity_Notes.html

---
To unsubscribe, change your address, or temporarily deactivate your 
subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Mark Walker


- Original Message -
From: Bill Hibbard
Subject: [agi] the Singularity Summit and regulation of AI




I am concerned that the Singularity Summit will not include
any speaker advocating government regulation of intelligent
machines. The purpose of this message is not to convince you
of the need for such regulation, but just to say that the
Summit should include someone speaking in favor of it. Note
that, to be effective, regulation should be linked to a
widespread public movement like the environmental and
consumer safety movements. Intelligent weapons could be
regulated by treaties similar to those for nuclear, chemical
and biological weapons.

The obvious choice to advocate this position would be James
Hughes, and it is puzzling that he is not included among the
speakers. 



Bill Hibbard is another obvious choice.

Cheers,
Mark


Dr. Mark Walker
Department of Philosophy
University Hall 310
McMaster University
1280 Main Street West
Hamilton, Ontario, L8S 4K1
Canada

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Ben Goertzel

On 5/10/06, Bill Hibbard [EMAIL PROTECTED] wrote:

I am concerned that the Singularity Summit will not include
any speaker advocating government regulation of intelligent
machines. The purpose of this message is not to convince you
of the need for such regulation, but just to say that the
Summit should include someone speaking in favor of it.

...


The Singularity Summit should include all points of
view, including advocates for regulation of intelligent
machines. It will weaken the Summit to exclude this
point of view.


In fairness to the organizers, I would note that it is a brief event
and all possible points of view cannot possibly be represented within
such a brief period of time.

As an aside, I certainly would have liked to be invited to speak
regarding the implication of AGI for the Singularity, but I understand
that they simply had a very small number of speaking slots: it's a
one-day conference.

I agree that if they have a series of similar events, then in time one
of them should include someone advocating government regulation of
intelligent machines, as this is a meaningful viewpoint deserving to
be heard.   I don't agree that this issue is so high-priority that
leaving it out of this initial one-day event is a big problem...

-- Ben G

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] the Singularity Summit and regulation of AI

2006-05-10 Thread Russell Wallace
On 5/10/06, Bill Hibbard [EMAIL PROTECTED] wrote:

The Singularity Summit should include all points of view, including
advocates for regulation of intelligent machines. It will weaken the
Summit to exclude this point of view.
Then it would be better if the Summit were not held at all. Nanotech,
AGI, etc. advanced enough that constructive discussion of regulations
would be possible, even if one agreed with them in principle, are still
a very long way from even being on the horizon; talk of the Singularity
right now is wildly premature as anything other than inspiring science
fiction; and blindly slapping on regulations at this point increases
the probability that humanity will simply die without ever getting near
the Singularity.

Will the Summit include that point of view?


To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]