Directions for AGI research, was Re: [agi] Why so few AGI projects?

2006-09-14 Thread sam kayley



Joshua Fox wrote:


What would s/he say if I asked "Why do you not pursue or support AGI 
research? Even if you believe that implementation is a long way off, 
surely academia can study, and has studied for thousands of years, 
impractical but interesting pie-in-the-sky topics, including human 
cognition? And AGI, if nothing else, models (however partially and 
imperfectly with our contemporary technology) essential aspects of 
some philosophically very important problems."
It seems to me the reason for this lack of activity is a lack of 
credible lines of research, other than continuing existing narrow AI and 
cognitive science work, hopefully with extra efforts to encourage 
cross-pollination.


A list of ideas for what academia should be doing, other than giving 
people million-dollar grants for programming systems they cannot make a 
good case will do anything interesting, might help. I list a few off the 
top of my head below; feel free to revise my list:


Tractable subcases of Bayesian/KC/decision theory methods, as pursued by 
Marcus Hutter (a sketch of the underlying decision rule follows this list)
Reflectivity in Bayesian/KC/decision theory methods, as pursued by 
Eliezer Yudkowsky

Dynamics of concepts, Douglas Hofstadter
Brain simulation, Blue Brain project
Common sense reasoning
AI intelligence tests, Shane Legg
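
For readers who have not met Hutter's framework: the reference point behind
the first item above is, as far as I understand it, the AIXI decision rule,
which weights every program q consistent with the interaction history by
2^-length(q) and picks the action maximizing expected reward up to a
horizon m:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m)
         \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The "tractable subcases" are then time- and length-bounded restrictions of
this incomputable ideal (AIXItl and relatives), which is roughly where the
open research questions sit.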



Re: [agi] Why so few AGI projects?

2006-09-14 Thread Christophe Devine
Joshua Fox <[EMAIL PROTECTED]> wrote:

> I'd like to raise a FAQ: Why is so little AGI research and development being
> done?

Perhaps it's just a matter of faith -- some believe in it, and some don't ;-)

-- Christophe



Re: [agi] Why so few AGI projects?

2006-09-14 Thread Shane Legg
Eliezer,

> Shane, what would you do if you had your headway?  Say, you won the
> lottery tomorrow (ignoring the fact that no rational person would buy a
> ticket).  Not just "AGI" - what specifically would you sit down and do
> all day?

I've got a list of things I'd like to be working on.  For example, I'd
like to try to build a universal test of machine intelligence. I've also
got ideas in the area of genetic algorithms, neural network architectures,
and some more theoretical things related to complexity theory and AI.  I
also want to spend more time learning neuroscience.  I think my best shot
at building an AGI will involve bringing ideas from many of these areas
together.
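
For readers who haven't seen this line of work: Legg and Hutter's formal
measure of machine intelligence scores an agent by its expected reward
across a space of environments, weighted by each environment's simplicity
(roughly 2^-K(mu)). The Python toy below only illustrates that general
shape on a hand-picked pair of environments with invented complexity
weights; it is a sketch, not the actual test Shane has in mind.

import random

def env_constant(action):
    # Rewards action 0 on every step (an invented, trivially simple environment).
    return 1.0 if action == 0 else 0.0

def env_alternating_factory():
    # Builds an environment that rewards 0, then 1, then 0, ... (also invented).
    state = {"want": 0}
    def env(action):
        r = 1.0 if action == state["want"] else 0.0
        state["want"] = 1 - state["want"]
        return r
    return env

def value(agent, env, steps=50, seed=0):
    # Average per-step reward of the agent in this environment, in [0, 1].
    random.seed(seed)
    total = 0.0
    for _ in range(steps):
        total += env(agent())
    return total / steps

def universal_score(agent, weighted_env_factories):
    # weighted_env_factories: list of (approximate complexity in bits, factory).
    # Score ~= sum over environments of 2^(-complexity) * value(agent, env).
    return sum(2.0 ** (-k) * value(agent, factory())
               for k, factory in weighted_env_factories)

if __name__ == "__main__":
    envs = [(2, lambda: env_constant), (3, env_alternating_factory)]
    always_zero = lambda: 0
    coin_flip = lambda: random.randint(0, 1)
    print("always-0 agent:", universal_score(always_zero, envs))
    print("random agent:  ", universal_score(coin_flip, envs))

In the real formulation the sum ranges over all computable reward-bounded
environments and K is Kolmogorov complexity, so any practical test has to
approximate both the environment class and the weights.
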
> Indeed not.  It takes your first five years simply to figure out which
> way is up.  But Shane, if you restrict yourself to results you can
> regularly publish, you couldn't work on what you really wanted to do,
> even if you had a million dollars.

If I had a million dollars I wouldn't care so much about my "career" as I
wouldn't be dependent on the academic system to pay my bills.  As such I'd
only publish once, or perhaps twice, a year and would spend more time on
areas of research that were more likely to fail or would require large
time investments before seeing results.

Shane



Re: [agi] Why so few AGI projects?

2006-09-14 Thread Joshua Fox
Thanks, all, for those insightful answers. In combination with the published discussion of the topic, this thread is enlightening. Still, to push the point, I am fantasizing a conversation with a Hypothetical Open-Minded World-Renowned Eloquent Cognitive Scientist (Howecs). Surely there must be a few of these out there. Daniel Dennett comes to mind, though I hesitate to focus on any one person's ideas.
I am setting aside the herd-followers, the nine-to-fivers, and the outliers for the purposes of this discussion. Using Pei's points as a convenient summary, Professor Howecs would relate as follows to common objections.
1. "AGI is impossible" & 2. "There is no such a thing as general intelligence" --- Howecs can recognize that AGI is probably possible in principle -- and if it is impossible, that unsuccessful attempts will bring insights on fundamental philosophical questions which scholars have been working on for centuries.
3. "General-purpose systems are not as good as special-purpose ones" --- Howecs would recognize that performance and efficiency are not needed for philosophical questions, which are what he is professionally most interested in.
4. "AGI is already included in the current AI" --- Howecs would recognize that if AI subfield X is the secret to AGI, then X is just the correct path to take to AGI, and X research is the equivalent of AGI research.
5. "It is too early to work on AGI" --- Howecs is either a philosophy professor or so advanced in his field that his work impinges on philosophy, so working on pie-in-the-sky topics does not bother him at all.

6. "AGI is nothing but hype" --- Howecs knows to separate hype from reality and knows that past over-hyped projects do not obviate the value of a scientific field. Carl Sagan dealt heavily in SETI, even though this has attracted lots of sci-fi, lots of weirdos, and lots of failure -- and surely Sagan would qualify as a Howecs in his field.
7. "AGI research is not fruitful --- it is hard to get result, support, reward, ..." -- Howecs can muster funding for himself and his students at will, and is fearless of public opinion.  He can choose sub-topics which will give interim results; he, as an opinion-leader, will make the world respect these. (Note that in academia,  a well-argued paper in itself can be considered a "result." Implementable technologies or rigorous proofs are not always needed, as long as the relevant academic community is interested in the ideas.)
8. "AGI is dangerous" --- Think of how the greatest of  nuclear physicists and microbiologists reacted to potentially dangerous technologies. Howecs, first, is too scientifically curious to let the fear drive him away; and second, he knows the importance of mitigating the dangers.
So, where are all the Howecses speaking up for AGI research?Joshua





Re: [agi] Why so few AGI projects?

2006-09-13 Thread Eliezer S. Yudkowsky

Shane Legg wrote:


Funding is certainly a problem.  I'd like to work on my own AGI ideas
 after my PhD is over next year... but can I get money to do that?
Probably not.  So as a compromise I'll have to work on something else
in AI during the day, and spend my weekends doing the stuff I'd
really like to be doing. Currently I code my AI at nights and
weekends.


Shane, what would you do if you had your headway?  Say, you won the 
lottery tomorrow (ignoring the fact that no rational person would buy a 
ticket).  Not just "AGI" - what specifically would you sit down and do 
all day?  If there's somewhere online that already answers this or a 
previous AGI message I should read, just point.



Pressure to publish is also a problem.  I need results on a regular
basis that I can publish otherwise my career is over.  AGI is not
really short term results friendly.


Indeed not.  It takes your first five years simply to figure out which 
way is up.  But Shane, if you restrict yourself to results you can 
regularly publish, you couldn't work on what you really wanted to do, 
even if you had a million dollars.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Why so few AGI projects?

2006-09-13 Thread Bob Mottram

I think that's an insightful summary which matches my experience of
people doing academic research on AI very well.  There are
exceptionally few of the "hard core" people who are just
relentlessly pursuing it year after year.  Many people doing
computer science courses take an interest for a year or two, and then
decide it's too hard and go on to do more sensible things such as
database programming instead, which is more likely to pay the bills and
give you a respectable career.

- Bob
On 13/09/06, Shane Legg <[EMAIL PROTECTED]> wrote:
This is a question that I've thought about from time to time.  The
conclusion I've come to is that there isn't really one or two reasons,
there are many.

Surprisingly, most people in academic AI aren't really all that into AI.
It's a job.  It's more interesting than doing database programming in a
bank, but at the end of the day it's just a job.  They're not out to
change the world or do anything amazing; it's hard enough just trying to
get a paper into conference X or Y.  It's true that they are skeptical
about whether AI will make large progress towards human level
intelligence in their lifetimes; however, I think the more important
point is that they simply don't even think about this question.  They're
just not interested.  I'd say that this is about 19 out of every 20
people in academic AI.  Of course there are thousands of people working
in academic AI around the world, so the remaining 1 in 20 is still a
sizable number of people in total.





Re: [agi] Why so few AGI projects?

2006-09-13 Thread Shane Legg
This is a question that I've thought about from time to time.  The
conclusion I've come to is that there isn't really one or two reasons,
there are many.

Surprisingly, most people in academic AI aren't really all that into AI.
It's a job.  It's more interesting than doing database programming in a
bank, but at the end of the day it's just a job.  They're not out to
change the world or do anything amazing; it's hard enough just trying to
get a paper into conference X or Y.  It's true that they are skeptical
about whether AI will make large progress towards human level
intelligence in their lifetimes; however, I think the more important
point is that they simply don't even think about this question.  They're
just not interested.  I'd say that this is about 19 out of every 20
people in academic AI.  Of course there are thousands of people working
in academic AI around the world, so the remaining 1 in 20 is still a
sizable number of people in total.

Funding is certainly a problem.  I'd like to work on my own AGI ideas
after my PhD is over next year... but can I get money to do that?
Probably not.  So as a compromise I'll have to work on something else in
AI during the day, and spend my weekends doing the stuff I'd really like
to be doing.  Currently I code my AI at nights and weekends.

Pressure to publish is also a problem.  I need results on a regular basis
that I can publish, otherwise my career is over.  AGI is not really short
term results friendly.

Another thing is visibility.  Of the academic people I know who are
trying to build a general artificial intelligence (although probably not
saying quite that in their papers), I would be surprised if any of them
were known to anybody on this list.  These are non-famous young
researchers, and because they can't publish papers saying that they want
to build a thinking machine, you'd only know this if you were to meet
them in person.

One thing that people who are not involved in academic AI often don't
appreciate is just how fractured the field is.  I've seen plenty of
examples where there are two sub-fields that are doing almost the same
thing but which are using different words for things, going to different
conferences, and citing different sets of people.  I bring this up
because I sometimes get the feeling that some people think that "academic
AI" is some sort of definable group.  In reality, most academics' lack of
knowledge about AGI is no different from their lack of knowledge of many
other areas of AI.  In other words, they aren't ignoring AGI any more
than they are ignoring twenty other areas in the field.

Shane



RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss
AGI ideas that are well developed can be quite concrete, as well as having
payoffs in the near future. Our project's business plan aims to do both.

Peter


-Original Message-
From: Eliezer S. Yudkowsky [mailto:[EMAIL PROTECTED] 

Additional factor:  AGI ideas are often vague or analogical.  Even the 
ideas with mathematically describable internals are often vague in the 
explanation of what they are supposed to do, or why they are supposed to 
be "intelligent".  It would be harder to cooperate on a project like 
that, than on developing a faster sorting algorithm.  Fuzzy beliefs are 
harder to communicate; communication is the essence of cooperation.


-Original Message-
From: Neil H. [mailto:[EMAIL PROTECTED] 

Of course, one might also argue that they simply didn't venture far
enough to see the proverbial "light at the end of the tunnel." I
suppose one of the downsides about AGI is that, unlike more focused AI
research (vision, NLP, etc), there really aren't any intermediate
payoffs between now and the "holy grail."




Re: [agi] Why so few AGI projects?

2006-09-13 Thread Neil H.

I recall a recruiter from a CS PhD program (maybe UW?) citing that AI
students take one year longer on average to complete their PhD because
they spend the first year convinced that they've struck upon an idea
which is going to be the solution to general AI. I agree with this
assessment -- I spent most of last year in such a phase. I'd argue
that this is the reason for the lack of interest -- they've already
ventured down that road and have found it to be fruitless.

Of course, one might also argue that they simply didn't venture far
enough to see the proverbial "light at the end of the tunnel." I
suppose one of the downsides about AGI is that, unlike more focused AI
research (vision, NLP, etc), there really aren't any intermediate
payoffs between now and the "holy grail."

On 9/13/06, Joshua Fox <[EMAIL PROTECTED]> wrote:

I'd like to raise a FAQ: Why is so little AGI research and development being
done?

The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree on
this (no need to repeat them here), and I've read Are We Spiritual Machines,
but I come away unsatisfied. (Still, if there is nothing more to say on this
question, please do the AGIRI-equivalent of sniping this thread
immediately.)

I respect existing AGI researchers, but I am surprised that more members of
the "establishment" are not on board. I just can't believe that , for
example, almost all leading
computer-science/cognitive-science professors are
herd-following closed-minded stuck-in-the-muds. The leading universities do
have their share of creative, free-thinking, inquisitive people, and the
same goes for other parts of the "establishment".


To clarify what I am looking for, I should describe a recent conversation. I
spoke to an open-minded and intelligent friend who has a PhD from, and does
research in, a top university. The research is in exactly the sort of
technologies used in brain-scanning. I asked him about Kurzweil's trends on
the accelerating advance of human-brain-scanning technologies. He did not
agree with Kurzweil's conclusions, and explained why.

 Likewise, I'm looking for input from an open-minded, intelligent,
computer/cognitive scientist (who does not strongly support AGI research) on
the above question. I don't know where to find them, so perhaps someone on
this list could role-play one.

What would s/he say if I asked "Why do you not pursue or support AGI
research? Even if you believe that implementation is a long way off, surely
academia can study, and has studied for thousands of years, impractical but
interesting pie-in-the-sky topics, including human cognition? And AGI, if
nothing else, models (however partially and imperfectly with our
contemporary technology) essential aspects of some philosophically very
important problems."


Thanks,

Joshua
  




Re: [agi] Why so few AGI projects?

2006-09-13 Thread Eliezer S. Yudkowsky

Pei Wang wrote:

Why in other fields of AI, or CS in general, do many people work on
other people's ideas?

I guess the AGI ideas are still not convincing and attractive enough
to other people.


Additional factor:  AGI ideas are often vague or analogical.  Even the 
ideas with mathematically describable internals are often vague in the 
explanation of what they are supposed to do, or why they are supposed to 
be "intelligent".  It would be harder to cooperate on a project like 
that, than on developing a faster sorting algorithm.  Fuzzy beliefs are 
harder to communicate; communication is the essence of cooperation.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Why so few AGI projects?

2006-09-13 Thread Pei Wang

Why in other fields of AI, or CS in general, do many people work on
other people's ideas?

I guess the AGI ideas are still not convincing and attractive enough
to other people.

Pei

On 9/13/06, Andrew Babian <[EMAIL PROTECTED]> wrote:

> PS. http://adaptiveai.com/company/opportunities.htm

This also reminds me of something, and I know it's true of myself, and I think
it might be generally true.  It seems like people tend to have their own ideas
of what they want to be done, and they are just not very interested in working
on someone else's idea or concept.  I know that's why I am not working on
Stan's project.  It could also be why I haven't been aggressive enough to
really go after working on one of the other projects that are out there, a2i2
included.   It seems like there are quite a few lone AI hackers out there.
And  this is a specific case of something I have found:  nobody likes to be
told what to do--some people tolerate it more than others, but nobody likes it.

andi






RE: [agi] Why so few AGI projects?

2006-09-13 Thread Andrew Babian
> PS. http://adaptiveai.com/company/opportunities.htm

This also reminds me of something, and I know it's true of myself, and I think
it might be generally true.  It seems like people tend to have their own ideas
of what they want to be done, and they are just not very interested in working
on someone else's idea or concept.  I know that's why I am not working on
Stan's project.  It could also be why I haven't been aggressive enough to
really go after working on one of the other projects that are out there, a2i2
included.   It seems like there are quite a few lone AI hackers out there. 
And  this is a specific case of something I have found:  nobody likes to be
told what to do--some people tolerate it more than others, but nobody likes it.

andi



Re: [agi] Why so few AGI projects?

2006-09-13 Thread Charles D Hixson

Joshua Fox wrote:
I'd like to raise a FAQ: Why is so little AGI research and development 
being done?


...
Thanks,

Joshua
 
What proportion of the work that is being done do you believe you are 
aware of?  On what basis?
My suspicion is that most people on the track of something new tend to 
be rather close-mouthed about it.  I'll agree that this probably slows down 
progress, but from an individual person's or corporation's point of view 
it is quite sensible.  For one thing, it minimizes the risk of humiliation.




Re: [agi] Why so few AGI projects?

2006-09-13 Thread Bob Mottram
This is also something which has baffled me for a long time. 
I've been mucking around with AI and related software since I was a
kid, and as the years went by it gradually dawned upon me what the real
problems were which needed to be overcome if any significant progress
was going to be made.  One of the really important things in my
opinion is just being able to see the world at a reasonably high
fidelity and level of confidence, so that you can then begin to make
some decisions about what to do and how to move around.

There's been very little done, both in the academic and industrial
arenas, on trying to produce a good all-purpose vision system which you
can just bolt onto the side of some machine and have it be able to see
what's in front of it.  I think many people have shied away from
such a task because it's traditionally regarded as difficult, and
instead there have been many narrow-AI type vision systems employing an
assortment of dodgy heuristics to pick out limited aspects of the
visual scene.

However, producing a system which just reverse engineers camera images
and produces good 3D models isn't impossible.  In fact, it's quite
straightforward, and doesn't involve having to resort to any
particularly exotic algorithms - just regular geometry and probability
theory.  People such as Moravec have been working on this kind of
stuff for years, and it's surprising how little attention this approach
has received.
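
As a small illustration of the "regular geometry and probability theory"
Bob refers to, here is a minimal sketch (not taken from any particular
system of his or Moravec's) of recovering depth, plus a first-order
uncertainty estimate, from a calibrated stereo pair; the camera numbers
are invented for the example.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Classic pinhole stereo relation: Z = f * B / d.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def depth_uncertainty(focal_px, baseline_m, disparity_px, disparity_sigma_px=0.5):
    # First-order propagation of disparity noise into depth:
    # sigma_Z ~= (Z^2 / (f * B)) * sigma_d.
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    return (z * z / (focal_px * baseline_m)) * disparity_sigma_px

if __name__ == "__main__":
    f, b, d = 700.0, 0.12, 14.0   # assumed focal length (px), baseline (m), disparity (px)
    z = depth_from_disparity(f, b, d)
    s = depth_uncertainty(f, b, d)
    print("depth: %.2f m +/- %.2f m" % (z, s))

The uncertainty line is just error propagation through Z = f*B/d, which is
one reason the probabilistic treatment matters: depth error grows roughly
quadratically with distance, so a vision system has to track confidence as
well as geometry.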

However, I think the good news is that soon there will be very generic
vision systems of the bolt-on kind, which will work within a wide range
of environments.  I see this as a critical component which will
facilitate the eventual realisation of AGIs capable of operating in the
real world.

To go back to the original question, I think the answer is that if you
want to be a good researcher, keep paying the bills, and work your way
through the ranks, you should avoid difficult or unconventional
subjects.  Stick to stuff that's easy, don't try to rock the
boat, and you'll have an easier life.  Also there's the peer
pressure factor.  If some notable professor X comes out and says
that general purpose vision is a problem to which there are no
solutions on the horizon, then lesser mortals are likely to assume he
knows what he's talking about and avoid getting involved with a subject
which is unlikely to yield results.

- Bob

On 13/09/06, Joshua Fox <[EMAIL PROTECTED]> wrote:
I'd like to raise a FAQ: Why is so little AGI research and development
being done?

The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree on
this (no need to repeat them here), and I've read Are We Spiritual
Machines, but I come away unsatisfied. (Still, if there is nothing more
to say on this question, please do the AGIRI-equivalent of sniping this
thread immediately.)

I respect existing AGI researchers, but I am surprised that more members
of the "establishment" are not on board. I just can't believe that, for
example, almost all leading computer-science/cognitive-science professors
are herd-following closed-minded stuck-in-the-muds. The leading
universities do have their share of creative, free-thinking, inquisitive
people, and the same goes for other parts of the "establishment".

To clarify what I am looking for, I should describe a recent conversation.
I spoke to an open-minded and intelligent friend who has a PhD from, and
does research in, a top university. The research is in exactly the sort
of technologies used in brain-scanning. I asked him about Kurzweil's
trends on the accelerating advance of human-brain-scanning technologies.
He did not agree with Kurzweil's conclusions, and explained why.

Likewise, I'm looking for input from an open-minded, intelligent,
computer/cognitive scientist (who does not strongly support AGI research)
on the above question. I don't know where to find them, so perhaps
someone on this list could role-play one.

What would s/he say if I asked "Why do you not pursue or support AGI
research? Even if you believe that implementation is a long way off,
surely academia can study, and has studied for thousands of years,
impractical but interesting pie-in-the-sky topics, including human
cognition? And AGI, if nothing else, models (however partially and
imperfectly with our contemporary technology) essential aspects of some
philosophically very important problems."

Thanks,
Joshua










Re: [agi] Why so few AGI projects?

2006-09-13 Thread Pei Wang

Yes, that is what No. 6 is about. The situation is made worse by the
"AI has been solved" claims on the web from different places.

Pei

On 9/13/06, Stephen Reed <[EMAIL PROTECTED]> wrote:

I would add that previous more-or-less general AI
projects have not greatly exceeded their modest
expectations.  So given this experience perhaps there
is a tendency among potential sponsors to classify new
AGI projects as crackpot schemes.
-Steve






Re: [agi] Why so few AGI projects?

2006-09-13 Thread Russell Wallace
On 9/13/06, Stephen Reed <[EMAIL PROTECTED]> wrote:
I would add that previous more-or-less general AI projects have not
greatly exceeded their modest expectations.  So given this experience
perhaps there is a tendency among potential sponsors to classify new
AGI projects as crackpot schemes.

And let's be honest, a lot of the talk under the heading of AGI really
is in crackpot territory; given the lack of concrete results, it's not
unreasonable for a potential sponsor to conclude it's all like that.




Re: [agi] Why so few AGI projects?

2006-09-13 Thread Stephen Reed
I would add that previous more-or-less general AI
projects have not greatly exceeded their modest
expectations.  So given this experience perhaps there
is a tendency among potential sponsors to classify new
AGI projects as crackpot schemes.
-Steve

--- Pei Wang <[EMAIL PROTECTED]> wrote:

> Good question.
> 
> I and Ben are drafting an introductory chapter for
> the AGIRI Workshop
> Proceedings, and in it we want to list the major
> objections to AGI
> research, then reject them one by one. Now the list
> includes the
> following:
> 
> 1. "AGI is impossible" --- such as the opinions from
> Lucas, Dreyfus,
> and Penrose
> 
> 2. "There is no such a thing as general
> intelligence" ---
> psychological arguments against any "g factor", and
> AI arguments
> against any "general problem solver"
> 
> 3. "General-purpose systems are not as good as
> special-purpose ones"
> --- in terms of performance, efficiency, etc.
> 
> 4. "AGI is already included in the current AI" ---
> "Since X plays an
> important role in intelligence, studying X
> contributes to the study of
> intelligence in general", where X can be replaced by
> reasoning,
> learning, planning, perceiving, acting, etc.
> 
> 5. "It is too early to work on AGI" --- we should
> wait for more
> results from individual AI sub-fields, brain
> research, hardware
> innovations, ...
> 
> 6. "AGI is nothing but hype" --- no AGI claim has
> got any supporting
> evidence in history
> 
> 7. "AGI research is not fruitful" --- it is hard to
> get result,
> support, reward, ...
> 
> 8. "AGI is dangerous" --- Terminator, Matrix, ...
> 
> Anything else?
> 
> Pei



Re: [agi] Why so few AGI projects?

2006-09-13 Thread Ben Goertzel

Hi,


P.S. Ben, did you consider trying to invite Minsky to an AGI workshop?
Certainly it's hard and perhaps not possible yet, but that would be
a large advertisement for AGI.


Marvin Minsky was invited to the AGIRI workshop but elected not to attend.

We did have a number of respected AI academics however, not only
youngsters like Nick Cassimatis but old hands like Stan Franklin and
Hugo de Garis.

I believe the AGI meme is spreading through academia at an exponential
rate, but the exponent is not that large.  The number of AGI-ish
workshops and journal issues is increasing each year.

My prediction is that 10 years from now AGI will be a flourishing
subfield of academic AI -- though maybe under some other name such as
"Human-Level AI" rather than under the name AGI.

(Of course, this may happen because of some mavericks on the fringe
of, or totally outside of, the academic establishment making an AGI
breakthrough that wakes up the academic AI establishment to the
near-term possibilities!!)

And, I agree with Luke's comment that in private many academic AI
researchers are not so down on AGI as one might think.  But in public,
they need to publish papers and get grants, and AGI is not very good
for that.  I know that my own academic CV would probably look "Better"
from a university point of view if I removed all AGI stuff from it and
just sold myself as a bioinformatics and language processing expert!!!
So, I think there is a fair bit of enthusiasm lurking under the
surface among academic narrow-AI experts, and once AGI becomes a bit
more legitimated within the community, a lot of folks will jump very
eagerly on the bandwagon...

-- Ben g



Re: [agi] Why so few AGI projects?

2006-09-13 Thread Russell Wallace
On 9/13/06, Joshua Fox <[EMAIL PROTECTED]> wrote:
I'd like to raise a FAQ: Why is so little AGI research and development being done?

Time and money. AGI takes too long. When people spend several years on
something for no result whatsoever, they quite reasonably find
something more productive to do with their time. Even if they were
inclined to continue working on AGI, where would the money come from?
People with money won't pay for continuing work in an area where years
have passed with no result.



Re: [agi] Why so few AGI projects?

2006-09-13 Thread Lukasz Kaiser

Hi.


the "establishment" are not on board. I just can't believe that , for
example, almost all leading
computer-science/cognitive-science professors are
herd-following closed-minded stuck-in-the-muds. The leading universities do
have their share of creative, free-thinking, inquisitive people, and the
same goes for other parts of the "establishment".


I'm not that convinced that they are "not on board" when you talk
with them in private; the fact is just that AGI is still a bit vague
and very hard to sell (both in industry and as university research).
I think the road Ben has taken with AGIRI and the workshop
is very promising and might lead to a change in attitude. Recently
I read an interview with Minsky, and he seems to make a few
points related to your question. Here is a link:
http://www.technologyreview.com/read_article.aspx?id=17164&ch=infotech&sc=&pg=2

- lk

P.S. Ben, did you consider trying to invite Minsky to an AGI workshop?
Certainly it's hard and perhaps not possible yet, but that would be
a large advertisement for AGI.



Re: [agi] Why so few AGI projects?

2006-09-13 Thread Pei Wang

Good question.

I and Ben are drafting an introductory chapter for the AGIRI Workshop
Proceedings, and in it we want to list the major objections to AGI
research, then reject them one by one. Now the list includes the
following:

1. "AGI is impossible" --- such as the opinions from Lucas, Dreyfus,
and Penrose

2. "There is no such a thing as general intelligence" ---
psychological arguments against any "g factor", and AI arguments
against any "general problem solver"

3. "General-purpose systems are not as good as special-purpose ones"
--- in terms of performance, efficiency, etc.

4. "AGI is already included in the current AI" --- "Since X plays an
important role in intelligence, studying X contributes to the study of
intelligence in general", where X can be replaced by reasoning,
learning, planning, perceiving, acting, etc.

5. "It is too early to work on AGI" --- we should wait for more
results from individual AI sub-fields, brain research, hardware
innovations, ...

6. "AGI is nothing but hype" --- no AGI claim has got any supporting
evidence in history

7. "AGI research is not fruitful" --- it is hard to get result,
support, reward, ...

8. "AGI is dangerous" --- Terminator, Matrix, ...

Anything else?

Pei


On 9/13/06, Joshua Fox <[EMAIL PROTECTED]> wrote:

I'd like to raise a FAQ: Why is so little AGI research and development being
done?

The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree on
this (no need to repeat them here), and I've read Are We Spiritual Machines,
but I come away unsatisfied. (Still, if there is nothing more to say on this
question, please do the AGIRI-equivalent of sniping this thread
immediately.)

I respect existing AGI researchers, but I am surprised that more members of
the "establishment" are not on board. I just can't believe that , for
example, almost all leading
computer-science/cognitive-science professors are
herd-following closed-minded stuck-in-the-muds. The leading universities do
have their share of creative, free-thinking, inquisitive people, and the
same goes for other parts of the "establishment".


To clarify what I am looking for, I should describe a recent conversation. I
spoke to an open-minded and intelligent friend who has a PhD from, and does
research in, a top university. The research is in exactly the sort of
technologies used in brain-scanning. I asked him about Kurzweil's trends on
the accelerating advance of human-brain-scanning technologies. He did not
agree with Kurzweil's conclusions, and explained why.

 Likewise, I'm looking for input from an open-minded, intelligent,
computer/cognitive scientist (who does not strongly support AGI research) on
the above question. I don't know where to find them, so perhaps someone on
this list could role-play one.

What would s/he say if I asked "Why do you not pursue or support AGI
research? Even if you believe that implementation is a long way off, surely
academia can study, and has studied for thousands of years, impractical but
interesting pie-in-the-sky topics, including human cognition? And AGI, if
nothing else, models (however partially and imperfectly with our
contemporary technology) essential aspects of some philosophically very
important problems."


Thanks,

Joshua
  




RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss








Yes, an important point.  For our project we invented a new profession:
AI psychologist.

It is very hard to find computer scientists who are comfortable thinking
about a program (AGI) in terms of teaching, training and psychology.
Conversely, developmental and cognitive psychologists usually don't have
an interest in computers/programming.

Peter

PS. http://adaptiveai.com/company/opportunities.htm


From: Andrew Babian [mailto:[EMAIL PROTECTED]

On Wed, 13 Sep 2006 18:04:31 +0300, Joshua Fox wrote
> I'd like to raise a FAQ: Why is so little AGI research and development
> being done?

I think this is a very good question.  Maybe the problem has just been
daunting.  It seems like only recently have there really started to be
some good theoretical models, and maybe people just haven't realized that
it may have just become reasonable.  So maybe some of it is inertia.  I'm
in town here with Stan Franklin, who is one of those working on a general
model, though I don't work with his group.  He's had a relationship with
the cognitive science people at the university here, and is glad to be
able to do "real science".  And it does seem like the computer people and
psychologists really are in separate worlds and are not that into
reaching out.  I remember talking to a cog psych graduate student who
seemed to have interests in understanding how minds might work.  But I'm
from an engineering background, and talking to her, it seemed like she
came out and said she was only interested in how people work, and had no
interest in how to get a machine to do it.  A matter of priorities and
interest, then, perhaps.  As for the principles, I also seem to remember
that they had some trouble getting the primary cognitive psychologist
that interested in helping with the theoretical psychology because he had
so many other things he was working on.  My exposure to that group was
very limited, but I remember getting that feeling.  And they have a cog
sci seminar where they really try to get the computer people to work with
the psychologists, but a semester is too short.  I suppose I need to find
out if there are any deeper collaborations going on. …










RE: [agi] Why so few AGI projects?

2006-09-13 Thread Peter Voss








I considered and researched this issue thoroughly a few years ago.

For a summary: http://adaptiveai.com/faq/index.htm#few_researchers
For detail: http://adaptiveai.com/research/index.htm (section 8)

In addition to asking researchers you also need to look at psychological
and hidden motives, as well as the dynamics of funding sources (DARPA,
etc), business and academia.

Peter


From: Joshua Fox [mailto:[EMAIL PROTECTED]

I'd like to raise a FAQ: Why is so little AGI research and development
being done?...










Re: [agi] Why so few AGI projects?

2006-09-13 Thread Richard Loosemore

Joshua Fox wrote:
I'd like to raise a FAQ: Why is so little AGI research and development 
being done?


The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree 
on this (no need to repeat them here), and I've read Are We Spiritual 
Machines, but I come away unsatisfied. (Still, if there is nothing more 
to say on this question, please do the AGIRI-equivalent of sniping this 
thread immediately.)


I respect existing AGI researchers, but I am surprised that more members 
of the "establishment" are not on board. I just can't believe that , for 
example, almost all leading computer-science/cognitive-science 
professors are herd-following closed-minded stuck-in-the-muds. The 
leading universities do have their share of creative, free-thinking, 
inquisitive people, and the same goes for other parts of the 
"establishment".
 
To clarify what I am looking for, I should describe a recent 
conversation. I spoke to an open-minded and intelligent friend who has a 
PhD from, and does research in, a top university. The research is in 
exactly the sort of technologies used in brain-scanning. I asked him 
about Kurzweil's trends on the accelerating advance of 
human-brain-scanning technologies. He did not agree with Kurzweil's 
conclusions, and explained why.


Likewise, I'm looking for input from an open-minded, intelligent, 
computer/cognitive scientist (who does not strongly support AGI 
research) on the above question. I don't know where to find them, so 
perhaps someone on this list could role-play one.


What would s/he say if I asked "Why do you not pursue or support AGI 
research? Even if you believe that implementation is a long way off, 
surely academia can study, and has studied for thousands of years, 
impractical but interesting pie-in-the-sky topics, including human 
cognition? And AGI, if nothing else, models (however partially and 
imperfectly with our contemporary technology) essential aspects of some 
philosophically very important problems."


I think the simple answer (all I've got time for now :-)) is twofold:

1) If you ask why Kurzweil's ideas are not immediately infectious, it is 
because his claims (and all singularity claims) are not just a few steps 
beyond the current state of the art, they *look* like a wild leap into 
the realms of speculation.  Not much to be done about this:  slowly, 
over the next few years, it will become more respectable, and then one 
day you will wake up to find every researcher on the planet trying to 
get grants in the new "singularity" field-cum-bandwagon.


2) Researchers need small, biteable, 6-months-to-publishable-paper 
projects to get their teeth into.  They would say that their Narrow-AI 
research projects ARE the biteable chunks for today that will lead to 
AGI tomorrow.  Why do they do this?  Because the people higher up will 
crucify them if their work starts to get oriented towards anything but 
a high publication rate in "respectable" journals; if they don't comply, 
they will start to find promotions slipping, or they'll just be dumped.  
Short-term results pressure, in other words.


Richard Loosemore.




Re: [agi] Why so few AGI projects?

2006-09-13 Thread Andrew Babian




On Wed, 13 Sep 2006 18:04:31 +0300, Joshua Fox wrote
> I'd like to raise a FAQ: Why is so little AGI research and development
> being done?

I think this is a very good question.  Maybe the problem has just been
daunting.  It seems like only recently have there really started to be
some good theoretical models, and maybe people just haven't realized that
it may have just become reasonable.  So maybe some of it is inertia.

I'm in town here with Stan Franklin, who is one of those working on a
general model, though I don't work with his group.  He's had a
relationship with the cognitive science people at the university here,
and is glad to be able to do "real science".  And it does seem like the
computer people and psychologists really are in separate worlds and are
not that into reaching out.  I remember talking to a cog psych graduate
student who seemed to have interests in understanding how minds might
work.  But I'm from an engineering background, and talking to her, it
seemed like she came out and said she was only interested in how people
work, and had no interest in how to get a machine to do it.  A matter of
priorities and interest, then, perhaps.

As for the principles, I also seem to remember that they had some trouble
getting the primary cognitive psychologist that interested in helping
with the theoretical psychology because he had so many other things he
was working on.  My exposure to that group was very limited, but I
remember getting that feeling.  And they have a cog sci seminar where
they really try to get the computer people to work with the
psychologists, but a semester is too short.  I suppose I need to find out
if there are any deeper collaborations going on.

> The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree
> on this (no need to repeat them here), and I've read Are We Spiritual
> Machines, but I come away unsatisfied. (Still, if there is nothing more
> to say on this question, please do the AGIRI-equivalent of sniping this
> thread immediately.)


I haven't looked at them recently or deeply enough to know what their common conclusion must be, so I would like to hear what you mean by this.


> What would s/he say if I asked "Why do you not pursue or support AGI
> research? Even if you believe that implementation is a long way off,
> surely academia can study, and has studied for thousands of years,
> impractical but interesting pie-in-the-sky topics, including human
> cognition? And AGI, if nothing else, models (however partially and
> imperfectly with our contemporary technology) essential aspects of some
> philosophically very important problems."


But maybe it is just because the noticeable results and applications (and therefore the money) don't seem to be there.  I guess that's probably my excuse.

But I need to thank Ben for putting the AGIRI conference video stuff up.  One thing I got from them is that maybe we do have the theory and computer power now, so it is a doable thing.  And maybe a single person can do a reasonable general project, so I'm thinking of getting started again on some things, though it's going to have to be after-hours stuff.  I still need a day job.

andi








[agi] Why so few AGI projects?

2006-09-13 Thread Joshua Fox
I'd like to raise a FAQ: Why is so little AGI research and development
being done?

The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree on
this (no need to repeat them here), and I've read Are We Spiritual
Machines, but I come away unsatisfied. (Still, if there is nothing more
to say on this question, please do the AGIRI-equivalent of sniping this
thread immediately.)

I respect existing AGI researchers, but I am surprised that more members
of the "establishment" are not on board. I just can't believe that, for
example, almost all leading computer-science/cognitive-science professors
are herd-following closed-minded stuck-in-the-muds. The leading
universities do have their share of creative, free-thinking, inquisitive
people, and the same goes for other parts of the "establishment".

To clarify what I am looking for, I should describe a recent conversation.
I spoke to an open-minded and intelligent friend who has a PhD from, and
does research in, a top university. The research is in exactly the sort
of technologies used in brain-scanning. I asked him about Kurzweil's
trends on the accelerating advance of human-brain-scanning technologies.
He did not agree with Kurzweil's conclusions, and explained why.

Likewise, I'm looking for input from an open-minded, intelligent,
computer/cognitive scientist (who does not strongly support AGI research)
on the above question. I don't know where to find them, so perhaps
someone on this list could role-play one.

What would s/he say if I asked "Why do you not pursue or support AGI
research? Even if you believe that implementation is a long way off,
surely academia can study, and has studied for thousands of years,
impractical but interesting pie-in-the-sky topics, including human
cognition? And AGI, if nothing else, models (however partially and
imperfectly with our contemporary technology) essential aspects of some
philosophically very important problems."

Thanks,
Joshua




