[agi] subscribe

2006-09-14 Thread Brandon Reinhart
Subscribe

Brandon Reinhart
[EMAIL PROTECTED]

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Directions for AGI research, was Re: [agi] Why so few AGI projects?

2006-09-14 Thread sam kayley



Joshua Fox wrote:


> What would s/he say if I asked "Why do you not pursue or support AGI
> research? Even if you believe that implementation is a long way off,
> surely academia can study, and has studied for thousands of years,
> impractical but interesting pie-in-the-sky topics, including human
> cognition? And AGI, if nothing else, models (however partially and
> imperfectly with our contemporary technology) essential aspects of
> some philosophically very important problems."

It seems to me the reason for this lack of activity is a lack of
credible lines of research, other than continuing existing narrow AI and
cognitive science work, hopefully with extra efforts to encourage
cross-pollination.


A list of ideas for what academia should be doing (other than giving
people million-dollar grants for programming systems they cannot make a
good case will do anything interesting) might help. I list a few off the
top of my head below; feel free to revise my list:


Tractable subcases of Bayesian/KC/decision theory methods, as pursued by
Marcus Hutter
Reflectivity in Bayesian/KC/decision theory methods, as pursued by
Eliezer Yudkowsky

Dynamics of concepts, Douglas Hofstadter
Brain simulation, Blue Brain project
Common sense reasoning
AI intelligence tests, Shane Legg (see the sketch after this list)
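
As a rough illustration of the last item (and of the Hutter-style framework it
builds on), here is a sketch of the kind of universal intelligence measure Legg
and Hutter have described; the notation below is reconstructed from memory of
their published work and may not match their exact formulation:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

Here E is a set of computable reward-generating environments, K(\mu) is the
Kolmogorov complexity of environment \mu, V_{\mu}^{\pi} is the expected
cumulative reward agent \pi obtains in \mu, and \Upsilon(\pi) is the agent's
intelligence score. The 2^{-K(\mu)} weighting means an agent scores highly by
doing well across many environments, with simpler environments counting more.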



Re: [agi] Why so few AGI projects?

2006-09-14 Thread Christophe Devine
Joshua Fox <[EMAIL PROTECTED]> wrote:

> I'd like to raise a FAQ: Why is so little AGI research and development being
> done?

Perhaps it's just a matter of faith -- some believe in it, and some don't ;-)

-- Christophe



Re: [agi] Why so few AGI projects?

2006-09-14 Thread Shane Legg
Eliezer,

> Shane, what would you do if you had your headway?  Say, you won the
> lottery tomorrow (ignoring the fact that no rational person would buy a
> ticket).  Not just "AGI" - what specifically would you sit down and do
> all day?

I've got a list of things I'd like to be working on.  For example, I'd like to
try to build a universal test of machine intelligence, I've also got ideas in
the area of genetic algorithms, neural network architectures, and some more
theoretical things related to complexity theory and AI.  I also want to
spend more time learning neuroscience.  I think my best shot at building
an AGI will involve bringing ideas from many of these areas together.

> Indeed not.  It takes your first five years simply to figure out which
> way is up.  But Shane, if you restrict yourself to results you can
> regularly publish, you couldn't work on what you really wanted to do,
> even if you had a million dollars.

If I had a million dollars I wouldn't care so much about my "career"
as I wouldn't be dependent on the academic system to pay my bills.
As such I'd only publish once, or perhaps twice, a year and would
spend more time on areas of research that were more likely to fail
or would require large time investments before seeing results.

Shane



Re: [agi] Why so few AGI projects?

2006-09-14 Thread Joshua Fox
Thanks, all, for those insightful answers. In combination with the published discussion of the topic, this thread is enlightening. Still, to push the point, I am fantasizing a conversation with a Hypothetical Open-Minded World-Renowned Eloquent Cognitive Scientist (Howecs). Surely there must be a few of these out there. Daniel Dennett comes to mind, though I hesitate to focus on any one person's ideas.
I am setting aside the herd-followers, the nine-to-fivers, and the outliers for the purposes of this discussion.

Using Pei's points as a convenient summary, Professor Howecs would relate as follows to common objections.
1. "AGI is impossible" & 2. "There is no such a thing as general intelligence"-- Howecs can recognize that AGI is probably possible in principle -- and if it is impossible, that unsuccessful attempts will bring insights on fundamental philosophical questions which  scholars have been working on for centuries.
3. "General-purpose systems are not as good as special-purpose ones" -- Howecs would recognize that performance and efficiency are not needed for philosophical questions, which is what he is professionally most interested in.
4. "AGI is already included in the current AI" ---Howecs would recognize that if AI subfield X is the secret to AGI, then X is just the correct path to take to AGI, and X research is the equivalent of AGI research.
5. "It is too early to work on AGI" --- Howecs is either a philosophy professor or so advanced in his field that his work impinges on philosophy, so working on pie-in-the-sky topics does not bother him at all.

6. "AGI is nothing but hype" --- Howecs knows to separate hype from reality and knows that past over-hyped projects do not obviate the value of a scientific field. Carl Sagan dealt heavily in SETI, even though this has attracted lots of sci-fi, lots of weirdos, and lots of failure -- and surely Sagan would qualify as a Howecs in his field.
7. "AGI research is not fruitful --- it is hard to get result, support, reward, ..." -- Howecs can muster funding for himself and his students at will, and is fearless of public opinion.  He can choose sub-topics which will give interim results; he, as an opinion-leader, will make the world respect these. (Note that in academia,  a well-argued paper in itself can be considered a "result." Implementable technologies or rigorous proofs are not always needed, as long as the relevant academic community is interested in the ideas.)
8. "AGI is dangerous" --- Think of how the greatest of  nuclear physicists and microbiologists reacted to potentially dangerous technologies. Howecs, first, is too scientifically curious to let the fear drive him away; and second, he knows the importance of mitigating the dangers.
So, where are all the Howecses speaking up for AGI research?Joshua


