Top ten signs the singularity has arrived
http://www.deanesmay.com/posts/1152629462.shtml
---
To unsubscribe, change your address, or temporarily deactivate your
subscription,
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]
Just some quick comments. It appears to me that the primary topic in question is the ability to generalize, or abstract, knowledge across a variety of situations. I would say that, for the most part, Soar is very good at *representing* and *using* composable (and therefore generalized) knowledge.
Danny, I just read an interesting article that goes through a more formal proof of intelligence, "A Formal Measure of Machine Intelligence", which I may have gotten from a link from this group or another: http://www.vetta.org/documents/ui_benelearn.pdf
James Ratcliff
[EMAIL PROTECTED] wrote: Does anyone
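For readers following the link: the measure defined in that paper (Legg and Hutter's universal intelligence) weights an agent's expected reward in each computable environment by the environment's complexity. Reproduced here from memory, so check the PDF for the exact formulation:

```latex
\Upsilon(\pi) \;:=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

where E is the set of computable reward-summable environments, K(mu) is the Kolmogorov complexity of environment mu, and V^pi_mu is the expected total reward of agent pi in mu. Simpler environments (low K) thus dominate the score.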
James
Many thanks for the link on Computing Intelligence.
Dan Goe
From : James Ratcliff [EMAIL PROTECTED]
To : agi@v2.listbox.com
Subject : Re: [agi] Computing Intelligence? How too? .
ping
Date : Thu, 13 Jul 2006 07:44:26
Soar, like other cognitive architectures (such as ACT-R), is not
designed to directly deal with domain problems. Instead, it is a
high-level platform on which a program can be built for a specific
problem.
On the contrary, Novamente, like other AGI systems (such as NARS),
is designed to directly deal with domain problems.
My personal guesstimate is that what are commonly considered the
higher-order cognitive functions use way less than 1% of the total power
estimated for the brain (and also that the brain does them very
inefficiently, so a better implementation would use even less power).
On the other
Joshua Fox wrote:
Greetings, I am new to the list. I hope that the following question adds
something of value.
Estimates for the total processing speed of intelligence in the human
brain are often used as crude guides to understanding the timeline
towards human-equivalent intelligence.
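The kind of back-of-envelope use of such estimates mentioned above can be sketched as follows. All figures here are illustrative assumptions (Moravec-style whole-brain estimates vary by several orders of magnitude), not measurements:

```python
# Crude guide: combine a whole-brain throughput estimate with the
# "way less than 1% for higher cognition" guess from the earlier post.
# Both numbers below are assumptions for illustration only.

whole_brain_ops_per_s = 1e16      # assumed upper-end estimate of brain throughput
higher_cognition_fraction = 0.01  # the "less than 1%" guesstimate

ops_for_higher_cognition = whole_brain_ops_per_s * higher_cognition_fraction
print(f"higher-cognition budget: {ops_for_higher_cognition:.0e} ops/s")
```

Under these assumptions the higher-cognition budget comes out around 1e14 ops/s, which is why such guesstimates shift singularity timelines by years rather than decades: hardware capacity grows exponentially, so a 100x smaller target is reached only a few doubling periods earlier.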
James,
Currently I'm writing a much longer paper (about 40 pages) on intelligence measurement. A draft version of this will be ready in about a month, which I hope to circulate around a bit for comments and criticism. There is also another guy who has recently come to my attention who is doing
On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
Shane,
Do you mean Warren Smith?

Yes.

Shane
Shane,
Thanks, I would appreciate that greatly. On the topic of measuring intelligence, what do you think about the actual structure of comparing some of today's AI systems? I would like to see someone come up with, and get support for, a fairly widespread general set of tests for general AI
I think that public learning/training of an AGI would be a terrible disaster...
Look at what happened with OpenMind and MindPixel. These projects
allowed the public to upload knowledge into them, which resulted in a
lot of knowledge of the general nature of "Jennifer Lopez got a nice
butt", etc.
Ben,
Yes, but OpenMind did get quite a bit of usable information into it as well, and mainly they learned a lot about the process. I believe, and they are looking at this as well, that there are different ways of grading the participants themselves, so the obviously juvenile ones could be graded down and out of the
I agree that using the Net to recruit a team of volunteer AGI
teachers would be a good idea.
But opening the process up to random web-surfers is, IMO, asking for trouble...!
-- Ben
On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:
Ben,
Yes, but OpenMind did get quite a bit of usable
Ben,
Though Piaget is my favorite psychologist, I don't think his theory of
developmental psychology applies to AI to the extent you suggested.
One major reason is: in a human baby, the mental learning process in
the mind and the biological developing process in the brain happen
together, while
I think this one was the granddaddy:
http://yudkowsky.net/humor/signs-singularity.txt
--
Eliezer S. Yudkowsky http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence