[agi] Why so few AGI projects?

2006-09-13 Thread Joshua Fox
I'd like to raise a FAQ: Why is so little AGI research and development being done? The answers of Goertzel, Moravec, Kurzweil, Voss, and others all agree on this (no need to repeat them here), and I've read Are We Spiritual Machines, but I come away unsatisfied. (Still, if there is nothing more

Re: [agi] Why so few AGI projects?

2006-09-14 Thread Joshua Fox
Thanks, all, for those insightful answers. In combination with the published discussion of the topic, this thread is enlightening. Still, to push the point, I am fantasizing a conversation with a Hypothetical Open-Minded World-Renowned Eloquent Cognitive Scientist (Howecs). Surely there must be a

Re: [agi] Books

2007-06-11 Thread Joshua Fox
Josh, Your point about layering makes perfect sense. I just ordered your book, but, impatient as I am, could I ask a question about this, though I've asked a similar question before: Why have the elite of intelligent and open-minded leading AI researchers not attempted a multi-layered

Re: [agi] Books

2007-06-11 Thread Joshua Fox
Josh, Thanks for that answer on the layering of mind. It's not that any existing level is wrong, but there aren't enough of them, so that the higher ones aren't being built on the right primitives in current systems. Word-level concepts in the mind are much more elastic and plastic than

Re: [agi] What's wrong with being biased?

2007-06-27 Thread Joshua Fox
Stefan, Biases fall into all these categories. Certainly some biases were useful in the ancestral environment, and even today. The key difference, however, is that an optical illusion is relatively easy to recognize, whereas a cognitive bias is not. I'm not so sure. I'd say that optical

Re: [agi] Minsky and the AI emergency

2007-10-30 Thread Joshua Fox
Surely Marvin Minsky -- a top MIT professor, with a world-beating reputation in multiple fields -- can snap his fingers and get all the required funding, whether commercial or non-profit, for AGI projects which he initiates or supports? Joshua 2007/10/28, Bob Mottram [EMAIL PROTECTED]: This

Re: [agi] Re: How valuable is Solomonoff Induction for real world AGI?

2007-11-10 Thread Joshua Fox
Also, perhaps the left-hand side of the equation in defn 1.3.3 should have a union symbol rather than a sum? Page 11, section 1.6, fourth sentence of first paragraph: a psi is missing a backslash \psi (so it's spelled out instead of the symbol being printed). Likewise on page 6, def 1.2.2.

Re: [agi] Funding AGI research

2007-11-18 Thread Joshua Fox
What you are advocating is to fund Development but not Research. Ben, I favor funding for both R and D. Would you put the Novamente project in the R or the D phase? If a prototype is a good way to distinguish the two, is there a prototype for Novamente? And if it is still in the research phase,

Re: [agi] AGI first mention on NPR!

2007-12-04 Thread Joshua Fox
I actually thought that was one of the more positive pieces I've found. Listeners may come out with a bad (mis-)impression, but NPR did nothing to abet that. Joshua 2007/12/3, Bob Mottram [EMAIL PROTECTED]: Perhaps a good word of warning is that it will be really easy to

[agi] Re: [singularity] The establishment line on AGI

2008-01-15 Thread Joshua Fox
paradigm ... I still find it a fuzzy term; Kuhn reviews this fuzziness in his epilogue to the 3rd edition. But one definition of paradigm is the shared examples which drive a field. For example, chess for GOFAI, or the Turing Test for AI. These two are not paradigms for a new field of AGI -- in

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Joshua Fox
Turing also committed suicide. And Chislenko. Each of these people had different circumstances, and suicide strikes everywhere, but I wonder if there is a common thread. Joshua - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go

Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-29 Thread Joshua Fox
When transhumanists talk about indefinite life extension, they often take care to say it's optional to forestall one common objection. Yet I feel that most suicides we see should have been prevented -- that the person should have been taken into custody and treated if possible, even against their

[agi] Other AGI-like communities

2008-04-23 Thread Joshua Fox
To return to the old question of why AGI research seems so rare, Samsonovich et al. say ( http://members.cox.net/alexei.v.samsonovich/samsonovich_workshop.pdf) 'In fact, there are several scientific communities pursuing the same or similar goals, each unified under their own unique slogan:

Re: [agi] META: A possible re-focusing of this list

2008-10-15 Thread Joshua Fox
I emphatically agree. I want to see intelligent targeted discussion of AGI. Actually, I wouldn't mind the "is AGI possible" discussion if it were smart and focused, but I think that narrowing the topic would increase the quality. Joshua On Wed, Oct 15, 2008 at 5:01 PM, Ben Goertzel [EMAIL PROTECTED]

Re: [agi] Should I get a PhD?

2008-12-17 Thread Joshua Fox
About graduate programs and AGI: It seems that Temple University has an affinity for AGI people--Ben Goertzel, Pei Wang, and now Peter de Blanc. Is this just a coincidence? Joshua On Wed, Dec 17, 2008 at 5:48 PM, Ben Goertzel b...@goertzel.org wrote: Can I start the PhD directly without

[agi] Reward function vs utility

2010-06-27 Thread Joshua Fox
This has probably been discussed at length, so I will appreciate a reference on this: Why does Legg's definition of intelligence (following on Hutter's AIXI and related work) involve a reward function rather than a utility function? For this purpose, reward is a function of the world state/history
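The distinction behind the question can be sketched in a few lines. This is a hypothetical illustration, not anything from Legg's text: the function names and the discount factor are illustrative. A reward-based agent receives a scalar signal from the environment at each step and maximizes its (possibly discounted) sum, whereas a utility-based agent must itself evaluate whole world states or histories with a utility function it has been given.

```python
def discounted_return(rewards, gamma=0.95):
    """Reward framing: the environment emits a scalar each step;
    the agent just accumulates (here, geometrically discounted)."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

def evaluate_history(history, utility_fn):
    """Utility framing: the agent applies a utility function it
    knows to the full world history."""
    return utility_fn(history)

# Reward framing: no model of the world's value is needed by the agent.
print(discounted_return([1.0, 0.0, 1.0]))  # 1.0 + 0.0 + 1.0 * 0.95**2

# Utility framing: someone must communicate utility_fn to the agent,
# e.g. "count the good states" over a history of world states.
print(evaluate_history(["good", "bad", "good"],
                       lambda h: h.count("good")))  # 2
```

The practical difference the thread circles around: in the reward framing the designer never has to formalize a utility function over world histories at all, which is exactly the bypass Legg's answer (quoted later in the thread) points to.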

Re: [agi] Reward function vs utility

2010-07-02 Thread Joshua Fox
by some additive constant, in the long run ;) ben On Sun, Jun 27, 2010 at 4:22 PM, Joshua Fox joshuat...@gmail.com wrote: This has probably been discussed at length, so I will appreciate a reference on this: Why does Legg's definition of intelligence (following on Hutter's AIXI and related

Re: [agi] Reward function vs utility

2010-07-04 Thread Joshua Fox
or unknown. Joshua On Fri, Jul 2, 2010 at 7:23 PM, Joshua Fox joshuat...@gmail.com wrote: I found the answer as given by Legg, *Machine Superintelligence*, p. 72, copied below. A reward function is used to bypass potential difficulty in communicating a utility function to the agent. Joshua

Re: [agi] Reward function vs utility

2010-07-05 Thread Joshua Fox
, the AI would be very pleased to knock the human out of the loop and push its own buttons. Similar comments would apply to automated reward calculations. --Abram On Sun, Jul 4, 2010 at 4:40 AM, Joshua Fox joshuat...@gmail.com wrote: Another point. I'm probably repeating the obvious

Re: [agi] A Course on Foundations of Theoretical Psychology...

2007-04-15 Thread Joshua Fox יהושע פוקס
I'll second that. I'd love to have the many fields necessary for AGI neatly summarized for me -- or should I say spoon-fed to me :-) This can come in the form of a book, a good website, an online course, or an online course with video and lecture notes. Joshua 2007/4/13, Ryan McCall [EMAIL