[agi] How long until human-level AI?

2010-09-19 Thread Ben Goertzel
Our paper "How long until human-level AI? Results from an expert assessment" (based on a survey done at AGI-09) was finally accepted for publication in the journal Technological Forecasting and Social Change ... See the preprint at http://sethbaum.com/ac/fc_AI-Experts.html -- Ben Goertzel

[agi] Video of talk I gave yesterday about Cosmism

2010-09-13 Thread Ben Goertzel
Hi all, I gave a talk in Teleplace yesterday, about Cosmist philosophy and future technology. A video of the talk is here: http://telexlr8.wordpress.com/2010/09/12/ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september-12/ I also put my practice version of the talk, that I did before

[agi] I'm giving a talk on Cosmist philosophy (and related advanced technology) in the Teleplace virtual world...

2010-09-09 Thread Ben Goertzel
and more focused on presentation/collaboration...] Thanks much to the great Giulio Prisco for setting it up ;) Ben Goertzel on The Cosmist Manifesto in Teleplace, September 12, 10am PST http://telexlr8.wordpress.com/2010/09/09/reminder-ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september

[agi] Fwd: [singularity] NEWS: Max More is Running for Board of Humanity+

2010-08-12 Thread Ben Goertzel
://www.natasha.cc/ (If you have any questions, please email me off list.) -- Ben Goertzel

Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
We have those fruit fly populations also, and analysis of their genetics refutes your claim ;p ... Where? References? The last I looked, all they had in addition to their long-lived groups were uncontrolled control groups, and no groups bred only from young flies. Michael Rose's UCI lab

Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
On Wed, Aug 11, 2010 at 11:34 PM, Steve Richfield steve.richfi...@gmail.com wrote: Ben, It seems COMPLETELY obvious (to me) that almost any mutation would shorten lifespan, so we shouldn't expect to learn much from it. Why then do the Methuselah flies live 5x as long as normal flies?

Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
, On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel b...@goertzel.org wrote: I'm speaking there, on Ai applied to life extension; and participating in a panel discussion on narrow vs. general AI... Having some interest, expertise, and experience in both areas, I find it hard to imagine much interplay

Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
I should dredge up and forward past threads with them. There are some flaws in their chain of reasoning, so that it won't be all that simple to sort the few relevant from the many irrelevant mutations. There is both a huge amount of noise, and irrelevant adaptations to their environment and

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC CTO, Genescient Corp Vice Chairman, Humanity+ Advisor, Singularity University and Singularity Institute External Research Professor, Xiamen University

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
The human visual system doesn't evolve like that on the fly. This can be proven by the fact that we all see the same visual illusions. We all exhibit the same visual limitations in the same way. There is much evidence that the system doesn't evolve accidentally. It has a limited set of rules

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
for AGI, but I think they're only a moderate portion of the problem, and not the hardest part... Which is? *From:* Ben Goertzel b...@goertzel.org *Sent:* Monday, August 09, 2010 4:57 PM *To:* agi agi@v2.listbox.com *Subject:* Re: [agi] How To Create General AI Draft2 On Mon, Aug 9, 2010

Re: [agi] Anyone going to the Singularity Summit?

2010-08-09 Thread Ben Goertzel

[agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Ben Goertzel
Hi, A fellow AGI researcher sent me this request, so I figured I'd throw it out to you guys I'm putting together an AGI pitch for investors and thinking of low hanging fruit applications to argue for. I'm intentionally not involving any mechanics (robots, moving parts, etc.). I'm

Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Ben Goertzel
-- *From:* Ben Goertzel b...@goertzel.org *To:* agi agi@v2.listbox.com *Sent:* Sat, August 7, 2010 9:10:23 PM *Subject:* [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps Hi, A fellow AGI researcher sent me this request, so I figured I'd

[agi] Brief mention of bio-AGI in the Boston Globe...

2010-08-02 Thread Ben Goertzel

Re: [agi] AGI Alife

2010-07-27 Thread Ben Goertzel
have been presented by Ben Goertzel and are also another topic of this forum. There are other approaches in AGI that use some digital evolutionary approach for AGI. For me it is a clear clue that both are related in some instance. By ALife I mean the life-as-it-could-be approach (not simulate

Re: [agi] Pretty worldchanging

2010-07-24 Thread Ben Goertzel

[agi] Cosmist Manifesto available via Amazon.com

2010-07-21 Thread Ben Goertzel
Hi all, My new futurist tract The Cosmist Manifesto is now available on Amazon.com, courtesy of Humanity+ Press: http://www.amazon.com/gp/product/0984609709/ Thanks to Natasha Vita-More for the beautiful cover, and David Orban for helping make the book happen... -- Ben -- Ben Goertzel, PhD

[agi] Re: Cosmist Manifesto available via Amazon.com

2010-07-21 Thread Ben Goertzel
Oh... and, a PDF version of the book is also available for free at http://goertzel.org/CosmistManifesto_July2010.pdf ;-) ... ben On Tue, Jul 20, 2010 at 11:30 PM, Ben Goertzel b...@goertzel.org wrote: Hi all, My new futurist tract The Cosmist Manifesto is now available on Amazon.com

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Ben Goertzel

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.com wrote: Abram, Solomonoff Induction would produce poor predictions if it could be used to compute them. Solomonoff induction is a mathematical, not verbal, construct. Based on the most obvious mapping from the verbal terms you've

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
On Fri, Jul 9, 2010 at 8:38 AM, Matt Mahoney matmaho...@yahoo.com wrote: Ben Goertzel wrote: Secondly, since it cannot be computed it is useless. Third, it is not the sort of thing that is useful for AGI in the first place. I agree with these two statements The principle of Solomonoff

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
of that paper do you think is wrong? thx ben On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer jimbro...@gmail.com wrote: On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote: If you're going to argue against a mathematical theorem, your argument must be mathematical not verbal. Please

Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
or inference that works for everything! Dave On Fri, Jul 9, 2010 at 10:49 AM, Ben Goertzel b...@goertzel.org wrote: To make this discussion more concrete, please look at http://www.vetta.org/documents/disSol.pdf Section 2.5 gives a simple version of the proof that Solomonoff induction

[agi] My Sing. U lecture on AGI blogged at Wired UK:

2010-07-09 Thread Ben Goertzel
http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai

Re: [agi] My Sing. U lecture on AGI blogged at Wired UK:

2010-07-09 Thread Ben Goertzel
:46 PM, Ben Goertzel b...@goertzel.org wrote: http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai

[agi] New KurzweilAI.net site... with my silly article & sillier chatbot ;-p ;) ....

2010-07-05 Thread Ben Goertzel
;-) -- Ben

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Ben Goertzel

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
mentioned in the coming days and weeks. Dave

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
to solve. The theory has been there a while... How to effectively implement it in a general way though, as far as I can tell, has never been solved. Dave On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel b...@goertzel.org wrote: Hi, I certainly agree with this method, but of course it's

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
To put it more succinctly, Dave, Ben & Hutter are doing the wrong subject - narrow AI. Looking for the one right prediction/explanation is narrow AI. Being able to generate more and more possible explanations, which could all be valid, is AGI. The former is rational, uniform thinking. The

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
where they may. On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote: Hutter's AIXI for instance works [very roughly speaking] by choosing the most compact program that, based on historical data, would have yielded maximum reward ... and there it is! What did I see? Example

Re: [agi] Reward function vs utility

2010-06-27 Thread Ben Goertzel
of the difference between the two types of functions here? Joshua

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
to provide, and what types to put adjacent to what other types, rather than the more detailed concept now usually thought to exist. Thanks for helping me wring my thought out here. Steve = On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel b...@goertzel.org wrote: Hi Steve, A few

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
problem) indicate, these problems can be as simple and accessible as fairly easy narrow AI problems. *From:* Ben Goertzel b...@goertzel.org *Sent:* Sunday, June 27, 2010 7:33 PM *To:* agi agi@v2.listbox.com *Subject:* Re: [agi] Huge Progress on the Core of AGI That's a rather bizarre

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield steve.richfi...@gmail.comwrote: Ben, On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote: know what dimensional analysis is, but it would be great if you could give an example of how it's useful for everyday commonsense

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ben Goertzel

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
, 2009 at 5:56 AM, William Pearson wil.pear...@gmail.com wrote: 2009/1/9 Ben Goertzel b...@goertzel.org: This is an attempt to articulate a virtual world infrastructure that will be adequate for the development of human-level AGI http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf goertzel.org

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Hi, Since I can now get to the paper some further thoughts. Concepts that would seem hard to form in your world is organic growth and phase changes of materials. Also naive chemistry would seem to be somewhat important (cooking, dissolving materials, burning: these are things that a

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI b...@goertzel.org This is no place to stop -- half way between ape and angel -- Benjamin Disraeli

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
... On Tue, Jan 13, 2009 at 1:13 PM, Philip Hunt cabala...@googlemail.com wrote: 2009/1/9 Ben Goertzel b...@goertzel.org: Hi all, I intend to submit the following paper to JAGI shortly, but I figured I'd run it past you folks on this list first, and incorporate any useful feedback into the draft I

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
that you can't simulate the high complexity of thousands of computers and human users with anything less than that. Simple problems have simple solutions, but that's not AGI. -- Matt Mahoney, matmaho...@yahoo.com --- On Fri, 1/9/09, Ben Goertzel b...@goertzel.org wrote: From: Ben Goertzel b

Re: [agi] [WAS The Smushaby] The Logic of Creativity

2009-01-13 Thread Ben Goertzel

[agi] initial reaction to A2I2's call center product

2009-01-12 Thread Ben Goertzel
AGI company A2I2 has released a product for automating call center functionality, see... http://www.smartaction.com/index.html Based on reading the website here is my initial reaction Certainly, automating a higher and higher percentage of call center functionality is a worthy goal, and a place

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Ben Goertzel

[agi] time-sensitive issue: voting members sought to participate in upcoming election for H+ (World Transhumanist Association)

2009-01-11 Thread Ben Goertzel
at the URL: Sonia Arrison, George Dvorsky, Patri Friedman, Ben Goertzel (big surprise), Stephane Gounari, Todd Huffman, Jonas Lamis, and Mike LaTorra. Sorry for the short notice, but if you see this in time and have the interest, I hope you'll become a member by tonight so that you can vote next

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
On Sat, Jan 10, 2009 at 4:27 PM, Nathan Cook nathan.c...@gmail.com wrote: What about vibration? We have specialized mechanoreceptors to detect vibration (actually vibration and pressure - presumably there's processing to separate the two). It's vibration that lets us feel fine texture, via the

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
The model feels underspecified to me, but I'm OK with that, the ideas conveyed. It doesn't feel fair to insist there's no fluid dynamics modeled though ;-) Yes, the next step would be to write out detailed equations for the model. I didn't do that in the paper because I figured that would be

[agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
virtual world infrastructure an effective AGI preschool would minimally require. thx Ben G -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI b...@goertzel.org I intend to live forever, or die trying. -- Groucho Marx

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
It's actually mentioned there, though not emphasized... there's a section on senses... ben g On Fri, Jan 9, 2009 at 8:10 PM, Eric Burton brila...@gmail.com wrote: Goertzel this is an interesting line of investigation. What about in world sound perception? On 1/9/09, Ben Goertzel b

Re: [agi] The Smushaby of Flatway.

2009-01-07 Thread Ben Goertzel
If it was just a matter of writing the code, then it would have been done 50 years ago. if proving Fermat's Last theorem was just a matter of doing math, it would have been done 150 years ago ;-p obviously, all hard problems that can be solved have already been solved... ???

Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
wil.pear...@gmail.comwrote: 2008/12/29 Ben Goertzel b...@goertzel.org: Hi, I expanded a previous blog entry of mine on hypercomputation and AGI into a conference paper on the topic ... here is a rough draft, on which I'd appreciate commentary from anyone who's knowledgeable

Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
I'm heading off on a vacation for 4-5 days [with occasional email access] and will probably respond to this when i get back ... just wanted to let you know I'm not ignoring the question ;-) ben On Tue, Dec 30, 2008 at 1:26 PM, William Pearson wil.pear...@gmail.comwrote: 2008/12/30 Ben Goertzel

[agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
, imitation or intuition... -- Ben G

Re: [agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
... -- ben g On Mon, Dec 29, 2008 at 4:18 PM, J. Andrew Rogers and...@ceruleansystems.com wrote: On Dec 29, 2008, at 10:45 AM, Ben Goertzel wrote: I expanded a previous blog entry of mine on hypercomputation and AGI into a conference paper on the topic ... here is a rough draft, on which I'd

Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Ben Goertzel

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
...@cogical.com wrote: On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote: I wrote down my thoughts on this in a little more detail here (with some pastings from these emails plus some new info): http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html -- Ben On Sat, Dec 27, 2008 at 8:28 AM, Ben Goertzel b...@goertzel.org wrote: David, Good point... I'll revise the essay to account for it... The truth is, we just don't know -- but in taking the virtual world

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
Most compression tests are like defining intelligence as the ability to catch mice. They measure the ability of compressors to compress specific files. This tends to lead to hacks that are tuned to the benchmarks. For the generic intelligence test, all you know about the source is that it has

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel
Suppose I take the universal prior and condition it on some real-world training data. For example, if you're interested in real-world vision, take 1000 frames of real video, and then the proposed probability distribution is the portion of the universal prior that explains the real video.

Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-26 Thread Ben Goertzel
Much of AI and pretty much all of AGI is built on the proposition that we humans must code knowledge because the stupid machines can't efficiently learn it on their own, in short, that UNsupervised learning is difficult. No, in fact almost **no** AGI is based on this proposition. Cyc is

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel
I wrote down my thoughts on this in a little more detail here (with some pastings from these emails plus some new info): http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html On Sat, Dec 27, 2008 at 12:23 AM, Ben Goertzel b...@goertzel.org wrote

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
of the theoretical speculations one reads in the neuroscience literature... and I can't really think of any recent neuroscience data that refutes any of his key hypotheses... On Tue, Dec 23, 2008 at 10:36 AM, Richard Loosemore r...@lightlink.comwrote: Ben Goertzel wrote: Richard, I'm curious what you

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel

Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Ben Goertzel

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
Hi, So if the researcher on this project have been learning some of your ideas, and some of the better speculative thinking and neural simulations that have been done in brains science --- either directly or indirectly --- it might be incorrect to say that there is no 'design for a thinking

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote: Ben, Thanks for the reply. It is a shame the brain science people aren't more interested in AGI. It seems to me there is a lot of potential for cross-fertilization. I don't think many of these folks have a

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
...@pgrad.unimelb.edu.au] *Sent:* Monday, December 22, 2008 6:19 PM *To:* agi@v2.listbox.com *Subject:* Re: [agi] SyNAPSE might not be a joke was Building a machine that can learn from experience Ben Goertzel wrote: On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Ben Goertzel

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 8:01 AM, Derek Zahn derekz...@msn.com wrote: Ben: Right. My intuition is that we don't need to simulate the dynamics of fluids, powders and the like in our virtual world to make it adequate for teaching AGIs humanlike, human-level AGI. But this could be wrong.

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
It's an interesting idea, but I suspect it too will rapidly break down. Which activities can be known about in a rich, better-than-blind-Cyc way *without* a knowledge of objects and object manipulation? How can an agent know about reading a book, for example, if it can't pick up and

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 10:44 AM, Philip Hunt cabala...@googlemail.comwrote: 2008/12/20 Ben Goertzel b...@goertzel.org: Well, it's completely obvious to me, based on my knowledge of virtual worlds and robotics, that building a high quality virtual world is orders of magnitude easier

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
and odd problem... ben On Sat, Dec 20, 2008 at 11:42 AM, Philip Hunt cabala...@googlemail.comwrote: 2008/12/20 Ben Goertzel b...@goertzel.org: It doesn't have to be humanoid ... but apart from rolling instead of walking, I don't see any really significant simplifications obtainable from

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
Consider an object, such as a sock or a book or a cat. These objects can all be recognised by young children, even though the visual input coming from them changes with the angle they're viewed from. More fundamentally, all these objects can change shape, yet humans can still effortlessly

Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Ben Goertzel

Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Ben Goertzel
colleagues in the past who favored such a style of discourse ;-) ben On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com wrote: On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel b...@goertzel.org wrote: IMHO, Mike Tintner is not often rude, and is not exactly a troll because I feel he

Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Ben Goertzel
In my opinion you are being too generous and your generosity is being taken advantage of. That is quite possible; it's certainly happened before... As well as trying to be nice to Mike, you have to bear list quality in mind and decide whether his ramblings are of some benefit to all the

[agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
Colin, It is of course possible that human intelligence relies upon electromagnetic-field sensing that goes beyond the traditional five senses. However, this argument Functionally, the key behaviour I use to test my approach is scientific behaviour. If you sacrifice the full EM field, an AGI

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
beyond the scope of contemporary AGI designs (at least according to some experts, like me), which is what makes it more interesting in the present moment... ben g -- Ben G On Fri, Dec 19, 2008 at 5:12 PM, Philip Hunt cabala...@googlemail.comwrote: 2008/12/19 Ben Goertzel b...@goertzel.org: What

Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
, 2008 at 5:29 PM, Richard Loosemore r...@lightlink.comwrote: Ben Goertzel wrote: yeah ... that's not a matter of the English language but rather a matter of the American Way ;-p Through working with many non-Americans I have noted that what Americans often intend as a playful obnoxiousness

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
* d) 75 years of computer-based-AGI failure - has sent me a message that no amount of hubris on my part can overcome. As a scientist I must be informed by empirical outcomes, not dogma or wishful thinking. * That argument really is a foolish one not worth paying attention to. I mean, it

Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
know. In that case I'll try my best to learn his way of communication, at least when talking to British and American people --- who knows, it may even improve my marketing ability. ;-) Pei On Fri, Dec 19, 2008 at 7:01 PM, Ben Goertzel b...@goertzel.org wrote: And when a Chinese doesn't

Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 7:51 PM, Ben Goertzel b...@goertzel.org wrote: Well, I think you might have overreacted to his writing style for cultural reasons However, I also think that -- to be Americanly blunt -- you're very unlikely to learn anything from conversing with Mike, On AGI

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
You can't deliver any evidence at all that the processes I am investigating are invalid. True, and you can't deliver any evidence that once AGIs reach an IQ of 1000, aliens will contact them and welcome them to the Trans-Universal Club of Really Clever Beings. In fact, I won't be at all

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
You, like the rest of us, are incapable of discussing anything else. Email cannot carry non-algorithmic ideas or concepts. Just because you do not consider your system algorithmic does not mean that it is not. Nature is algorithmic, your chip is algorithmic, everything is algorithmic. That

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 8:42 PM, Philip Hunt cabala...@googlemail.comwrote: 2008/12/20 Ben Goertzel b...@goertzel.org: I.e., I doubt one needs serious fluid dynamics in one's simulation ... I doubt one needs bodies with detailed internal musculature ... but I think one does need basic

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
physics... ben g On Fri, Dec 19, 2008 at 8:56 PM, Philip Hunt cabala...@googlemail.comwrote: 2008/12/20 Ben Goertzel b...@goertzel.org: 3. to provide a toy domain for the AI to think about and become proficient in. Not just to become proficient in the domain, but become proficient

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 9:10 PM, J. Andrew Rogers and...@ceruleansystems.com wrote: On Dec 19, 2008, at 5:35 PM, Ben Goertzel wrote: The problem is that **there is no way for science to ever establish the existence of a nonalgorithmic process**, because science deals only with finite sets

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Ben Goertzel b...@goertzel.org: Baking a cake is a harder example. An AGI trained in a virtual world could certainly follow a recipe to make a passable cake. But it would never learn to be a **really good** baker in the virtual world, unless the virtual world were fabulously

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
deaf, I suppose ;-) On Fri, Dec 19, 2008 at 9:42 PM, Ben Goertzel b...@goertzel.org wrote: Ahhh... ***that's*** why everyone always hates my cakes!!! I never realized you were supposed to **taste** the stuff ... I thought it was just supposed to look funky after you throw it in somebody's
