[agi] How long until human-level AI?

2010-09-19 Thread Ben Goertzel
Our paper "How long until human-level AI? Results from an expert assessment" (based on a survey done at AGI-09) was finally accepted for publication, in the journal "Technological Forecasting & Social Change" ... See the preprint at http://sethbaum.com/ac/fc_AI-Exper

[agi] Video of talk I gave yesterday about Cosmism

2010-09-13 Thread Ben Goertzel
Hi all, I gave a talk in Teleplace yesterday, about Cosmist philosophy and future technology. A video of the talk is here: http://telexlr8.wordpress.com/2010/09/12/ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september-12/ I also put my "practice version" of the talk, that I

[agi] I'm giving a talk on Cosmist philosophy (and related advanced technology) in the "Teleplace" virtual world...

2010-09-09 Thread Ben Goertzel
cond Life but simpler and more focused on presentation/collaboration...] Thanks much to the great Giulio Prisco for setting it up ;) Ben Goertzel on The Cosmist Manifesto in Teleplace, September 12, 10am PST http://telexlr8.wordpress.com/2010/09/09/reminder-ben-goertzel-on-the-cosmist-manifes

[agi] Fwd: [singularity] NEWS: Max More is Running for Board of Humanity+

2010-08-12 Thread Ben Goertzel
ou have any questions, please email me off list.) -- B

Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
On Wed, Aug 11, 2010 at 11:34 PM, Steve Richfield wrote: > Ben, > > It seems COMPLETELY obvious (to me) that almost any mutation would shorten > lifespan, so we shouldn't expect to learn much from it. Why then do the Methuselah flies live 5x as long as normal flies? You're conjecturing this i

Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
> We have those fruit fly populations also, and analysis of their genetics >> refutes your claim ;p ... >> > > Where? References? The last I looked, all they had in addition to their > long-lived groups were uncontrolled control groups, and no groups bred only > from young flies. > Michael Rose's

Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
> I should dredge up and forward past threads with them. There are some flaws > in their chain of reasoning, so that it won't be all that simple to sort the > few relevant from the many irrelevant mutations. There is both a huge amount > of noise, and irrelevant adaptations to their environment and

Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
> On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel wrote: > >> >> I'm speaking there, on AI applied to life extension; and participating in >> a panel discussion on narrow vs. general AI... >> >> Having some interest, expertise, and experience in both areas,

Re: [agi] Anyone going to the Singularity Summit?

2010-08-09 Thread Ben Goertzel
-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC CTO,

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
think they're only a moderate portion of the problem, and not the > hardest part... > > Which is? > > > *From:* Ben Goertzel > *Sent:* Monday, August 09, 2010 4:57 PM > *To:* agi > *Subject:* Re: [agi] How To Create General AI Draft2 > > > > On Mon, Aug 9,

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
> > The human visual system doesn't evolve like that on the fly. This can be > proven by the fact that we all see the same visual illusions. We all exhibit > the same visual limitations in the same way. There is much evidence that the > system doesn't evolve accidentally. It has a limited set of ru

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
processing? > -- Ben Goertzel, Ph

Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC CTO,

Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Ben Goertzel
, matmaho...@yahoo.com > > > ------ > *From:* Ben Goertzel > *To:* agi > *Sent:* Sat, August 7, 2010 9:10:23 PM > *Subject:* [agi] Help requested: Making a list of (non-robotic) AGI low > hanging fruit apps > > Hi, > > A fellow AGI researcher sent me

[agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Ben Goertzel
Hi, A fellow AGI researcher sent me this request, so I figured I'd throw it out to you guys: I'm putting together an AGI pitch for investors and thinking of low hanging fruit applications to argue for. I'm intentionally not involving any mechanics (robots, moving parts, etc.). I'm focusin

[agi] Brief mention of bio-AGI in the Boston Globe...

2010-08-02 Thread Ben Goertzel

Re: [agi] AGI & Alife

2010-07-27 Thread Ben Goertzel
y learning and analytical learning). In AGI 2010, virtual pets >>have been presented by Ben Goertzel and are also another topic of this forum. >>There are other approaches in AGI that use some digital evolutionary >>approach for AGI. For me it is a clear clue that both are relat

Re: [agi] Pretty worldchanging

2010-07-24 Thread Ben Goertzel
-- Ben G -- Ben Goertzel,

[agi] Re: "Cosmist Manifesto" available via Amazon.com

2010-07-21 Thread Ben Goertzel
Oh... and, a PDF version of the book is also available for free at http://goertzel.org/CosmistManifesto_July2010.pdf ;-) ... ben On Tue, Jul 20, 2010 at 11:30 PM, Ben Goertzel wrote: > Hi all, > > My new futurist tract "The Cosmist Manifesto" is now available on > A

[agi] "Cosmist Manifesto" available via Amazon.com

2010-07-20 Thread Ben Goertzel
Hi all, My new futurist tract "The Cosmist Manifesto" is now available on Amazon.com, courtesy of Humanity+ Press: http://www.amazon.com/gp/product/0984609709/ Thanks to Natasha Vita-More for the beautiful cover, and David Orban for helping make the book happen... -- Ben -- Ben Goe

Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Ben Goertzel
(x > 0.9) && !(x > 1.1)   expanding gives (getting rid of "!" and "&&") > (x > 0.9) == ((x > 1.1) == 0) == 1    note "!x" can be defined in terms > of "==" like so x == 0. > > (b) is a generalisation, and expansion of the
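A minimal sketch (in Python, not from the original thread; the helper names are illustrative) checking that the quoted rewriting agrees with the ordinary boolean form. Note the folding of "&&" into "==" works for these particular predicates because (x > 1.1) implies (x > 0.9); it is not a general identity.

def with_and_not(x):
    return (x > 0.9) and not (x > 1.1)

def equality_only(x):
    # "!" eliminated: not b is written as (b == 0); the conjunction is folded into equalities
    return ((x > 0.9) == ((x > 1.1) == 0)) == 1

# the two forms agree on a spread of sample inputs
for x in [0.5, 0.95, 1.0, 1.1, 1.2, 5.0]:
    assert with_and_not(x) == equality_only(x)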

Re: [agi] My Sing. U lecture on AGI blogged at Wired UK:

2010-07-09 Thread Ben Goertzel
com -- Ben Goertzel, PhD CEO, Novamente

[agi] My Sing. U lecture on AGI blogged at Wired UK:

2010-07-09 Thread Ben Goertzel
http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai

Re: [agi] Solomonoff Induction is Not "Universal" and Probability is not "Prediction"

2010-07-09 Thread Ben Goertzel
> This is perfectly OK! You don't have to find a silver bullet method of > induction or inference that works for everything! > > Dave > > > > On Fri, Jul 9, 2010 at 10:49 AM, Ben Goertzel wrote: > >> >> To make this discussion more concrete, please lo

Re: [agi] Solomonoff Induction is Not "Universal" and Probability is not "Prediction"

2010-07-09 Thread Ben Goertzel
paper do you think is wrong? thx ben On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer wrote: > On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel wrote: > > If you're going to argue against a mathematical theorem, your argument must > be mathematical not verbal. Please explain one of >

Re: [agi] Solomonoff Induction is Not "Universal" and Probability is not "Prediction"

2010-07-09 Thread Ben Goertzel
On Fri, Jul 9, 2010 at 8:38 AM, Matt Mahoney wrote: > Ben Goertzel wrote: >> > Secondly, since it cannot be computed it is useless. Third, it is not >> the sort of thing that is useful for AGI in the first place. >> > > > I agree with these two statement

Re: [agi] Solomonoff Induction is Not "Universal" and Probability is not "Prediction"

2010-07-09 Thread Ben Goertzel
On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer wrote: > Abram, > Solomonoff Induction would produce poor "predictions" if it could be used to > compute them. > Solomonoff induction is a mathematical, not verbal, construct. Based on the most obvious mapping from the verbal terms you've used above into
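For reference, one standard formulation of the construct under discussion (a textbook statement, not quoted from the thread): with U a universal prefix Turing machine and the sum taken over programs p whose output begins with the string in question,

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad
\Pr(x_{n+1} \mid x_1 \dots x_n) = \frac{M(x_1 \dots x_n x_{n+1})}{M(x_1 \dots x_n)}

so "prediction" here just means taking ratios of this prior over extensions of the observed sequence.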

[agi] New KurzweilAI.net site... with my silly article & sillier chatbot ;-p ;) ....

2010-07-05 Thread Ben Goertzel
etty funny ;-) -- Ben -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC CTO, Genescient Corp Vice Chairman, Humanity+ Advisor, Singularity University and Singularity Institute External Research Professor, Xiamen University, China b...@goertzel.org " “When nothing seems to help,

Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Ben Goertzel
severe problem for contemporary AGI. > > Jim Bromer

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield wrote: > Ben, > > On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel wrote: > >> know what dimensional analysis is, but it would be great if you could >> give an example of how it's useful for everyday commonsense reasoni

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
rds the centre of the court so that you will be prepared to >> cover a ball to the extreme, near right side - or do you move more slowly? >> If you don't move rapidly, you won't be able to cover that ball if it comes. >> But if you do move rapidly, your opponent can play

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
ir own complex rules regarding > what other types of neurons they can connect to, and how they process > information. "Architecture" might involve deciding how many of each type to > provide, and what types to put adjacent to what other types, rather than the > more detailed con

Re: [agi] Reward function vs utility

2010-06-27 Thread Ben Goertzel
. > > What is the real significance of the difference between the two types of > functions here? > > Joshua

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
ily correct. Once I convey my vision, > then let the chips fall where they may. > > On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel wrote: > >> Hutter's AIXI for instance works [very roughly speaking] by choosing the >> most compact program that, based on historical data,
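For context, the standard statement of the model being paraphrased here (from the AIXI literature, not from this thread): the agent picks its next action by maximizing expected future reward under a mixture of all programs consistent with the interaction history, each weighted by 2 to the minus its length,

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} [\, r_k + \cdots + r_m \,] \sum_{q \,:\, U(q, a_1 \dots a_m) = o_1 r_1 \dots o_m r_m} 2^{-\ell(q)}

which matches the rough "most compact program consistent with historical data" description above.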

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
e entire problem of dealing with complicated situations is that these >> narrow AI methods haven't really worked. That is the core issue. >> >> Jim Bromer

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
> > To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject > - narrow AI. Looking for the one right prediction/explanation is narrow > AI. Being able to generate more and more possible explanations, which could > all be valid, is AGI. The former is rational, uniform thinking.

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
hoping > to solve. The theory has been there a while... How to effectively implement > it in a general way though, as far as I can tell, has never been solved. > > Dave > > On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel wrote: > >> >> Hi, >> >> I certain

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
able. > > I thought I'd share my progress with you all. I'll be testing the ideas on > test cases such as the ones I mentioned in the coming days and weeks. > > Dave

Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ben Goertzel

Re: [agi] [WAS The Smushaby] The Logic of Creativity

2009-01-13 Thread Ben Goertzel
f houses and to pictures of flying, would have the >> ability to eventually draw a picture of a flying house (along with a >> lot of other creative efforts that you have not even thought of). But >> the thing is that I can do this without using advanced AGI >> technique

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
't find. It's not that the test set > developers weren't careful. They spent probably $1 million developing it > (several people over 2 years). It's that you can't simulate the high > complexity of thousands of computers and human users with anything less than >

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
't... On Tue, Jan 13, 2009 at 1:13 PM, Philip Hunt wrote: > 2009/1/9 Ben Goertzel : >> Hi all, >> >> I intend to submit the following paper to JAGI shortly, but I figured >> I'd run it past you folks on this list first, and incorporate any >> useful feedback in

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI b...@goertzel.org "This is no place to stop -- half way between ape and angel" -- Benjamin Disraeli

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Hi, > Since I can now get to the paper, some further thoughts. Concepts that > would seem hard to form in your world are organic growth and phase > changes of materials. Also naive chemistry would seem to be somewhat > important (cooking, dissolving materials, burning: these are things > that a pre-

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
ue, Jan 13, 2009 at 5:56 AM, William Pearson wrote: > 2009/1/9 Ben Goertzel : >> This is an attempt to articulate a virtual world infrastructure that >> will be adequate for the development of human-level AGI >> >> http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf &g

Re: [agi] fuzzy-probabilistic logic again

2009-01-12 Thread Ben Goertzel
> The last one ("John has cybersex with 1000 women") is very hard to > think of a replacement that is equally convincing... I'm not offended by sexual references at all ... but I have to say, this comment of yours bespeaks a VERY highly biased imagination on your pard 8-DD ben -

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Ben Goertzel
-- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of

[agi] initial reaction to A2I2's call center product

2009-01-12 Thread Ben Goertzel
AGI company A2I2 has released a product for automating call center functionality, see... http://www.smartaction.com/index.html Based on reading the website, here is my initial reaction: Certainly, automating a higher and higher percentage of call center functionality is a worthy goal, and a place

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Ben Goertzel

[agi] time-sensitive issue: voting members sought to participate in upcoming election for H+ (World Transhumanist Association)

2009-01-11 Thread Ben Goertzel
nd supporting the first eight candidates listed at the URL: Sonia Arrison, George Dvorsky, Patri Friedman, Ben Goertzel (big surprise), Stephane Gounari, Todd Huffman, Jonas Lamis, and Mike LaTorra. Sorry for the short notice, but if you see this in time and have the interest, I hope you'll be

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Ben Goertzel
> > I outlined the basic principle in this paper: > http://www.comirit.com/papers/commonsense07.pdf > Since then, I've changed some of the details a bit (some were described in > my AGI-08 paper), added convex hulls and experimented with more laws of > physics; but the bas

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
> The model feels underspecified to me, but I'm OK with that, the ideas > conveyed. It doesn't feel fair to insist there's no fluid dynamics > modeled though ;-) Yes, the next step would be to write out detailed equations for the model. I didn't do that in the paper because I figured that would b

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
On Sat, Jan 10, 2009 at 4:27 PM, Nathan Cook wrote: > What about vibration? We have specialized mechanoreceptors to detect > vibration (actually vibration and pressure - presumably there's processing > to separate the two). It's vibration that lets us feel fine texture, via the > stick-slip fricti

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
It's actually mentioned there, though not emphasized... there's a section on senses... ben g On Fri, Jan 9, 2009 at 8:10 PM, Eric Burton wrote: > Goertzel this is an interesting line of investigation. What about in > world sound perception? > > On 1/9/09, Ben Goer

[agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
lying virtual world infrastructure an effective AGI preschool would minimally require. thx Ben G -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI b...@goertzel.org "I intend to live forever, or die trying." -- Groucho Marx

Re: [agi] The Smushaby of Flatway.

2009-01-07 Thread Ben Goertzel
> If it was just a matter of writing the code, then it would have been done > 50 years ago. if proving Fermat's Last Theorem were just a matter of doing math, it would have been done 150 years ago ;-p obviously, all hard problems that can be solved have already been solved... ??? --

Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
I'm heading off on a vacation for 4-5 days [with occasional email access] and will probably respond to this when i get back ... just wanted to let you know I'm not ignoring the question ;-) ben On Tue, Dec 30, 2008 at 1:26 PM, William Pearson wrote: > 2008/12/30 Ben Goertzel : >

Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
iam Pearson wrote: > 2008/12/29 Ben Goertzel : > > > > Hi, > > > > I expanded a previous blog entry of mine on hypercomputation and AGI into > a > > conference paper on the topic ... here is a rough draft, on which I'd > > appreciate commentary from anyon

Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Ben Goertzel

Re: [agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
seem... -- ben g On Mon, Dec 29, 2008 at 4:18 PM, J. Andrew Rogers < and...@ceruleansystems.com> wrote: > > On Dec 29, 2008, at 10:45 AM, Ben Goertzel wrote: > >> I expanded a previous blog entry of mine on hypercomputation and AGI into >> a conference paper on the topic ...

[agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
such as chance, imitation or intuition... -- Ben G -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI b...@goertzel.org "I intend to live forever, or die trying." -- Groucho Marx

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
, or sand, or throws it at another ball in mid-air, or > (as we've partly discussed) plays with it like an infant ?]

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
hysical world. http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html -- Ben On Sat, Dec 27, 2008 at 8:28 AM, Ben Goertzel wrote: > > David, > > Good point... I'll revise the essay to account for it... > > The truth is, we just don't know

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
Sat, Dec 27, 2008 at 6:46 AM, David Hart wrote: > On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel wrote: > >> >> I wrote down my thoughts on this in a little more detail here (with some >> pastings from these emails plus some new info): >> >> >> http://multiver

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel
I wrote down my thoughts on this in a little more detail here (with some pastings from these emails plus some new info): http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html On Sat, Dec 27, 2008 at 12:23 AM, Ben Goertzel wrote: > > >> Suppos

Re: [agi] Introducing Steve's "Theory of Everything" in cognition.

2008-12-26 Thread Ben Goertzel
> Much of AI and pretty much all of AGI is built on the proposition that we > humans must code knowledge because the stupid machines can't efficiently > learn it on their own, in short, that UNsupervised learning is difficult. > No, in fact almost **no** AGI is based on this proposition. Cyc is b

Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel
> > Suppose I take the universal prior and condition it on some real-world > training data. For example, if you're interested in real-world > vision, take 1000 frames of real video, and then the proposed > probability distribution is the portion of the universal prior that > explains the real vide
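A worked form of the conditioning step described above (standard notation, not quoted from the thread): writing x for the observed data (e.g., the 1000 real video frames) and M for the universal a priori probability, the proposed distribution over continuations y is the conditional

M(y \mid x) = \frac{M(xy)}{M(x)} = \frac{\sum_{p \,:\, U(p) = xy*} 2^{-\ell(p)}}{\sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}}

i.e. the portion of the universal prior carried by programs that reproduce the real video, renormalized.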

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
> Most compression tests are like defining intelligence as the ability to > catch mice. They measure the ability of compressors to compress specific > files. This tends to lead to hacks that are tuned to the benchmarks. For the > generic intelligence test, all you know about the source is that it h

Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
rom > it all, and by all means let me know if you eventually come to a different > conclusion. > > > > > Richard Loosemore

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
oronic. If you can > deliver general intelligence then you are not delivering a model of it, you > are delivering *actual* general intelligence. To use models as a basis for > it you need to have a scientific basis for a claim that the models that have > been used to implement the AGI can (in the

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
f the theoretical speculations one reads in the neuroscience literature... and I can't really think of any recent neuroscience data that refutes any of his key hypotheses... On Tue, Dec 23, 2008 at 10:36 AM, Richard Loosemore wrote: > Ben Goertzel wrote: > >> >> Richard, >>

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
usion. >> >> >> >> Richard Loosemore

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
on for it. In addition, if you attend > events at either MIT's brain study center or its AI center, you will find > many of the people who are there are from the other of these two centers, > and that there is a considerable degree of cross-fertilization there that I > have hear

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter wrote: > Ben, > > > > Thanks for the reply. > > > > It is a shame the brain science people aren't more interested in AGI. It > seems to me there is a lot of potential for cross-fertilization. > I don't think many of these folks have a principled or

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
Hi, > > > So if the researcher on this project have been learning some of your ideas, > and some of the better speculative thinking and neural simulations that have > been done in brains science --- either directly or indirectly --- it might > be incorrect to say that "there is no 'design for a th

Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Ben Goertzel

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Ben Goertzel
ason. There's always 2009! You never > > > know > > > > You talked about building your 'chips'. Just curious what are you > > working on? Is it hardware-related? > > > > YKY > > > > > > --

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
> > > Consider an object, such as a sock or a book or a cat. These objects > can all be recognised by young children, even though the visual input > coming from them changes depending on what angle they're viewed at. More > fundamentally, all these objects can change shape, yet humans can > still effortl

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
s form), current robots face a hard and odd problem... ben On Sat, Dec 20, 2008 at 11:42 AM, Philip Hunt wrote: > 2008/12/20 Ben Goertzel : > > > > It doesn't have to be humanoid ... but apart from rolling instead of > > walking, > > I don't see any really sig

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 10:44 AM, Philip Hunt wrote: > 2008/12/20 Ben Goertzel : > > > > Well, it's completely obvious to me, based on my knowledge of virtual > worlds > > and robotics, that building a high quality virtual world is orders of > > magnitude eas

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
> > It's an interesting idea, but I suspect it too will rapidly break down. > Which activities can be known about in a rich, better-than-blind-Cyc way > *without* a knowledge of objects and object manipulation? How can an agent > know about reading a book, for example, if it can't pick up and manip

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 8:01 AM, Derek Zahn wrote: > Ben: > > > Right. My intuition is that we don't need to simulate the dynamics > > of fluids, powders and the like in our virtual world to make it adequate > > for teaching AGIs humanlike, human-level AGI. But this could be > > wrong. > > I s

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
Hi, >> >> Because some folks find that they are not subjectively sufficient to >> explain everything they subjectively experience... >> > That would be more convincing if such people were to show evidence that > they understand what algorithmic processes are and can do. I'm almost > tempted to c

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
deaf, I suppose ;-) On Fri, Dec 19, 2008 at 9:42 PM, Ben Goertzel wrote: > > Ahhh... ***that's*** why everyone always hates my cakes!!! I never > realized you were supposed to **taste** the stuff ... I thought it was just > supposed to look funky after you throw it i

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Ahhh... ***that's*** why everyone always hates my cakes!!! I never realized you were supposed to **taste** the stuff ... I thought it was just supposed to look funky after you throw it in somebody's face ;-) On Fri, Dec 19, 2008 at 9:31 PM, Philip Hunt wrote: > 2008/12/20

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 9:10 PM, J. Andrew Rogers < and...@ceruleansystems.com> wrote: > > On Dec 19, 2008, at 5:35 PM, Ben Goertzel wrote: > >> The problem is that **there is no way for science to ever establish the >> existence of a nonalgorithmic process**, beca

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
s of naive physics... ben g On Fri, Dec 19, 2008 at 8:56 PM, Philip Hunt wrote: > 2008/12/20 Ben Goertzel : > > > >> > >> 3. to provide a "toy domain" for the AI to think about and become > >> proficient in. > > > > Not just to become prof

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 8:42 PM, Philip Hunt wrote: > 2008/12/20 Ben Goertzel : > > > > I.e., I doubt one needs serious fluid dynamics in one's simulation ... I > > doubt one needs bodies with detailed internal musculature ... but I think > > one does need basic N

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
> You, like the rest of us, are incapable of discussing anything else. Email > cannot carry non-algorithmic ideas or concepts. Just because you do not > consider your system "algorithmic" does not mean that it is not. Nature is > algorithmic, your chip is algorithmic, everything is algorithmic.

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
> You can't deliver any evidence at all that the processes I am investigating > are invalid. > True, and you can't deliver any evidence that once AGIs reach an IQ of 1000, aliens will contact them and welcome them to the Trans-Universal Club of Really Clever Beings. In fact, I won't be at all sur

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
eniently > doable > I mean)? > > Thanks!

Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 7:51 PM, Ben Goertzel wrote: > > Well, I think you might have overreacted to his writing style for cultural > reasons > > However, I also think that -- to be Americanly blunt -- you're very > unlikely to learn anything from conversing with Mike, O

Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
e > know. In that case I'll try my best to learn his way of communication, > at least when talking to British and American people --- who knows, it > may even improve my marketing ability. ;-) > > Pei > > On Fri, Dec 19, 2008 at 7:01 PM, Ben Goertzel wrote: > >

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
> * > d) 75 years of computer-based-AGI failure - has sent me a message that no > amount of hubris on my part can overcome. As a scientist I must be informed > by empirical outcomes, not dogma or wishful thinking. > > * > That argument really is a foolish one not worth paying attention to. I me

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
s' the balance sheet and > revenue statements. It panics about cash flows. Feels ecstatic when profit is > good. No longer do we need 'rules of incorporation'. The company literally > IS the AGI. If the company "goes bad" you take it out and shoot it. The > process of giving bi

Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
ben On Fri, Dec 19, 2008 at 5:29 PM, Richard Loosemore wrote: > Ben Goertzel wrote: > >> >> yeah ... that's not a matter of the English language but rather a matter >> of the American Way ;-p >> >> Through working with many non-Americans I have noted that w

Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
t as *obviously* far beyond the scope of contemporary AGI designs (at least according to some experts, like me), which is what makes it more interesting in the present moment... ben g -- Ben G On Fri, Dec 19, 2008 at 5:12 PM, Philip Hunt wrote: > 2008/12/19 Ben Goertzel : > > > > What

Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
Colin, It is of course possible that human intelligence relies upon electromagnetic-field sensing that goes beyond the traditional "five senses." However, this argument > Functionally, the key behaviour I use to test my approach is "scientific > behaviour". If you sacrifice the full EM field, an
