[agi] How long until human-level AI?

2010-09-19 Thread Ben Goertzel
Our paper "How long until human-level AI? Results from an expert
assessment" (based on a survey done at AGI-09) was finally accepted
for publication in the journal Technological Forecasting & Social
Change ...

See the preprint at

http://sethbaum.com/ac/fc_AI-Experts.html

-- Ben Goertzel

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
Adjunct Professor of Cognitive Science, Xiamen University, China
b...@goertzel.org

My humanity is a constant self-overcoming -- Friedrich Nietzsche


---
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/8660244-d750797a
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com


[agi] Video of talk I gave yesterday about Cosmism

2010-09-13 Thread Ben Goertzel
Hi all,

I gave a talk in Teleplace yesterday, about Cosmist philosophy and future
technology.  A video of the talk is here:

http://telexlr8.wordpress.com/2010/09/12/ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september-12/

I also put my practice version of the talk, which I did before the real
talk, online here:

http://www.vimeo.com/14930325

(The practice version is slower-paced than the Teleplace version, and lacks
the Q&A at the end, but it goes through some points in a little more depth.)

Of course, the Cosmist Manifesto book says it all in more detail ... links
to the book are given along with the first
video linked above.

thx
Ben Goertzel





[agi] I'm giving a talk on Cosmist philosophy (and related advanced technology) in the Teleplace virtual world...

2010-09-09 Thread Ben Goertzel
It's 10AM Pacific time, Sunday September 12 2010

Be there or don't ;-)

If you're interested in joining the conversation, but haven't used Teleplace
before, be sure to download it perhaps 15-30 minutes before the talk, so you
can get used to the software.  [It's much like Second Life but simpler and
more focused on presentation/collaboration...]

Thanks much to the great Giulio Prisco for setting it up ;)


Ben Goertzel on The Cosmist Manifesto in Teleplace,
September 12, 10am PST
http://telexlr8.wordpress.com/2010/09/09/reminder-ben-goertzel-on-the-cosmist-manifesto-in-teleplace-september-12-10am-pst/


thx
Ben





[agi] Fwd: [singularity] NEWS: Max More is Running for Board of Humanity+

2010-08-12 Thread Ben Goertzel
-- Forwarded message --
From: Natasha Vita-More nata...@natasha.cc
Date: Thu, Aug 12, 2010 at 1:02 PM
Subject: [singularity] NEWS: Max More is Running for Board of Humanity+
To: singularity singular...@v2.listbox.com


 Friends,

It is my pleasure to endorse Max More's candidacy for joining the Board of
Directors of Humanity+.

Today is the last day to become a member of Humanity+ in order to vote for
Max as a new Board member.   Voting opens this weekend!

Please join now!  http://humanityplus.org/join/

Thank you for your support of Max!

Natasha


Natasha Vita-More http://www.natasha.cc/

(If you have any questions, please email me off list.)



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
Adjunct Professor of Cognitive Science, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
 We have those fruit fly populations also, and analysis of their genetics
 refutes your claim ;p ...


 Where? References? The last I looked, all they had in addition to their
 long-lived groups were uncontrolled control groups, and no groups bred only
 from young flies.



Michael Rose's UCI lab has evolved flies specifically for short lifespan,
but the results may not be published yet...





Re: [agi] Anyone going to the Singularity Summit?

2010-08-11 Thread Ben Goertzel
On Wed, Aug 11, 2010 at 11:34 PM, Steve Richfield steve.richfi...@gmail.com
 wrote:

 Ben,

 It seems COMPLETELY obvious (to me) that almost any mutation would shorten
 lifespan, so we shouldn't expect to learn much from it.



Why then do the Methuselah flies live 5x as long as normal flies?  You're
conjecturing this is unrelated to the dramatically large number of SNPs with
very different frequencies in the two classes of populations???

ben





Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
I'm writing an article on the topic for H+ Magazine, which will appear in
the next couple weeks ... I'll post a link to it when it appears

I'm not advocating applying AI in the absence of new experiments, of course.
I've been working closely with Genescient, applying AI tech to analyze the
genomics of their long-lived superflies, so part of my message is about the
virtuous cycle achievable via synergizing AI data analysis with
carefully-designed experimental evolution of model organisms...

-- Ben

On Tue, Aug 10, 2010 at 7:25 AM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben,

 On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel b...@goertzel.org wrote:


 I'm speaking there, on AI applied to life extension; and participating in
 a panel discussion on narrow vs. general AI...

 Having some interest, expertise, and experience in both areas, I find it
 hard to imagine much interplay at all.

 The present challenge is wrapped up in a lack of basic information,
 resulting from insufficient funds to do the needed experiments.
 Extrapolations have already gone WAY beyond the data, and new methods to
 push extrapolations even further wouldn't be worth nearly as much as just a
 little more hard data.

 Just look at Aubrey's long list of aging mechanisms. We don't now even know
 which predominate, or which cause others. Further, there are new candidates
 arising every year, e.g. Burzynski's theory that most aging is secondary to
 methylation of DNA receptor sites, or my theory that Aubrey's entire list
 could be explained by people dropping their body temperatures later in life.
 There are LOTS of other theories, and without experimental results, there is
 absolutely no way, AI or not, to sort the wheat from the chaff.

 Note that one of the front runners, the cosmic ray theory, could easily be
 tested by simply raising some mice in deep tunnels. This is high-school
 level stuff, yet with NO significant funding for aging research, it remains
 undone.

 Note my prior posting explaining my inability even to find a source of
 used mice for kids to use in high-school anti-aging experiments, all while
 university labs are now killing their vast numbers of such mice. So long as
 things remain THIS broken, anything that isn't part of the solution simply
 becomes a part of the very big problem, AIs included.

 The best that an AI could seemingly do is to pronounce Fund and facilitate
 basic aging research and then suspend execution pending an interrupt
 indicating that the needed experiments have been done.

 Could you provide some hint as to where you are going with this?

 Steve





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] Anyone going to the Singularity Summit?

2010-08-10 Thread Ben Goertzel
 I should dredge up and forward past threads with them. There are some flaws
 in their chain of reasoning, so that it won't be all that simple to sort the
 few relevant from the many irrelevant mutations. There is both a huge amount
 of noise, and irrelevant adaptations to their environment and their
 treatment.


They have evolved many different populations in parallel, using the same
fitness criterion.  This provides powerful noise filtering.
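The replicate argument can be made concrete with a toy calculation (all numbers invented, not Genescient's actual data): a SNP frequency shift caused by drift rarely repeats across independently evolved populations, while a shift caused by the shared selection pressure does.

```python
import random

random.seed(0)

N_SNPS = 1000
CAUSAL = set(range(10))   # hypothetical SNPs actually under selection
REPLICATES = 5            # independently evolved population pairs

def freq_diff(snp):
    """One replicate pair's SNP frequency difference (selected minus control)."""
    shift = 0.4 if snp in CAUSAL else 0.0   # consistent shift only if causal
    drift = random.gauss(0.0, 0.1)          # neutral drift, independent per replicate
    return shift + drift

# Call a SNP only if it shifts the same way in *every* replicate pair.
called = [snp for snp in range(N_SNPS)
          if all(freq_diff(snp) > 0.2 for _ in range(REPLICATES))]

false_positives = [s for s in called if s not in CAUSAL]
print(len(called), len(false_positives))
```

With a single population pair, dozens of noise SNPs would clear the same threshold; requiring agreement across all five replicates filters essentially all of them out.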



 Even when the relevant mutations are eventually identified, it isn't clear
 how that will map to usable therapies for the existing population.


yes, that's a complex matter



 Further, most of the things that kill us operate WAY too slowly to affect
 fruit flies, though there are some interesting dual-affecting problems.


Fruit flies get all the major ailments that frequently kill people, except
cancer: heart disease, neurodegenerative disease, respiratory problems,
immune problems, etc.



 As I have posted in the past, what we have here in the present human
 population is about the equivalent of a fruit fly population that was bred
 for the shortest possible lifespan.



Certainly not.  We have those fruit fly populations also, and analysis of
their genetics refutes your claim ;p ...



ben g





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
Hi David,

I read the essay

I think it summarizes well some of the key issues involving the bridge
between perception and cognition, and the hierarchical decomposition of
natural concepts

I find the ideas very harmonious with those of Jeff Hawkins, Itamar Arel,
and other researchers focused on hierarchical deep learning approaches to
vision with longer-term AGI ambitions

I'm not sure there are any dramatic new ideas in the essay.  Do you think
there are?

My own view is that these ideas are basically right, but handle only a
modest percentage of what's needed to make a human-level, vaguely human-like
AGI ... I.e., I don't agree that solving vision and the vision-cognition
bridge is *such* a huge part of AGI, though it's certainly a nontrivial
percentage...


-- Ben G

On Fri, Aug 6, 2010 at 4:44 PM, David Jones davidher...@gmail.com wrote:

 Hey Guys,

 I've been working on writing out my approach to create general AI to share
 and debate it with others in the field. I've attached my second draft of it
 in PDF format, if you guys are at all interested. It's still a work in
 progress and hasn't been fully edited. Please feel free to comment,
 positively or negatively, if you have a chance to read any of it. I'll be
 adding to and editing it over the next few days.

 I'll try to reply more professionally than I have been lately :) Sorry :S

 Cheers,

 Dave




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
On Mon, Aug 9, 2010 at 11:42 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Ben: I don't agree that solving vision and the vision-cognition bridge is
 *such* a huge part of AGI, though it's certainly a nontrivial percentage

 Presumably because you don't envisage your AGI/computer as an independent
 entity? All its info. is going to have to be entered into it in a specially
 prepared form - and it's still going to be massively and continuously
 dependent on human programmers?


I envisage my AGI as an independent entity, ingesting information from the
world in a similar manner to how humans do (as well as through additional
senses not available to humans)

You misunderstood my statement.  I think that vision and the
vision-cognition bridge are important for AGI, but I think they're only a
moderate portion of the problem, and not the hardest part...




 Humans and real AGI's receive virtually all their info. - certainly all
 their internet info - through heavily visual processing (with obvious
 exceptions like sound). You can't do maths and logic if you can't see them,
 and they have visual forms -  equations and logic have visual form and use
 visual ideogrammatic as well as visual numerical signs.

 Just wh. intelligent problemsolving operations is your AGI going to do,
 that do NOT involve visual processing OR - the alternative - massive human
 assistance to substitute for that processing?





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel

 The human visual system doesn't evolve like that on the fly. This can be
 proven by the fact that we all see the same visual illusions. We all exhibit
 the same visual limitations in the same way. There is much evidence that the
 system doesn't evolve accidentally. It has a limited set of rules it uses to
 learn from perceptual data.



That is not a proof, of course.  It could be that, given a general
architecture and inputs with certain statistical properties, the same
internal structures inevitably self-organize.




 I think a more deliberate approach would be more effective because we can
 understand why it does what it does, how it does it, and why its not working
 if it doesn't work. With such deliberate approaches, it is much more clear
 how to proceed and to reuse knowledge in many complementary ways. This is
 what I meant by emergence.



I understand the general concept.  I am reminded a bit of Poggio's
hierarchical visual cortex simulations -- which do attempt to emulate the
human brain's specific processing, on a neuronal cluster and inter-cluster
connectivity level

However, Poggio hasn't yet solved the problem of making this kind of
deliberately-engineered hierarchical vision network incorporate
cognition-to-perception feedback.  At this stage it seems to be basically a
feedforward system.
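The feedforward/feedback distinction is easy to sketch. A minimal toy (invented layer sizes and update rule, not Poggio's actual model): instead of activation flowing strictly upward, each iteration mixes bottom-up evidence with a top-down prediction from the layer above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer hierarchy; sizes are arbitrary illustration values.
W_up = rng.standard_normal((8, 16)) * 0.1   # bottom-up: 16 inputs -> 8 features
W_down = W_up.T                              # top-down weights (shared, for simplicity)

def step(x, h, alpha=0.5):
    """One recurrent update: bottom-up extraction plus top-down biasing."""
    h_new = np.tanh(W_up @ x)                 # feedforward feature extraction
    x_pred = W_down @ h                       # higher layer predicts the input
    x_new = (1 - alpha) * x + alpha * x_pred  # feedback biases the lower layer
    return x_new, h_new

x = rng.standard_normal(16)                   # raw "percept"
h = np.zeros(8)
for _ in range(10):                           # iterate until the layers settle
    x, h = step(x, h)
print(x.shape, h.shape)
```

A purely feedforward system is this loop with `alpha = 0`: the percept never changes, no matter what the higher layer concludes.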

So I'm curious

-- what are the specific pattern-recognition modules that you will put into
your system, and how will you arrange them hierarchically?

-- how will you handle feedback connections (top-down) among the modules?

thx
ben





Re: [agi] How To Create General AI Draft2

2010-08-09 Thread Ben Goertzel
IMO the hardest part is not any particular part, but rather integration:
getting all the parts to work together in a scalable, adaptive way...

On Mon, Aug 9, 2010 at 12:48 PM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Ben:I think that vision and the vision-cognition bridge are important for
 AGI, but I think they're only a moderate portion of the problem, and not the
 hardest part...

 Which is?


  *From:* Ben Goertzel b...@goertzel.org
 *Sent:* Monday, August 09, 2010 4:57 PM
 *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] How To Create General AI Draft2



  On Mon, Aug 9, 2010 at 11:42 AM, Mike Tintner tint...@blueyonder.co.uk wrote:

  Ben: I don't agree that solving vision and the vision-cognition bridge
 is *such* a huge part of AGI, though it's certainly a nontrivial percentage

 Presumably because you don't envisage your AGI/computer as an independent
 entity? All its info. is going to have to be entered into it in a specially
 prepared form - and it's still going to be massively and continuously
 dependent on human programmers?


 I envisage my AGI as an independent entity, ingesting information from the
 world in a similar manner to how humans do (as well as through additional
 senses not available to humans)

 You misunderstood my statement.  I think that vision and the
 vision-cognition bridge are important for AGI, but I think they're only a
 moderate portion of the problem, and not the hardest part...




 Humans and real AGI's receive virtually all their info. - certainly all
 their internet info - through heavily visual processing (with obvious
 exceptions like sound). You can't do maths and logic if you can't see them,
 and they have visual forms -  equations and logic have visual form and use
 visual ideogrammatic as well as visual numerical signs.

 Just wh. intelligent problemsolving operations is your AGI going to do,
 that do NOT involve visual processing OR - the alternative - massive human
 assistance to substitute for that processing?





 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 CTO, Genescient Corp
 Vice Chairman, Humanity+
 Advisor, Singularity University and Singularity Institute
 External Research Professor, Xiamen University, China
 b...@goertzel.org

 I admit that two times two makes four is an excellent thing, but if we are
 to give everything its due, two times two makes five is sometimes a very
 charming thing too. -- Fyodor Dostoevsky





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] Anyone going to the Singularity Summit?

2010-08-09 Thread Ben Goertzel
I'm speaking there, on AI applied to life extension; and participating in a
panel discussion on narrow vs. general AI...

ben g

On Mon, Aug 9, 2010 at 4:01 PM, David Jones davidher...@gmail.com wrote:

 I've decided to go. I was wondering if anyone else here is.

 Dave




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





[agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Ben Goertzel
Hi,

A fellow AGI researcher sent me this request, so I figured I'd throw it
out to you guys


I'm putting together an AGI pitch for investors and thinking of low
hanging fruit applications to argue for. I'm intentionally not
involving any mechanics (robots, moving parts, etc.). I'm focusing on
voice (i.e. conversational agents) and perhaps vision-based systems.
Helen Keller AGI, if you will :)

Along those lines, I'd like any ideas you may have that would fall
under this description. I need to substantiate the case for such AGI
technology by making an argument for high-value apps. All ideas are
welcome.


All serious responses will be appreciated!!

Also, I would be grateful if we
could keep this thread closely focused on direct answers to this
question, rather than
digressive discussions on Helen Keller, the nature of AGI, the definition of AGI
versus narrow AI, the achievability or unachievability of AGI, etc.
etc.  If you think
the question is bad or meaningless or unclear or whatever, that's
fine, but please
start a new thread with a different subject line to make your point.

If the discussion is useful, my intention is to mine the answers into a compact
list to convey to him

Thanks!
Ben G




Re: [agi] Help requested: Making a list of (non-robotic) AGI low hanging fruit apps

2010-08-07 Thread Ben Goertzel
His request explicitly said he is focusing on voice and vision.  I think
that is enough specificity...

ben

On Sat, Aug 7, 2010 at 9:22 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 Wouldn't it depend on the other researcher's area of expertise?


 -- Matt Mahoney, matmaho...@yahoo.com


 --
 *From:* Ben Goertzel b...@goertzel.org
 *To:* agi agi@v2.listbox.com
 *Sent:* Sat, August 7, 2010 9:10:23 PM
 *Subject:* [agi] Help requested: Making a list of (non-robotic) AGI low
 hanging fruit apps

 Hi,

 A fellow AGI researcher sent me this request, so I figured I'd throw it
 out to you guys

 
 I'm putting together an AGI pitch for investors and thinking of low
 hanging fruit applications to argue for. I'm intentionally not
 involving any mechanics (robots, moving parts, etc.). I'm focusing on
 voice (i.e. conversational agents) and perhaps vision-based systems.
 Helen Keller AGI, if you will :)

 Along those lines, I'd like any ideas you may have that would fall
 under this description. I need to substantiate the case for such AGI
 technology by making an argument for high-value apps. All ideas are
 welcome.
 

 All serious responses will be appreciated!!

 Also, I would be grateful if we
 could keep this thread closely focused on direct answers to this
 question, rather than
 digressive discussions on Helen Keller, the nature of AGI, the definition
 of AGI
 versus narrow AI, the achievability or unachievability of AGI, etc.
 etc.  If you think
 the question is bad or meaningless or unclear or whatever, that's
 fine, but please
 start a new thread with a different subject line to make your point.

 If the discussion is useful, my intention is to mine the answers into a
 compact
 list to convey to him

 Thanks!
 Ben G






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





[agi] Brief mention of bio-AGI in the Boston Globe...

2010-08-02 Thread Ben Goertzel
Open science is, to some, humanity's best hope
http://www.boston.com/business/healthcare/articles/2010/08/02/biotech_movement_hopes_to_spur_rise_of_citizen_scientists/
Boston Globe
"What is really needed to cure diseases and extend life," Goertzel said,
"is to link together all available bio data in a vast public database ..."

--



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too. -- Fyodor Dostoevsky





Re: [agi] AGI Alife

2010-07-27 Thread Ben Goertzel
Evolving AGI via an Alife approach would be possible, but would likely
take many orders of magnitude more resources than engineering AGI...

I worked on Alife years ago and became frustrated that the artificial
biology and artificial chemistry one uses is never as fecund as the
real thing.  We don't understand which aspects of bio and chem are
really important for the evolution of complex structures.  So,
approaching AGI via Alife just replaces one complex set of confusions
with another ;-) ...

I think that releasing some well-engineered AGI systems in an Alife
type environment, and letting them advance and evolve further, would
be an awesome experiment, though ;)

-- Ben G

On Mon, Jul 26, 2010 at 11:23 PM, Linas Vepstas linasveps...@gmail.com wrote:
 I saw the following post from Antonio Alberti, on the linked-in
 discussion group:

ALife and AGI

Dear group participants.

The relation between AGI and ALife greatly interests me. However, too few 
recent works try to relate them. For example, many papers presented at AGI-09 
(http://agi-conf.org/2009/) are about program-learning algorithms (combining 
evolutionary learning and analytical learning). At AGI 2010, virtual pets 
were presented by Ben Goertzel and are another topic of this forum. 
There are other approaches in AGI that use some digital evolutionary 
approach. For me it is a clear clue that both are related in some 
instance.


By ALife I mean the life-as-it-could-be approach: not simulating life, but 
using a digital environment to evolve digital organisms via digital evolution 
(faster than the natural kind -- see 
http://www.hplusmagazine.com/articles/science/stephen-hawking-%E2%80%9Chumans-have-entered-new-stage-evolution%E2%80%9D).

So, I would like to propose some discussion topics regarding ALife and AGI:

1) What is the role of Digital Evolution (and ALife) in the AGI context?

2) Is it possible that some aspects of AGI could self-emerge from the digital 
evolution of intelligent autonomous agents?

3) Is there any research group trying to converge both approaches?

Best Regards,

  and my reply was below:

 For your question 3), I have no idea. For question 1) I can't say I've
 ever heard of anyone talk about this. For question 2), I imagine the
 answer is yes, although the boundaries between what's Alife and
 what's program learning (for example) may be blurry.

 So, imagine, for example, a population of many different species of
 neurons (or should I call them automata? or maybe I should call them
 virtual ants?) Most of the individuals have only a few friends (a
 narrow social circle) -- the friendship relationship can be viewed
 as an axon-dendrite connection -- these friendships are semi-stable;
 they evolve over time, and the type & quality of information exchanged
 in a friendship also varies. Is a social network of friends able to
 solve complex problems? The answer is seemingly yes, if the
 individuals are digital models of neurons. (To carry analogy further:
 different species of individuals would be analogous to different types
 of neurons e.g. purkinje cells vs pyramid cells vs granular vs. motor
 neurons. Individuals from one species may tend to be very gregarious,
 while those from other species might be generally xenophobic. etc.)
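The analogy in this paragraph can be sketched directly. Everything below (the class, the weights, the firing threshold) is invented for illustration and comes from no real system:

```python
import random

class Individual:
    """An agent whose 'friendships' act like weighted neural connections."""
    def __init__(self, gregarious):
        self.friends = {}          # friend -> connection weight (tie quality)
        self.gregarious = gregarious
        self.activation = 0.0
    def befriend(self, other, weight):
        self.friends[other] = weight
    def listen(self):
        # Weighted sum of friends' signals, thresholded: a crude neuron.
        total = sum(w * f.activation for f, w in self.friends.items())
        return 1.0 if total > 0.5 else 0.0

random.seed(0)
# Two "species": gregarious individuals keep wide circles, others narrow ones.
population = [Individual(gregarious=(i % 2 == 0)) for i in range(20)]
for ind in population:
    circle = 6 if ind.gregarious else 2
    for friend in random.sample([p for p in population if p is not ind], circle):
        ind.befriend(friend, random.uniform(0.1, 1.0))

population[0].activation = 1.0                       # inject a signal
next_states = [ind.listen() for ind in population]   # one round of "gossip"
```

Layering selection pressure on top of such a network (culling weak individuals, mutating friendship patterns) is exactly the unexplored combination being asked about here.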

 I have no clue if anyone has ever explored genetic algorithms or
 related alife algos, factored together with the individuals being
 involved in a social network (with actual information exchange between
 friends). No clue as to how natural/artificial selection should work.
 Do anti-social individuals have a possibly redeeming role w.r.t. the
 organism as a whole? Do selection pressures on individuals (weak
 individuals are culled) destroy social networks? Do such networks
 automatically evolve altruism, because a working social network with
 weak, altruistically-supported individuals is better than a shredded,
 dysfunctional social network consisting of only strong individuals?
 Dunno. Seems like there could be many many interesting questions.

 I'd be curious about the answers to Antonio's questions ...

 --linas


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

I admit that two times two makes four is an excellent thing, but if
we are to give everything its due, two times two makes five is
sometimes a very charming thing too. -- Fyodor Dostoevsky



Re: [agi] Pretty worldchanging

2010-07-24 Thread Ben Goertzel

On Sat, Jul 24, 2010 at 5:36 AM, Panu Horsmalahti nawi...@gmail.com wrote:
Availability of the Internet actually makes school grades worse. Of course,
grades do not equal education, but I don't see anything worldchanging
about education because of this.

- Panu Horsmalahti


Hmmm  I do think the Internet has worldchanging implications for
education, many of which are being realized all around us as we speak...

School grades are a poor measure of intellectual achievement.  And of
course, the Internet can be used in either wonderful or idiotic ways -- it
obviously DOES have revolutionary implications for education, even if
statistically few make use of it in a way that significantly manifests these
implications.

I see this article

http://news.yahoo.com/s/ytech_wguy/20100714/tc_ytech_wguy/ytech_wguy_tc3118

linked from the above article, which provides some (not that much) data that
computer or Net access may decrease test scores in some low-income
families

But as the article itself states, this suggests the problem is not the
computers or Net, but rather the inability of many low-income parents to
guide their kids in educational use of computers and the Net ... or to give
their kids a broad enough general education to enable them to guide
themselves in this regard...

Similarly, reading has great potential to aid education -- but if all you
read are romance novels and People or Fat Biker Chick magazine, you're not
going to broaden your mind that much ;p ...

Maybe there are some students on this email list, who are wading through all
the BS and learning something about AGI, by following links and reading
papers mentioned here, etc.  Without the Net, how would these students learn
about AGI, in practice?  Such education would be far harder to come by and
less effective without the Net.  That's world-changing... ;-) ...

Learning about AGI via online resources may not improve your school grades
any, because AGI knowledge isn't tested much in school.  But students
learning about AGI online could change the world...

-- Ben G












[agi] Cosmist Manifesto available via Amazon.com

2010-07-21 Thread Ben Goertzel
Hi all,

My new futurist tract The Cosmist Manifesto is now available on
Amazon.com, courtesy of Humanity+ Press:

http://www.amazon.com/gp/product/0984609709/

Thanks to Natasha Vita-More for the beautiful cover, and David Orban
for helping make the book happen...

-- Ben









[agi] Re: Cosmist Manifesto available via Amazon.com

2010-07-21 Thread Ben Goertzel
Oh... and, a PDF version of the book is also available for free at

http://goertzel.org/CosmistManifesto_July2010.pdf

;-) ...

ben

On Tue, Jul 20, 2010 at 11:30 PM, Ben Goertzel b...@goertzel.org wrote:
 Hi all,

 My new futurist tract The Cosmist Manifesto is now available on
 Amazon.com, courtesy of Humanity+ Press:

 http://www.amazon.com/gp/product/0984609709/

 Thanks to Natasha Vita-More for the beautiful cover, and David Orban
 for helping make the book happen...

 -- Ben













Re: [agi] What is the smallest set of operations that can potentially define everything and how do you combine them ?

2010-07-13 Thread Ben Goertzel
Well, if you want a simple but complete operator set, you can go with

-- Schönfinkel combinator plus two parentheses

or

-- S and K combinator plus two parentheses

and I suppose you could add

-- input
-- output
-- forget

statements to this, but I'm not sure what this gets you...

Actually, adding other operators doesn't necessarily
increase the search space your AI faces -- rather, it
**decreases** the search space **if** you choose the right operators: ones
that encapsulate regularities in the environment faced by the AI.

Exemplifying this, writing programs doing humanly simple things
using S and K is a pain and involves piling a lot of S and K and parentheses
on top of each other, whereas if we introduce loops and conditionals and
such, these programs get shorter.  Because loops and conditionals happen
to match the stuff that our human-written programs need to do...
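The S/K point can be checked in a few lines; this is an illustrative Python encoding of the combinators, not code from any AGI system:

```python
# K discards its second argument; S applies f and g to x and combines them.
K = lambda x: lambda y: x
S = lambda f: lambda g: lambda x: f(x)(g(x))

# Even the identity function is not primitive here: I = S K K,
# since (S K K) x = K x (K x) = x.
I = S(K)(K)
assert I(42) == 42

# Function composition, B = S (K S) K, already takes five applications:
# B f g x = f (g x).
B = S(K(S))(K)
assert B(lambda x: x + 1)(lambda x: x * 2)(10) == 21   # (10 * 2) + 1
```

Anything past toy examples balloons quickly, which is the point above: richer operators pay off when they compress what programs in the environment actually need to do.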

A better question IMO is what set of operators and structures has the
property that the compact expressions tend to be the ones that are useful
for survival and problem-solving in the environments that humans and human-
like AIs need to cope with...

-- Ben G

On Tue, Jul 13, 2010 at 1:43 AM, Michael Swan ms...@voyagergaming.com wrote:
 Hi,

 I'm interested in combining the simplest, most primitive operations
 (e.g. operations that cannot be defined by other operations) for creating
 seed AGIs. The simplest operations combined in a multitude of ways can
 form extremely complex patterns, but the underlying logic may be
 simple.

 I wonder if varying combinations of the smallest set of operations:

 {  >, memory (= for memory assignment), ==, (a logical way to
 combine them), (input, output), () brackets  }

 can potentially learn and define everything.

 Assume all input is from numbers.

 We want the smallest set of elements, because less elements mean less
 combinations which mean less chance of hitting combinatorial explosion.

 > helps for generalisation, reducing combinations.

 memory(=) is for hash look ups, what should one remember? What can be
 discarded?

 == This does a comparison between 2 values x == y is 1 if x and y are
 exactly the same. Returns 0 if they are not the same.

 (a logical way to combine them) Any non-narrow algorithm that reduces
 the raw data into a simpler state will do. Philosophically like
 Solomonoff Induction. This is the hardest part. What is the most optimal
 way of combining the above set of operations?

 () brackets are used to order operations.




 Conditionals (only if statements) + memory assignment are the only valid
 form of logic - ie no loops. Just repeat code if you want loops.


 If you think that the set above cannot define everything, then what is
 the smallest set of operations that can potentially define everything?

 --
 Some proofs / Thought experiments :

 1) Can >, ==, (), and memory define other logical operations like &&
 (AND gate)?

 I propose that x==y==1 defines x&&y

 x&&y            x==y==1
 0&&0 = 0       0==0==1 = 0
 1&&0 = 0       1==0==1 = 0
 0&&1 = 0       0==1==1 = 0
 1&&1 = 1       1==1==1 = 1

 It means && can be completely defined using ==, therefore && is not
 one of the smallest possible general concepts.  && can potentially be
 learnt from ==.
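The truth table above can be machine-checked. One caveat worth noting: the identity x==y==1 <=> AND(x, y) holds under chained comparison semantics (Python-style, where a==b==1 means a==b and b==1); C/C++ instead parse a==b==1 as (a==b)==1, under which the first row would evaluate to 1:

```python
# Verify that chained "x == y == 1" reproduces AND on all four rows.
for x in (0, 1):
    for y in (0, 1):
        assert (x == y == 1) == bool(x and y)
```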

 -

 2) Write an algorithm that can define 1 using only >, ==, ().

 Multiple answers
 a) discrete 1 could use
 x == 1

 b) continuous 1.0 could use this rule
 For those not familiar with C++, ! means not
 (x > 0.9) && !(x > 1.1)   expanding gives (getting rid of ! and &&)
 (x > 0.9) == ((x > 1.1) == 0) == 1    note !x can be defined in terms
 of == like so: x == 0.

 (b) is a generalisation and expansion of the definition of (a), and can
 be scaled by changing the values 0.9 and 1.1 to fit what others
 would generally define as being 1.












Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 Solomonoff Induction would produce poor predictions if it could be used to
 compute them.


Solomonoff induction is a mathematical, not verbal, construct.  Based on the
most obvious mapping from the verbal terms you've used above into
mathematical definitions in terms of which Solomonoff induction is
constructed, the above statement of yours is FALSE.

If you're going to argue against a mathematical theorem, your argument must
be mathematical not verbal.  Please explain one of

1) which step in the proof about Solomonoff induction's effectiveness you
believe is in error

2) which of the assumptions of this proof you think is inapplicable to real
intelligence [apart from the assumption of infinite or massive compute
resources]

Otherwise, your statement is in the same category as the statement by the
protagonist of Dostoevsky's Notes from the Underground --

I admit that two times two makes four is an excellent thing, but if we are
to give everything its due, two times two makes five is sometimes a very
charming thing too.

;-)



 Secondly, since it cannot be computed it is useless.  Third, it is not the
 sort of thing that is useful for AGI in the first place.


I agree with these two statements

-- ben G





Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
On Fri, Jul 9, 2010 at 8:38 AM, Matt Mahoney matmaho...@yahoo.com wrote:

 Ben Goertzel wrote:
  Secondly, since it cannot be computed it is useless.  Third, it is not
 the sort of thing that is useful for AGI in the first place.


  I agree with these two statements

 The principle of Solomonoff induction can be applied to computable subsets
 of the (infinite) hypothesis space. For example, if you are using a neural
 network to make predictions, the principle says to use the smallest network
 that computes the past training data.



Yes, of course various versions of Occam's Razor are useful in practice, and
we use an Occam bias in MOSES inside OpenCog for example  But as you
know, these are not exactly the same as Solomonoff Induction, though they're
based on the same idea...
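The flavor of such an Occam bias can be shown in a toy sketch. To be clear, `occam_fit` and its crude two-part cost are invented here for illustration; this is not how MOSES actually scores programs. The idea is just: among models that fit the data comparably, a per-parameter penalty makes the simplest one win.

```python
import numpy as np

def occam_fit(xs, ys, max_degree=4, penalty=2.0):
    """Pick a polynomial degree by a crude two-part description length:
    (cost of encoding the residuals) + (penalty per coefficient)."""
    best_cost, best_degree = None, None
    for d in range(max_degree + 1):
        coeffs = np.polyfit(xs, ys, d)
        mse = np.mean((ys - np.polyval(coeffs, xs)) ** 2)
        cost = 0.5 * len(xs) * np.log(mse + 1e-12) + penalty * (d + 1)
        if best_cost is None or cost < best_cost:
            best_cost, best_degree = cost, d
    return best_degree

xs = np.arange(10.0)
ys = 3.0 * xs + 1.0           # data with a simple (linear) explanation
print(occam_fit(xs, ys))      # degree 1: higher degrees fit no better
```

Degrees 2 through 4 fit the noiseless data exactly as well as degree 1, so only the parameter penalty separates them, and the linear model is selected.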

-- Ben





Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
To make this discussion more concrete, please look at

http://www.vetta.org/documents/disSol.pdf

Section 2.5 gives a simple version of the proof that Solomonoff induction is
a powerful learning algorithm in principle, and Section 2.6 explains why it
is not practically useful.
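For readers following along, the construct under discussion, in its usual formulation (notation varies slightly across papers), is the Solomonoff prior over a universal prefix machine U, with prediction by conditioning on it:

```latex
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)},
\qquad
M(x_{n+1} \mid x_1 \dots x_n) = \frac{M(x_1 \dots x_{n+1})}{M(x_1 \dots x_n)}
```

where the sum ranges over programs p whose output begins with x and \ell(p) is the length of p. The incomputability at issue in this thread is exactly that the sum runs over all such programs.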

What part of that paper do you think is wrong?

thx
ben


On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer jimbro...@gmail.com wrote:

 On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote:

 If you're going to argue against a mathematical theorem, your argument must
 be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness you
 believe is in error

 2) which of the assumptions of this proof you think is inapplicable to real
 intelligence [apart from the assumption of infinite or massive compute
 resources]
 

 Solomonoff Induction is not a provable Theorem, it is therefore a
 conjecture.  It cannot be computed, it cannot be verified.  There are many
 mathematical theorems that require the use of limits to prove them for
 example, and I accept those proofs.  (Some people might not.)  But there is
 no evidence that Solomonoff Induction would tend toward some limits.  Now
 maybe the conjectured abstraction can be verified through some other means,
 but I have yet to see an adequate explanation of that in any terms.  The
 idea that I have to answer your challenges using only the terms you specify
 is noise.

 Look at 2.  What does that say about your Theorem.

 I am working on 1 but I just said: I haven't yet been able to find a way
 that could be used to prove that Solomonoff Induction does not do what Matt
 claims it does.
 What is not clear is that no one has objected to my characterization of
 the conjecture as I have been able to work it out for myself.  It requires
 an infinite set of infinitely computed probabilities of each infinite
 string.  If this characterization is correct, then Matt has been using the
 term string ambiguously.  As a primary sample space: A particular string.
 And as a compound sample space: All the possible individual cases of the
 substring compounded into one.  No one has yet to tell of his mathematical
 experiments of using a Turing simulator to see what a finite iteration of
 all possible programs of a given length would actually look like.

 I will finish this later.




  On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.com wrote:

 Abram,
 Solomonoff Induction would produce poor predictions if it could be used
 to compute them.


 Solomonoff induction is a mathematical, not verbal, construct.  Based on
 the most obvious mapping from the verbal terms you've used above into
 mathematical definitions in terms of which Solomonoff induction is
 constructed, the above statement of yours is FALSE.

 If you're going to argue against a mathematical theorem, your argument
 must be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness you
 believe is in error

 2) which of the assumptions of this proof you think is inapplicable to
 real intelligence [apart from the assumption of infinite or massive compute
 resources]

 Otherwise, your statement is in the same category as the statement by the
 protagonist of Dostoevsky's Notes from the Underground --

 I admit that two times two makes four is an excellent thing, but if we
 are to give everything its due, two times two makes five is sometimes a very
 charming thing too.

 ;-)



 Secondly, since it cannot be computed it is useless.  Third, it is not
 the sort of thing that is useful for AGI in the first place.


 I agree with these two statements

 -- ben G












Re: [agi] Solomonoff Induction is Not Universal and Probability is not Prediction

2010-07-09 Thread Ben Goertzel
I don't think Solomonoff induction is a particularly useful direction for
AI, I was just taking issue with the statement made that it is not capable
of correct prediction given adequate resources...

On Fri, Jul 9, 2010 at 11:35 AM, David Jones davidher...@gmail.com wrote:

 Although I haven't studied Solomonoff induction yet (I plan to read up on
 it), I've realized that people seem to be making the same mistake I was.
 People are trying to find one silver-bullet method of induction or
 learning that works for everything. I've begun to realize that it's OK if
 something doesn't work for everything, as long as it works on a large enough
 subset of problems to be useful. If you can figure out how to construct
 justifiable methods of induction for enough problems that you need to solve,
 then that is sufficient for AGI.

 This is the same mistake I made and it was the point I was trying to make
 in the recent email I sent. I kept trying to come up with algorithms for
 doing things and I could always find a test case to break it. So, now I've
 begun to realize that it's ok if it breaks sometimes! The question is, can
 you define an algorithm that breaks gracefully and which can figure out what
 problems it can be applied to and what problems it should not be applied to.
 If you can do that, then you can solve the problems where it is applicable,
 and avoid the problems where it is not.

 This is perfectly OK! You don't have to find a silver bullet method of
 induction or inference that works for everything!

 Dave



 On Fri, Jul 9, 2010 at 10:49 AM, Ben Goertzel b...@goertzel.org wrote:


 To make this discussion more concrete, please look at

 http://www.vetta.org/documents/disSol.pdf

 Section 2.5 gives a simple version of the proof that Solomonoff induction
 is a powerful learning algorithm in principle, and Section 2.6 explains why
 it is not practically useful.

 What part of that paper do you think is wrong?

 thx
 ben



 On Fri, Jul 9, 2010 at 9:54 AM, Jim Bromer jimbro...@gmail.com wrote:

  On Fri, Jul 9, 2010 at 7:56 AM, Ben Goertzel b...@goertzel.org wrote:

 If you're going to argue against a mathematical theorem, your argument
 must be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness you
 believe is in error

 2) which of the assumptions of this proof you think is inapplicable to
 real intelligence [apart from the assumption of infinite or massive compute
 resources]
  

 Solomonoff Induction is not a provable Theorem, it is therefore a
 conjecture.  It cannot be computed, it cannot be verified.  There are many
 mathematical theorems that require the use of limits to prove them for
 example, and I accept those proofs.  (Some people might not.)  But there is
 no evidence that Solomonoff Induction would tend toward some limits.  Now
 maybe the conjectured abstraction can be verified through some other means,
 but I have yet to see an adequate explanation of that in any terms.  The
 idea that I have to answer your challenges using only the terms you specify
 is noise.

 Look at 2.  What does that say about your Theorem.

 I am working on 1 but I just said: I haven't yet been able to find a way
 that could be used to prove that Solomonoff Induction does not do what Matt
 claims it does.
 What is not clear is that no one has objected to my characterization of
 the conjecture as I have been able to work it out for myself.  It requires
 an infinite set of infinitely computed probabilities of each infinite
 string.  If this characterization is correct, then Matt has been using the
 term string ambiguously.  As a primary sample space: A particular string.
 And as a compound sample space: All the possible individual cases of the
 substring compounded into one.  No one has yet to tell of his mathematical
 experiments of using a Turing simulator to see what a finite iteration of
 all possible programs of a given length would actually look like.

 I will finish this later.




  On Fri, Jul 9, 2010 at 7:49 AM, Jim Bromer jimbro...@gmail.comwrote:

 Abram,
 Solomonoff Induction would produce poor predictions if it could be used
 to compute them.


 Solomonoff induction is a mathematical, not verbal, construct.  Based on
 the most obvious mapping from the verbal terms you've used above into
 mathematical definitions in terms of which Solomonoff induction is
 constructed, the above statement of yours is FALSE.

 If you're going to argue against a mathematical theorem, your argument
 must be mathematical not verbal.  Please explain one of

 1) which step in the proof about Solomonoff induction's effectiveness
 you believe is in error

 2) which of the assumptions of this proof you think is inapplicable to
 real intelligence [apart from the assumption of infinite or massive compute
 resources]

 Otherwise, your statement is in the same category as the statement by
 the protagonist of Dostoevsky's

[agi] My Sing. U lecture on AGI blogged at Wired UK:

2010-07-09 Thread Ben Goertzel
 
http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai




Re: [agi] My Sing. U lecture on AGI blogged at Wired UK:

2010-07-09 Thread Ben Goertzel
I gave the lecture via Skype from my house in Maryland

I learned that NASA has a crap Internet connection 8-D

On Fri, Jul 9, 2010 at 2:50 PM, The Wizard key.unive...@gmail.com wrote:

 How was your overall experience there, anything you learn that is worth
 mentioning?

 On Fri, Jul 9, 2010 at 2:46 PM, Ben Goertzel b...@goertzel.org wrote:


 http://www.wired.co.uk/news/archive/2010-07/9/singularity-university-robotics-ai






 --
 Carlos A Mejia

 Taking life one singularity at a time.
 www.Transalchemy.com









[agi] New KurzweilAI.net site... with my silly article & sillier chatbot ;-p ;) ....

2010-07-05 Thread Ben Goertzel
Check out my article on the H+ Summit

http://www.kurzweilai.net/h-summit-harvard-the-rise-of-the-citizen-scientist

and also the Ramona4 chatbot that Novamente LLC built for Ray Kurzweil
a while back

http://www.kurzweilai.net/ramona4/ramona.html

It's not AGI at all; but it's pretty funny ;-)

-- Ben







Re: [agi] A Primary Distinction for an AGI

2010-06-28 Thread Ben Goertzel
 AGI.

 Jim Bromer









Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
 those that have more incorrect
 expectations.

 The idea I came up with earlier this month regarding high frame rates to
 reduce uncertainty is still applicable. It is important that all generated
 hypotheses have as low uncertainty as possible given our constraints and
 resources available.

 I thought I'd share my progress with you all. I'll be testing the ideas on
 test cases such as the ones I mentioned in the coming days and weeks.

 Dave









Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
For visual perception, there are many reasons to think that a hierarchical
architecture can be effective... this is one of the things you may find in
dealing with real visual data but not with these toy examples...

E.g. in a spatiotemporal predictive hierarchy, the idea would be to create a
predictive module (using an Occam heuristic, as you suggest) corresponding
to each of a host of observed spatiotemporal regions, with modules
corresponding to larger regions occurring higher up in the hierarchy...

ben

On Sun, Jun 27, 2010 at 10:09 AM, David Jones davidher...@gmail.com wrote:

 Thanks Ben,

 Right, explanatory reasoning is not new at all (also called abduction and
 inference to the best explanation). But what seems to be elusive is a
 precise, algorithmic method for implementing explanatory reasoning and
 solving real problems, such as sensory perception. This is what I'm hoping
 to solve. The theory has been there a while... how to effectively implement
 it in a general way, as far as I can tell, has never been solved.

 Dave

 On Sun, Jun 27, 2010 at 9:35 AM, Ben Goertzel b...@goertzel.org wrote:


 Hi,

 I certainly agree with this method, but of course it's not original at
 all, it's pretty much the basis of algorithmic learning theory, right?

 Hutter's AIXI for instance works [very roughly speaking] by choosing the
 most compact program that, based on historical data, would have yielded
 maximum reward

 So yeah, this is the right idea... and your simple examples of it are
 nice...

 Eric Baum's whole book *What Is Thought* is sort of an explanation of this
 idea in a human biology, psychology, and AI context ;)

 ben
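
The compression-based selection Ben describes (prefer the most compact program consistent with the data) can be sketched as a toy Occam prior: weight each hypothesis by 2^-(description length in bits) and multiply by its likelihood on the observations. Everything below (the names, the bit counts, the 0.9/0.1 noise model) is illustrative, not from the thread:

```python
# Toy Occam-weighted hypothesis selection:
# prior(h) ~ 2^-len(h), posterior ~ prior * likelihood on the observed data.

def occam_posterior(hypotheses, observations):
    """hypotheses: dict name -> (description_length_bits, predict_fn).
    predict_fn(x) returns the predicted observation for input x."""
    scores = {}
    for name, (bits, predict) in hypotheses.items():
        prior = 2.0 ** -bits
        # Likelihood: 0.9 per correct prediction, 0.1 per miss (toy noise model).
        likelihood = 1.0
        for x, y in observations:
            likelihood *= 0.9 if predict(x) == y else 0.1
        scores[name] = prior * likelihood
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# A sequence that looks constant so far.
obs = [(1, 2), (2, 2), (3, 2), (4, 2)]
hyps = {
    "always-2":  (8,  lambda x: 2),                   # short program
    "2-until-5": (20, lambda x: 2 if x < 5 else 0),   # longer program
}
post = occam_posterior(hyps, obs)
# Both fit the data perfectly, so the shorter program wins on the prior.
assert post["always-2"] > post["2-until-5"]
```

Both hypotheses have the same likelihood here, so the posterior ordering is decided entirely by the 2^-bits prior, which is the Occam heuristic in miniature.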

 On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher...@gmail.comwrote:

 A method for comparing hypotheses in explanatory-based reasoning:

 We prefer the hypothesis or explanation that *expects* more
 observations. If both explanations expect the same observations, then the
 simpler of the two is preferred (because the unnecessary terms of the more
 complicated explanation do not add to the predictive power).

 *Why are expected events so important?* They are a measure of 1)
 explanatory power and 2) predictive power. The more predictive and the more
 explanatory a hypothesis is, the more likely the hypothesis is when compared
 to a competing hypothesis.

 Here are two case studies I've been analyzing from sensory perception of
 simplified visual input:
 The goal of the case studies is to answer the following: How do you
 generate the most likely motion hypothesis in a way that is general and
 applicable to AGI?
 *Case Study 1)* Here is a link to an example: an animated GIF of two black
 squares moving from left to right: http://practicalai.org/images/CaseStudy1.gif
 *Description: *Two black squares are moving in unison from left to right
 across a white screen. In each frame the black squares shift to the right so
 that square 1 steals square 2's original position and square two moves an
 equal distance to the right.
 *Case Study 2)* Here is a link to an example: the interrupted square:
 http://practicalai.org/images/CaseStudy2.gif
 *Description:* A single square is moving from left to right. Suddenly in
 the third frame, a single black square is added in the middle of the
 expected path of the original black square. This second square just stays
 there. So, what happened? Did the square moving from left to right keep
 moving? Or did it stop and then another square suddenly appeared and moved
 from left to right?

 *Here is a simplified version of how we solve case study 1:*
 The important hypotheses to consider are:
 1) the square from frame 1 of the video that has a very close position to
 the square from frame 2 should be matched (we hypothesize that they are the
 same square and that any difference in position is motion).  So, what
 happens is that in each two frames of the video, we only match one square.
 The other square goes unmatched.
 2) We do the same thing as in hypothesis #1, but this time we also match
 the remaining squares and hypothesize motion as follows: the first square
 jumps over the second square from left to right. We hypothesize that this
 happens over and over in each frame of the video. Square 2 stops and square
 1 jumps over it over and over again.
 3) We hypothesize that both squares move to the right in unison. This is
 the correct hypothesis.

 So, why should we prefer the correct hypothesis, #3 over the other two?

 Well, first of all, #3 is correct because it has the most explanatory
 power of the three and is the simplest of the three. Simpler is better
 because, with the given evidence and information, there is no reason to
 desire a more complicated hypothesis such as #2.

 So, the answer to the question is that explanation #3 expects the most
 observations, such as:
 1) the consistent relative positions of the squares in each frame are
 expected.
 2) It also expects their new positions in each frame based on velocity
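
David's comparison rule (prefer the hypothesis whose expectations cover more of the actual observations; break ties by simplicity) can be sketched roughly as follows; the data structures and names are hypothetical, not from the thread:

```python
# Sketch of the comparison rule: prefer the hypothesis whose expectations
# cover more of the actual observations; on a tie, prefer the simpler
# (lower-complexity) hypothesis, since extra terms add no predictive power.

def compare(h1, h2, observations):
    """Each hypothesis is a dict: {'expects': set of observations, 'complexity': int}.
    Returns the preferred hypothesis."""
    def explained(h):
        return len(h['expects'] & observations)
    e1, e2 = explained(h1), explained(h2)
    if e1 != e2:
        return h1 if e1 > e2 else h2
    # Tie on explanatory/predictive power: unnecessary complexity loses.
    return h1 if h1['complexity'] <= h2['complexity'] else h2

# Toy version of case study 1: observations about the two squares.
obs = {'sq1_moved_right', 'sq2_moved_right', 'relative_positions_constant'}
h_jump   = {'expects': {'sq1_moved_right', 'sq2_moved_right'}, 'complexity': 5}
h_unison = {'expects': obs, 'complexity': 2}
assert compare(h_jump, h_unison, obs) is h_unison
```

The "move in unison" hypothesis wins on both counts here: it expects the constant relative positions that the jumping hypothesis leaves unexplained, and it is the simpler of the two.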

Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel

  To put it more succinctly, Dave & Ben & Hutter are doing the wrong subject
  - narrow AI.  Looking for the one right prediction/explanation is narrow
  AI. Being able to generate more and more possible explanations, wh. could
  all be valid, is AGI.  The former is rational, uniform thinking. The latter
  is creative, polyform thinking. Or, if you prefer, it's convergent vs
  divergent thinking, the difference between wh. still seems to escape Dave &
  Ben & most AGI-ers.


You are misrepresenting my approach, which is not based on looking for the
one right prediction/explanation

OpenCog relies heavily on evolutionary learning and probabilistic inference,
both of which naturally generate a massive number of alternative possible
explanations in nearly every instance...

-- Ben G





Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
 on it it shows you
 representative bars for each window.

  How do we add and combine this complex behavior learning, explanation,
 recognition and understanding into our system?

  Answer: The way that such things are learned is by making observations,
 learning patterns and then connecting the patterns in a way that is
 consistent, explanatory and likely.

 Example: Clicking the notepad icon causes a notepad window to appear with
 no content. If we previously had a notepad window open, it may seem like
 clicking the icon just clears the content, but the instance is the same. But
 this cannot be the case, because if we click the icon when no notepad window
 previously existed, it will be blank. Based on these two experiences we can
 construct an explanatory hypothesis such that: clicking the icon simply
 opens a blank window. We also get evidence for this conclusion when we see
 the two windows side by side. If we see the old window with the content
 still intact we will realize that clicking the icon did not seem to have
 cleared it.

 Dave


 On Sun, Jun 27, 2010 at 12:39 PM, Jim Bromer jimbro...@gmail.com wrote:

  On Sun, Jun 27, 2010 at 11:56 AM, Mike Tintner tint...@blueyonder.co.uk
  wrote:

  Jim: [This illustrates one of the things wrong with the
 dreary instantiations of the prevailing mindset of a group.  It is only a
 matter of time until you discover (through experiment) how absurd it is to
 celebrate the triumph of an overly simplistic solution to a problem that is,
 by its very potential, full of possibilities]

 To put it more succinctly, Dave & Ben & Hutter are doing the wrong
 subject - narrow AI.  Looking for the one right prediction/explanation is
 narrow AI. Being able to generate more and more possible explanations, wh.
 could all be valid, is AGI.  The former is rational, uniform thinking. The
 latter is creative, polyform thinking. Or, if you prefer, it's convergent vs
 divergent thinking, the difference between wh. still seems to escape Dave &
 Ben & most AGI-ers.


 Well, I agree with what (I think) Mike was trying to get at, except that I
 understood that Ben, Hutter and especially David were not only talking about
 prediction as a specification of a single prediction when many possible
 predictions (ie expectations) were appropriate for consideration.

 For some reason none of you seem to ever talk about methods that could be
 used to react to a situation with the flexibility to integrate the
 recognition of different combinations of familiar events and to classify
 unusual events so they could be interpreted as more familiar *kinds* of
 events or as novel forms of events which might be then be integrated.  For
 me, that seems to be one of the unsolved problems.  Being able to say that
 the squares move to the right in unison is a better description than saying
 the squares are dancing an Irish jig is not really cutting edge.

 As far as David's comment that he was only dealing with the core issues,
 I am sorry but you were not dealing with the core issues of contemporary AGI
 programming.  You were dealing with a primitive problem that has been
 considered for many years, but it is not a core research issue.  Yes we have
 to work with simple examples to explain what we are talking about, but there
 is a difference between an abstract problem that may be central to
 your recent work and a core research issue that hasn't really been solved.

 The entire problem of dealing with complicated situations is that these
 narrow AI methods haven't really worked.  That is the core issue.

 Jim Bromer










Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Hi Steve,

A few comments...

1)
Nobody is trying to implement Hutter's AIXI design, it's a mathematical
design intended as a proof of principle

2)
Within Hutter's framework, one calculates the shortest program that explains
the data, where shortest is measured on Turing  machine M.   Given a
sufficient number of observations, the choice of M doesn't matter and AIXI
will eventually learn any computable reward pattern.  However, choosing the
right M can greatly accelerate learning.  In the case of a physical AGI
system, choosing M to incorporate the correct laws of physics would
obviously accelerate learning considerably.

3)
Many AGI designs try to incorporate prior understanding of the structure and
properties of the physical world, in various ways.  I have a whole chapter
on this in my forthcoming book on OpenCog  E.g. OpenCog's design
includes a physics-engine, which is used directly and to aid with
inferential extrapolations...

So I agree with most of your points, but I don't find them original except
in phrasing ;)

... ben


On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben, et al,

 *I think I may finally grok the fundamental misdirection that current AGI
 thinking has taken!*

 This is a bit subtle, and hence subject to misunderstanding. Therefore I
 will first attempt to explain what I see, WITHOUT so much trying to convince
 you (or anyone) that it is necessarily correct. Once I convey my vision,
 then let the chips fall where they may.

 On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel b...@goertzel.org wrote:

 Hutter's AIXI for instance works [very roughly speaking] by choosing the
 most compact program that, based on historical data, would have yielded
 maximum reward


 ... and there it is! What did I see?

 Example applicable to the lengthy following discussion:
 1 -> 2
 2 -> 2
 3 -> 2
 4 -> 2
 5 -> ?
 What is ?.

 Now, I'll tell you that the left column represents the distance along a 4.5
 unit long table, and the right column represents the distance above the
 floor that you will be at as you walk the length of the table. Knowing this,
 without ANY supporting physical experience, I would guess ? to be zero, or
 maybe a little more if I were to step off of the table and land onto
 something lower, like the shoes that I left there.

 In an imaginary world where a GI boots up with a complete understanding of
 physics, etc., we wouldn't prefer the simplest program at all, but rather
 the simplest representation of the real world that is not
 physics/math-*in*consistent with our observations. All observations would be presumed to
 be consistent with the response curves of our sensors, showing a world in
 which Newton's laws prevail, etc. Armed with these presumptions, our
 physics-complete AGI would look for the simplest set of *UN*observed
 phenomena that explained the observed phenomena. This theory of a
 physics-complete AGI seems undeniable, but of course, we are NOT born
 physics-complete - or are we?!
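
Steve's table example can be made concrete: a pure sequence-inductor extrapolates the pattern it has seen, while a predictor given the physical setup (the table is 4.5 units long) predicts a fall. The encoding below is invented for illustration only:

```python
# Two predictors for the table-walking example: induction from the data alone
# vs. prediction informed by a physical fact (the table is 4.5 units long).

history = {1: 2, 2: 2, 3: 2, 4: 2}  # distance walked -> height above floor

def inductive_predict(x):
    """No physics: extrapolate the only pattern seen so far (constant 2)."""
    return 2

def physics_predict(x, table_length=4.5, table_height=2):
    """With physics: past the end of the table, you are on the floor."""
    return table_height if x <= table_length else 0

assert inductive_predict(5) == 2   # the "simplest program" answer
assert physics_predict(5) == 0     # the physics-complete answer
```

Both predictors agree perfectly on the observed history; they diverge only on the query point, which is exactly the gap Steve is pointing at between compactness over past data and a physics-informed prior.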

 This all comes down to the limits of representational math. At great risk
 of hand-waving on a keyboard, I'll try to explain by pseudo-translating the
 concepts into NN/AGI terms.

 We all know about layering and columns in neural systems, and understand
 Bayesian math. However, let's dig a little deeper into exactly what is being
 represented by the outputs (or terms for dyed-in-the-wool AGIers). All
 physical quantities are well known to have value, significance, and
 dimensionality. Neurons/Terms (N/T) could easily be protein-tagged as to the
 dimensionality that their functionality is capable of producing, so that
 only compatible N/Ts could connect to them. However, let's dig a little
 deeper into dimensionality

 Physicists think we live in an MKS (Meters, Kilograms, Seconds) world, and
 that all dimensionality can be reduced to MKS. For physics purposes they may
 be right (see challenge below), but maybe for information processing
 purposes, they are missing some important things.

 *Challenge to MKS:* Note that some physicists and most astronomers utilize
 *dimensional analysis* where they experimentally play with the
 dimensions of observations to inductively find manipulations that would
 yield the dimensions of unobservable quantities, e.g. the mass of a star,
 and then run the numbers through the same manipulation to see if the results
 at least have the right exponent. However, many/most such manipulations
 produce nonsense, so they simply use this technique to jump from
 observations to a list of prospective results with wildly different
 exponents, and discard the results with the ridiculous exponents to find the
 correct result. The frequent failures of this process indirectly
 demonstrates that there is more to dimensionality (and hence physics) than
 just MKS. Let's accept that, and presume that neurons must have already
 dealt with whatever is missing from current thought.

 Consider, there is some (hopefully finite) set of reasonable

Re: [agi] Reward function vs utility

2010-06-27 Thread Ben Goertzel
You can always build the utility function into the assumed universal Turing
machine underlying the definition of algorithmic information...

I guess this will improve learning rate by some additive constant, in the
long run ;)

ben

On Sun, Jun 27, 2010 at 4:22 PM, Joshua Fox joshuat...@gmail.com wrote:

 This has probably been discussed at length, so I will appreciate a
 reference on this:

 Why does Legg's definition of intelligence (following on Hutters' AIXI and
 related work) involve a reward function rather than a utility function? For
 this purpose, reward is a function of the word state/history which is
 unknown to the agent while  a utility function is known to the agent.

 Even if  we replace the former with the latter, we can still have a
 definition of intelligence that integrates optimization capacity over
 possible all utility functions.

 What is the real  significance of the difference between the two types of
 functions here?

 Joshua









Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
Steve,

I know what dimensional analysis is, but it would be great if you could give
an example of how it's useful for everyday commonsense reasoning such as,
say, a service robot might need to do to figure out how to clean a house...

thx
ben

On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben,

 What I saw as my central thesis is that propagating carefully conceived
 dimensionality information along with classical information could greatly
 improve the cognitive process, by FORCING reasonable physics WITHOUT having
 to understand (by present concepts of what understanding means) physics.
 Hutter was just a foil to explain my thought. Note again my comments
 regarding how physicists and astronomers understand some processes through
 dimensional analysis that involves NONE of the sorts of understanding
 that you might think necessary, yet can predictably come up with the right
 answers.

 Are you up on the basics of dimensional analysis? The reality is that it is
 quite imperfect, but is often able to yield a short list of answers, with
 the correct one being somewhere in the list. Usually, the wrong answers are
 wildly wrong (they are probably computing something, but NOT what you might
 be interested in), and are hence easily eliminated. I suspect that neurons
 might be doing much the same, as could formulaic implementations like (most)
 present AGI efforts. This might explain natural architecture and guide
 human architectural efforts.

 In short, instead of a pot of neurons, we might instead have a pot of
 dozens of types of neurons that each have their own complex rules regarding
 what other types of neurons they can connect to, and how they process
 information. Architecture might involve deciding how many of each type to
 provide, and what types to put adjacent to what other types, rather than the
 more detailed concept now usually thought to exist.

 Thanks for helping me wring my thought out here.

 Steve
 =
 On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel b...@goertzel.org wrote:


 Hi Steve,

 A few comments...

 1)
 Nobody is trying to implement Hutter's AIXI design, it's a mathematical
 design intended as a proof of principle

 2)
 Within Hutter's framework, one calculates the shortest program that
 explains the data, where shortest is measured on Turing  machine M.
 Given a sufficient number of observations, the choice of M doesn't matter
 and AIXI will eventually learn any computable reward pattern.  However,
 choosing the right M can greatly accelerate learning.  In the case of a
 physical AGI system, choosing M to incorporate the correct laws of physics
 would obviously accelerate learning considerably.

 3)
 Many AGI designs try to incorporate prior understanding of the structure 
 properties of the physical world, in various ways.  I have a whole chapter
 on this in my forthcoming book on OpenCog  E.g. OpenCog's design
 includes a physics-engine, which is used directly and to aid with
 inferential extrapolations...

 So I agree with most of your points, but I don't find them original except
 in phrasing ;)

 ... ben



Re: [agi] Huge Progress on the Core of AGI

2010-06-27 Thread Ben Goertzel
 to the real world of AGI problems. You should get to know it.

 And as this example (and my rock wall problem) indicate, these problems
 can be as simple and accessible as fairly easy narrow AI problems.
  *From:* Ben Goertzel b...@goertzel.org
 *Sent:* Sunday, June 27, 2010 7:33 PM
   *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Huge Progress on the Core of AGI


 That's a rather bizarre suggestion Mike ... I'm quite sure a simple narrow
 AI system could be constructed to beat humans at Pong ;p ... without
 teaching us much of anything about intelligence...

 Very likely a narrow-AI machine learning system could *learn* by
 experience to beat humans at Pong ... also without teaching us much
 of anything about intelligence...

 Pong is almost surely a toy domain ...

 ben g

 On Sun, Jun 27, 2010 at 2:12 PM, Mike Tintner 
 tint...@blueyonder.co.ukwrote:

  Try ping-pong -  as per the computer game. Just a line (/bat) and a
 square(/ball) representing your opponent - and you have a line(/bat) to play
 against them

 Now you've got a relatively simple true AGI visual problem - because if
 the opponent returns the ball somewhat as a real human AGI does,  (without
 the complexities of spin etc just presumably repeatedly changing the
 direction (and perhaps the speed)  of the returned ball) - then you have a
 fundamentally *unpredictable* object.

 How will your program learn to play that opponent - bearing in mind that
 the opponent is likely to keep changing and even evolving strategy? Your
 approach will have to be fundamentally different from how a program learns
 to play a board game, where all the possibilities are predictable. In the
 real world, past performance is not a [sure] guide to future performance.
 Bayes doesn't apply.

 That's the real issue here -  it's not one of simplicity/complexity -
 it's that  your chosen worlds all consist of objects that are predictable,
 because they behave consistently, are shaped consistently, and come in
 consistent, closed sets - and  can only basically behave in one way at any
 given point. AGI is about dealing with the real world of objects that are
 unpredictable because they behave inconsistently,even contradictorily, are
 shaped inconsistently and come in inconsistent, open sets - and can behave
 in multi-/poly-ways at any given point. These differences apply at all
 levels from the most complex to the simplest.

 Dealing with consistent (and regular) objects is no preparation for
 dealing with inconsistent, irregular objects. It's a fundamental error.

 Real AGI animals and humans were clearly designed to deal with a world of
 objects that have some consistencies but overall are inconsistent, irregular
 and come in open sets. The perfect regularities and consistencies of
 geometrical figures and mechanical motion (and boxes moving across a screen)
 were only invented very recently.



  *From:* David Jones davidher...@gmail.com
 *Sent:* Sunday, June 27, 2010 5:57 PM
  *To:* agi agi@v2.listbox.com
 *Subject:* Re: [agi] Huge Progress on the Core of AGI

 Jim,

 Two things.

 1) If the method I have suggested works for the simplest case, it is
 quite straightforward to add complexity and then ask, how do I solve it
 now. If you can't solve that case, there is no way in hell you will solve
 the full AGI problem. This is how I intend to figure out how to solve such a
 massive problem. You cannot tackle the whole thing all at once. I've tried
 it and it doesn't work because you can't focus on anything. It is like a
 Rubik's cube. You turn one piece to get the color orange in place, but at
 the same time you are screwing up the other colors. Now imagine that times
 1000. You simply can't do it. So, you start with a simple demonstration of
 the difficulties and show how to solve a small puzzle, such as a Rubik's
 cube with 4 little cubes to a side instead of 6. Then you can show how to
 solve 2 sides of a rubiks cube, etc. Eventually, it will be clear how to
 solve the whole problem because by the time you're done, you have a complete
 understanding of what is going on and how to go about solving it.

 2) I haven't mentioned a method for matching expected behavior to
 observations and bypassing the default algorithms, but I have figured out
 quite a lot about how to do it. I'll give you an example from my own notes
 below. What I've realized is that the AI creates *expectations* (again).
 When those expectations are matched, the AI does not do its default
 processing and analysis. It doesn't do the default matching that it normally
 does when it has no other knowledge. It starts with an existing hypothesis.
 When unexpected observations or inconsistencies occur, then the AI will have
 a *reason* or *cue* (these words again... very important concepts) to look
 for a better hypothesis. Only then, should it look for another hypothesis.

 My notes:
 How does the AI learn and figure out how to explain complex unforeseen
 behaviors that are not preprogrammable
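
The expectation-gating idea in these notes (skip the default analysis while expectations hold, and search for a better hypothesis only on a surprise) might look roughly like this; the class and function names are assumptions, not from the thread:

```python
# Sketch of expectation gating: run cheap expectation-matching first, and fall
# back to the expensive default hypothesis search only when an observation
# violates the current hypothesis.

def process(observation, current_hypothesis, full_search):
    """current_hypothesis.expected(obs) -> bool (does the observation match?).
    full_search(obs) -> a new hypothesis (the expensive default path)."""
    if current_hypothesis and current_hypothesis.expected(observation):
        return current_hypothesis          # expectations met: no default analysis
    return full_search(observation)        # surprise: look for a better hypothesis

class ConstantWindow:
    """Toy hypothesis: clicking the icon always opens a blank window."""
    def expected(self, obs):
        return obs == "blank_window_appeared"

searches = []
def full_search(obs):
    searches.append(obs)               # record that the expensive path ran
    return ConstantWindow()

h = ConstantWindow()
h2 = process("blank_window_appeared", h, full_search)    # matches: no search
h3 = process("window_with_old_content", h2, full_search) # surprise: re-search
assert h2 is h and len(searches) == 1
```

The point of the sketch is the control flow, not the hypotheses themselves: the unexpected observation is the *cue* that triggers re-hypothesizing, exactly as described above.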

Re: [agi] Hutter - A fundamental misdirection?

2010-06-27 Thread Ben Goertzel
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Ben,

 On Sun, Jun 27, 2010 at 3:47 PM, Ben Goertzel b...@goertzel.org wrote:

  know what dimensional analysis is, but it would be great if you could
 give an example of how it's useful for everyday commonsense reasoning such
 as, say, a service robot might need to do to figure out how to clean a
 house...


 How much detergent will it need to clean the floors? Hmmm, we need to know
 ounces. We have the length and width of the floor, and the bottle says to
 use 1 oz/M^2. How could we manipulate two M-dimensioned quantities and 1
 oz/M^2 dimensioned quantity to get oz? The only way would seem to be to
 multiply all three numbers together to get ounces. This WITHOUT
 understanding things like surface area, utilization, etc.
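
Steve's detergent calculation is really unit bookkeeping: carry dimensions alongside values and accept a combination only if the dimensions reduce to the target unit. A minimal sketch (an ad-hoc representation, not a real units library):

```python
# Minimal unit tracking: a quantity is (value, {unit: exponent}).
# Multiplying quantities adds exponents; the result is a valid answer for
# "ounces of detergent" only if the combined dimensions reduce to {'oz': 1}.

def multiply(*quantities):
    value, dims = 1.0, {}
    for v, d in quantities:
        value *= v
        for unit, exp in d.items():
            dims[unit] = dims.get(unit, 0) + exp
    return value, {u: e for u, e in dims.items() if e != 0}

length = (4.5, {'m': 1})            # floor length in meters
width  = (2.0, {'m': 1})            # floor width in meters
dose   = (1.0, {'oz': 1, 'm': -2})  # 1 oz per square meter, from the bottle

value, dims = multiply(length, width, dose)
assert dims == {'oz': 1}   # the dimensions force the right combination
assert value == 9.0
```

As in the email, nothing here "understands" surface area: multiplying all three quantities is simply the only manipulation whose residual dimensions are pure ounces, which is the dimensional-analysis move in code.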



I think that the El Salvadorean maids who come to clean my house
occasionally, solve this problem without any dimensional analysis or any
quantitative reasoning at all...

Probably they solve it based on nearest-neighbor matching against past
experiences cleaning other dirty floors with water in similarly sized and
shaped buckets...

-- ben g





Re: [agi] The problem with AGI per Sloman

2010-06-24 Thread Ben Goertzel
Yes... the idea underlying Sloman's quote is why the interdisciplinary field
of cognitive science was invented a few decades ago...

ben g

On Thu, Jun 24, 2010 at 12:05 PM, Jim Bromer jimbro...@gmail.com wrote:

 Both of you are wrong.  (Where did that quote come from, by the way?  What
 year did he write or say that?)

 An inadequate understanding of the problems is exactly what has to
 be expected by researchers (both professional and amateurs) when they are
 facing a completely novel pursuit.  That is why we have endless discussions
 like these.  What happened over and over again in AI research is that the
 amazing advances in computer technology always seemed to suggest that
 similar advances in AI must be just off the horizon.  And the reality is
 that there have been major advances in AI.  In the 1970's a critic stated
 that he wouldn't believe that AI was possible until a computer was able to
 beat him in chess.  Well, guess what happened and guess what conclusion he
 did not derive from the experience.  One of the problems with critics is
 that they can be as far off as those whose optimism is absurdly unwarranted.

 If the lack of a broader multi-disciplinary effort were the obstacle to creating AGI, we
 would have AGI by now.
 history of AI or the present day reach of computer programming that a
 multi-discipline effort is not the key to creating effective AGI.  Computers
 have become pervasive in modern day life, and if it was just a matter of
 getting people with different kinds of interests involved, it would have
 been done by now.  It is a little like saying that the key to safe deep sea
 drilling is to rely on the expertise of companies that make billions and
 billions of dollars and which stand to lose billions by mistakes.  While
 that should make sense, if you look a little more closely, you can see that
 it doesn't quite work out that way in the real world.

 Jim Bromer

 On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  One of the problems of AI researchers is that too often they start off
 with an inadequate
 understanding of the *problems* and believe that solutions are only a few
 years away. We need an educational system that not only teaches techniques
 and solutions, but also an understanding of problems and their difficulty —
 which can come from a broader multi-disciplinary education. That could speed
 up progress.
 A. Sloman

 ( who else keeps saying that?)






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org


“When nothing seems to help, I go look at a stonecutter hammering away at
his rock, perhaps a hundred times without as much as a crack showing in it.
Yet at the hundred and first blow it will split in two, and I know it was
not that blow that did it, but all that had gone before.”





Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Yes, I'm expecting the AI to make tools from blocks and beads

No, i'm not attempting to make a detailed simulation of the human
brain/body, just trying to use vaguely humanlike embodiment and
high-level mind-architecture together with computer science
algorithms, to achieve AGI

On Tue, Jan 13, 2009 at 5:56 AM, William Pearson wil.pear...@gmail.com wrote:
 2009/1/9 Ben Goertzel b...@goertzel.org:
 This is an attempt to articulate a virtual world infrastructure that
 will be adequate for the development of human-level AGI

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

 goertzel.org seems to be down. So I can't refresh my memory of the paper.

 Most of the paper is taken up by conceptual and requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It attempts to
 define what kind of underlying virtual world infrastructure an
 effective AGI preschool would minimally require.


 In some ways this question is underdefined. It depends on what the
 learning system is like. If it is like a human brain it would need a
 sufficiently (lawfully) changing world to stimulate its neural
 plasticity (rain, seasons, new buildings, death of pets, growth of its
 own body).  That is a never-ending series of connectible but new
 situations to push the brain in different directions. Cats' eyes
 deprived of stimulation go blind, so a brain in an unstimulating
 environment might fail to develop.

 So I would say that not only are certain dynamics important but there
 should also be a large variety of externally presented examples.
 Consider for example learning electronics, the metaphor of rivers and
 dams is often used to teach it, but if the only example of fluid
 dynamics you have come across is a flat pool of beads, then you might
 not get the metaphor.  Similarly a kettle boiling dry might be used to
 teach about part of the water cycle.

 There may be lots of other subconscious analogies of this sort, formed
 when we are young, that we don't know about. That would
 be my worry when implementing a virtual world for AI development.

 If it is not like a human brain (in this respect), then the question
 is a lot harder. Also are you expecting the AIs to make tools out of
 the blocks and beads?

  Will






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

This is no place to stop -- half way between ape and angel
-- Benjamin Disraeli




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Hi,

 Since I can now get to the paper, some further thoughts. Concepts that
 would seem hard to form in your world are organic growth and phase
 changes of materials. Also naive chemistry would seem to be somewhat
 important (cooking, dissolving materials, burning: these are things
 that a pre-schooler would come into contact more at home than in
 structured pre-school).

Actually, you could probably get plantlike growth using beads, via
methods similar to L-systems (used in graphics for simulating plant
growth)

Woody plants could be obtained using a combination of blocks and
beads, as well.
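For readers unfamiliar with L-systems, the core mechanism is just parallel string rewriting; a toy example using the classic textbook algae rules (not anything from the paper):

```python
# Toy L-system: every symbol is rewritten in parallel each generation.
# These are Lindenmayer's classic algae rules, used purely as illustration.
rules = {"A": "AB", "B": "A"}

def grow(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(symbol, symbol) for symbol in s)
    return s

print(grow("A", 5))  # ABAABABAABAAB -- string lengths follow the Fibonacci sequence
```

In a graphics setting the symbols are interpreted as drawing commands (branch forward, turn, etc.), which is how plantlike structures emerge from such simple rules.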

Phase changes would probably arise via phase transitions in bead
conglomerates, with the control parameters driven by changes in
adhesion

However, naive chemistry would exist only in a far more primitive form
than in the real world, I'll have to admit.  This is just a
shortcoming of the BlocksNBeadsWorld, and I think it's an acceptable
one...

ben




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Indeed...  but cake-baking just won't have the same nuances ;-)

On Tue, Jan 13, 2009 at 10:08 AM, Russell Wallace
russell.wall...@gmail.com wrote:
 Melting and boiling at least should be doable: assign every bead a
 temperature, and let solid interbead bonds turn liquid above a certain
 temperature and disappear completely above some higher temperature.
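The temperature-driven bond rule Russell describes might look like this in outline (the thresholds are placeholders, not values from any model):

```python
# Sketch of the per-bond rule described above: solid below the melting
# point, liquid below the boiling point, gone above it.
MELT_T = 50.0  # placeholder melting threshold
BOIL_T = 90.0  # placeholder boiling threshold

def bond_state(bead_temperature):
    if bead_temperature < MELT_T:
        return "solid"   # rigid inter-bead bond
    if bead_temperature < BOIL_T:
        return "liquid"  # bond becomes flexible
    return "gone"        # bond disappears entirely

print(bond_state(20.0), bond_state(70.0), bond_state(95.0))
# solid liquid gone
```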






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

This is no place to stop -- half way between ape and angel
-- Benjamin Disraeli




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Actually, I view that as a matter for the AGI system, not the world.

Different AGI systems hooked up to the same world may choose to
receive different inputs from it

Binocular vision, for instance, is not necessary in a virtual world,
and some AGIs might want to use it whereas others don't...

On Tue, Jan 13, 2009 at 1:13 PM, Philip Hunt cabala...@googlemail.com wrote:
 2009/1/9 Ben Goertzel b...@goertzel.org:
 Hi all,

 I intend to submit the following paper to JAGI shortly, but I figured
 I'd run it past you folks on this list first, and incorporate any
 useful feedback into the draft I submit

 Perhaps the paper could go into more detail about what sensory input
 the AGI would have.

 E.g. you might specify that its vision system would consist of 2
 pixelmaps (binocular vision) each 1000x1000 pixels, in three colours
 and 16 bits of intensity, updated 20 times per second.

 Of course, you may want to specify the visual system differently, but
 it's useful to say so and make your assumptions concrete.
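As a rough illustration of what such a spec implies, here is the raw data rate for exactly the numbers quoted above:

```python
# Back-of-envelope sizing for the example visual spec quoted above.
eyes = 2             # binocular: two pixelmaps
width, height = 1000, 1000
channels = 3         # three colours
bits_per_channel = 16
frames_per_second = 20

bytes_per_frame_pair = eyes * width * height * channels * bits_per_channel // 8
print(bytes_per_frame_pair)                      # 12000000 bytes per frame pair
print(bytes_per_frame_pair * frames_per_second)  # 240000000 bytes/s raw
```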

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

This is no place to stop -- half way between ape and angel
-- Benjamin Disraeli




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Matt,

The complexity of a simulated environment is tricky to estimate, if
the environment contains complex self-organizing dynamics, random
number generation, and complex human interactions ...

ben

On Tue, Jan 13, 2009 at 1:29 PM, Matt Mahoney matmaho...@yahoo.com wrote:
 My response to Ben's paper is to be cautious about drawing conclusions from 
 simulated environments. Human level AGI has an algorithmic complexity of 10^9 
 bits (as estimated by Landauer). It is not possible to learn this much 
 information from an environment that is less complex. If a baby AI did 
 perform well in a simplified simulation of the world, it would not imply that 
 the same system would work in the real world. It would be like training a 
 language model on a simple, artificial language and then concluding that the 
 system could be scaled up to learn English.

 This is a lesson from my dissertation work in network intrusion anomaly 
 detection. This was a machine learning task in which the system was trained 
 on attack-free network traffic, and then identified anything out of the 
 ordinary as malicious. For development and testing, we used the 1999 
 MIT-DARPA Lincoln Labs data set consisting of 5 weeks of synthetic network 
 traffic with hundreds of labeled attacks. The test set developers took great 
 care to make the data as realistic as possible. They collected statistics 
 from real networks, built an isolated network of 4 real computers running 
 different operating systems, and thousands of simulated computers that 
 generated HTTP requests to public websites and mailing lists, and generated 
 synthetic email using English word bigram frequencies, and other kinds of 
 traffic.

 In my work I discovered a simple algorithm that beat the best intrusion 
 detection systems available at the time. I parsed network packets into 
 individual 1-4 byte fields, recorded all the values that ever occurred at 
 least once in training, and flagged any new value in the test data as 
 suspicious, with a score inversely proportional to the size of the set of 
 values observed in training and proportional to the time since the previous 
 anomaly.
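A hedged reconstruction of the scoring rule described above (this is not Matt's actual code, just the stated formula: a novel field value scores proportionally to the time since the last anomaly and inversely to the number of distinct values seen in training):

```python
# Per-field anomaly detector, following the rule described in the
# paragraph above. All details beyond that rule are assumptions.
class FieldAnomalyDetector:
    def __init__(self):
        self.seen = set()        # values observed at least once in training
        self.last_anomaly_t = 0  # time of the previous anomaly

    def train(self, value):
        self.seen.add(value)

    def score(self, value, t):
        if value in self.seen:
            return 0.0  # known value: not suspicious
        s = (t - self.last_anomaly_t) / max(len(self.seen), 1)
        self.last_anomaly_t = t
        return s

det = FieldAnomalyDetector()
for v in [64, 128]:  # e.g. field values seen in attack-free training traffic
    det.train(v)
print(det.score(64, t=10))  # 0.0 -- seen in training
print(det.score(32, t=10))  # 5.0 -- novel: 10 time units / 2 known values
```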

 Not surprisingly, the simple algorithm failed on real network traffic. There 
 were too many false alarms for it to be even remotely useful. The reason it 
 worked on the synthetic traffic was that it was algorithmically simple 
 compared to real traffic. For example, one of the most effective tests was 
 the TTL value, a counter that decrements with each IP routing hop, intended 
 to prevent routing loops. It turned out that most of the attacks were 
 simulated from a machine that was one hop further away than the machines 
 simulating normal traffic.

 A problem like that could have been fixed, but there were a dozen others that 
 I found, and probably many that I didn't find. It's not that the test set 
 developers weren't careful. They spent probably $1 million developing it 
 (several people over 2 years). It's that you can't simulate the high 
 complexity of thousands of computers and human users with anything less than 
 that. Simple problems have simple solutions, but that's not AGI.

 -- Matt Mahoney, matmaho...@yahoo.com


 --- On Fri, 1/9/09, Ben Goertzel b...@goertzel.org wrote:

 From: Ben Goertzel b...@goertzel.org
 Subject: [agi] What Must a World Be That a Humanlike Intelligence May 
 Develop In It?
 To: agi@v2.listbox.com
 Date: Friday, January 9, 2009, 5:58 PM
 Hi all,

 I intend to submit the following paper to JAGI shortly, but
 I figured
 I'd run it past you folks on this list first, and
 incorporate any
 useful feedback into the draft I submit

 This is an attempt to articulate a virtual world
 infrastructure that
 will be adequate for the development of human-level AGI

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

 Most of the paper is taken up by conceptual and
 requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It
 attempts to
 define what kind of underlying virtual world infrastructure
 an
 effective AGI preschool would minimally require.

 thx
 Ben G



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

This is no place to stop -- half way between ape and angel
-- Benjamin Disraeli



Re: [agi] [WAS The Smushaby] The Logic of Creativity

2009-01-13 Thread Ben Goertzel
 and structured) methods by which one
 could achieve some procedural goal, and then he declares that logic
 (in this greater sense that I believe acknowledged) was incapable of
 achieving it.

 Let's take a flying house.  I have to say that there was a very great
 chance that I misunderstood what Mike was saying, since I believe that
 he effectively said that a computer program, using logically derived
 systems could not come to the point where it could creatively draw a
 picture of a flying house like a child might.

 If that was what he was saying then it is very strange.  Obviously,
 one could program a computer to draw a flying house.  So right away,
 his point must have been under stated, because that means that a
 computer program using computer logic (somewhere within this greater
 sense of the term) could follow a program designed to get it to draw a
 flying house.

 So right away, Mike's challenge can't be taken seriously.  If we can
 use logical design to get the computer program to draw a flying house,
 we can find more creative ways to get it to the same point.  Do you
 understand what I am saying?  You aren't actually going to challenge
 me to write a rather insipid program that will draw a flying house for
 you are you?  You accept the statement that I could do that if I
 wanted to right?  If you do accept that statement, then you should be
 able to accept the fact that I could also write a more elaborate
 computer program to do the same thing, only it might, for example, do
 so only after the words 'house' and 'flying' were input. I think you
 understand that I could write a slightly more elaborate computer
 program to do something like that.  Ok, now I could keep making it
 more complicated and eventually I could get to the point where
 it could take parts of pictures that it was exposed to and draw them
 in more creative combinations.   If it was exposed to pictures of
 airplanes flying, and if it was exposed to pictures of houses, it
 might, through quasi-random experimentation, try drawing a picture of
 the airplane flying past the house as if the house was an immense
 mountain, and then it might try some clouds as landscaping for the
 house and then it might try a cloud with a driveway, garbage can and a
 chimney, and eventually it might even draw a picture of a house with
 wings.  All I need to do that is to use some shape-detecting
 algorithms, of the sort developed for graphics programs and used
 all the time by graphic artists, that can approximately determine the
 shape of the house and airplane in the different pictures; then it
 would just be a matter of time before it could (and would) try to draw
 a flying house.

 Which step do you doubt, or did I completely misunderstand you?
 1. I could (I hope I don't have to) write a program that could draw a
 flying house.
 2. I could make it slightly more elaborate so, for example, that it
 would only draw the flying house if the words 'house' and 'flying'
 were input.
 3. I could vary the program in many other ways.  Now suppose that I
 showed you one of these programs.  After that I could make it more
 complicated so that it went through a slightly more creative process
 than the program you saw the previous time.
 4. I could continue to make the program more and more complicated. I
 could (with a lot of graphics techniques that I know about but
 haven't actually mastered) write the program so that, if it was exposed
 to pictures of houses and to pictures of flying, it would have the
 ability to eventually draw a picture of a flying house (along with a
 lot of other creative efforts that you have not even thought of).  But
 the thing is that I can do this without using advanced AGI
 techniques!

 So, I must retain the recognition that I may not have been able to
 understand you because what you are saying is not totally reasonable
 to me.
 Jim Bromer











-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

This is no place to stop -- half way between ape and angel
-- Benjamin Disraeli




[agi] initial reaction to A2I2's call center product

2009-01-12 Thread Ben Goertzel
AGI company A2I2 has released a product for automating call center
functionality, see...

http://www.smartaction.com/index.html

Based on reading the website, here is my initial reaction.

Certainly, automating a higher and higher percentage of call center
functionality is a worthy goal, and a place one would expect AGI
technology to be able to play a role.  Current automated call center
systems either provide extremely crude functionality, or else require
extensive domain customization prior to each deployment; and they
still show serious shortcomings even after such customization, due
largely to their inability to interpret the user's statements in terms
of an appropriate contextual understanding.  The promise AGI
technology offers for this domain is the possibility of responding to
user statements with the contextual understanding that only general
intelligence can bring.

The extent to which A2I2 has really solved this very difficult
problem, however, is impossible to assess without actually trying the
product.  What they have might be an incremental improvement over
existing technologies, or it might be a quantum leap forward; based on
the information provided, there is no way to tell.  For example
http://www.tuvox.com/ is a quite sophisticated competitor and it would
be interesting to see a head to head competition between their system
and A2I2's.

The available materials tell little about the underlying technology.
Claims such as


Functionally, it recognizes speech, understands the caller's meaning
and intent, remembers the evolving context of the conversation, and
obtains information in real time from databases and websites.


are evocative but could be interpreted in many different ways.
Interpreted most broadly, this would imply that A2I2 has achieved a
human-level AI system; but if this were the case, there would be
better things to do with it than automate call centers.  Based on the
available information, it's not clear just how narrowly one must
interpret these assertions to obtain agreement with the truth.  What
is clear is that they are taking an adaptive learning based approach
rather than an approach based on extensive hand-coding of linguistic
resources, which is interesting, and vaguely reminiscent of Robert
Hecht-Nielsen's neural net approach to language processing.

ben g




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Ben Goertzel
The problem with simulations that run slower than real time is that
they aren't much good for running AIs interactively with humans... and
for AGI we want the combination of social and physical interaction

However, I agree that for an initial prototype implementation of bead
physics that would be the best approach...

On Mon, Jan 12, 2009 at 5:30 AM, Russell Wallace
russell.wall...@gmail.com wrote:
 I think this sort of virtual world is an excellent idea.

 I agree with Benjamin Johnston's idea of a unified object model where
 everything consists of beads.

 I notice you mentioned distributing the computation. This would
 certainly be valuable in the long run, but for the first version I
 would suggest having each simulation instance run on a single machine
 with the fastest physics capable GPU on the market, and accepting that
 it will still run slower than real time. Let an experiment be an
 overnight run, and use multiple machines by running multiple
 experiments at the same time. That would make the programming for the
 first version more tractable.






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

This is no place to stop -- half way between ape and angel
-- Benjamin Disraeli




[agi] time-sensitive issue: voting members sought to participate in upcoming election for H+ (World Transhumanist Association)

2009-01-11 Thread Ben Goertzel
Hi all,

Some of you may be aware of the organization called the World
Transhumanist Association, and currently in the process of being
rebranded as H+ aka Humanity+

I've been on the board of the organization since the middle of the
year and among other things have been helping out with the management
of the H+ magazine

http://www.hplusmagazine.com/

whose first issue got nearly 600,000 downloads.

The organization is in a phase of rapid growth and change, and I'm
hoping it can grow beyond its roots to become a major force in
spreading futurist memes throughout the world...

... which brings us to the purpose of the current message

the Humanity+ (formerly World Transhumanist Association) Board
elections begin tomorrow, Monday January 11th.

To vote in the elections, you need to be a Supporting or Sustaining
Member.  Eight seats are open.

If you have any interest in such things, it would be great if you
could renew your membership or become a member by tonight so that you
can vote in next week's elections.

Here are the candidates running:

http://www.transhumanism.org/index.php/WTA/more/hbrdc/

I'd like to especially recommend supporting the first eight candidates
listed at the URL: Sonia Arrison, George Dvorsky, Patri Friedman, Ben
Goertzel (big surprise), Stephane Gounari, Todd Huffman, Jonas Lamis,
and Mike LaTorra.

Sorry for the short notice, but if you see this in time and have the
interest, I hope you'll become a member by tonight so that you can
vote next week.

You can renew / become a member here for $25-$50 (please see middle of
page with the 1 Year or 2 Year buttons):

http://transhumanism.org/index.php/WTA/join/

thx
Ben




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
On Sat, Jan 10, 2009 at 4:27 PM, Nathan Cook nathan.c...@gmail.com wrote:
 What about vibration? We have specialized mechanoreceptors to detect
 vibration (actually vibration and pressure - presumably there's processing
 to separate the two). It's vibration that lets us feel fine texture, via the
 stick-slip friction between fingertip and object.

Actually, letting beads vibrate at various frequencies would seem
perfectly reasonable ... and could lead to interesting behaviors in
sets of flexibly coupled beads.

I think this would be a good addition to the model, thanks!
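As a toy illustration of what flexibly coupled vibrating beads could mean dynamically, here are two beads modeled as coupled oscillators (all parameters are invented placeholders):

```python
# Two beads, each pulled toward its rest position and coupled to the
# other by a soft spring. Plain Euler integration; arbitrary parameters.
k_couple = 0.5  # coupling stiffness between the beads
dt = 0.01       # integration step

x = [1.0, -1.0]  # displacements (anti-symmetric start)
v = [0.0, 0.0]   # velocities

for _ in range(1000):
    f0 = -x[0] + k_couple * (x[1] - x[0])
    f1 = -x[1] + k_couple * (x[0] - x[1])
    v[0] += f0 * dt
    v[1] += f1 * dt
    x[0] += v[0] * dt
    x[1] += v[1] * dt

# The symmetric mode (x0 + x1) stays at zero; the beads oscillate
# in antiphase at a frequency set by the coupling stiffness.
print(round(x[0] + x[1], 6))  # 0.0
```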

 On a related note, even a very fine powder of very low friction feels
 different to water - how can you capture the sensation of water using beads
 and blocks of a reasonably large size?

The objective of a CogDevWorld such as BlocksNBeadsWorld is explicitly
**not** to precisely simulate the sensations of being in the real
world.

My question to you is: What important cognitive ability is drastically
more easily developable given a world that contains a distinction
between fluids and various sorts of bead-conglomerates?

-- Ben G




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
 The model feels underspecified to me, but I'm OK with that, the ideas
 conveyed. It doesn't feel fair to insist there's no fluid dynamics
 modeled though ;-)

Yes, the next step would be to write out detailed equations for the
model.  I didn't do that in the paper because I figured that would be
a fairly empty exercise unless I also implemented some kind of simple
simulation of the model.  With this sort of thing, it's easy to write
down equations that look good, but one doesn't really know if they
make sense till one's run some simulations, done some parameter
tuning, etc.

Which seems like a quite fun exercise, but I just didn't get to it
yet... actually it would be sensible to do this together with some
nice visualization...

ben




[agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
Hi all,

I intend to submit the following paper to JAGI shortly, but I figured
I'd run it past you folks on this list first, and incorporate any
useful feedback into the draft I submit

This is an attempt to articulate a virtual world infrastructure that
will be adequate for the development of human-level AGI

http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

Most of the paper is taken up by conceptual and requirements issues,
but at the end specific world-design proposals are made.

This complements my earlier paper on AGI Preschool.  It attempts to
define what kind of underlying virtual world infrastructure an
effective AGI preschool would minimally require.

thx
Ben G



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx




Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
It's actually mentioned there, though not emphasized... there's a
section on senses...

ben g

On Fri, Jan 9, 2009 at 8:10 PM, Eric Burton brila...@gmail.com wrote:
 Goertzel this is an interesting line of investigation. What about in
 world sound perception?

 On 1/9/09, Ben Goertzel b...@goertzel.org wrote:
 Hi all,

 I intend to submit the following paper to JAGI shortly, but I figured
 I'd run it past you folks on this list first, and incorporate any
 useful feedback into the draft I submit

 This is an attempt to articulate a virtual world infrastructure that
 will be adequate for the development of human-level AGI

 http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

 Most of the paper is taken up by conceptual and requirements issues,
 but at the end specific world-design proposals are made.

 This complements my earlier paper on AGI Preschool.  It attempts to
 define what kind of underlying virtual world infrastructure an
 effective AGI preschool would minimally require.

 thx
 Ben G



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx




Re: [agi] The Smushaby of Flatway.

2009-01-07 Thread Ben Goertzel
  If it was just a matter of writing the code, then it would have been done
 50 years ago.



if proving Fermat's Last Theorem was just a matter of doing math, it would
have been done 150 years ago ;-p

obviously, all hard problems that can be solved have already been solved...

???





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
It seems to come down to the simplicity measure... if you can have

simplicity(Turing program P that generates lookup table T)
<
simplicity(compressed lookup table T)

then the Turing program P can be considered part of a scientific
explanation...


On Tue, Dec 30, 2008 at 10:02 AM, William Pearson wil.pear...@gmail.comwrote:

 2008/12/29 Ben Goertzel b...@goertzel.org:
 
  Hi,
 
  I expanded a previous blog entry of mine on hypercomputation and AGI into
 a
  conference paper on the topic ... here is a rough draft, on which I'd
  appreciate commentary from anyone who's knowledgeable on the subject:
 
  http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf
 
 I'm still a bit fuzzy about your argument. So I am going to ask a
 question to hopefully clarify things somewhat.

 Couldn't you use similar arguments to say that we can't use science to
 distinguish between finite state machines and Turing machines? And
 thus question the usefulness of Turing machines for science? As, if you
 are talking about finite data sets, these can always be represented
 by a compressed giant lookup table.

  Will






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
I'm heading off on a vacation for 4-5 days [with occasional email access]
and will probably respond to this when i get back ... just wanted to let you
know I'm not ignoring the question ;-)

ben

On Tue, Dec 30, 2008 at 1:26 PM, William Pearson wil.pear...@gmail.comwrote:

 2008/12/30 Ben Goertzel b...@goertzel.org:
 
  It seems to come down to the simplicity measure... if you can have
 
  simplicity(Turing program P that generates lookup table T)
  <
  simplicity(compressed lookup table T)
 
  then the Turing program P can be considered part of a scientific
  explanation...
 

 Can you clarify what type of language this is in? You mention
 L-expressions however that is not very clear what that means. lambda
 expressions I'm guessing.

 If you start with a language that has infinity built in to its fabric,
 TMs will be simple, however if you started with a language that only
 allowed FSM to be specified e.g. regular expressions, you wouldn't be
 able to simply specify TMs, as you need to represent an infinitely
 long tape in order to define a TM.
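A familiar concrete case of that gap (my example, not Will's): the language aⁿbⁿ needs an unbounded counter, so no finite-state recognizer (plain regular expression) can accept it exactly, while a Turing-complete language specifies it in a couple of lines:

```python
import re

def anbn(s):
    """Recognize a^n b^n with a counter -- easy in a Turing-complete
    language, impossible for any finite-state machine."""
    half, rem = divmod(len(s), 2)
    return rem == 0 and s == "a" * half + "b" * half

# A fixed regex can only check nesting up to a hard-coded bound;
# this one handles n <= 3 and necessarily fails beyond it.
bounded = re.compile(r"^(|ab|aabb|aaabbb)$")

assert anbn("aaaabbbb")                   # n = 4: the program succeeds
assert bounded.match("aaaabbbb") is None  # ...but the bounded FSM cannot
```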

 Is this analogous to the argument at the end of section 3? It is that
 bit that is the least clear as far as I am concerned.

  Will






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





[agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
Hi,

I expanded a previous blog entry of mine on hypercomputation and AGI into a
conference paper on the topic ... here is a rough draft, on which I'd
appreciate commentary from anyone who's knowledgeable on the subject:

http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

This is a theoretical rather than practical paper, although it does attempt
to explore some of the practical implications as well -- e.g., in the
hypothesis that intelligence does require hypercomputation, how might one go
about creating AGI?   I come to a somewhat surprising conclusion, which is
that -- even if intelligence fundamentally requires hypercomputation -- it
could still be possible to create an AI via making Turing computer programs
... it just wouldn't be possible to do this in a manner guided entirely by
science; one would need to use some other sort of guidance too, such as
chance, imitation or intuition...

-- Ben G


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Hypercomputation and AGI

2008-12-29 Thread Ben Goertzel
Well, some of the papers in the references of my paper give formal
mathematical definitions of hypercomputation, though my paper is brief and
conceptual and not of that nature.  So although the generic concept may be
muddled, there are certainly some fully precise variants of it.

This paper surveys various formally defined varieties of hypercomputing,
though I haven't read it closely..

http://www.amirrorclear.net/academic/papers/many-forms.pdf

Anyway the argument in my paper is pretty strong and applies to any variant
with power beyond that of ordinary Turing machines, it would seem...

-- ben g

On Mon, Dec 29, 2008 at 4:18 PM, J. Andrew Rogers 
and...@ceruleansystems.com wrote:


 On Dec 29, 2008, at 10:45 AM, Ben Goertzel wrote:

 I expanded a previous blog entry of mine on hypercomputation and AGI into
 a conference paper on the topic ... here is a rough draft, on which I'd
 appreciate commentary from anyone who's knowledgeable on the subject:

 http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf

 This is a theoretical rather than practical paper, although it does
 attempt to explore some of the practical implications as well -- e.g., in
 the hypothesis that intelligence does require hypercomputation, how might
 one go about creating AGI?   I come to a somewhat surprising conclusion,
 which is that -- even if intelligence fundamentally requires
 hypercomputation -- it could still be possible to create an AI via making
 Turing computer programs ... it just wouldn't be possible to do this in a
 manner guided entirely by science; one would need to use some other sort of
 guidance too, such as chance, imitation or intuition...



 As more of a meta-comment, the whole notion of hypercomputation seems to
 be muddled, insofar as super-recursive algorithms may be a limited example
 of it.

 I was doing a lot of work with inductive Turing machines several years ago,
 and most of the differences seemed to be definitional e.g. what constitutes
 an algorithm or answer.  For most practical purposes, the price of
 implementing them in conventional discrete space is the introduction of some
 (usually acceptable) error.  But if they approximate to the point of
 functional convergence on a normal Turing machine...  As best I have been
 able to tell (and I have not really been paying attention, because the
 arguments seem to mostly be people talking past each other), ITMs
 raise some interesting philosophical questions regarding hypercomputation.


 We cannot implement a *strict* hypercomputer, but to what extent does it
 count if we can asymptotically converge on the functional consequences of
 a hypercomputer using a normal computer?  I suspect it will be hard to
 evict the belief in Penrosian magic from the error bars in any case.

 Cheers,

 J. Andrew Rogers







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Universal intelligence test benchmark

2008-12-29 Thread Ben Goertzel
Consciousness of X is: the idea or feeling that X is correlated with
Consciousness of X

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney matmaho...@yahoo.com wrote:

 --- On Mon, 12/29/08, John G. Rose johnr...@polyplexic.com wrote:

   What does consciousness have to do with the rest of your argument?
  
 
  Multi-agent systems should need individual consciousness to
  achieve advanced
  levels of collective intelligence. So if you are
  programming a multi-agent
  system, potentially a compressor, having consciousness in
  the agents could
  have an intelligence amplifying effect instead of having
  non-conscious
  agents. Or some sort of primitive consciousness component
  since higher level
  consciousness has not really been programmed yet.
 
  Agree?

 No. What do you mean by consciousness?

 Some people use consciousness and intelligence interchangeably. If that
 is the case, then you are just using a circular argument. If not, then what
 is the difference?

 -- Matt Mahoney, matmaho...@yahoo.com







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
David,

Good point... I'll revise the essay to account for it...

The truth is, we just don't know -- but in taking the virtual world
approach to AGI, we're very much **hoping** that a subset of human everyday
physical reality is good enough...

ben

On Sat, Dec 27, 2008 at 6:46 AM, David Hart dh...@cogical.com wrote:

 On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote:


 I wrote down my thoughts on this in a little more detail here (with some
 pastings from these emails plus some new info):


 http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html


 I really liked this essay. I'm curious about the clarity of terms 'real
 world' and 'physical world' in some places. It seems that, to make its
 point, the essay requires 'real world' and 'physical world' mean only
 'practical' or 'familiar physical reality', depending on context. Whereas,
 if 'real world' is reserved for a very broad definition of realities
 including physical realities (including classical, quantum mechanical and
 relativistic time and distance scales), peculiar human cultural realities,
 and other definable realities, it will be easier in follow-up essays to
 discuss AGI systems that can natively think simultaneously about any
 multitude of interrelated realities (a trick that humans are really bad at).
 I hope this makes sense...

 -dave






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
Dave --

See mildly revised version, where I replaced "real world" with "everyday
world" (and defined the latter term explicitly), and added a final section
relevant to the distinctions between the everyday world, simulated everyday
worlds, and other portions of the physical world.

http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html

-- Ben


On Sat, Dec 27, 2008 at 8:28 AM, Ben Goertzel b...@goertzel.org wrote:


 David,

 Good point... I'll revise the essay to account for it...

 The truth is, we just don't know -- but in taking the virtual world
 approach to AGI, we're very much **hoping** that a subset of human everyday
 physical reality is good enough...

 ben


 On Sat, Dec 27, 2008 at 6:46 AM, David Hart dh...@cogical.com wrote:

 On Sat, Dec 27, 2008 at 5:25 PM, Ben Goertzel b...@goertzel.org wrote:


 I wrote down my thoughts on this in a little more detail here (with some
 pastings from these emails plus some new info):


 http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html


 I really liked this essay. I'm curious about the clarity of terms 'real
 world' and 'physical world' in some places. It seems that, to make its
 point, the essay requires 'real world' and 'physical world' mean only
 'practical' or 'familiar physical reality', depending on context. Whereas,
 if 'real world' is reserved for a very broad definition of realities
 including physical realities (including classical, quantum mechanical and
 relativistic time and distance scales), peculiar human cultural realities,
 and other definable realities, it will be easier in follow-up essays to
 discuss AGI systems that can natively think simultaneously about any
 multitude of interrelated realities (a trick that humans are really bad at).
 I hope this makes sense...

 -dave






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 b...@goertzel.org

 I intend to live forever, or die trying.
 -- Groucho Marx




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-27 Thread Ben Goertzel
The question is how much detail about the world needs to be captured in a
simulation in order to support humanlike cognitive development.

As a single example, Piagetan conservation of volume experiments are often
done with water, which would suggest you need to have fluid dynamics in your
simulation to support that kind of experiment.  But you don't necessarily,
because you can do those same experiments with fairly large beads, via using
Newtonian mechanics to simulate the rolling-around of the beads.  So it's
not clear whether fluidics is needed in the sim world to enable humanlike
cognitive development, versus whether beads rolling around is good enough
(at the moment I suspect the latter)

As I'm planning to write a paper on this stuff, I don't want to divert time
to writing a long email about it.

As for which subset of a physical reality: my specific idea is to simulate
a real-world preschool, with enough fidelity that AIs can carry out the same
learning tasks that human kids carry out in a real preschool.


On Sat, Dec 27, 2008 at 9:56 AM, Mike Tintner tint...@blueyonder.co.ukwrote:

  Ben: in taking the virtual world approach to AGI, we're very much
 **hoping** that a subset of human everyday physical reality is good
 enough. ..

 Ben,

 Which subset(s)?

 The idea that you can virtually recreate any part or processes of reality
 seems horribly flawed - and unexamined.

 Take the development of intelligence. You seem (from recent exchanges) to
 accept that there is very roughly some natural order to the development of
 intelligence. So for example, you can't learn about planets and universes, if
 you haven't first learned about simple objects like stones and balls - nor
 about politics, governments and international relations if you haven't first
 learned about language, speech/conversation, emotions, other minds and much
 more.  Now we - science - have some ideas about this natural order - about
 how we have to develop from understanding simple to complex things. But
 overall our picture is pathetic and hugely gapped.  For science to produce
 an extensive picture of development here would - at a guess - take at least
 hundreds of thousands, if not millions of scientists, and many thousands (or
 millions) of discoveries, and many changes of competing paradigms.

 What are the chances then of an individual like you, or team of
 individuals, being able to design a coherent, practical order of
 intellectual development for an artificial, virtual agent straight off in a
 few years ?

 The same applies to any part of reality. We - science - may have a detailed
 picture of how some pieces of objects, like stones and water, work. But
 again our overall ability to model how all those particles, atoms and
 molecules interrelate in any given object, and how the object as a whole
 behaves, is still very limited. We still have all kinds of gaps in our
 picture of water. Scientific models are always far from the real thing.

 Again, to come anywhere near completing those models will take new armies
 of scientists.

 What are the chances then of a few individuals being able to correctly
 model the behaviour of any objects in the real world on a flat screen?

 IOW the short cut you hope for is probably the longest way round you
 could possibly choose. Robotics - forgetting altogether about formally
 modelling the world and just interacting with it directly - is actually
 shorter by far. So I doubt whether you have ever seriously examined how you
 would recreate a *particular* subset of reality in any detail - as simple
 even, say, as a ball - as opposed to the general idea. Have you?

 [Nb We're talking here about composite models of objects - so it's easy
 enough to create a reasonable picture of a ball bouncing on a hard surface,
 but what happens when your agent sits on it, or rubs it on his shirt, or
 bounces it on water,  or sand, or throws it at another ball in mid-air, or
 (as we've partly discussed) plays with it like an infant ?]




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
I'll try to answer this one...

1)
In a nutshell, the algorithmic info. definition of intelligence is like
this: Intelligence is the ability of a system to achieve a goal that is
randomly selected from the space of all computable goals, according to some
defined probability distribution on computable-goal space.

2)
Of course, if one had a system that was highly intelligent according to the
above definition, it would be a great compressor.

3)
There are theorems stating that if you have a great compressor, then by
wrapping a little code around it, you can get a system that will be highly
intelligent according to the algorithmic info. definition.  The catch is
that this system (as constructed in the theorems) will use insanely,
infeasibly much computational resource.
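A toy rendering of definition (1) (my own sketch, not Legg and Hutter's actual formalism): treat each goal as a predicate over integers, weight it by 2 to the minus its description length (echoing the universal prior), and score an agent by its weighted success rate.

```python
import random

# Hypothetical miniature of the algorithmic-information intelligence
# measure: goals are predicates, and simpler goals (shorter description
# length, in bits) get exponentially more weight.
GOALS = [
    # (description_length_in_bits, goal_predicate)
    (2, lambda x: x == 0),          # very simple goal
    (4, lambda x: x % 2 == 0),      # simple goal
    (8, lambda x: x % 7 == 3),      # more complex goal
]

def intelligence(agent, trials=1000, seed=0):
    """Weighted success rate over goals, weight proportional to 2^-K."""
    rng = random.Random(seed)
    norm = sum(2.0 ** -k for k, _ in GOALS)
    total = 0.0
    for k, goal in GOALS:
        weight = 2.0 ** -k / norm
        hits = sum(goal(agent(goal, rng)) for _ in range(trials))
        total += weight * hits / trials
    return total

# An agent that searches for a satisfying answer scores higher than
# one that answers at random.
def searcher(goal, rng):
    return next(x for x in range(100) if goal(x))

def guesser(goal, rng):
    return rng.randrange(100)

assert intelligence(searcher) > intelligence(guesser)
```

The real definition quantifies over all computable goals, which is precisely what makes it uncomputable in practice; the three-goal list here is only to make the weighting concrete.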

What are the weaknesses of the approach:

A)
The real problem of AI is to make a system that can achieve complex goals
using feasibly much computational resource.

B)
Workable strategies for achieving complex goals using feasibly much
computational resource, may be highly dependent on the particular
probability distribution over goal space mentioned in 1 above

For this reason, I'm not sure the algorithmic info. approach is of much use
for building real AGI systems.

I note that Shane Legg is now directing his research toward designing
practical AGI systems along totally different lines, not directly based on any
of the alg. info. stuff he worked on in his thesis.

However, Marcus Hutter, Juergen Schmidhuber and others are working on
methods of scaling down the approaches mentioned in 3 above (AIXItl, the
Godel Machine, etc.) so as to yield feasible techniques.  So far this has
led to some nice machine learning algorithms (e.g. the parameter-free
temporal difference reinforcement learning scheme in part of Legg's thesis,
and Hutter's new work on Feature Bayesian Networks and so forth), but
nothing particularly AGI-ish.  But personally I wouldn't be harshly
dismissive of this research direction, even though it's not the one I've
chosen.

-- Ben G




On Fri, Dec 26, 2008 at 3:53 PM, Richard Loosemore r...@lightlink.comwrote:

 Philip Hunt wrote:

 2008/12/26 Matt Mahoney matmaho...@yahoo.com:

 I have updated my universal intelligence test with benchmarks on about
 100 compression programs.


 Humans aren't particularly good at compressing data. Does this mean
 humans aren't intelligent, or is it a poor definition of intelligence?

  Although my goal was to sample a Solomonoff distribution to measure
 universal
 intelligence (as defined by Hutter and Legg),


 If I define intelligence as the ability to catch mice, does that mean
 my cat is more intelligent than most humans?

 More to the point, I don't understand the point of defining
 intelligence this way. Care to enlighten me?


 This may or may not help, but in the past I have pursued exactly these
 questions, only to get such confusing, evasive and circular answers, all of
 which amounted to nothing meaningful, that eventually I (like many others)
 have just had to give up and not engage any more.

 So, the real answers to your questions are that no, compression is an
 extremely poor definition of intelligence; and yes, defining intelligence to
 be something completely arbitrary (like the ability to catch mice) is what
 Hutter and Legg's analyses are all about.

 Searching for previous posts of mine which mention Hutter, Legg or AIXI
 will probably turn up a number of lengthy discussion in which I took a deal
 of trouble to debunk this stuff.

 Feel free, of course, to make your own attempt to extract some sense from
 it all, and by all means let me know if you eventually come to a different
 conclusion.




 Richard Loosemore








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Universal intelligence test benchmark

2008-12-26 Thread Ben Goertzel
 Most compression tests are like defining intelligence as the ability to
 catch mice. They measure the ability of compressors to compress specific
 files. This tends to lead to hacks that are tuned to the benchmarks. For the
 generic intelligence test, all you know about the source is that it has a
 Solomonoff distribution (for a particular machine). I don't know how you
 could make the test any more generic.


IMO the test is *too* generic  ... I don't think real-world AGI is mainly
about being able to recognize totally general patterns in totally general
datasets.   I suspect that to do that, the best approach is ultimately going
to be some AIXItl variant ... meaning it's a problem that's not really
solvable using a real-world amount of resources.  I suspect that all the AGI
systems one can really build are SO BAD at this general problem, that it's
better to characterize AGI systems

-- NOT in terms of how well they do at this general problem

but rather

-- in terms of what classes of datasets/environments they are REALLY GOOD at
recognizing patterns in

I think the environments existing in the real physical and social world are
drawn from a pretty specific probability distribution (compared to say, the
universal prior), and that for this reason, looking at problems of
compression or pattern recognition across general program spaces without
real-world-oriented biases, is not going to lead to real-world AGI.  The
important parts of AGI design are the ones that (directly or indirectly)
reflect the specific distribution of problems that the real world presents
an AGI system.

And this distribution is **really hard** to encapsulate in a text
compression database.  Because, we don't know what this distribution is.

And this is why we should be working on AGI systems that interact with the
real physical and social world, or the most accurate simulations of it we
can build.

-- Ben G





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel

 Suppose I take the universal prior and condition it on some real-world
 training data.  For example, if you're interested in real-world
 vision, take 1000 frames of real video, and then the proposed
 probability distribution is the portion of the universal prior that
 explains the real video.  (I can mathematically define this if there
 is interest, but I'm guessing the other people here can too, so maybe
 we can skip that.  Speak up if I'm being too unclear.)

 Do you think the result is different in an important way from the
 real-world probability distribution you're looking for?
 --
 Tim Freeman   http://www.fungible.com
 t...@fungible.com


No, I think that in principle that's the right approach ... but that simple,
artificial exercises like conditioning data on photos don't come close to
capturing the richness of statistical structure in the physical universe ...
or in the subsets of the physical universe that humans typically deal
with...

ben
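For reference, the construction Tim alludes to can be written down as the conditional form of the universal (Solomonoff) prior -- a standard definition, assuming a prefix universal machine U with program lengths l(p):

```latex
% Probability the universal prior assigns to continuation x after data D:
M(x \mid D) \;=\; \frac{M(Dx)}{M(D)},
\qquad\text{where}\qquad
M(y) \;=\; \sum_{p \,:\, U(p) = y\ast} 2^{-\ell(p)} .
```

Here U(p) = y* means program p outputs a string beginning with y; conditioning on 1000 frames of video just means taking D to be that data.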





Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-26 Thread Ben Goertzel
 Much of AI and pretty much all of AGI is built on the proposition that we
 humans must code knowledge because the stupid machines can't efficiently
 learn it on their own, in short, that UNsupervised learning is difficult.


No, in fact almost **no** AGI is based on this proposition.

Cyc is based strictly on this proposition ... some other GOFAI-ish systems
like SOAR are based on weaker forms of this proposition ... but this is
really a minority view in the AGI world, and a view taken by very few
designs created in the last decade ... sociologically, it seems to be a view
that peaked in the 70's and 80's...

-- Ben G





Re: Real-world vs. universal prior (was Re: [agi] Universal intelligence test benchmark)

2008-12-26 Thread Ben Goertzel
I wrote down my thoughts on this in a little more detail here (with some
pastings from these emails plus some new info):

http://multiverseaccordingtoben.blogspot.com/2008/12/subtle-structure-of-physical-world.html

On Sat, Dec 27, 2008 at 12:23 AM, Ben Goertzel b...@goertzel.org wrote:



 Suppose I take the universal prior and condition it on some real-world
 training data.  For example, if you're interested in real-world
 vision, take 1000 frames of real video, and then the proposed
 probability distribution is the portion of the universal prior that
 explains the real video.  (I can mathematically define this if there
 is interest, but I'm guessing the other people here can too, so maybe
 we can skip that.  Speak up if I'm being too unclear.)

 Do you think the result is different in an important way from the
 real-world probability distribution you're looking for?
 --
 Tim Freeman   http://www.fungible.com
 t...@fungible.com


 No, I think that in principle that's the right approach ... but that
 simple, artificial exercises like conditioning data on photos don't come
 close to capturing the richness of statistical structure in the physical
 universe ... or in the subsets of the physical universe that humans
 typically deal with...

 ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
I mentioned it because I looked at the book again recently and was pleasantly
surprised at how well his ideas seemed to have held up.  In other words,
although there are points on which I think he's probably wrong, his
decade-old ideas *still* seem more sensible and insightful than most of the
theoretical speculations one reads in the neuroscience literature... and I
can't really think of any recent neuroscience data that refutes any of his
key hypotheses...

On Tue, Dec 23, 2008 at 10:36 AM, Richard Loosemore r...@lightlink.com wrote:

 Ben Goertzel wrote:


 Richard,

 I'm curious what you think of William Calvin's neuroscience hypotheses as
 presented in e.g. The Cerebral Code

 That book is a bit out of date now, but still, he took complexity and
 nonlinear dynamics quite seriously, so it seems to me there may be some
 resonance between his ideas and your own

 I find his speculative ideas more agreeable than Tononi's, myself...

 thx
 ben g


 Yes, I did read his book (or part of it) back in 98/99, but 

 From what I remember, I found resonance, as you say, but he is one of those
 people who is struggling to find a way to turn an intuition into something
 concrete.  It is just that he wrote a book about it before he got to
 Concrete Operations.

 It would be interesting to take a look at it again, 10 years later, and see
 whether my opinion has changed.

 To put this in context, I felt like I was looking at a copy of myself back
 in 1982, when I struggled to write down my intuitions as a physicist coming
 to terms with psychology for the first time.  I am now acutely embarrassed
 by the naivete of that first attempt, but in spite of the embarrassment I
 know that I have since turned those intuitions into something meaningful,
 and I know that in spite of my original hubris, I was on a path to something
 that actually did make sense.  To cognitive scientists at the time it looked
 awful, unmotivated and disconnected from reality (by itself, it was!), but
 in the end it was not trash because it had real substance buried inside it.

 With people like Calvin (and others) I see writings that look somewhat
 speculative and ungrounded, just like my early attempts, so I am mixed
 between a desire to be lenient (because I was that like that once) and a
 feeling that they really need to be aware that their thoughts are still
 ungelled.

 Anyhow, that's my quick thoughts on him.  I'll see if I can dig out his
 book at some point.




 Richard Loosemore









    On Tue, Dec 23, 2008 at 9:53 AM, Richard Loosemore r...@lightlink.com wrote:

Ed Porter wrote:

Richard,
Please describe some of the counterexamples, that you can easily
come up
with, that make a mockery of Tononi's conclusion.

Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into
understanding his paper first time around, but the sheer agony of
reading (/listening to) his confused, shambling train of thought,
    the non sequiturs, and the pages of irrelevant math that I do
not need to experience a second time.  All of my original effort
only resulted in the discovery that I had wasted my time, so I have
no interest in wasting more of my time.

With other papers that contain more coherent substance, but perhaps
what looks like an error, I would make the effort.  But not this one.

It will have to be left as an exercise for the reader, I'm afraid.



Richard Loosemore


P.S.   A hint.  All I remember was that he started talking about
multiple regions (columns?) of the brain exchanging information with
one another in a particular way, and then he asserted a conclusion
which, on quick reflection, I knew would not be true of a system
resembling the distributed one that I described in my consciousness
paper (the molecular model).  Knowing that his conclusion was
flat-out untrue for that one case, and for a whole class of similar
systems, his argument was toast.









    -Original Message-
    From: Richard Loosemore [mailto:r...@lightlink.com]
    Sent: Monday, December 22, 2008 8:54 AM
    To: agi@v2.listbox.com
    Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
    machine that can learn from experience

Ed Porter wrote:

I don't think this AGI list should be so quick to dismiss a
$4.9 million dollar grant to create an AGI.  It will not
necessarily be vaporware. I think we should view it as a
good sign.

Even if it is for a project that runs the risk,
 like many
DARPA projects (like most scientific funding in general) of
not necessarily placing its money where it might do the most
good --- it is likely to at least produce some interesting

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-23 Thread Ben Goertzel
 claim the human brain is an eternal verity, since it is only
 believed that it has existed in anything close to its current form in the
 last 30 to 100 thousand years, and there is no guarantee how much longer it
 will continue to exist.  Compared to much of what the natural sciences
 study, its existence appears quite fleeting.

 I think this is just a terminology issue. The 'laws of nature' are the
 eternal verity, to me. The dynamical output they represent - of course that
 does whatever it does. The universe is an intrinsically dynamic entity at
 all levels. Even the persistent expression of total randomness is an
 'eternal verity'. No real issue here.



 ===Colin said==

 Anyway, for these reasons, folks who use computer models to study human
 brains/consciousness will encounter some difficulty justifying, to the basic
 physical sciences, claims made as to the equivalence of the model and
 reality. That difficulty is fundamental and cannot be 'believed away'.



 ===ED's reply===

 If you attend brain science lectures and read brain science literature, you
 will find that computer modeling is playing an ever increasing role in brain
 science --- so this basic difficulty that you describe largely does not
 exist.



 I think you've missed the actual point at hand for the reasons detailed *
 HERE*.

  ===Colin said==

 The intelligence originates in the brain. AGI and brain science must be
 literally joined at the hip or the AGI enterprise is arguably scientifically
 impoverished wishful thinking.

 ===ED's reply===

 I don't know what you mean by joined at the hip, but I think it is being
 overly anthropomorphic to think an artificial mind has to slavishly model a
 human brain to have great power and worth.



 But I do think it would probably have to accomplish some of the same
 general functions, such as automatic pattern learning, credit assignment,
 attention control, etc.



 Ed Porter

 We are all enthusiastically intent on creating artificial entities with
 some kind of usefulness (=great power and worth).  However, AGI is
 Artificial *General* Intelligence; it seeks to create power and worth through
 a claim that '*general intelligence*' has been delivered. This is not
 merely the same general functions; it is actual general intelligence. The
 statement A *model* of general intelligence is oxymoronic. If you can
 deliver general intelligence then you are not delivering a model of it, you
 are delivering *actual* general intelligence. To use models as a basis for
 it you need to have a scientific basis for a claim that the models that have
 been used to implement the AGI can (in theory) deliver identical behaviour =
 general intelligence. Models of a human brain could be involved. Models of
 outward human behaviour could be involved. ... in any case - Each AGI-er
 needs to have a cogent, scientifically based claim in respect of the models
 as deliverers of the claimed outcomes - or the beliefs underlying the
 AGI-er's approach have a critical weakness in the eyes of science.

 I don't think there's any real issue here. Mostly semantics being mixed a
 bit.

 Gotta get back to xmas! Yuletide stuff to you.

 Colin





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Relevance of SE in AGI

2008-12-22 Thread Ben Goertzel
Well, we have attempted to use sound software engineering principles to
architect the OpenCog framework, with a view toward making it usable for
prototyping speculative AI ideas and ultimately building scalable, robust,
mature AGI systems as well

But, we are fairly confident of our overall architecture with this system
because there have been a number of predecessor systems based on similar
principles, which we implemented and learned a lot from ...

If one has a new AGI idea and wants to start experimenting with it, SE is
basically a secondary matter ... the point is to explore the algorithms and
ideas by whatever means is less time-wasting and frustrating...

OTOH, if one has an AGI idea that's already been fleshed out a fair bit and
one is ready to try to use it as the basis for a scalable, extensible
system, SE is more worth paying attention to...

Premature attention to engineering when one should be focusing on science is
a risk, but so is ignoring engineering when one wants to build a scalable,
extensible system...

ben g

On Mon, Dec 22, 2008 at 9:03 AM, Richard Loosemore r...@lightlink.com wrote:

 Valentina Poletti wrote:

 I have a question for you AGIers.. from your experience as well as from
 your background, how relevant do you think software engineering is in
 developing AI software and, in particular AGI software? Just wondering..
 does software verification as well as correctness proving serve any use in
 this field? Or is this something used just for Nasa and critical
 applications?


 1) Software engineering (if we take that to mean the conventional
 repertoire of techniques taught as SE) is relevant to any project that
 gets up above a certain size, but it is less important when the project is
 much smaller, serves a more exploratory function, or where the design is
 constantly changing.  To this extent I agree with Pei's comments.

 2) If you are looking beyond the idea of simply grabbing some SE techniques
 off the shelf, and are instead asking if SE can have an impact on AGI, then
 the answer is a dramatic Yes!.  Why?  Because tools determine the way that
 we *can* think about things.  Tools shape our thoughts.  They can sometimes
 enable us to think in new ways that were simply not possible before the
 tools were invented.

 I decided a long time ago that if cognitive scientists had easy-to-use use
 tools that enabled them to construct realistic components of thinking
 systems, their entire style of explanation would be revolutionized.  Right
 now, cog sci people cannot afford the time to be both cog sci experts *and*
 sophisticated software developers, so they have to make do with programming
 that is, by and large, trivially simple.  This determines the kinds of
 models and explanations they can come up with.  (Ditto in spades for the
 neuroscientists, by the way).

 So, the more global answer to your question is that nothing could be more
 important for AGI than software engineering.

 The problem is, that the kind of software engineering we are talking about
 is not a matter of grabbing SE components off the shelf, but asking what the
 needs of cognitive scientists and AGIers might be, and then inventing new
 techniques and tools that will give these people the ability to think about
 intelligent systems in new ways.

 That is why I am working on Safaire.





 Richard Loosemore







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
Hi,



 So if the researcher on this project have been learning some of your ideas,
 and some of the better speculative thinking and neural simulations that have
 been done in brains science --- either directly or indirectly --- it might
 be incorrect to say that there is no 'design for a thinking machine' in
 SyNAPSE.



 But perhaps you know the thinking of the researchers involved enough to
 know that they do, in fact, lack such a design, other than what they have
 yet to learn by progress yet to be made by their neural simulations.


Well I talked to Dharmendra on this topic a couple months ago.  Believe me,
there is no grand AI architecture there.  You won't find one in their
publications, and they don't allude to one in their conversations.  You'd
have to be a heck of a conspiracy theorist to posit one...

I agree that one could make a neural-net-like design based on the underlying
conceptual principles of OpenCogPrime, and if I had a lot more free time
maybe I'd do it, but I'm more interested in putting my time into the current
design which IMO is better adapted to current computers.  I have a feeling
the neuroscientists have a lot of surprises for us coming up in the next 2
decades, so that it's premature to base AI designs on neuroscience
knowledge...

ANYWAY, I THINK WE SHOULD, AT LEAST, INVITE THEM TO AGI 2009.  I thought one
 of the goals of AGI 2009 is to increase the attention and respect our
 movement receives from the AI community in general and AI funders in
 particular.


Please note that the AI community and the artificial brain / brain
simulation community are rather separate at this point (though not entirely
so).

We will have a number of recognized leaders from the AI field at AGI-09,
such as (to pick a few almost at random) John Laird, Marcus Hutter and
Juergen Schmidhuber

However, in spite of emailing and talking to some relevant folks, I didn't
seem to succeed in pulling brain simulation folks into AGI-09, at least they
didn't submit papers for presentation...

Perhaps for AGI-10 or 11 some different strategy will need to be taken if we
wish to help pull these communities together.  For instance, convince *one*
leader in that area to take charge of pulling his colleagues into a special
session on computational neuroscience modeling etc.

At the AAAI BICA symposium Alexei Samsonovich organized last month, a couple
neuroscience simulation guys (Steve Grossberg for example) showed up
alongside the AI guys ... probably because biology was in the title ;-)
... but still it was strongly AI-focused rather than brain-simulation
focused.

-- Ben G





Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote:

  Ben,



 Thanks for the reply.



 It is a shame the brain science people aren't more interested in AGI.  It
 seems to me there is a lot of potential for cross-fertilization.



I don't think many of these folks have a principled or deep-seated
**aversion** to AGI work or anything like that -- it's just that they're
busy people and need to prioritize, like all working scientists

Similarly, not many AGI types show up at computational neuroscience modeling
type conferences

To create connections between fields there has to be some strong indication
of real value offered by one field to the other ... and preferably mutual
value...

But of course, the catch is that this value will only be demonstrated once
the researchers in the different fields actually start coming together more

I was involved w/ trying to build these kinds of links in the late 90s when
I co-founded two cross-disciplinary university cog sci degree programs.
It's hard because different people from different fields speak different
languages and have different ideas of what constitutes successful or
interesting research.

The problem of bringing together AI and neuroscience and psychology was
*partially* solved back when by the creation of cog sci as a discipline ...
but obviously the solution was only partial because cog sci to a
disturbing degree got sucked into cog psych, and now someone needs to work
again to pull AGI and brain-sim work together.

Obviously, there's only so much one maverick outsider researcher like me can
do to help nudge these two research communities together, but I'll do what I
can via the AGI conferences, anyways

ben g





Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-22 Thread Ben Goertzel
 and
 reality. That difficulty is fundamental and cannot be 'believed away'. At
 the same time it's not a show-stopper; merely something to be aware of as we
 go about our duties. This will remain an issue - the only real, certain,
 known example of a general intelligence is the human. The intelligence
 originates in the brain. AGI and brain science must be literally joined at
 the hip or the AGI enterprise is arguably scientifically impoverished
 wishful thinking. Which is pretty much what Ben said...although as usual I
 have used too many damned words!

 I expect we'll just have to agree to disagree... but there you have it :-)

 colin hales
 (1) Edelman, G. (2003). Naturalizing consciousness: A theoretical
 framework. Proc Natl Acad Sci U S A, 100(9), 5520–24.



 Ed Porter wrote:

  Colin,



 From a quick read, the gist of what you are saying seems to be that AGI is
 just engineering, i.e., the study of what man can make and the properties
 thereof, whereas science relates to the eternal verities of reality.



 But the brain is not part of an eternal verity.  It is the result of the
 engineering of evolution.



 At the other end of things, physicists are increasingly viewing physical
 reality as a computation, and thus the science of computation (and
 communication which is a part of it), such as information theory, have begun
 to play an increasingly important role in the most basic of all sciences.



 And to the extent that the study of the human mind is a science, then the
 study of the types of computation that are done in the mind is part of that
 science, and AGI is the study of many of the same functions.



 So your post might explain the reason for a current cultural divide, but it
 does not really provide a justification for it.  In addition, if you attend
 events at either MIT's brain study center or its AI center, you will find
 many of the people who are there are from the other of these two centers,
 and that there is a considerable degree of cross-fertilization there that I
 have heard people at such event describe the benefits of.



 Ed Porter





 -Original Message-
 *From:* Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au]

 *Sent:* Monday, December 22, 2008 6:19 PM
 *To:* agi@v2.listbox.com
 *Subject:* Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
 machine that can learn from experience



 Ben Goertzel wrote:



 On Mon, Dec 22, 2008 at 11:05 AM, Ed Porter ewpor...@msn.com wrote:

 Ben,



 Thanks for the reply.



 It is a shame the brain science people aren't more interested in AGI.  It
 seems to me there is a lot of potential for cross-fertilization.



 I don't think many of these folks have a principled or deep-seated
 **aversion** to AGI work or anything like that -- it's just that they're
 busy people and need to prioritize, like all working scientists

 There's a more fundamental reason: Software engineering is not 'science' in
 the sense understood in the basic physical sciences. Science works to
 acquire models of empirically provable critical dependencies (apparent
 causal necessities). Software engineering never delivers this. The result of
 the work, however interesting and powerful, is a model that is, at best,
 merely a correlate of some a-priori 'designed' behaviour. Testing to your
 own specification is a normal behaviour in computer science. This is *not* the
 testing done in the basic physical sciences - they 'test' (empirically
 examine) whatever is naturally there - which is, by definition, a-priori
 unknown.

 No matter how interesting it may be, software tells us nothing about the
 actual causal dependencies. The computer's physical hardware (semiconductor
 charge manipulation), configured as per the software, is the actual and
 ultimate causal necessitator of all the natural behaviour of hot rocks
 inside your computer. Software is MANY:1 redundantly/degenerately related to
 the physical (natural world) outcomes. The brilliantly useful
 'hardware-independence' achieved by software engineering and essentially
 analogue electrical machines behaving 'as-if' they were digital - so
 powerful and elegant - actually places the status of the software activities
 outside the realm of any claims as causal.

 This is the fundamental problem that the  basic physical sciences have with
 computer 'science'. It's not, in a formal sense a 'science'. That doesn't
 mean CS is bad or irrelevant - it just means that its value as a revealer
 of the properties of the natural world must be accepted with appropriate
 caution.

 I've spent tens of thousands of hours testing software that drove all
 manner of physical world equipment - some of it the size of a 10 storey
 building. I was testing to my own/others specification. Throughout all of it
 I knew I was not doing science in the sense that scientists know it to be.
 The mantra is correlation is not causation and it's beaten into scientist
 pups from an early age

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-21 Thread Ben Goertzel
 should invite people like Edelman, Tononi, and
 Dharmendra Modha to AGI 2009.  The more we act interested and respectful of
 them, the more likely we are to get respect back from them and from their
 funders.



 Ed Porter





 -Original Message-
 From: YKY (Yan King Yin) [mailto:generic.intellige...@gmail.com]
 Sent: Friday, December 19, 2008 12:31 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] Building a machine that can learn from experience



  DARPA buys G.Tononi for 4.9 $Million!  For what amounts to little
 more

  than vague hopes that any of us here could have dreamed up. Here I am, up
 to

  my armpits in an actual working proposition with a real science basis...

  scrounging for pennies. hmmm...maybe if I sidle up and adopt an aging
 Nobel

  prizewinner...maybe that'll do it.

 

  nah. too cynical for the festive season. There's always 2009! You never

  know



 You talked about building your 'chips'.  Just curious what are you

 working on?  Is it hardware-related?



 YKY









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 8:01 AM, Derek Zahn derekz...@msn.com wrote:

  Ben:

  Right.  My intuition is that we don't need to simulate the dynamics
  of fluids, powders and the like in our virtual world to make it adequate
  for teaching AGIs humanlike, human-level AGI.  But this could be
  wrong.

 I suppose it depends on what kids actually learn when making cakes,
 skipping rocks, and making a mess with play-dough.  Some might say that if
 they get conservation of mass and newton's law then they skipped all the
 useless stuff!



OK, but those "some" probably don't include any preschool teachers or
educational theorists.

That hypothesis is completely at odds with my own intuition from having
raised 3 kids and spent probably hundreds of hours helping out in daycare
centers, preschools, kindergartens, etc.

Apart from naive physics, which is rather well-demonstrated not to be
derived in the human mind/brain from basic physical principles, there is a
lot of learning about planning, scheduling, building, cooperating ...
basically, all the stuff mentioned in our AGI Preschool paper.

Yes, you can just take a robo-Cyc type approach and try to abstract, on
your own, what is learned from preschool activities and code it into the AI:
code in Newton's laws, axiomatic naive physics, planning algorithms, etc.
My strong prediction is you'll get a brittle AI system that can at best be
tuned into adequate functionality in some rather narrow contexts.



 But in the case where we are trying to roughly follow stages of human
 development with goals of producing human-like linguistic and reasoning
 capabilities, I very much fear that any significant simplification of the
 universe will provide an insufficient basis for the large sensory concept
 set underlying language and analogical reasoning (both gross and fine).
 Literally, I think you're throwing the baby out with the bathwater.  But, as
 you say, this could be wrong.



Sure... that can't be disproven right now, of course.

We plan to expand the paper into a journal paper where we argue against this
obvious objection more carefully -- basically arguing why the virtual-world
setting provides enough detail to support the learning of the critical
cognitive subcomponents of human intelligence.  But, as with anything in
AGI, even the best-reasoned paper can't convince a skeptic.




 It's really the only critique I have of the AGI preschool idea, which I do
 like because we can all relate to it very easily.  At any rate, if it turns
 out to be a valid criticism the symptom will be that an insufficiently rich
 set of concepts will develop to support the range of capabilities needed and
 at that point the simulations can be adjusted to be more complete and
 realistic and provide more human sensory modalities.  I guess it will be
 disappointing if building an adequate virtual world turns out to be as
 difficult and expensive as building high quality robots -- but at least it's
 easier to clean up after cake-baking.


Well, it's completely obvious to me, based on my knowledge of virtual worlds
and robotics, that building a high quality virtual world is orders of
magnitude easier than making a workable humanoid robot.

*So* much $$ has been spent on humanoid robotics before, by large, rich and
competent companies, and they still suck.  It's just a very hard problem,
with a lot of very hard subproblems, and it will take a while to get worked
through.

On the other hand, making a virtual world such as I envision is more than a
spare-time project, but not more than the project of making a single
high-quality video game.  It's something that any one of these big Japanese
companies could do with a tiny fraction of their robotics budgets.  The
issue is a lack of perceived cool value and a lack of motivation.

Ben





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel

 It's an interesting idea, but I suspect it too will rapidly break down.
 Which activities can be known about in a rich, better-than-blind-Cyc way
 *without* a knowledge of objects and object manipulation? How can an agent
 know about reading a book,for example,  if it can't pick up and manipulate a
 book? How can it know about adding and subtracting, if it can't literally
 put objects on top of each other, and remove them?  We humans build up our
 knowledge of the world objects/physics up from infancy.  Science also
 insists that all formal scientific knowledge of  the world  - all scientific
 disciplines - must be ultimately physics/objects-based.  Is there really an
 alternative?


And  just to be clear: in the AGI Preschool world I envision, picking up and
manipulating and stacking objects, and so forth, *would* be possible.  This
much is not hard to achieve using current robot-simulator tech.
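Just to make the "picking up and stacking objects" claim concrete, here is a minimal sketch of the manipulation primitives such a world needs. This is a toy illustration in plain Python; the class and method names are my own assumptions, not any actual robot-simulator API.

```python
# Hypothetical sketch: the object-manipulation primitives a simple virtual
# preschool world would expose.  Names are illustrative only.

class BlocksWorld:
    def __init__(self, blocks):
        # each block initially rests on the table
        self.on = {b: "table" for b in blocks}
        self.held = None                 # block currently held by the agent

    def clear(self, block):
        # a block is clear if nothing rests on top of it
        return all(support != block for support in self.on.values())

    def pick_up(self, block):
        if self.held is None and self.clear(block):
            del self.on[block]
            self.held = block
            return True
        return False

    def stack_on(self, target):
        if self.held is not None and self.clear(target):
            self.on[self.held] = target
            self.held = None
            return True
        return False

w = BlocksWorld(["a", "b", "c"])
w.pick_up("a")
w.stack_on("b")          # "a" now rests on "b"
print(w.on["a"])         # -> b
```

Even this crude state machine already supports the stacking/unstacking tasks a preschool curriculum would start with; a real implementation would back it with a physics engine.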

ben





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
I agree, but the good news is that game dev advances fast.

So, my plan with the AGI Preschool would be to build it in an open platform
such as OpenSim, and then swap in better and better physics engines as they
become available.
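The swap-in idea amounts to coding the preschool world against a thin engine interface, so ODE today can be replaced by a better engine later without touching the AGI-facing layer. A minimal sketch under that assumption; nothing here is the real ODE or OpenSim API, and the toy engine just integrates point masses under gravity.

```python
from abc import ABC, abstractmethod

class PhysicsEngine(ABC):
    """Abstract interface the preschool world codes against."""
    @abstractmethod
    def add_body(self, name, mass, position): ...
    @abstractmethod
    def step(self, dt): ...
    @abstractmethod
    def position(self, name): ...

class ToyEngine(PhysicsEngine):
    """Stand-in engine: 1-D point masses under gravity, no collisions."""
    G = -9.81
    def __init__(self):
        self.bodies = {}                     # name -> [height, velocity]
    def add_body(self, name, mass, position):
        self.bodies[name] = [position, 0.0]
    def step(self, dt):
        for state in self.bodies.values():
            state[1] += self.G * dt          # semi-implicit Euler
            state[0] += state[1] * dt
    def position(self, name):
        return self.bodies[name][0]

# World-level code never names a concrete engine:
def drop_test(engine: PhysicsEngine, seconds=1.0, dt=0.01):
    engine.add_body("ball", mass=1.0, position=10.0)
    for _ in range(int(seconds / dt)):
        engine.step(dt)
    return engine.position("ball")

print(drop_test(ToyEngine()))   # ball ends near height 5.05 (started at 10.0)
```

Swapping in a better engine then means writing one new subclass, while all the preschool tasks and evaluation code stay untouched.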

Some current robot simulators use ODE and this seems to be good enough to
handle a lot of useful robot-object and object-object interactions, though I
agree it's limited.

Still, making a dramatically better physics engine -- while a bunch harder
than making a nice AGI preschool using current virtual worlds and physics
engines -- is still a way, way easier problem than making a highly
functional (in terms of sensors and actuators) humanoid robot.

Also, the advantages of working in a virtual rather than physical world
should not be overlooked.  The ability to run tests over and over again, to
freely vary parameters and so forth, is pretty nice ... also the ability to
run 1000s of tests in parallel without paying humongous bucks for a fleet of
robots...
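That parallel-testing advantage can be sketched in a few lines. The trial function below is a stand-in assumption for a full simulated episode; only the pattern (sweep world parameters, run many trials concurrently) is the point. A CPU-bound simulator would use `ProcessPoolExecutor` rather than threads.

```python
from concurrent.futures import ThreadPoolExecutor

def run_trial(params):
    """Stand-in for one full simulated preschool episode."""
    gravity, friction = params
    # pretend 'score' measures how well the agent did under these physics
    score = 1.0 / (1.0 + abs(gravity + 9.81) + abs(friction - 0.5))
    return params, score

# Sweep a grid of world parameters, running trials concurrently --
# the kind of experiment that would need a fleet of physical robots.
grid = [(g, f) for g in (-9.81, -5.0, -15.0) for f in (0.1, 0.5, 0.9)]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_trial, grid))

best_params, best_score = max(results, key=lambda r: r[1])
print(best_params, best_score)   # -> (-9.81, 0.5) 1.0
```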

ben

On Sat, Dec 20, 2008 at 8:43 AM, Derek Zahn derekz...@msn.com wrote:


 Oh, and because I am interested in the potential of high-fidelity physical
 simulation as a basis for AI research, I did spend some time recently
 looking into options.  Unfortunately the results, from my perspective, were
 disappointing.

 The common open-source physics libraries like ODE, Newton, and so on, have
 marginal feature sets and frankly cannot scale very well performance-wise.
 Once I even did a little application whose purpose was to see whether a
 human being could learn to control an ankle joint to compensate for an
 impulse event and stabilize a simple body model (that is, to make it not
 fall over) by applying torques to the ankle.  I was curious to see (through
 introspection) how humans learn to act as process controllers.
 http://happyrobots.com/anklegame.zip for anybody bored enough to care.  It
 wasn't a very good test of the question so I didn't really get a
 satisfactory answer.  I did discover, though, that a game built around more
 appealing cases of the player learning to control physics-inspired processes
 could be quite absorbing.

 Beyond that, the most promising avenue seems to be physics libraries tied
 to graphics hardware being worked on by the hardware companies to help
 sell their stream processors.  The best example is Nvidia, who bought PhysX
 and ported it to their latest cards, giving a huge performance boost.  Intel
 has bought Havok and I can only imagine that they are planning on using that
 as the interface to some Larrabee-based physics engine.  I'm sure that ATI
 is working on something similar for their newer (very impressive) stream
 processing cards.

 At this stage, though, despite some interesting features and leaping
 performance, it is still not possible to do things like get realistic sensor
 maps for a simulated soft hand/arm, and complex object modifications like
 bending and breaking are barely dreamed of in those frameworks.  Complex
 multi-body interactions (like realistic behavior when dropping or otherwise
 playing with a ring of keys or realistic baby toys) have a long ways to go.

 Basically, I fear those of us who are interested in this are just waiting
 to ride the game development coattails and it will be a few years at least
 until performance that even begins to interest me will be available.

 Just my opinions on the situation.





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
On Sat, Dec 20, 2008 at 10:44 AM, Philip Hunt cabala...@googlemail.com wrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  Well, it's completely obvious to me, based on my knowledge of virtual
 worlds
  and robotics, that building a high quality virtual world is orders of
  magnitude easier than making a workable humanoid robot.

 I guess that depends on what you mean by high quality and
 workable. Why does a robot have to be humanoid, BTW? I'd like a
 robot that can make me a cup of tea, I don't particularly care if it
 looks humanoid (in fact I suspect many humans would have less
 emotional resistance to a robot that didn't look humanoid, since it's
 more obviously a machine).



It doesn't have to be humanoid ... but apart from rolling instead of
walking,
I don't see any really significant simplifications obtainable from making it
non-humanoid.

Grasping and manipulating general objects with robot manipulators is
very much an unsolved research problem.  So is object recognition in
realistic conditions.

So, to make an AGI robot preschool, one has to solve these hard
research problems first.

That is a viable way to go if one's not in a hurry --
but anyway in the robotics context any talk
of preschools is drastically premature...




  On the other hand, making a virtual world such as I envision, is more
 than a
  spare-time project, but not more than the project of making a single
  high-quality video game.

 GTA IV cost $5 million, so we're not talking about peanuts here.


Right, but that is way cheaper than making a high-quality humanoid robot

Actually, $$ aside, we don't even **know how** to make a decent humanoid
robot.

Or, a decently functional mobile robot **of any kind**

Whereas making a software based AGI Preschool of the type I described is
clearly
feasible using current technology, w/o any research breakthroughs

And I'm sure it could be done for $300K not $5M using OSS and non-US
outsourced labor...

ben g





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel
Well, there is massively more $$ going into robotics dev than into AGI dev,
and no one seems remotely near to solving the hard problems

Which is not to say it's a bad area of research, just that it's a whole
other huge confusing R&D can of worms

So I still say, the choices are

-- virtual embodiment, as I advocate

-- delay working on AGI for a decade or so, and work on robotics now instead
(where by robotics I include software work on low-level sensing and actuator
control)

Either choice makes sense but I prefer the former as I think it can get us
to the end goal faster.

About the adequacy of current robot hardware -- I'll tell you more in 9
months or so ... a project I'm collaborating on is going to be using AI
(including OpenCog) to control a Nao humanoid robot.  We'll have 3 of them,
they cost about US$14K each or so.   The project is in China but I'll be
there in June-July to play with the Naos and otherwise collaborate on the
project.

My impression is that with a Nao right now, camera-eye sensing is fine so
long as lighting conditions are good ... audition is OK in the absence of
masses of background noise ... walking is very awkward and grasping is
possible but limited

The extent to which the limitations of current robots are hardware vs
software based is rather subtle, actually.

In the case of vision and audition, it seems clear that the bottleneck is
software.

But, with actuation, I'm not so sure.  The almost total absence of touch and
kinesthetics in current robots is a huge impediment, and puts them at a huge
disadvantage relative to humans.  Things like walking and grasping as humans
do them rely extremely heavily on both of these senses, so in trying to deal
with this stuff without these senses (in any serious form), current robots
face a hard and odd problem...

ben

On Sat, Dec 20, 2008 at 11:42 AM, Philip Hunt cabala...@googlemail.com wrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  It doesn't have to be humanoid ... but apart from rolling instead of
  walking,
  I don't see any really significant simplifications obtainable from making
 it
  non-humanoid.

 I can think of several. For example, you could give it lidar to
 measure distances with -- this could then be used as input to its
 vision system making it easier for the robot to tell which objects are
 near or far. Instead of binocular vision, it could have 2 video
 cameras. It could have multiple ears, which would help it tell where a
 sound is coming from.

 To the best of my knowledge, no robot that's ever been used for
 anything practical has ever been humanoid.

  Grasping and manipulating general objects with robot manipulators is
  very much an unsolved research problem.  So is object recognition in
  realistic conditions.

 What sort of visual input do you plan to have in your virtual environment?

  So, to make an AGI robot preschool, one has to solve these hard
  research problems first.
 
  That is a viable way to go if one's not in a hurry --
  but anyway in the robotics context any talk
  of preschools is drastically premature...
 
 
   On the other hand, making a virtual world such as I envision, is more
   than a
   spare-time project, but not more than the project of making a single
   high-quality video game.
 
  GTA IV cost $5 million, so we're not talking about peanuts here.
 
  Right, but that is way cheaper than making a high-quality humanoid robot

 Is it? I suspect one with tracks, two robotic arms, various sensors
 for light and sound, etc, could be made for less than $10,000 -- this
 would be something that could move around and manipulate a blocks
 world. My understanding is that all, or nearly all, the difficulty
 comes in programming it. Which is where AI comes in.

  Actually, $$ aside, we don't even **know how** to make a decent humanoid
  robot.
 
  Or, a decently functional mobile robot **of any kind**

 Is that because of hardware or software issues?

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-20 Thread Ben Goertzel


 Consider an object, such as a sock or a book or a cat. These objects
 can all be recognised by young children, even though the visual input
 coming from trhem chasnges from what angle they're viewed at. More
 fundamentally, all these objects can change shape, yet humans can
 still effortlessly recognise them to be the same thing. And this
 ability doesn't stop with humans -- most (if not all) mammalian
 species can do it.

 Until an AI can do this, there's no point in trying to get it to play
 at making cakes, etc.



Well, it seems to me that current virtual worlds are just fine for exploring
this kind of vision processing

However, I have long been perplexed by the obsession of so many AI folks
with vision processing.

I mean: yeah, it's important to human intelligence, and some aspects of
human cognition are related to human visual perception

But, it's not obvious to me why so many folks think vision is so critical to
AI, whereas other aspects of human body function are not.

For instance, the yogic tradition and related Eastern ideas would suggest
that *breathing* and *kinesthesia* are the critical aspects of mind.
Together with touch, kinesthesia is what lets a mind establish a sense of
self, and of the relation between self and world.

In that sense kinesthesia and touch are vastly more fundamental to mind than
vision.  It seems to me that a mind without vision could still be a
basically humanlike mind.  Yet, a mind without touch and kinesthesia could
not, it would seem, because it would lack a humanlike sense of its own self
as a complex dynamic system embedded in a world.

Why then is there constant talk about vision processing and so little talk
about kinesthetic and tactile processing?

Personally I don't think one needs to get into any of this sensorimotor
stuff too deeply to make a thinking machine.  But, if you ARE going to argue
that sensorimotor aspects are critcial to humanlike AI because they're
critical to human intelligence, why harp on vision to the exclusion of other
things that seem clearly far more fundamental??

Is the reason just that AI researchers spend all day staring at screens and
ignoring their physical bodies and surroundings?? ;-)

ben g





Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Ben Goertzel
IMHO, Mike Tintner is not often rude, and is not exactly a troll because I
feel he is genuinely trying to understand the deeper issues related to AGI,
rather than mainly trying to stir up trouble or cause irritation

However, I find conversing with him generally frustrating because he
combines

A)
extremely strong intuitive opinions about AGI topics

with

B)
almost utter ignorance of the details of AGI (or standard AI), or the
background knowledge needed to appreciate these details when compactly
communicated


This means that discussions with Mike never seem to get anywhere... and,
frankly, I usually regret getting into them

I would find it more rewarding by far to engage in discussion with someone
who had Mike's same philosophy and ideas (which I disagree strongly with),
but had enough technical background to actually debate the details of AGI in
a meaningful way

For example, Selmer Bringsjord (an AI expert, not on this list) seems to
share a fair number of Mike's ideas, but discussions with him are less
frustrating because rather than wasting time on misunderstandings, basics
and terminology, one cuts VERY QUICKLY to the deep points of conceptual
disagreement

ben g



On Fri, Dec 19, 2008 at 1:19 PM, Pei Wang mail.peiw...@gmail.com wrote:

 BillK,

 Thanks for the reminder. I didn't reply to him, but still got involved. :-(

 I certainly don't want to encourage bad behaviors in this mailing
 list. Here bad behaviors are not in the conclusions or arguments,
 but in the way they are presented, as well as in the
 politeness/rudeness toward other people.

 Pei

 On Fri, Dec 19, 2008 at 11:38 AM, BillK pha...@gmail.com wrote:
  On Fri, Dec 19, 2008 at 3:55 PM, Mike Tintner wrote:
 
  (On the contrary, Pei, you can't get more narrow-minded than rational
  thinking. That's its strength and its weakness).
 
 
 
  Pei
 
  In case you haven't noticed, you won't gain anything from trying to
  engage with the troll.
 
  Mike does not discuss anything. He states his opinions in many
  different ways, pretending to respond to those that waste their time
  talking to him. But no matter what points are raised in discussion
  with him, they will only be used as an excuse to produce yet another
  variation of his unchanged opinions.  He doesn't have any technical
  programming or AI background, so he can't understand that type of
  argument.
 
  He is against the whole basis of AGI research. He believes that
  rationality is a dead end, a dying culture, so deep-down, rational
  arguments mean little to him.
 
  Don't feed the troll!
  (Unless you really, really, think he might say something useful to you
  instead of just wasting your time).
 
 
  BillK
 
 
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Ben Goertzel
yeah ... that's not a matter of the English language but rather a matter of
the American Way ;-p

Through working with many non-Americans I have noted that what Americans
often intend as a playful obnoxiousness is interpreted by non-Americans
more seriously...

I think we had some mutual colleagues in the past who favored such a style
of discourse ;-)

ben

On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com wrote:

 On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel b...@goertzel.org wrote:
 
  IMHO, Mike Tintner is not often rude, and is not exactly a troll
 because I
  feel he is genuinely trying to understand the deeper issues related to
 AGI,
  rather than mainly trying to stir up trouble or cause irritation

 Well, I guess my English is not good enough to tell the subtle
 difference in tones, but his comments often sound that You AGIers are
 so obviously wrong that I don't even bother to understand what you are
 saying ... Now let me tell you 

 I don't enjoy this tone.

 Pei


  However, I find conversing with him generally frustrating because he
  combines
 
  A)
  extremely strong intuitive opinions about AGI topics
 
  with
 
  B)
  almost utter ignorance of the details of AGI (or standard AI), or the
  background knowledge needed to appreciate these details when compactly
  communicated
 
 
  This means that discussions with Mike never seem to get anywhere... and,
  frankly, I usually regret getting into them
 
  I would find it more rewarding by far to engage in discussion with
 someone
  who had Mike's same philosophy and ideas (which I disagree strongly
 with),
  but had enough technical background to actually debate the details of AGI
 in
  a meaningful way
 
  For example, Selmer Bringsjord (an AI expert, not on this list) seems to
  share a fair number of Mike's ideas, but discussions with him are less
  frustrating because rather than wasting time on misunderstandings, basics
  and terminology, one cuts VERY QUICKLY to the deep points of conceptual
  disagreement
 
  ben g
 
 
 
  On Fri, Dec 19, 2008 at 1:19 PM, Pei Wang mail.peiw...@gmail.com
 wrote:
 
  BillK,
 
  Thanks for the reminder. I didn't reply to him, but still got involved.
  :-(
 
  I certainly don't want to encourage bad behaviors in this mailing
  list. Here bad behaviors are not in the conclusions or arguments,
  but in the way they are presented, as well as in the
  politeness/rudeness toward other people.
 
  Pei
 
  On Fri, Dec 19, 2008 at 11:38 AM, BillK pha...@gmail.com wrote:
   On Fri, Dec 19, 2008 at 3:55 PM, Mike Tintner wrote:
  
   (On the contrary, Pei, you can't get more narrow-minded than rational
   thinking. That's its strength and its weakness).
  
  
  
   Pei
  
   In case you haven't noticed, you won't gain anything from trying to
   engage with the troll.
  
   Mike does not discuss anything. He states his opinions in many
   different ways, pretending to respond to those that waste their time
   talking to him. But no matter what points are raised in discussion
   with him, they will only be used as an excuse to produce yet another
   variation of his unchanged opinions.  He doesn't have any technical
   programming or AI background, so he can't understand that type of
   argument.
  
   He is against the whole basis of AGI research. He believes that
   rationality is a dead end, a dying culture, so deep-down, rational
   arguments mean little to him.
  
   Don't feed the troll!
   (Unless you really, really, think he might say something useful to you
   instead of just wasting your time).
  
  
   BillK
  
  
  
 
 
 
 
 
  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  b...@goertzel.org
 
  I intend to live forever, or die trying.
  -- Groucho Marx
 
  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx




Re: [agi] Creativity and Rationality (was: Re: Should I get a PhD?)

2008-12-19 Thread Ben Goertzel

 In my opinion you are being too generous and your generosity is being
 taken advantage of.


That is quite possible; it's certainly happened before...



 As well as trying to be nice to Mike, you have to bear list quality in
 mind and decide whether his ramblings are of some benefit to all the
 other list members.


Well I decided not to make this a moderated list, and to be extremely
reluctant about banning people

The only ban I've instituted so far was against someone who was persistently
making personal anti-Semitic attacks against other list members, a couple
years back...

I have sniped off-topic threads a handful of times, but by and large I guess
I've decided to leave this list a free for all ...

Later this year I'll likely be involved with the launch of a forum site
oriented toward more structured AGI discussions...

ben





[agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
A paper by Stephan Bugaj and I that will appear in the AGI-09 proceedings
and get presented at the conference.

http://www.opencog.org/wiki/Image:Preschool.pdf

I'll also be giving a couple technical papers together w/ other colleagues,
but this one focuses on how to evaluate AGIs and so may be of interest for
discussion...

Simple stuff, really; but still, the sort of thing that not enough attention
has been paid to

What I'd like to see is a really  nicely implemented virtual world
preschool for AIs ... though of course building such a thing will be a lot
of work for someone...

ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
Colin,

It is of course possible that human intelligence relies upon
electromagnetic-field sensing that goes beyond the traditional five
senses.

However, this argument

 Functionally, the key behaviour I use to test my approach is scientific
 behaviour. If you sacrifice the full EM field, an AGI would provably be
 unable to enact scientific behaviour because the AGI brain dynamics would be
 forced to operate *without the dynamics of the EM field*, which is
 literally connected to the distal natural world (forming a new I/O stream).
 The link to the distal natural world is critically involved in 'scientific
 observation'. You can't simulate it because it's what you are actually there
  to gain access to. A scientist does not already know what is 'out there' -
  an AGI scientist needs what a human scientist has in order that the AGI can do
 science as well as a human. Scientific behaviour easily extends to normal
 problem solving behaviour of the kind humans have. Hence 'general
 intelligence'.



makes no sense to me.   I haven't seen you present any meaningful argument
that scientific behavior depends on extrasensory phenomena.  Do you have
such an argument?

Ben





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Well, there is a major question whether one can meaningfully address AGI via
virtual-robotics rather than physical-robotics

No one can make a convincing proof either way right now

But, it's clear that if one wants to go the physical-robotics direction, now
is not the time to be working on preschools and cognition.  In that case, we
need to be focusing on vision and grasping and walking and such.

OTOH, if one wants to go the virtual-robotics direction (as is my
intuition), then it is possible to bypass many of the lower-level
perception/actuation issues and focus on preschool-level learning, reasoning
and conceptual creation.

And there's no need to write a paper on the eventual possibility of putting
robots in real preschools: that's obvious.  But it's also far beyond the
scope of contemporary robots, as would be universally agreed.  Whereas a virtual
preschool is not as *obviously* far beyond the scope of contemporary AGI
designs (at least according to some experts, like me), which is what makes
it more interesting in the present moment...

ben g


On Fri, Dec 19, 2008 at 5:12 PM, Philip Hunt cabala...@googlemail.com wrote:

 2008/12/19 Ben Goertzel b...@goertzel.org:
 
  What I'd like to see is a really  nicely implemented virtual world
  preschool for AIs ... though of course building such a thing will be a
 lot
  of work for someone...

 Why a virtual world preschool and not a real one?

 A virtual world, if not programmed accurately, may be subtly
  different from the real world, so that for example an AGI is capable
  of picking up and using a screwdriver in the virtual world but not
  the real world, because the real world is more complex.

 If you want your AGI to be able to use a screwdriver, you probably
 need to train it in the real world (at least some of the time).

 If you don't care whether your AGI can use a screwdriver, why have one
 in the virtual world?

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

I intend to live forever, or die trying.
-- Groucho Marx





Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
And when a Chinese person doesn't answer a question, it usually means No ;-)

Relatedly, I am discussing with some US gov't people a potential project
involving customizing an AI reasoning system to emulate the different
inferential judgments of people from different cultures...

ben

On Fri, Dec 19, 2008 at 5:29 PM, Richard Loosemore r...@lightlink.com wrote:

 Ben Goertzel wrote:


 yeah ... that's not a matter of the English language but rather a matter
 of the American Way ;-p

 Through working with many non-Americans I have noted that what Americans
 often intend as a playful obnoxiousness is interpreted by non-Americans
 more seriously...


 Except that, in fact, Mike is not American but British.

 As a result of long experience talking to Americans, I have discovered that
 what British people intend as routine discussion, Americans interpret as
 serious, intentional obnoxiousness.  And then, what Americans (as you say)
 intend as playful obnoxiousness, non-Americans interpret more seriously.



 Richard Loosemore







 I think we had some mutual colleagues in the past who favored such a style
 of discourse ;-)

 ben

 On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com wrote:

On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel b...@goertzel.org wrote:
 
  IMHO, Mike Tintner is not often rude, and is not exactly a
troll because I
  feel he is genuinely trying to understand the deeper issues
related to AGI,
  rather than mainly trying to stir up trouble or cause irritation

Well, I guess my English is not good enough to tell the subtle
difference in tones, but his comments often sound like "You AGIers are
so obviously wrong that I don't even bother to understand what you are
saying ... Now let me tell you ..."

I don't enjoy this tone.

Pei











Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
 IS the AGI. If the company goes bad you take it out and shoot it. The
 process of giving birth to a real company is literally giving birth to a
 specialist AGI - the actual company itself attends board meetings... fun eh!
 The hard question - do you invite it to the corporate dance night? He he.

 cheers,
 colin hales







Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
 d) 75 years of computer-based-AGI failure - has sent me a message that no
 amount of hubris on my part can overcome. As a scientist I must be informed
 by empirical  outcomes, not dogma or wishful thinking.




That argument really is a foolish one not worth paying attention to.

I mean, it could turn out that computer-based AGI is impossible, but it's
*so* obvious that our failure to achieve this so far proves nothing about
this.

Once we have computers that are powerful enough to simulate the brain at the
molecular level ... and detailed understanding of brain structure and
dynamics at that level ... *then*, if we simulate the brain at that level on
a computer and it fails to be intelligent, there will be *empirical* reason
to seriously consider the hypothesis that computer-based AGI is impossible.
Not until then.

-- Ben G





Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
Well, I think you might have overreacted to his writing style for cultural
reasons

However, I also think that -- to be Americanly blunt -- you're very unlikely
to learn anything from conversing with Mike, nor to make much positive
impact on his own understanding by conversing with him.

So in this case, I reckon the cultural factors are kind of irrelevant ;-)

ben

On Fri, Dec 19, 2008 at 7:47 PM, Pei Wang mail.peiw...@gmail.com wrote:

 Richard and Ben,

 If you think I, as a Chinese, have overreacted to Mike Tintner's
 writing style, and this is just a culture difference, please let me
 know. In that case I'll try my best to learn his way of communication,
 at least when talking to British and American people --- who knows, it
 may even improve my marketing ability. ;-)

 Pei

 On Fri, Dec 19, 2008 at 7:01 PM, Ben Goertzel b...@goertzel.org wrote:
 
  And when a Chinese doesn't answer a question, it usually means No ;-)
 
  Relatedly, I am discussing with some US gov't people a potential project
  involving customizing an AI reasoning system to emulate the different
  inferential judgments of people from different cultures...
 
  ben
 
  On Fri, Dec 19, 2008 at 5:29 PM, Richard Loosemore r...@lightlink.com
  wrote:
 
  Ben Goertzel wrote:
 
  yeah ... that's not a matter of the English language but rather a
 matter
  of the American Way ;-p
 
  Through working with many non-Americans I have noted that what
 Americans
  often intend as a playful obnoxiousness is interpreted by
 non-Americans
  more seriously...
 
  Except that, in fact, Mike is not American but British.
 
  As a result of long experience talking to Americans, I have discovered
  that what British people intend as routine discussion, Americans
 interpret
  as serious, intentional obnoxiousness.  And then, what Americans (as you
  say) intend as playful obnoxiousness, non-Americans interpret more
  seriously.
 
 
 
  Richard Loosemore
 
 
 
 
 
 
 
  I think we had some mutual colleagues in the past who favored such a
  style of discourse ;-)
 
  ben
 
  On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com
  mailto:mail.peiw...@gmail.com wrote:
 
 On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel b...@goertzel.org
 mailto:b...@goertzel.org wrote:
  
   IMHO, Mike Tintner is not often rude, and is not exactly a
 troll because I
   feel he is genuinely trying to understand the deeper issues
 related to AGI,
   rather than mainly trying to stir up trouble or cause irritation
 
 Well, I guess my English is not good enough to tell the subtle
 difference in tones, but his comments often sound that You AGIers
 are
 so obviously wrong that I don't even bother to understand what you
 are
 saying ... Now let me tell you 
 
 I don't enjoy this tone.
 
 Pei
 
 
 
 
 
 
 








Re: Cross-Cultural Discussion using English [WAS Re: [agi] Creativity ...]

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 7:51 PM, Ben Goertzel b...@goertzel.org wrote:


 Well, I think you might have overreacted to his writing style for cultural
 reasons

 However, I also think that -- to be Americanly blunt -- you're very
 unlikely to learn anything from conversing with Mike,


On AGI-related topics, I meant.  He may well have other areas of expertise
where we could learn a lot from him, but they are not the focus of this
list.



 nor to make much positive impact on his own understanding by conversing
 with him.

 So in this case, I reckon the cultural factors are kind of irrelevant ;-)

 ben


 On Fri, Dec 19, 2008 at 7:47 PM, Pei Wang mail.peiw...@gmail.com wrote:

 Richard and Ben,

 If you think I, as a Chinese, have overreacted to Mike Tintner's
 writing style, and this is just a culture difference, please let me
 know. In that case I'll try my best to learn his way of communication,
 at least when talking to British and American people --- who knows, it
 may even improve my marketing ability. ;-)

 Pei

 On Fri, Dec 19, 2008 at 7:01 PM, Ben Goertzel b...@goertzel.org wrote:
 
  And when a Chinese doesn't answer a question, it usually means No ;-)
 
  Relatedly, I am discussing with some US gov't people a potential project
  involving customizing an AI reasoning system to emulate the different
  inferential judgments of people from different cultures...
 
  ben
 
  On Fri, Dec 19, 2008 at 5:29 PM, Richard Loosemore r...@lightlink.com
  wrote:
 
  Ben Goertzel wrote:
 
  yeah ... that's not a matter of the English language but rather a
 matter
  of the American Way ;-p
 
  Through working with many non-Americans I have noted that what
 Americans
  often intend as a playful obnoxiousness is interpreted by
 non-Americans
  more seriously...
 
  Except that, in fact, Mike is not American but British.
 
  As a result of long experience talking to Americans, I have discovered
  that what British people intend as routine discussion, Americans
 interpret
  as serious, intentional obnoxiousness.  And then, what Americans (as
 you
  say) intend as playful obnoxiousness, non-Americans interpret more
  seriously.
 
 
 
  Richard Loosemore
 
 
 
 
 
 
 
  I think we had some mutual colleagues in the past who favored such a
  style of discourse ;-)
 
  ben
 
  On Fri, Dec 19, 2008 at 1:49 PM, Pei Wang mail.peiw...@gmail.com
  mailto:mail.peiw...@gmail.com wrote:
 
 On Fri, Dec 19, 2008 at 1:40 PM, Ben Goertzel b...@goertzel.org
 mailto:b...@goertzel.org wrote:
  
   IMHO, Mike Tintner is not often rude, and is not exactly a
 troll because I
   feel he is genuinely trying to understand the deeper issues
 related to AGI,
   rather than mainly trying to stir up trouble or cause irritation
 
 Well, I guess my English is not good enough to tell the subtle
 difference in tones, but his comments often sound that You AGIers
 are
 so obviously wrong that I don't even bother to understand what you
 are
 saying ... Now let me tell you 
 
 I don't enjoy this tone.
 
 Pei
 
 
 
 
 
 
 












Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
It's a hard problem, and the answer is to "cheat" as much as possible, but
no more than that.

We'll just have to feel this out via experiment...

My intuition is that current virtual worlds and game worlds are too crude,
but current robot simulators are not.

I.e., I doubt one needs serious fluid dynamics in one's simulation ... I
doubt one needs bodies with detailed internal musculature ... but I think
one does need basic Newtonian physics and the ability to use tools, break
things in half (but not necessarily realistic cracking behavior), balance
things and carry them and stack them and push them together Lego-like and so
forth...

I could probably frame a detailed argument as to WHY I think the line should
be drawn right there, in terms of the cognitive tasks supported by this
level of physics simulation.  That would be an interesting followup paper, I
guess.

The crux of the argument would be that all the basic tasks required in an
AGI Preschool could be sensibly formulated using only this level of physics
simulation, in a way that doesn't involve "cheating" (though a proper
formalization of "doesn't involve cheating" would require some thought).

ben


On Fri, Dec 19, 2008 at 7:54 PM, Derek Zahn derekz...@msn.com wrote:

  Hi Ben.

  OTOH, if one wants to go the virtual-robotics direction (as is my
 intuition),
  then it is possible to bypass many of the lower-level
 perception/actuation
  issues and focus on preschool-level learning, reasoning and conceptual
 creation.

 And yet, in your paper (which I enjoyed), you emphasize the importance of
 not providing
 a simplistic environment (with the screwdriver example).  Without facing
 the low-level
 sensory world (either through robotics or through very advanced simulations
 feeding
 senses essentially equivalent to those of humans), I wonder if a targeted
 human-like
 AGI will be able to acquire the necessary concepts that children absorb and
 use as much of the metaphorical basis for their thought -- slippery, soft,
 hot, hard, rough, sharp, and on and on.

 I assume you have some sort of middle ground in mind... what's your
 thinking about how much you can "cheat" in this way (beyond what is
 conveniently doable, I mean)?

 Thanks!






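[Editor's note: the "basic Newtonian physics" level Ben sketches above -- balancing, stacking, carrying, Lego-like pushing-together, but no fluid dynamics -- can be illustrated with a toy example. The sketch below is purely illustrative: the function name, the 2D box representation, and the numbers are all invented here, and no real virtual-world engine is assumed. It judges a stack of rigid boxes stable when, for every box, the centre of mass of everything resting on it lies over that box's top face.]

```python
# Toy 2D rigid-body stacking check, at roughly the level of "naive
# Newtonian physics" discussed above. Boxes are (x_left, width) pairs,
# listed bottom to top, with unit density and unit height assumed.

def stack_is_stable(boxes):
    """Return True if each box's supported load balances on the box below."""
    for i in range(len(boxes) - 1, 0, -1):
        above = boxes[i:]  # everything resting (directly or not) on box i-1
        total_w = sum(w for _, w in above)
        # combined centre of mass of the boxes above, along x
        com = sum((x + w / 2) * w for x, w in above) / total_w
        x_left, width = boxes[i - 1]
        if not (x_left <= com <= x_left + width):
            return False  # centre of mass overhangs the support: it topples
    return True

# A straight tower balances; a badly overhung one does not.
print(stack_is_stable([(0, 4), (0.5, 3), (1, 2)]))   # True
print(stack_is_stable([(0, 4), (3.5, 3), (5, 2)]))   # False
```

Even this crude criterion supports preschool-style tasks (stack, balance, carry) without any of the fluid or musculature detail Ben argues is unnecessary.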


Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
 You can't deliver any evidence at all that the processes I am investigating
 are invalid.


True, and you can't deliver any evidence that once AGIs reach an IQ of 1000,
aliens will contact them and welcome them to the Trans-Universal Club of
Really Clever Beings.

In fact, I won't be at all surprised if something like that happens!

But of course, there is a rather diverse infinitude of hypotheses that
aren't refuted by current evidence...

FWIW, my own intuition is that

-- there quite possibly *are* currently-unexplained, interesting, important
electromagnetic interactions between brains and the world around them

-- these are quite possibly related to various psi phenomena, about which
there is an awful lot of convincing empirical evidence right now (see Damien
Broderick's book Outside the Gates of Science for a nice review)

-- none of this gives any reason why cognition and consciousness can't arise
in a computer program

There is a lot that we don't know about the world!  But, concluding from
this general ignorance that AGI is impossible in digital computers seems
wholly unjustified to me.

-- Ben G





Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
 You, like the rest of us, are incapable of discussing anything else.  Email
 cannot carry non-algorithmic ideas or concepts.  Just because you do not
 consider your system algorithmic does not mean that it is not.  Nature is
 algorithmic, your chip is algorithmic, everything is algorithmic.  That
 which we call a rose by any other name would smell as sweet.


Hey, you just have no way to know that...

If you and I both contain brains that somehow invoke non-Turing oracles,
then an email could communicate info from one oracle to the other that would
provide a coupling of noncomputational processes in our brains

The problem is that **there is no way for science to ever establish the
existence of a nonalgorithmic process**, because science deals only with
finite sets of finite-precision measurements.

So, it is quite **possible** that brain and mind are nonalgorithmic, and that
intelligence is not scientifically addressable and AGIs cannot be designed
via science.

But if this is indeed the case, it can never be scientifically established.

FWIW, my own intuition is that

-- mind does involve nonalgorithmic aspects

-- this is in no way an obstacle to the creation of AGI using digital
computer programs.  The nonalgorithmic aspects are gonna be there anyway, we
don't need to build them into our programs ;-)

But I can't prove that scientifically and I never will be able to...

-- Ben G





Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 8:42 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  I.e., I doubt one needs serious fluid dynamics in one's simulation ... I
  doubt one needs bodies with detailed internal musculature ... but I think
  one does need basic Newtonian physics and the ability to use tools, break
  things in half (but not necessarily realistic cracking behavior), balance
  things and carry them and stack them and push them together Lego-like and
 so
  forth...

 Needs for what purpose? I can see three uses for a virtual world:

 1. to mimic the real world accurately enough that the AI can use the
 virtual world instead, and by using it become proficient in dealing
 with the real world, because it is cheaper than a real world.
 Obviously to program a virtual world this real is a big up-front
 investment, but once the investment is made, such a world may well be
 cheaper and easier to use than our real one.


I think this will come along as a side-effect of achieving the other goals,
to some extent.  But it's not my main goal, no.




 2. to provide a useful bridge between humans and the AGI, i.e. the
 virtual world will be similar enough to the real world that humans
 will have a common frame of reference with the AGI.



Yes...
to allow the AGI to develop progressively greater intelligence
in a manner that humans can easily comprehend, so that we can
easily participate and encourage its growth (via teaching and via
code changes, knowledge entry, etc.)



 3. to provide a toy domain for the AI to think about and become
 proficient in.


Not just to become proficient in the domain, but become proficient
in general humanlike cognitive processes.

The point of a preschool is that it's designed to present all important
adult human cognitive processes in simplified forms.


 (Of course there's no reason why a toy domain needs to
 be anything like a virtual world, it could for example be a software
 modality that can see/understand source code as easily and fluently
 as humans interpret visual input.)

 AIUI you're mostly thinking in terms of 2 or 3. Fair comment?

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html








Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Right.  My intuition is that we don't need to simulate the dynamics
of fluids, powders and the like in our virtual world to make it adequate
for teaching AGIs humanlike, human-level AGI.  But this could be
wrong.

It also could be interesting to program an artificial chemistry that
emulated certain aspects of real chemistry -- not to be realistic, but
to have enough complexity to be vaguely analogous.

After all, I mean: preschoolers have fun and learn a lot mixing flour and
butter and
eggs and so forth, but how realistic does the physics of such things really
have to be to
give a generally comparable learning experience???

ben



 Evolution has equipped humans (and other animals) with a good
 intuitive understanding of many of the physical realities of our
 world. The real world is not just slippery in the physical sense, it's
 slippery in the non-literal sense too. For example, I can pick up an
 OXO cube (a solid object), crush it so it become powder, pour it into
 my stew, and stir it in so it dissolves. My mind can easily and
 effortlessly track that in some sense its the same oxo cube and in
 another sense it isn't.

 Another example: my cat can distinguish between surfaces that are safe
 to sit on, and others that are too wobbly, even if they look the same.

 An animal's intuitive physics is a complex system. I expect that in
 humans a lot of this machinery is re-used to create intelligence. (It
 may be true, and IMO probably is true, that it's not necessary to
 re-create this machinery to make an AGI.)


 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






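[Editor's note: the "vaguely analogous" artificial chemistry Ben floats above could start as simply as the sketch below. Everything here is invented for illustration -- molecules are just strings of atom symbols, and a made-up reaction rule fuses two molecules while conserving atom counts, which is the one real-chemistry constraint being emulated.]

```python
# Minimal artificial chemistry: molecules as strings of atoms, with a
# single fusion "reaction" that conserves atoms but nothing else.

from collections import Counter

def react(a, b):
    """Fuse two molecules; atoms are conserved, and order is normalised."""
    return "".join(sorted(a + b))

def atoms(molecule):
    """Multiset of atoms in a molecule."""
    return Counter(molecule)

# Hypothetical "ingredient" molecules, named after the baking example.
flour, egg = "CCOH", "CNOH"
dough = react(flour, egg)
print(dough)                                        # 'CCCHHNOO'
assert atoms(dough) == atoms(flour) + atoms(egg)    # conservation holds
```

The point is not realism but that an agent mixing such "ingredients" faces the same is-it-the-same-stuff-or-not puzzles (the crushed OXO cube) that real preschoolers do.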


Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Well, that's a really easy example, right?  For making tea, the answer
would probably be yes.

Baking a cake is a harder example.  An AGI trained in a virtual world could
certainly follow a recipe to make a passable cake.  But it would never learn
to be a **really good** baker in the virtual world, unless the virtual world
were fabulously realistic in its simulation (and we don't know how to make
it that good, right now).  Being a really good baker requires a lot of
intuition for subtle physical properties of ingredients, not just following
a recipe and knowing the primitive basics of naive physics...

ben g

On Fri, Dec 19, 2008 at 8:56 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
 
  3. to provide a toy domain for the AI to think about and become
  proficient in.
 
  Not just to become proficient in the domain, but become proficient
  in general humanlike cognitive processes.
 
  The point of a preschool is that it's designed to present all important
  adult human cognitive processes in simplified forms.

 So it would be able to transfer its learning to the real world and
 (when given a robot body) be able to go into a kitchen it's never seen
 before and make a cup of tea? (In other words, will the simulation be
 deep enough to allow that).

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html








Re: [agi] Building a machine that can learn from experience

2008-12-19 Thread Ben Goertzel
On Fri, Dec 19, 2008 at 9:10 PM, J. Andrew Rogers 
and...@ceruleansystems.com wrote:


 On Dec 19, 2008, at 5:35 PM, Ben Goertzel wrote:

 The problem is that **there is no way for science to ever establish the
 existence of a nonalgorithmic process**, because science deals only with
 finite sets of finite-precision measurements.



 I suppose it would be more accurate to state that every process we can
 detect is algorithmic within the scope of our ability to measure it.  Like
 with belief in god(s) and similar, the point can then be raised as to why we
 need to invent non-algorithmic processes when ordinary algorithmic processes
 are sufficient to explain everything we see.


Because some folks find that they are not subjectively sufficient to explain
everything they subjectively experience...



  Non-algorithmic processes very conveniently have properties identical to
 the supernatural, and so I treat them similarly.  This is just another
 incarnation of the old "unpredictable versus random" discussions.

 Sure, non-algorithmic processes could be running the mind machinery, but
 then so could elves, unicorns, the Flying Spaghetti Monster, and many other
 things that it is not necessary to invoke at this time.  Absent the ability
 to ever detect such things and lacking the necessity of such explanations, I
 file non-algorithmic processes with vast number of other explanatory memes
 of woo-ness of which humans are fond.

 Like the old man once said, entia non sunt multiplicanda praeter
 necessitatem ("entities are not to be multiplied beyond necessity").


 Cheers,

 J. Andrew Rogers









Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Ahhh... ***that's*** why everyone always hates my cakes!!!  I never realized
you were supposed to **taste** the stuff ... I thought it was just supposed
to look funky after you throw it in somebody's face ;-)

On Fri, Dec 19, 2008 at 9:31 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  Baking a cake is a harder example.  An AGI trained in a virtual world
 could
  certainly follow a recipe to make a passable cake.  But it would never
 learn
  to be a **really good** baker in the virtual world, unless the virtual
 world
  were fabulously realistic in its simulation (and we don't know how to
 make
  it that good, right now).  Being a really good baker requires a lot of
  intuition for subtle physical properties of ingredients, not just
 following
  a recipe and knowing the primitive basics of naive physics...

 A sense of taste would probably help too.

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html








Re: [agi] AGI Preschool: sketch of an evaluation framework for early stage AGI systems aimed at human-level, roughly humanlike AGI

2008-12-19 Thread Ben Goertzel
Although, I note, I know a really good baker who makes great cakes in spite
of the fact that she does not eat sugar and hence does not ever taste most
of the stuff she makes...

But she *used to* eat sugar, so to an extent she can go on memory

Sorta like how Beethoven kept composing after he went deaf, I suppose ;-)

On Fri, Dec 19, 2008 at 9:42 PM, Ben Goertzel b...@goertzel.org wrote:


 Ahhh... ***that's*** why everyone always hates my cakes!!!  I never
 realized you were supposed to **taste** the stuff ... I thought it was just
 supposed to look funky after you throw it in somebody's face ;-)


 On Fri, Dec 19, 2008 at 9:31 PM, Philip Hunt cabala...@googlemail.comwrote:

 2008/12/20 Ben Goertzel b...@goertzel.org:
 
  Baking a cake is a harder example.  An AGI trained in a virtual world
 could
  certainly follow a recipe to make a passable cake.  But it would never
 learn
  to be a **really good** baker in the virtual world, unless the virtual
 world
  were fabulously realistic in its simulation (and we don't know how to
 make
  it that good, right now).  Being a really good baker requires a lot of
  intuition for subtle physical properties of ingredients, not just
 following
  a recipe and knowing the primitive basics of naive physics...

 A sense of taste would probably help too.

 --
 Philip Hunt, cabala...@googlemail.com
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html










Powered by Listbox: http://www.listbox.com

