Re: [singularity] Vista/AGI

2008-04-14 Thread Ben Goertzel
Brain-scan accuracy is a very crude proxy for understanding of brain
function; yet it is a much better proxy than anything that exists for the case
of AGI...

On Sun, Apr 13, 2008 at 11:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Ben Goertzel wrote:

  Hi,
 
 
Just my personal opinion...but it appears that the exponential
 technology
   growth chart, which is used in many of the briefings, does not include
   AI/AGI. It is processing centric.  When you include AI/AGI the
 exponential
   technology curve flattens out in the coming years (5-7) and becomes
 part of
   a normal S curve of development.  While computer power and processing
 will
   increase exponentially (as nanotechnology grows) the area of AI will
 need
   more time to develop.
  
I would be interested in your thoughts.
  
 
  I think this is because progress toward general AI has been difficult
  to quantify
  in the past, and looks to remain difficult to quantify into the future...
 
  I am uncertain as to the extent to which this problem can be worked
 around,
  though.
 
  Let me introduce an analogy problem:
 
  "Understanding the operation of the brain better and better" is to
  "scanning the brain with higher and higher spatiotemporal accuracy"
  as "creating more and more powerful AGI" is to what?
 
  ;-)
 
  The point is that understanding the brain is also a nebulous and
  hard-to-quantify goal, but we make charts for it by treating brain
  scan accuracy as a more easily quantifiable proxy variable.  What's a
  comparable proxy variable for AGI?
 
  Suggestions welcome!
 

  Sadly, the analogy is a wee bit broken.

  Brain scan accuracy as a measure of progress in understanding the operation
 of the brain is a measure that some cognitive neuroscientists may subscribe
 to, but the majority of cognitive scientists outside of that area consider
 this to be a completely spurious idea.

  Doug Hofstadter said this eloquently in I Am A Strange Loop:  getting a
 complete atom-scan in the vicinity of a windmill doesn't mean that you are
 making progress toward understanding why the windmill goes around. It just
 gives you a data analysis problem that will keep you busy until everyone in
 the Hot Place is eating ice cream.




  Richard Loosemore







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Samantha,

  You know, I am getting pretty tired of hearing this poor mouth crap.   This
 is not that huge a sum to raise or get financed.  Hell, there are some very
 futuristic rich geeks who could finance this single-handed and would not
 really care that much whether they could somehow monetize the result.   I
 don't believe for a minute that there is no way to do this.So exactly
 why are you singing this sad song year after year?
...
  From what you said above $50M will do the entire job.   If that is all that
 is standing between us and AGI then surely we can get on with it in all
 haste.   If it is a great deal more than this relatively small amount of
  money, then let's move on to talk about that instead of whining about lack of
 coin.


This is what I thought in 2001, and what Bruce Klein thought when he started
working with me in 2005.

In brief, what we thought is something like:


OK, so  ...

On the one hand, we have an AGI design that seems to its sane PhD-scientist
creator to have serious potential of leading to human-level AGI.  We have
a team of professional AI scientists and software engineers who are
a) knowledgeable about it, b) eager to work on it, c) in agreement that
it has a strong chance of leading to human-level AGI, although with
varying opinions on whether the timeline is, say, 7, 10, 15 or 20 years.
Furthermore, the individuals involved are at least thoughtful about issues
of AGI ethics and the social implications of their work.   Carefully detailed
arguments as to why the AGI design is believed to work do exist, but
these are complex, and furthermore do not constitute any sort of irrefutable
proof.

On the other hand, we have a number of wealthy transhumanists who would
love to see a beneficial human-level AGI come about, and who could
donate or invest some $$ to this cause without serious risk to their own
financial stability should the AGI effort fail.

Not only that, but there are a couple related factors

a) early non-AGI versions of some of the components of said AGI design
are already being used to help make biological discoveries relevant
to life extension (as documented in refereed publications)

b) very clear plans exist, including discussions with many specific potential
customers, regarding how to make $$ from incremental products along the
way to the human-level AGI, if this is the pathway desired


So, we talked to a load of wealthy futurists and the upshot is that it's really
really hard to get these folks to believe you have a chance at achieving
human-level AGI.  These guys don't have the background to spend 6 months
carefully studying the technical documentation, so they make a gut decision,
which is always (so far) something like: "gee, you're a really smart guy, and your team
is great, and you're doing cool stuff, but the technology just isn't there yet."

Novamente has gotten small (but much valued)
investments from some visionary folks, and SIAI has
had the vision to hire 1.6 folks to work on OpenCog, which is an
open-source sister project of the Novamente Cognition Engine project.

I could speculate about the reasons behind this situation, but the reason is NOT
that I suck at raising money ... I have been involved in fundraising
for commercial
software projects before and have been successful at it.

I believe that 10-15 years from now, one will be able to approach the exact
same people with the same sort of project, and get greeted with enthusiasm
rather than friendly dismissal.  Going against prevailing culture is
really hard,
even if you're dealing with people who **think** they're seeing beyond the
typical preconceptions of their culture.  Slowly, though, the idea that AGI is
possible and feasible is wending its way into the collective mind.

I stress, though, that if one had some kind of convincing, compelling **proof**
of being on the correct path to AGI, it would likely be possible to raise $$
for one's project.  This proof could be in several possible forms, e.g.

a) a mathematical proof, which was accepted by a substantial majority
of AI academics

b) a working software program that demonstrated human-child-like
functionality

c) a working robot that demonstrated full dog-like functionality

Also, if one had good enough personal connections with the right sort
of wealthy folks, one could raise the $$ -- based on their personal trust
in you rather than their trust in your ideas.

Or of course, being rich and funding your work yourself is always an
option (cf Jeff Hawkins)

This gets back to a milder version of an issue Richard Loosemore is
always raising: the complex systems problem.  My approach to AGI
is complex-systems-based, which means that the components are NOT
going to demonstrate any general intelligence -- the GI is intended
to come about as a holistic, whole-system phenomenon.  But not in any
kind of mysterious way: we have a detailed, specific theory of why
this will occur, in terms of the particular interactions between the
components.

But what 

Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
  I don't think any reasonable person in AI or AGI will claim any of these
 have been solved. They may want to claim their method has promise, but not
 that it has actually solved any of them.

Yes -- it is true, we have not created a human-level AGI yet.  No serious
researcher disagrees.  So why is it worth repeating the point?

Similarly, up till the moment when the first astronauts walked on the moon,
you could have run around yelping that no one has solved the problem of
how to make a person walk on the moon; all they've done is propose methods
that seem to have promise.

It's true -- theories and ideas can always be wrong, and empirical proof adds
a whole new level of understanding.  (Though empirical proofs don't exist
in a theoretical vacuum; they do require theoretical interpretation.
For instance,
physicists don't agree on which supposed top quark events really were
top quarks ... and some nuts still don't believe people walked on the moon,
just as even after human-level AGI is achieved some nuts still won't believe
it...)

Nevertheless, with something as complex as AGI you gotta build stuff based
on a theory.  And not everyone is going to believe the theory until the proof
is there.  And so it goes...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Hi,

  Just my personal opinion...but it appears that the exponential technology
 growth chart, which is used in many of the briefings, does not include
 AI/AGI. It is processing centric.  When you include AI/AGI the exponential
 technology curve flattens out in the coming years (5-7) and becomes part of
 a normal S curve of development.  While computer power and processing will
 increase exponentially (as nanotechnology grows) the area of AI will need
 more time to develop.

  I would be interested in your thoughts.

I think this is because progress toward general AI has been difficult
to quantify
in the past, and looks to remain difficult to quantify into the future...

I am uncertain as to the extent to which this problem can be worked around,
though.

Let me introduce an analogy problem:

"Understanding the operation of the brain better and better" is to
"scanning the brain with higher and higher spatiotemporal accuracy"
as "creating more and more powerful AGI" is to what?

;-)

The point is that understanding the brain is also a nebulous and
hard-to-quantify goal, but we make charts for it by treating brain
scan accuracy as a more easily quantifiable proxy variable.  What's a
comparable proxy variable for AGI?

Suggestions welcome!

-- Ben



Re: [singularity] Vista/AGI

2008-04-08 Thread Ben Goertzel
This is part of the idea underlying OpenCog (opencog.org), though it's
being done
in a nonprofit vein rather than commercially...

On Tue, Apr 8, 2008 at 1:55 AM, John G. Rose [EMAIL PROTECTED] wrote:
 Just a thought, maybe there are some commonalities across AGI designs where
  components could be built at a lower cost. An investor invests in the
  company that builds component x that is used by multiple AGI projects. Then
  you have your little AGI ecosystem of companies all competing yet
  cooperating. After all, we need to get the Singularity going ASAP so that we
  can upload before inevitable biologic death? I prefer not to become
  nano-dust; I'd rather keep this show a-rockin', capiche?

  So it's like this - need standards. Somebody go bust out an RFC. Or is there
  work done on this already like is there a CogML? I don't know if the
  Semantic Web is going to cut the mustard... and the name Semantic Web just
  doesn't have that ring to it. Kinda reminds me of the MBone - names really
  do matter. Then who's the numnutz that came up with Web 3 dot oh geezss!

  John



   -Original Message-
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
   Sent: Monday, April 07, 2008 7:07 PM
   To: singularity@v2.listbox.com


  Subject: Re: [singularity] Vista/AGI
  
   Perhaps the difficulty in finding investors in AGI is that among people
   most
   familiar with the technology (the people on this list and the AGI list),
   everyone has a different idea on how to solve the problem.  Why would I
   invest in someone else's idea when clearly my idea is better?
  
  
   -- Matt Mahoney, [EMAIL PROTECTED]
  





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Ben Goertzel
Well, Matt and I are talking about building totally different kinds of
systems...

I believe the system he wants to build would cost a huge amount ...
but I don't think
it's the most interesting sorta thing to build ...

A decent analogue would be spaceships.  All sorts of designs exist, some orders
of magnitude more complex and expensive than others.  It's more
practical to build
the cheaper ones, esp. when they're also more powerful ;-p

ben

On Tue, Apr 8, 2008 at 10:56 PM, Eric B. Ramsay [EMAIL PROTECTED] wrote:
 If I understand what I have read in this thread so far, there is Ben on the
 one hand suggesting $10 mil. with 10-30 people in 3 to 10 years and on the
 other there is Matt saying $1quadrillion, using a billion brains in 30
 years. I don't believe I have ever seen such a divergence of opinion before
 on what is required  for a technological breakthrough (unless people are not
 being serious and I am being naive). I suppose  this sort of non-consensus
 on such a scale could be part of investor reticence.

 Eric B. Ramsay

 Matt Mahoney [EMAIL PROTECTED] wrote:


 --- Mike Tintner wrote:

  Matt : a super-google will answer these questions by routing them to
  experts on these topics that will use natural language in their narrow
  domains of expertise.
 
  And Santa will answer every child's request, and we'll all live happily
 ever
  after. Amen.

 If you have a legitimate criticism of the technology or its funding plan, I
 would like to hear it. I understand there will be doubts about a system I
 expect to cost over $1 quadrillion and take 30 years to build.

 The protocol specifies natural language. This is not a hard problem in
 narrow
 domains. It dates back to the 1960's. Even in broad domains, most of the
 meaning of a message is independent of word order. Google works on this
 principle.
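
As a rough illustration of the bag-of-words idea invoked here (a minimal sketch added for clarity, not part of the original post; the function name and toy sentences are made up), two messages can be compared purely by word counts, with word order ignored:

from collections import Counter
import math

def bow_similarity(a, b):
    # Cosine similarity between two texts under a bag-of-words model:
    # each text is reduced to a multiset of lowercase words.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in set(ca) & set(cb))
    norm = math.sqrt(sum(v * v for v in ca.values())) * \
           math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

# Word order is ignored, yet topical similarity is still captured:
print(bow_similarity("the cat sat on the mat", "on the mat the cat sat"))   # ~1.0
print(bow_similarity("the cat sat on the mat", "stock prices fell today"))  # 0.0

Whether this is enough for the message routing described here is exactly what is being debated; the sketch only shows the order-independence claim in its simplest form.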

 But this is beside the point. The critical part of the design is an
 incentive
 for peers to provide useful services in exchange for resources. Peers that
 appear most intelligent and useful (and least annoying) are most likely to
 have their messages accepted and forwarded by other peers. People will
 develop domain experts and routers and put them on the net because they can
 make money through highly targeted advertising.

 Google would be a peer on the network with a high reputation. But Google
 controls only 0.1% of the computing power on the Internet. It will have to
 compete with a system that allows updates to be searched instantly, where
 queries are persistent, and where a query or message can initiate
 conversations with other people in real time.

  Which are these areas of science, technology, arts, or indeed any area of
  human activity, period, where the experts all agree and are NOT in deep
  conflict?
 
  And if that's too hard a question, which are the areas of AI or AGI, where
  the experts all agree and are not in deep conflict?

 I don't expect the experts to agree. It is better that they don't. There are
 hard problems remaining to be solved in language modeling, vision, and
 robotics. We need to try many approaches with powerful hardware. The network
 will decide who the winners are.


 -- Matt Mahoney, [EMAIL PROTECTED]



  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Ben Goertzel
  Of course what I imagine emerging from the Internet bears little resemblance
  to Novamente.  It is simply too big to invest in directly, but it will 
 present
  many opportunities.

But the emergence of superhuman AGIs, like what a Novamente may eventually become,
will both dramatically alter the nature of, and dramatically reduce
the cost of, global
brains such as you envision...

ben g



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.

My view is a little different.

If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundreds of programmers to implement it.  The most I
can imagine throwing at it would be a couple dozen, and I think 10-20 is
the right number.

So if the Novamente design is basically correct, it would take a team of
10-20 programmers a period of 3-10 years to get to human-level AGI.

Sadly, we do not have 10-20 dedicated programmers working on Novamente
(or associated OpenCog) AGI right now, but rather fractions of various peoples'
time (as Novamente LLC is working mainly on various commercial projects
that pay our salaries).  So my point is not to make a projection regarding our
progress (that depends too much on funding levels), just to address this issue
of ideal team size that has come up yet again...

Even if my timing estimates are optimistic and it were to take 15 years, even
so, a team of thousands isn't gonna help things any.

If I had a billion dollars and the passion to use it to advance AGI, I would
throw amounts between $1M and $50M at various specific projects, I
wouldn't try to make one monolithic project.

This is based on my bias that AGI is best approached, at the current time,
by focusing on software not specialized hardware.

One of the things I like about AGI is that a single individual or a
small team CAN
just do it without need for massive capital investment in physical
infrastructure.

It's tempting to get into specialized hardware for AGI, and we may
want to at some
point, but I think it makes sense to defer that until we have a very
clear idea of
exactly what AGI design needs the hardware and strong prototype results of some
sort indicating why this AGI design will work on this hardware.  My
suspicion is that
we can get to human-level AGI without any special hardware, though
special hardware
will certainly be able to accelerate things after that.

-- Ben G




On Sun, Apr 6, 2008 at 7:22 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
 Arguably many of the problems of Vista including its legendary slippages
 were the direct result of having thousands of merely human programmers
 involved.   That complex monkey interaction is enough to kill almost
 anything interesting. shudder

  - samantha

  Panu Horsmalahti wrote:

 
  Just because it takes thousands of programmers to create something as
 complex as Vista, does *not* mean that thousands of programmers are required
 to build an AGI, since one property of AGI is/can be that it will learn most
 of its complexity using algorithms programmed into it.
  
 
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
 If the concept behind Novamente is truly compelling enough, it
 should be no problem to make a successful pitch.

 Eric B. Ramsay

Gee ... you mean, I could pitch the idea of funding Novamente to
people with money??  I never thought of that!!  Thanks for the
advice ;-pp

Evidently, the concept behind Novamente is not truly compelling
enough to the casual observer,
as we have failed to attract big-bucks backers so far...

Many folks we've talked to are interested in what we're doing but
it seems we'll have to get further toward the end goal in order to
overcome their AGI skepticism...

Part of the issue is that the concepts underlying NM are both
complex and subtle, not lending themselves all that well to
elevator pitch treatment ... or even PPT summary treatment
(though there are summaries in both PPT and conference-paper
form).

If you think that's a mark against NM, consider this: What's your
elevator-pitch description of how the human brain works?  How
about the human body?  Businesspeople favor the simplistic, yet
the engineering of complex cognitive systems doesn't match well
with this bias

Please note that many successful inventors in history have had
huge trouble getting financial backing, although in hindsight
we find their ideas truly compelling.  (And, many failed inventors
with terrible ideas have also had huge trouble getting financial
backing...)

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 12:21 PM, Eric B. Ramsay [EMAIL PROTECTED] wrote:
 Ben:
 I may be mistaken, but it seems to me that AGI today in 2008 is in the air
 again after 50 years.

Yes

You are not trying to present a completely novel and
 unheard of idea and with today's crowd of sophisticated angel investors I am
 surprised that no one bites given the modest sums involved. BTW I was not
 trying to give needless advice, just finishing my thoughts. I already took
 it as a given that you look for funding. I am trying to understand why no
 one bites. It's not as if there are a hundred different AGI efforts out
 there to choose from.

I don't fully understand it myself, but it's a fact.

To be clear: I understand why VC's and big companies don't want to fund
NM.

VC's are in a different sort of business ...

and big companies are either focused
on the short term, or else have their own
research groups who don't want a bunch of upstart outsiders to get
their research
$$ ...

But what vexes me a bit is that none of the many wealthy futurists out
there have been
interested in funding NM extensively, either on an angel investment
basis, or on a
pure nonprofit donation basis (and we have considered doing NM as a nonprofit
before, though right now that's not our focus as the virtual-pets biz
opp seems so
grand...)

I know personally (and have met with) a number of folks who

-- could invest a couple million $$ in NM without it impacting their
lives at all

-- are deeply into the Singularity and AGI and related concepts

-- appear to personally like and respect me and other in the NM team

But, after spending about 1.5 years courting these sorts of folks,
Bruce and I largely
gave up and decided to focus on other avenues.

I have some psychocultural theories as to why things are this way, but
nothing too
solid...

I am surprised that the reason may only be that the
 project isn't far enough along (too immature) given the historical
 precedents of what investors have ponied up money for before.

That's surely part of it ... but investors have put big $$ into much LESS
mature projects in areas such as nanotech and quantum computing.

AGI arouses an irrational amount of skepticism, compared to these other
futurist technologies, it seems to me.  I suppose this partly is
because there have
been more false starts toward AI in the past.

-- Ben



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 4:42 PM, Derek Zahn [EMAIL PROTECTED] wrote:


  I would think an investor would want a believable specific answer to the
 following question:

  When and how will I get my money back?

  It can be uncertain (risk is part of the game), but you can't just wave
 your hands around on that point.

This is not the problem ... regarding Novamente, we have an extremely
specific business plan and details regarding how we would provide return
on investment.

The problem is that investors are generally pretty unwilling to eat perceived
technology risk.  Exceptions arise all the time, but AGI has not yet been one of them.

It is an illusion that VC or angel investors are fond of risk ...
actually they are
quite risk-averse in nearly all cases...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Funny dispute ... is AGI about mathematics, or science?

I would guess there are some approaches to AGI that are only minimally
mathematical in their design concepts (though of course math could be
used to explain their behavior)

Then there are some approaches, like Novamente, that mix mathematics
with less rigorous ideas in an integrative design...

And then there are more purely mathematical approaches -- I haven't
seen any that are well enough fleshed out to constitute pragmatic AGI
designs... but I can't deny the possibility

I wonder why some people think there is one true path to AGI ... I
strongly suspect there are many...

-- Ben


On Sun, Apr 6, 2008 at 9:16 PM, J. Andrew Rogers
[EMAIL PROTECTED] wrote:

  On Apr 6, 2008, at 4:46 PM, Richard Loosemore wrote:


  J. Andrew Rogers wrote:
 
   The fact that the vast majority of AGI theory is pulled out of /dev/ass
 notwithstanding, your above characterization would appear to reflect your
 limitations which you have chosen to project onto the broader field of AGI
 research.  Just because most AI researchers are misguided fools and you do
 not fully understand all the relevant theory does not imply that this is a
 universal (even if it were).
  
 
  Ad hominem.  Shameful.
 


  Ad hominem?  Well, of sorts I suppose, but in this case it is the substance
 of the argument so it is a reasonable device.  I think I have met more AI
 cranks with hare-brained pet obsessions with respect to the topic or
 academics that are beating a horse that died thirty years ago than AI
 researchers that are actually keeping current with the subject matter.
 Pointing out the embarrassing foolishness of the vast number of those that
 claim to be AI researchers and how it colors the credibility of the entire
 field is germane to the discussion.

  As for you specifically, assertions like Artificial Intelligence research
 does not have a credible science behind it in the absence of substantive
 support (now or in the past) can only lead me to believe that you either are
 ignorant of relevant literature (possible) or you do not understand all the
 relevant literature and simply assume it is not important.   As far as I
 have ever been able to tell, theoretical psychology re-heats a very old idea
 while essentially ignoring or dismissing out of hand more recent literature
 that could provide considerable context when (re-)evaluating the notion.
 This is a fine example of part of the problem we are talking about.



  AGI *is* mathematics?
 


  Yes, applied mathematics.  Is there some other kind of non-computational
 AI?  The mathematical nature of the problem does not disappear when you wrap
 it in fuzzy abstractions it just gets, well, fuzzy.  At best the science can
 inform your mathematical model, but in this case the relevant mathematics is
 ahead of the science for most purposes and the relevant science is largely
 working out the specific badly implemented wetware mapping to said
 mathematics.




  I'm sorry, but if you can make a statement such as this, and if you are
 already starting to reply to points of debate by resorting to ad hominems,
 then it would be a waste of my time to engage.
 


  Probably a waste of my time as well if you think this is primarily a
 science problem in the absence of a discernible reason to characterize it as
 such.




  I will just note that if this point of view is at all widespread - if
 there really are large numbers of people who agree that AGI is mathematics,
 not science  -  then this is a perfect illustration of just why no progress
 is being made in the field.
 


  Assertions do not manufacture fact.


  J. Andrew Rogers





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[singularity] Microsoft Launches Singularity

2008-03-24 Thread Ben Goertzel
 http://www.codeplex.com/singularity



[singularity] Brief report on AGI-08

2008-03-08 Thread Ben Goertzel
 on AGI.  Society,
including the society of scientists, is starting to wake up to the
notion that, given modern technology and science, human-level AGI is
no longer a pipe dream but a potential near-term reality.  w00t!  Of
course there is a long way to go in terms of getting this kind of work
taken as seriously as it should be, but at least things seem to be
going in the right direction.

-- Ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[singularity] Quantum resonance btw DNA strands?

2008-02-05 Thread Ben Goertzel
This article

http://www.physorg.com/news120735315.html

made me think of Johnjoe McFadden's theory
that quantum nonlocality plays a role in protein-folding

http://www.surrey.ac.uk/qe/quantumevolution.htm

H...

ben



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: Re : Re : Re : [singularity] Quantum resonance btw DNA strands?

2008-02-05 Thread Ben Goertzel
Bruno,

Posting these links without any comprehensible commentary is not very
useful ... so I think you should stop ...

If you have some discussion about the information being pointed to,
and its relevance to this thread or other possibly
Singularity-relevant issues, that would be welcome...

thanks
Ben Goertzel
List Owner

On Feb 5, 2008 4:36 PM, Bruno Frandemiche [EMAIL PROTECTED] wrote:

 hello, to me (stop me if you have the truth, i am very open)
 http://www.spaceandmotion.com/wave-structure-matter-theorists.htm
 cordially yours
 bruno


 - Original Message -
 From: Bruno Frandemiche [EMAIL PROTECTED]
 To: singularity@v2.listbox.com
 Sent: Tuesday, February 5, 2008, 9:42 PM
 Subject: Re: Re: [singularity] Quantum resonance btw DNA strands?




 hell-o
 http://freespace.virgin.net/ch.thompson1/
 inquiry,reflexion,judgement:yes
 heating knowledge:no
 the truth is always subjective, contextual or intersubjective and therefore
 social
 cordially yours
 bruno



 - Original Message -
 From: Bruno Frandemiche [EMAIL PROTECTED]
 To: singularity@v2.listbox.com
 Sent: Tuesday, February 5, 2008, 8:52 PM
 Subject: Re: [singularity] Quantum resonance btw DNA strands?



 hello (i am a poor little computer-man but honest and i want to know before
 out)
 http://www.glafreniere.com/matter.htm
 ether:yes
 wave:yes
 lorentz:yes
 poincaré:yes
 compton:yes
 cabala:yes
 lafreniere:yes
 http://en.wikipedia.org/wiki/Process_Physics
 http://myprofile.cos.com/mammoth
 http://web.petrsu.ru/~alexk/
 cahill:yes
 kirilyuk:yes
 kaivarainen:yes
 particle:no
 einstein:no (excuse me)(or excuse him)
 fuller:yes
 synergetics:yes
 darwin:little
 symbiosis(wave and evolution):YES YES YES YES
 mcfadden:possible(because wave)
 bohr:little(because epistemic)(excuse me)
 heisenberg:no(excuse me)
 schrodinger:yes(but no particle and ether)
 descartes:yes(i am french but i feel non-dual dual rationalism)
 agi:yes(attention for worker)
 good French polemic
 cordially yours
 bruno

 - Original Message -
 From: Ben Goertzel [EMAIL PROTECTED]
 To: singularity@v2.listbox.com
 Sent: Tuesday, February 5, 2008, 5:32 PM
 Subject: [singularity] Quantum resonance btw DNA strands?

 This article

 http://www.physorg.com/news120735315.html

 made me think of Johnjoe McFadden's theory
 that quantum nonlocality plays a role in protein-folding

 http://www.surrey.ac.uk/qe/quantumevolution.htm

 H...

 ben



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 If men cease to believe that they will one day become gods then they
 will surely become worms.
 -- Henry Miller



  


  


  



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller


Re: Re : Re : Re : Re : [singularity] Quantum resonance btw DNA strands?

2008-02-05 Thread Ben Goertzel
Hi Bruno,

 effectively, my commentary is very short, so excuse me (i drive my pc with my
 eyes
 because i have a.l.s. with a tracheo and gastro, and i was a speaker, not a
 writer, and it's difficult)

Well that is certainly a good reason for your commentaries being short!

 hello ben
 ok, i stop, no problem
 i am thinking mcfadden's theory was possibly right because of
 wave-matter-structure and
 no-particle-matter-structure

Certainly the wave nature of matter is a necessary prerequisite for
McFadden's theory to be correct -- but that's already built into quantum
mechanics, right?

The question is whether proteins really function as macroscopic quantum
systems, in the way that McFadden suggests.  They may or may not, but I
don't think the answer is obvious from the wave nature of matter...
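
For a rough sense of why the wave nature of matter alone doesn't settle the question (an order-of-magnitude illustration added here, not part of the original exchange), the de Broglie wavelength of a ~50 kDa protein moving at thermal speed is roughly

    \lambda = \frac{h}{mv} \approx \frac{6.6 \times 10^{-34}\,\mathrm{J\,s}}{(8 \times 10^{-23}\,\mathrm{kg})(12\,\mathrm{m/s})} \approx 7 \times 10^{-13}\,\mathrm{m},

i.e. well under a picometer, far smaller than the protein itself (a few nanometers across). So any functional macroscopic quantum coherence of the kind McFadden proposes would have to rest on more than bare wave-particle duality.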

-- Ben



Re: [singularity] Multi-Multi-....-Multiverse

2008-02-02 Thread Ben Goertzel
Hi,

Just a contextualizing note: this is the Singularity list not the AGI list so
the scope of appropriate discussion is not so restricted.

In my view, whacky models of the universe are at least moderately
relevant to Singularity.  After the Singularity, we are almost sure to discover
that our current model of the universe is in many ways wrong ... it seems
interesting to me to speculate about what a broader, richer, deeper model
might look like

-- Ben Goertzel
(list owner, plus the guy who started this thread ;-)

On Feb 2, 2008 3:54 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
 WTF does this have to do with AGI or Singularity?   I hope the AGI
 gets here soon.  We Stupid Monkeys get damn tiresome.

 - samantha


 On Jan 29, 2008, at 7:06 AM, gifting wrote:

 
  On 29 Jan 2008, at 14:13, Vladimir Nesov wrote:
 
  On Jan 29, 2008 11:49 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  OK, but why can't they all be dumped in a single 'normal'
  multiverse?
  If traveling between them is accommodated by 'decisions', there
  is a
  finite number of them for any given time, so it shouldn't pose
  structural problems.
 
  The whacko, speculative SF hypothesis is that lateral movement btw
  Yverses is conducted according to ordinary laws of physics,
  whereas
  vertical movement btw Yverses is conducted via extraphysical psychic
  actions ;-)'
 
 
  What differentiates psychic actions from non-psychic so that they
  can't be considered ordinary? If I can do both, why aren't they
  both
  equally ordinary to me (and everyone else)?..
 
  Is a psychic action telepathy, for example? If I am a schizophrenic
  and hear voices, is this a psychic experience?
  What is a psychic action FOR YOU, or in your set of definitions?
  Do you propose that you are able of psychic actions within a set
  frame of definitions or do you experience psychic actions and
  redefine your environment because
  of this?
  Or is it all in the mind?
  Isn't it only ordinary, if experienced repetitively .
  Gudrun
 
  -- Vladimir Nesov
  mailto:[EMAIL PROTECTED]
 
 
 





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-29 Thread Ben Goertzel
 OK, but why can't they all be dumped in a single 'normal' multiverse?
 If traveling between them is accommodated by 'decisions', there is a
 finite number of them for any given time, so it shouldn't pose
 structural problems.

The whacko, speculative SF hypothesis is that lateral movement btw
Yverses is conducted according to ordinary laws of physics, whereas
vertical movement btw Yverses is conducted via extraphysical psychic
actions ;-)'

ben



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-28 Thread Ben Goertzel
Can you define what you mean by decision more precisely, please?


 OK, but why can't they all be dumped in a single 'normal' multiverse?
 If traveling between them is accommodated by 'decisions', there is a
 finite number of them for any given time, so it shouldn't pose
 structural problems. Another question is that it might be useful to
 describe them as organized in a tree-like structure, according to
 navigation methods accessible to an agent. If you represent
 uncertainty by being in 'more-parent' multiverse, it expresses usual
 idea with unusual (and probably unnecessarily restricting) notation.

 --
 Vladimir Nesovmailto:[EMAIL PROTECTED]





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Wrong focus?

2008-01-27 Thread Ben Goertzel
 Craig
 Venter & co creating a new genome -

Just to be clear: They did not create a new genome, rather they are re-creating
a subset of a previously existing one...

is an example of the genetic keyboard
 playing on itself, i.e. one genome [Craig Venter] has played with another
 genome and will eventually and inevitably play with itself.

Yes

Clearly it is in
 the nature of the genome to recreate itself - and not just to execute a
 program.

You lost me here, sorry.  Nothing in Venter's work argues against the
Digital Physics hypothesis, which holds that the whole universe is a giant
computer program of sorts.

 P.P.S. The full new paradigm is something like -  the self-driving/
 self-conducting machine -  it is actually the self that is the rest of the
 body and brain, that interactively plays upon, and is played by, the genome,
 (rather than the genome literally playing upon itself). And just as science
 generally has left the self out of its paradigms,

On the contrary, as Thomas Metzinger has masterfully argued in Being No One
(and see also the book The Curse of the Self, whose author's name eludes
me momentarily), the self has been well-understood by neuropsychology
as an emergent aspect of the dynamics of certain
complex systems.  Like will and reflective-consciousness, it is an extremely
useful construct that also seems to have some irrational and undesirable
(even from its own point of view) aspects.

 so cog sci has left the
  indispensable human programmer/operator out of its computational paradigms.

It is true that human programmers are indispensable to current software systems,
except for simple self-propagating systems like computer viruses and worms ...
but this is just because software is at an early stage of development; it's not
something intrinsic to the nature of software versus physical systems (which,
as Fredkin and others have argued,
may sensibly be conceived of as just software on a different
operating system)...

-- Ben G



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-27 Thread Ben Goertzel
On Jan 27, 2008 5:26 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Jan 27, 2008 9:29 PM, John K Clark [EMAIL PROTECTED] wrote:
  Ben Goertzel [EMAIL PROTECTED]
 
   we can think about a multi-multiverse, i.e. a collection of multiverses,
   with a certain probability distribution over them.
 
  A probability distribution of what?
 

 Exactly. It needs stressing that probability is a tool for
 decision-making and it has no semantics when no decision enters the
 picture.

Probability theory is a branch of mathematics and the concept of decision
does not enter into it.

Connecting probability to human life or scientific experiments
does involve an interpretation, but not all interpretations involve the
notion of decision.

De Finetti's interpretation involves decisions, for example (as it has to do
with gambling); but Cox's interpretation does not...
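
To make the decision-free core concrete (a standard statement of the axioms, added here for reference rather than taken from the original post): the Kolmogorov axioms define a probability measure P on a space of events with no mention of agents or decisions,

    P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \ \text{ for pairwise disjoint } A_i.

De Finetti grounds the numbers in betting behavior; Cox instead derives essentially the same rules from consistency constraints on degrees of plausibility, with no betting or decision-making anywhere in the derivation.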

-- Ben



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-27 Thread Ben Goertzel
Nesov wrote:
  Exactly. It needs stressing that probability is a tool for
  decision-making and it has no semantics when no decision enters the
  picture.
...
 What's it good for if it can't be used (= advance knowledge)? For
 other purposes we'd be better off with specially designed random
 number generators. So it's more like tautology that anything useful
 influences decisions.


In another context, I might not be picky about the use of the word
decision here ... but this thread started with a discussion of radical
models of the universe involving multi-multiverses and Yverses
and so on.

In this context, casual usage of folk-psychology notions like decision
isn't really appropriate, I suggest.

The idea of decision seems wrapped up with free will, which has a pretty
tenuous relationship with physical reality.

If what you mean is that probabilities of events are associated with the
actions that agents take, then of course this is true.

The (extremely) speculative hypothesis I was proposing in my blog post
is that perhaps intelligent agents can take two kinds of actions -- those
that are lateral moves within a given multiverse, and those that pop out
of one multiverse into another (surfing through the Yverse to another
multiverse).

One could then talk about conditional probabilities of agent actions ...
which seems unproblematic ...
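
Spelled out in standard notation (purely illustrative; the symbols are not from the blog post): if x is an agent's state within a multiverse m, the two kinds of action could be modeled by two conditional distributions,

    P(x' \mid x, a_{\mathrm{lateral}}, m) \qquad \text{and} \qquad P(m' \mid m, a_{\mathrm{vertical}}),

the first an ordinary within-multiverse transition kernel and the second a transition between multiverses via the Yverse. Nothing in this formalism requires a notion of decision beyond conditioning on the action taken.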

-- Ben G



Re: [singularity] Wrong focus?

2008-01-26 Thread Ben Goertzel
Mike,

 I certainly would like to see discussion of how species generally may be
 artificially altered, (including how brains and therefore intelligence may
 be altered) - and I'm disappointed, more particularly, that Natasha and any
 other transhumanists haven't put forward some half-way reasonable
 possibilities here.  But perhaps Samantha & others would regard such matters
 as off-limits?

I know Samantha well enough to know she would NOT consider this kind
of topic off limits ;-)  ... nor would hardly anyone on this list...

My attitude (and I suspect Samantha shares the same general attitude)
is that, while genetic engineering and other aspects of biotech are
extremely interesting, AGI has a lot more potential to radically
transform life and mind.

Yes, genetic engineering is a big deal relative to ordinary life
today.  But compared to transhuman AGI, it's small potatoes...

The main difference you have with this attitude seems to be that you
feel AGI is a remote, implausible notion, whereas we feel it is almost
an inevitability in the medium term, and a possibility even in the
short term


 It's a pity though because I do think that Venter has changed everything
 today - including the paradigms that govern both science and AI.


Let's not overblow things -- please note that Venter's team has not yet
synthesized an artificial organism.  Also, they didn't really design
the organism from scratch; they're just regenerating a (slightly
modified) existing design...

Theirs is great work though, and I don't doubt that it will advance
further in the next years...

But there is nothing particularly surprising about what Venter's team
has done; it's stuff that we have known to be possible for a while ...
he just managed to cut through some of the practical irritations of
that sort of work and make more rapid progress than others...

ben



[singularity] Multi-Multi-....-Multiverse

2008-01-25 Thread Ben Goertzel
Fans of extremely weird and silly speculative pseudo-science ideas may
appreciate my latest blog post, which posits a new
model of the universe ;-_)

http://www.goertzel.org/blog/blog.htm

(A... after a day spent largely on various business-
related hassles, the 30 minutes spent writing that
was really refreshing!!!)

ben



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] The Extropian Creed by Ben

2008-01-21 Thread Ben Goertzel
 in each essay was/is a desire to see transhumanism work to help
 solve the many hardships of humanity – everywhere.

  Thank you Ben.  Best wishes,

  Natasha



  Natasha Vita-More PhD Candidate,  Planetary Collegium - CAiiA, situated in
 the Faculty of Technology, School of Computing, Communications and
 Electronics, University of Plymouth, UK Transhumanist Arts & Culture
 Thinking About the Future

  If you draw a circle in the sand and study only what's inside the circle,
 then that is a closed-system perspective. If you study what is inside the
 circle and everything outside the circle, then that is an open system
 perspective. - Buckminster Fuller


  



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge


Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
Hi,

FYI, that essay was an article I wrote for the German newspaper
Frankfurter Allgemeine Zeitung in 2001 ... it was translated into
German and published...

An elaborated, somewhat modified version was included
as a chapter in the 2005 book The Path to Posthumanity (P2P) by
myself and Stephan Vladimir Bugaj.   I have uploaded
the P2P version of the chapter here:

http://www.goertzel.org/Chapter12_aug16_05.pdf

BTW that book will in 2008 be updated and re-issued with
a different title.

Ben

On Jan 20, 2008 7:06 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Sorry if you've all read this:

 http://www.goertzel.org/benzine/extropians.htm

 But I found it a v. well written sympathetic critique of extropianism &
 highly recommend it. What do people think of its call for a humanist
 transhumanism?






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge



Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
Hi Natasha

After discussions with you and others in 2005, I created a revised
version of the essay,
which may not address all your complaints, but hopefully addressed some of them.

http://www.goertzel.org/Chapter12_aug16_05.pdf

However I would be quite interested in further critiques of the 2005
version, because
the book in which it was published is going to be reissued in 2008 and
my coauthor
and I are planning to rework the chapter anyway.

thanks
Ben

On Jan 20, 2008 1:51 PM, Natasha Vita-More [EMAIL PROTECTED] wrote:

  At 06:06 AM 1/20/2008, Mike Tintner wrote:


 Sorry if you've all read this:

  http://www.goertzel.org/benzine/extropians.htm

  But I found it a v. well written sympathetic critique of extropianism &
 highly recommend it. What do people think of its call for a humanist
 transhumanism?
  I found Ben's essay to contain a certain bias which detracts from its
 substance.  If Ben would like to debate key assumptions his essay claims, I am
 available. Otherwise, if anyone is interested in key points which I believe
 are narrowly focused and/or misleading, I'll post them.

  Natasha

  Natasha Vita-More PhD Candidate,  Planetary Collegium - CAiiA, situated in
 the Faculty of Technology, School of Computing, Communications and
 Electronics, University of Plymouth, UK Transhumanist Arts & Culture
 Thinking About the Future

  If you draw a circle in the sand and study only what's inside the circle,
 then that is a closed-system perspective. If you study what is inside the
 circle and everything outside the circle, then that is an open system
 perspective. - Buckminster Fuller


  
  This list is sponsored by AGIRI: http://www.agiri.org/email

 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=87922044-bb741d


Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
On Jan 20, 2008 1:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hi Natasha

 After discussions with you and others in 2005, I created a revised
 version of the essay,
 which may not address all your complaints, but hopefully addressed some of 
 them.

 http://www.goertzel.org/Chapter12_aug16_05.pdf

 However I would be quite interested in further critiques of the 2005
 version, because
 the book in which it was published is going to be reissued in 2008 and
 my coauthor
 and I are planning to rework the chapter anyway.

 thanks
 Ben

I would add that my understanding of the transhumanist/futurist
community in general,
and extropianism in particular, has deepened since 2005 due to a
greater frequency
and intensity of social interaction with relevant individuals; so
there are probably statements
in even the 2005 version that I wouldn't fully agree with now ...

... though, the spirit of the article of course still represents my
perspective...

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604id_secret=87922432-9d71fc


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel



Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof 
allows that possibility to occur, should the contingencies of the 
world oblige it to do so.  (I would also be tempted to question your 
judgment call, here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it 
has one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion 
holds.


So:  clear question.  Does the proof implicitly allow it?

Yeah, if AIXI is given initial knowledge or experiential feedback that 
is in principle adequate for internal reconstruction of simulated humans 
... then its learning algorithm may potentially construct simulated humans.


However, it is not at all clear that, in order to do well on an IQ test, 
AIXI would need to be given enough background data or experiential 
feedback to **enable** accurate simulation of humans


It's not right to say AIXI has a homunculus on call and ready to go 
when needed. 

Rather, it's right to say AIXI has the capability to synthesize an 
homunculus if it is given adequate data to infer the properties of one, 
and judges this the best way to approach the problem at hand.


-- Ben G


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel




AIXI is valueless.

Well, I agree that AIXI provides zero useful practical guidance to those 
of us

working on practical AGI systems.

However, as I clarified in a prior longer post, saying that mathematics 
is valueless
is always a risky proposition.  Statements of this nature have been 
proved wrong
plenty of times in the past, in spite of their apparent sensibleness at 
the time of

utterance...

But I think we have all made our views on this topic rather clear, at 
this point ;-)


Time to agree to disagree and move on...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel




Richard, I long ago proposed a working definition of intelligence as 
Achieving complex goals in complex environments.  I then went 
through a bunch of trouble to precisely define all the component 
terms of that definition; you can consult the Appendix to my 2006 
book The Hidden Pattern...  Shane Legg and Marcus Hutter 
have proposed a related definition of intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, 
where G, W and A are any mathematical entities whatsoever.


That would make any function that maps X [cross] Y into Z an 
intelligence.


Such a definition would be pointless.  The question is *why* would it 
be pointless?  What criteria are applied, in order to determine 
whether the definition has something to do with the thing that in everyday 
life we call intelligence?


The difficulty in comparing my definition against reality is that my 
definition defines intelligence relative to a complexity measure.


For this reason, it is fundamentally a subjective definition of 
intelligence, except in the unrealistic case where degree of complexity 
tends to infinity (in which case all reasonably general complexity 
measures become equivalent, due to bisimulation of Turing machines).


To qualitatively compare my definition to the everyday life definition 
of intelligence, we can check its consistency with our everyday life 
definition of complexity.   Informally, at least, my definition seems 
to check out to me: intelligence according to an IQ test does seem to 
have something to do with the ability to achieve complex goals; and, the 
reason we think IQ tests mean anything is that we think the ability to 
achieve complex goals in the test-context will correlate with the 
ability to achieve complex goals in various more complex environments 
(contexts).
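
To give a rough sense of the shape such a formalization takes -- and this is 
just a sketch, not the exact formulation from the book -- one can write 
something like

  I_c(S) = \sum_{g,e} c(g) \, c(e) \, P(\text{S achieves goal g in environment e})

where c is whatever complexity measure has been chosen, so that achieving 
harder goals in richer environments counts for more.  The Legg/Hutter measure 
mentioned above is likewise a complexity-weighted sum over environments, 
roughly

  \Upsilon(\pi) = \sum_{\mu} 2^{-K(\mu)} \, V_\mu^\pi

with K a Kolmogorov-style complexity and V the expected value the agent \pi 
earns in environment \mu (though it weights *simpler* environments more 
heavily rather than less).  The choice of c, or of the reference machine 
behind K, is exactly where the subjectivity enters -- except in the 
infinite-complexity limit, where by the usual invariance argument the choice 
stops mattering.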


Anyway, if I accept for instance **Richard Loosemore** as a measurer of 
the complexity of environments and goals, then relative to 
Richard-as-a-complexity-measure, I can assess the intelligence of 
various entities, using my definition


In practice, in building a system like Novamente, I'm relying on modern 
human culture's consensus complexity measure and trying to make a 
system that, according to this measure, can achieve a diverse variety of 
complex goals in complex situations...


P.S.  Quick sanity check:  you know the last comment in the quote you 
gave (about looking in the dictionary) was Matt's, not mine, right?




Yes...

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Vinge Goerzel = Uplift Academy's Good Ancestor Principle Workshop 2007

2007-02-19 Thread Ben Goertzel

Joshua Fox wrote:

Any comments on this: http://news.com.com/2100-11395_3-6160372.html

Google has been mentioned in the context of  AGI, simply because they 
have money, parallel processing power, excellent people, an 
orientation towards technological innovation, and important narrow AI 
successes and research goals. Do Page's words mean that Google is 
seriously working towards AGI? If so, does anyone know the people 
involved? Do they have a chance and do they understand the need for 
Friendliness?


This topic has come up intermittently over the last few years...

Google can't be counted out, since they have a lot of $$ and machines 
and a lot of smart people.


However, no one has ever pointed out to me a single Google hire with a 
demonstrated history of serious thinking about AGI -- as opposed to 
statistical language processing, machine learning, etc.  

That doesn't mean they couldn't have some smart staff who shifted 
research interest to AGI after moving to Google, but it doesn't seem 
tremendously likely.


Please remember that the reward structure for technical staff within 
Google is as follows: Big bonuses and copious approval go to those who 
do cool stuff that actually gets incorporated in Google's customer 
offerings  I don't have the impression they are funding a lot of 
blue-sky AGI research outside the scope of text search, ad placement, 
and other things related to their biz model.


So, my opinion remains that: Google staff described as working on AI 
are almost surely working on clever variants of highly scalable 
statistical language processing.   So, if you believe that this kind of 
work is likely to lead to powerful AGI, then yeah, you should attach a 
fairly high probability to the outcome that Google will create AGI.  
Personally I think it's very unlikely (though not impossible) that AGI 
is going to emerge via this route.


Evidence arguing against this opinion is welcomed ;-)

-- Ben G



Also: Vinge's notes on his Long Now Talk, "What If the Singularity 
Does NOT Happen" are at 
http://www-rohan.sdsu.edu/faculty/vinge/longnow/index.htm 


I'm delighted to see counter-Singularity analysis from a respected 
Singularity thinker. This further reassurance that the flip-side 
is being considered deepens my beliefs in pro-Singularity arguments.


Joshua



This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-20 Thread Ben Goertzel

Yes, this is one of the things we are working towards with Novamente.
Unfortunately, meeting this low barrier based on a genuine AGI
architecture is a lot more work than doing so in a more bogus way
based on an architecture without growth potential...

ben

On 12/20/06, Joshua Fox [EMAIL PROTECTED] wrote:



Ben,

If I am beating a dead horse, please feel free to ignore this, but I'm
imagining a prototype that shows glimmerings of AGI. Such a system, though
not useful or commercially viable, would  sometimes act in interesting, even
creepy, ways. It might be inconsistent and buggy, and work in a limited
domain.

This sets a low barrier, since existing systems occasionally meet this
description. The key difference is that the hypothesized prototype would
have an AGI engine under it and would rapidly improve.

Joshua



 According the approach I have charted out (the only one I understand),
 the true path to AGI does not really involve commercially valuable
 intermediate stages.  This is for reasons similar to the reasons that
 babies are not very economically useful.

 .But my best guess is that this is an illusion.  IMO by
 far the best path to a true AGI is by building an artificial baby and
 educating it and incrementally improving it, and by its very nature
 this path does not lead to incremental commercially viable results.


 
 This list is sponsored by AGIRI: http://www.agiri.org/email

To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Storytelling, empathy and AI

2006-12-20 Thread Ben Goertzel

This post is a brief comment on PJ Manney's interesting essay,

http://www.pj-manney.com/empathy.html

Her point (among others) is that, in humans, storytelling is closely
tied with empathy, and is a way of building empathic feelings and
relationships.  Mirror neurons and other related mechanisms are
invoked.

I basically agree with all this.

However, I would add that among AI's with a nonhuman cognitive
architecture, this correlation need not be the case.  Humans are built
so that among humans storytelling helps build empathy.  OTOH, for an
AI storytelling might not increase empathy one whit.

It is interesting to think specifically about the architectural
requirements that having storytelling increase empathy may place on
an AI system.

For example, to encourage the storytelling/empathy connection to exist
in an AI system, one might want to give the system an explicit
cognitive process of hypothetically putting itself in someone else's
place.  So, when it hears a story about character X, it creates
internally a fabricated story in which it takes the place of character
X.  There is no reason to think this kind of strategy would come
naturally to an AI, particularly given its intrinsic dissimilarity to
humans.  But there is also no reason that kind of strategy couldn't be
forced, with the impact of causing the system to understand humans
better than it might otherwise.
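
As a toy illustration of the kind of explicit mechanism I have in mind -- the
Python below is purely illustrative, with an invented story representation,
not a proposal for a real architecture:

def retell_in_own_place(story, character, self_name="SELF"):
    # Fabricate a version of the story in which the system itself
    # takes the place of the given character.
    fabricated = []
    for event in story:
        agent = self_name if event["agent"] == character else event["agent"]
        fabricated.append({"agent": agent, "action": event["action"]})
    return fabricated

story = [
    {"agent": "X", "action": "enters the race"},
    {"agent": "X", "action": "loses badly"},
    {"agent": "crowd", "action": "laughs"},
]
print(retell_in_own_place(story, "X"))

Whatever reactions the system has to the fabricated SELF-version of the events
can then be attached back to character X -- a forced, mechanical stand-in for
the mirroring that humans get for free.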

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: [singularity] Ten years to the Singularity ??

2006-12-15 Thread Ben Goertzel

Well, the requirements to **design** an AGI on the high level are much
steeper than the requirements to contribute (as part of a team) to the
**implementation** (and working out of design details) of AGI.

I dare say that anyone with a good knowledge of C++, Linux, and
undergraduate computer science -- and who has done a decent amount of
reading in cognitive science -- has the background to contribute to an
AGI project such as Novamente.

Perhaps the Novamente project is now at the stage where it could
benefit from 3-4 junior AI software developers.  But even if so, the
problem still exists of finding say $100K to pay these folks for a
year.  Still, this is not so much funding to find, and it's an
interesting possible direction to take.  So far  I have been skeptical
of the ability of more junior folks to really contribute, but I
think the project may be at a level of maturity now where this may be
sensible...

Something for me to think about during the holidays...

-- Ben

On 12/15/06, Hank Conn [EMAIL PROTECTED] wrote:

I'm also surprised there aren't more programmers or AGI enthusiasts who
aren't willing to work for beans to further this goal.  We're just two
students in Arizona, but we'd both gladly give up our current lives to work
for 15-20G's a year and pull 80 hour weeks eating this stuff up.  Having a
family is a valid excuse, but there are others out there who aren't tied
down.  We may not have PhD's, but we learn quickly.

I know a lot of people in this position (myself included)... although I
think the problem is that creating AGI requires you to have a lot of
background knowledge and experience to be able design and solve problems on
that level (way more than I have probably).

-hank


On 12/12/06, Josh Treadwell [EMAIL PROTECTED] wrote:

 What kind of numbers are we talking here to fund a single AGI project like
Novamente?  If I could, I'd instantly dedicate all my time and resources to
developing AI, but because most of my knowledge is auto didactic, I don't
get considered for any jobs.  So for now, I'm stuck in the drudgery of
working 60 hours a week doing IT, while struggling to complete and pay for
college.  As soon as I get out of school I'll have to start paying off
student loans, which won't be feasible in an AGI position (due to lack of
adequate funding).

 Thus, a friend of mine and I have decided to take the lower road and start
building lame websites (myspace profile template pages, ggle.com like
pages, other lame ad-words pages) in order to (a) quit our jobs, and (b)
fund our own or others research.  It boggles my mind that no one has become
financially successful and decided to throw a significant sum of money at
Novamente and the like.  For the love of Pete, sacrificing a single
Budweiser Superbowl commercial could fund years of AGI research.  I'm also
surprised there aren't more programmers or AGI enthusiasts who aren't
willing to work for beans to further this goal.  We're just two students in
Arizona, but we'd both gladly give up our current lives to work for 15-20G's
a year and pull 80 hour weeks eating this stuff up.  Having a family is
a valid excuse, but there are others out there who aren't tied down.  We may
not have PhD's, but we learn quickly.


 BTW Ben, for the love of God, can you please tell me when your AGI book is
coming out?  It's been in my Amazon shopping cart for 6 months now!  How
about I just pay you via paypal, and you send me a PDF?


 Josh Treadwell
 [EMAIL PROTECTED]
 
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?list_id=11983


 

 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-12 Thread Ben Goertzel

Hi,


You mention intermediate steps to AI, but the question is whether these
are narrow-AI applications (the bane of AGI projects) or some sort of
(incomplete) AGI.


According the approach I have charted out (the only one I understand),
the true path to AGI does not really involve commercially valuable
intermediate stages.  This is for reasons similar to the reasons that
babies are not very economically useful.

So, yeah, the only way I see to use commercial AI to fund AGI is to
build narrow-AI projects and sell them, and do a combination of

a) using the profits to fund AGI
b) using common software components between the narrow-AI and AGI systems,
so the narrow-AI work can help the AGI directly to some extent

Of course, if you believe (as e.g. the Google founders do) that Web
search can be a path to AGI, then you have an easier time of it,
because there is commercial work that appears to be on the direct path
to true AGI.  But my best guess is that this is an illusion.  IMO by
far the best path to a true AGI is by building an artificial baby and
educating it and incrementally improving it, and by its very nature
this path does not lead to incremental commercially viable results.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: [singularity] Ten years to the Singularity ??

2006-12-12 Thread Ben Goertzel

 BTW Ben, for the love of God, can you please tell me when your AGI book is
coming out?  It's been in my Amazon shopping cart for 6 months now!


The publisher finally mailed me a copy of the book last week!

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

Hi,

For anyone who is curious about the talk Ten Years to the Singularity
(if we Really Really Try) that I gave at Transvision 2006 last
summer, I have finally gotten around to putting the text of the speech
online:

http://www.goertzel.org/papers/tenyears.htm

The video presentation has been online for a while

video.google.com/videoplay?docid=1615014803486086198

(alas, the talking is a bit slow in that one, but that's because the
audience was in Finland and mostly spoke English as a second
language.)  But the text may be preferable to those who, like me, hate
watching long videos of people blabbering ;-)

Questions, comments, arguments and insults (preferably clever ones) welcome...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

Hi Joshua,

Thanks for the comments

Indeed, the creation of a thinking machine is not a typical VC type
project.  I know a few VC's personally and am well aware of their way
of thinking and the way thir businesses operate.  There is a lot of
technology risk in the creation of an AGI, as compared to the sorts
of projects that VC's are typical interested in funding today.  There
is just no getting around this fact.  From a typical VC perspective,
building a thinking machine is a project with too much risk and too
much schedule uncertainty in spite of the obviously huge payoff upon
success.

Of course, it's always possible a rule-breaking VC could come along
with an interest in AGI.  VC's have funded nanotech projects with a
10+ year timescale to product, for example.

Currently our fundraising focus is on:

a) transhumanist angel investors interested in funding the creation of true AGI

b) seeking VC money with a view toward funding the rapid construction
and monetization of software products that are
-- based on components of our AGI codebase
-- incremental steps toward AGI.

With regard to b, we are currently working with a business consultant
to formulate a professional investor toolkit to present to
interested VC's.

Unfortunately, US government grant funding for out-of-the-mainstream
AGI projects is very hard to come by these days.  OTOH, the Chinese
government has expressed some interest in Novamente, but that funding
source has some serious issues involved with it, needless to say...

-- Ben G


On 12/11/06, Joshua Fox [EMAIL PROTECTED] wrote:


Ben,

I saw the video.  It's wonderful to see this direct aim at the goal of the
positive Singularity.

If I could comment from the perspective of the software industry, though
without expertise in the problem space, I'd say that there are some phrases
in there which would make me, were I a VC, suspicious. (Of course VC's
aren't the direct audience, but ultimately someone has to provide the
funding you allude to.)

When a visionary says that he requires more funding and ten years, this
often indicates an unfocused project that will never get on-track. In
software projects it is essential to aim for real results, including a beta
within a year and multiple added-value-providing versions within
approximately 3 years. I think that this is not just investor impatience --
experience shows that software projects planned for a much longer schedule
tend to get off-focus.

I know that you already realize this, and that you do have the focus; you
mention your plans, which I assume include meaningful intermediate
achievements in this incredibly challenging and extraordinary task, but this
is the impression which comes across in the talk.

Yours,

Joshua



2006/12/11, Ben Goertzel [EMAIL PROTECTED]:

 Hi,

 For anyone who is curious about the talk Ten Years to the Singularity
 (if we Really Really Try) that I gave at Transvision 2006 last
 summer, I have finally gotten around to putting the text of the speech
 online:

 http://www.goertzel.org/papers/tenyears.htm

 The video presentation has been online for a while

 video.google.com/videoplay?docid=1615014803486086198

 (alas, the talking is a bit slow in that one, but that's because the
 audience was in Finland and mostly spoke English as a second
 language.)  But the text may be preferable to those who, like me, hate
 watching long videos of people blabbering ;-)

 Questions, comments, arguments and insults (preferably clever ones)
welcome...

 -- Ben

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?list_id=11983


 
 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

My main reason for resisting the urge to open-source Novamente is AGI
safety concerns.

At the moment Novamente is no danger to anyone, but once it gets more
advanced, I worry about irresponsible people forking the codebase
privately and creating an AGI customized for malicious purposes...

This is an issue I'm still thinking over, but anyway, that is my  main
reason for not having gone the open-source route up to this point...

-- Ben

On 12/11/06, Bo Morgan [EMAIL PROTECTED] wrote:


Ben,

My A.I. group of friends (was: CommonSense Computing Group, and is now
more scattered) has been trying to do an open-source development for a set
of programs that are working toward human-scale intelligence.  For
example, Hugo Liu's commonsense reasoning toolkit, ConceptNet, was ported
from Python to many other more efficient versions by the internet
community at large (and used in many other research projects in our lab
and around the world).

Have you thought about releasing your A.G.I. codebase that you mentioned
to the general public so that it can be developed by everyone?  I, for
one, would be interested in downloading it and trying it out.

I realize that research software is often not documented or easily
digestible, but it seems like one of the most efficient ways to attack
the software development problem.

Bo

On Mon, 11 Dec 2006, Ben Goertzel wrote:

) Hi Joshua,
)
) Thanks for the comments
)
) Indeed, the creation of a thinking machine is not a typical VC type
) project.  I know a few VC's personally and am well aware of their way
) of thinking and the way their businesses operate.  There is a lot of
) technology risk in the creation of an AGI, as compared to the sorts
) of projects that VC's are typically interested in funding today.  There
) is just no getting around this fact.  From a typical VC perspective,
) building a thinking machine is a project with too much risk and too
) much schedule uncertainty in spite of the obviously huge payoff upon
) success.
)
) Of course, it's always possible a rule-breaking VC could come along
) with an interest in AGI.  VC's have funded nanotech projects with a
) 10+ year timescale to product, for example.
)
) Currently our fundraising focus is on:
)
) a) transhumanist angel investors interested in funding the creation of true
) AGI
)
) b) seeking VC money with a view toward funding the rapid construction
) and monetization of software products that are
) -- based on components of our AGI codebase
) -- incremental steps toward AGI.
)
) With regard to b, we are currently working with a business consultant
) to formulate a professional investor toolkit to present to
) interested VC's.
)
) Unfortunately, US government grant funding for out-of-the-mainstream
) AGI projects is very hard to come by these days.  OTOH, the Chinese
) government has expressed some interest in Novamente, but that funding
) source has some serious issues involved with it, needless to say...
)
) -- Ben G
)
)
) On 12/11/06, Joshua Fox [EMAIL PROTECTED] wrote:
) 
)  Ben,
) 
)  I saw the video.  It's wonderful to see this direct aim at the goal of the
)  positive Singularity.
) 
)  If I could comment from the perspective of the software industry, though
)  without expertise in the problem space, I'd say that there are some phrases
)  in there which would make me, were I a VC, suspicious. (Of course VC's
)  aren't the direct audience, but ultimately someone has to provide the
)  funding you allude to.)
) 
)  When a visionary says that he requires more funding and ten years, this
)  often indicates an unfocused project that will never get on-track. In
)  software projects it is essential to aim for real results, including a beta
)  within a year and multiple added-value-providing versions within
)  approximately 3 years. I think that this is not just investor impatience --
)  experience shows that software projects planned for a much longer schedule
)  tend to get off-focus.
) 
)  I know that you already realize this, and that you do have the focus; you
)  mention your plans, which I assume include meaningful intermediate
)  achievements in this incredibly challenging and extraordinary task, but this
)  is the impression which comes across in the talk.
) 
)  Yours,
) 
)  Joshua
) 
) 
) 
)  2006/12/11, Ben Goertzel [EMAIL PROTECTED]:
)  
)   Hi,
)  
)   For anyone who is curious about the talk Ten Years to the Singularity
)   (if we Really Really Try) that I gave at Transvision 2006 last
)   summer, I have finally gotten around to putting the text of the speech
)   online:
)  
)   http://www.goertzel.org/papers/tenyears.htm
)  
)   The video presentation has been online for a while
)  
)   video.google.com/videoplay?docid=1615014803486086198
)  
)   (alas, the talking is a bit slow in that one, but that's because the
)   audience was in Finland and mostly spoke English as a second
)   language.)  But the text may be preferable to those who, like me, hate
)   watching long videos of people blabbering

Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

The exponential growth pattern holds regardless of whether you
normalize by global population size or not...

-- Ben

On 12/11/06, Chuck Esterbrook [EMAIL PROTECTED] wrote:

Regarding de Garis' graph of the number of people who've died in
different wars throughout history, are the numbers raw or divided by
the population size?

-Chuck

On 12/11/06, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hi,

 For anyone who is curious about the talk Ten Years to the Singularity
 (if we Really Really Try) that I gave at Transvision 2006 last
 summer, I have finally gotten around to putting the text of the speech
 online:

 http://www.goertzel.org/papers/tenyears.htm

 The video presentation has been online for a while

 video.google.com/videoplay?docid=1615014803486086198

 (alas, the talking is a bit slow in that one, but that's because the
 audience was in Finland and mostly spoke English as a second
 language.)  But the text may be preferable to those who, like me, hate
 watching long videos of people blabbering ;-)

 Questions, comments, arguments and insults (preferably clever ones) welcome...

 -- Ben

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?list_id=11983


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Goertzel meets Sirius

2006-10-31 Thread Ben Goertzel

Me, interviewed by R.U. Sirius, on AGI, the Singularity, philosophy of
mind/emotion/immortality and so forth:

http://mondoglobo.net/neofiles/?p=78

Audio only...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Ben Goertzel

Hi Richard,

Let me go back to start of this dialogue...

Ben Goertzel wrote:

Loosemore wrote:

 The motivational system of some types of AI (the types you would
 classify as tainted by complexity) can be made so reliable that the
 likelihood of them becoming unfriendly would be similar to the
 likelihood of the molecules of an Ideal Gas suddenly deciding to split
 into two groups and head for opposite ends of their container.


Wow!  This is a very strong hypothesis  I really doubt this
kind of certainty is possible for any AI with radically increasing
intelligence ... let alone a complex-system-type AI with highly
indeterminate internals...

I don't expect you to have a proof for this assertion, but do you have
an argument at all?


Your subsequent responses have shown that you do have an argument, but
not anything close to a proof.

And, your argument has not convinced me, so far.  Parts of it seem
vague to me, but based on my limited understanding of your argument, I
am far from convinced that AI systems of the type you describe, under
conditions of radically improving intelligence, can be made so
reliable that the likelihood of them becoming unfriendly would be
similar to the likelihood of the molecules of an Ideal Gas suddenly
deciding to split into two groups and head for opposite ends of their
container.

At this point, my judgment is that carrying on this dialogue further
is not the best expenditure of my time.  Your emails are long and
complex mixtures of vague and precise statements, and it takes a long
time for me to read them and respond to them with even a moderate
level of care.

I remain interested in your ideas and if you write a paper or book on
your ideas I will read it as my schedule permits.  But I will now opt
out of this email thread.

Thanks,
Ben


On 10/30/06, Richard Loosemore [EMAIL PROTECTED] wrote:


Ben,

I guess the issue I have with your critique is that you say that I have
given no details, no rigorous argument, just handwaving, etc.

But you are being contradictory:  on the one hand you say that the
proposal is vague/underspecified/does not give any arguments  but
then having said that, you go on to make specific criticisms and say
that it is wrong on this or that point.

I don't think you can have it both ways.  Either you don't see an
argument, and rest your case, or you do see an argument and want to
critique it.  You are trying to do both:  you repeatedly make broad
accusations about the quality of the proposal (some very hand-wavy,
intuitive suggestions, you have not given any sort of rigorous
argument, ... your intuitive suggestions..., you did not give any
details as to why you think your proposal will 'work', etc. etc.), but
then go on to make specific points about what is wrong with it.

Now, if the specific points you make were valid criticisms, I could
perhaps overlook the inconsistency and just address the criticisms.  But
that is exactly what I just did, and your specific criticisms, as I
explained in the last message, were mostly about issues that had nothing
to do with the general class of architectures I proposed, but only with
weird cases or weird issues that had no bearing on my case.

Since you just dropped most of those issues (except one, which I will
address in a moment), I must assume that you accept that I have given a
good reply to each of them.  But instead of conceding that the argument
I gave must therefore have some merit, you repeat -- even more
insistently than before -- that there is nothing in the argument, that
it is all just vague handwaving etc.

No fair!

This kind of response:

   -  Your argument is either too vague or I don't understand it.

Would be fine, and I would just try to clarify it in the future.

But this response:

   -  This is all just handwaving, with no details and no argument.
   -  It is also a wrong argument, for these reasons:
   -  [Reasons that are mostly just handwaving or irrelevant].

Is not so good.

*

I will say something about the specific point you make about my claim
that as time goes on the system will check new ideas against previous
ones to make sure that new ones are consistent with ALL the old ones, so
therefore it will become more and more stable.

What you have raised is a minor technical issue, together with some
confusion about what exactly I meant:

The ideas being checked against all previous ideas are *not* the
incoming general learned concepts (cup, salt, cricket, democracy,
sneezes. etc.) but the concepts related to planned actions and the
system's base of moral/ethical/motivational concerns.  Broadly speaking,
it is when there is a new perhaps I should do this ... idea that the
comparison starts.  I did actually say this, but it was a little
obscurely worded.

Now, when I said checked for consistency against all previous ideas I
was speaking rather loosely (my bad).  Obviously I would not do this by
an exhaustive comparison [please:  I don't

Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-29 Thread Ben Goertzel

Hi,


There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it:  I am proposing a general
*class* of architectures for an AI-with-motivational-system.  I am not
saying that this is a specific instance (with all the details nailed
down) of that architecture, but an entire class. an approach.

However, as I explain in detail below, most of your criticisms are that
there MIGHT be instances of that architecture that do not work.


No.   I don't see why there will be any instances of your architecture
that do work (in the sense of providing guaranteeable Friendliness
under conditions of radical, intelligence-increasing
self-modification).

And you have not given any sort of rigorous argument that such
instances will exist

Just some very hand-wavy, intuitive suggestions, centering on the
notion that (to paraphrase) because there are a lot of constraints, a
miracle happens  ;-)

I don't find your intuitive suggestions foolish or anything, just
highly sketchy and unconvincing.

I would say the same about Eliezer's attempt to make a Friendly AI
architecture in his old, now-repudiated-by-him essay Creating a
Friendly AI.  A lot in CFAI seemed plausible to me, and the intuitive
arguments were more fully fleshed out than yours in your email
(naturally, because it was an article, not an email) ... but in the
end I felt unconvinced, and Eliezer eventually came to agree with me
(though not on the best approach to fixing the problems)...


  In a radically self-improving AGI built according to your
  architecture, the set of constraints would constantly be increasing in
  number and complexity ... in a pattern based on stimuli from the
  environment as well as internal stimuli ... and it seems to me you
  have no way to guarantee based on the smaller **initial** set of
  constraints, that the eventual larger set of constraints is going to
  preserve Friendliness or any other criterion.

On the contrary, this is a system that grows by adding new ideas whose
motivational status must be consistent with ALL of the previous ones, and
the longer the system is allowed to develop, the deeper the new ideas
are constrained by the sum total of what has gone before.


This does not sound realistic.  Within realistic computational
constraints, I don't see how an AI system is going to verify that each
of its new ideas is consistent with all of its previous ideas.

This is a specific issue that has required attention within the
Novamente system.  In Novamente, each new idea is specifically NOT
required to be verified for consistency against all previous ideas
existing in the system, because this would make the process of
knowledge acquisition computationally intractable.  Rather, it is
checked for consistency against those other pieces of knowledge with
which it directly interacts.  If an inconsistency is noticed, in
real-time, during the course of thought, then it is resolved
(sometimes by a biased random decision, if there is not enough
evidence to choose between two inconsistent alternatives; or
sometimes, if the matter is important enough, by explicitly
maintaining two inconsistent perspectives in the system, with separate
labels, and an instruction to pay attention to resolving the
inconsistency as more evidence comes in.)

The kind of distributed system you are describing seems NOT to solve
the computational problem of verifying the consistency of each new
knowledge item with each other knowledge item.



Thus:  if the system has grown up and acquired a huge number of examples
and ideas about what constitutes good behavior according to its internal
system of values, then any new ideas about new values must, because of
the way the system is designed, prove themselves by being compared
against all of the old ones.


If each idea must be compared against all other ideas, then cognition
has order n^2 where n is the number of ideas.  This is not workable.
Some heuristic shortcuts must be used to decrease the number of
comparisons, and such heuristics introduce the possibility of error...
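
To make the contrast concrete, here is a toy Python sketch of
locality-restricted checking -- the names and the "interacts" test are
invented for the example; this is not Novamente code:

class KnowledgeBase:
    def __init__(self):
        self.items = []   # all previously accepted knowledge items

    def interacts(self, a, b):
        # Stand-in for "directly interacts with": here, sharing any concept.
        return not a["concepts"].isdisjoint(b["concepts"])

    def add(self, new_item, consistent):
        # Checking against every old item would make adding n items cost ~n^2.
        # Instead, check only the items the new one directly interacts with.
        neighbors = [x for x in self.items if self.interacts(new_item, x)]
        conflicts = [x for x in neighbors if not consistent(new_item, x)]
        if conflicts:
            # A real system might resolve this by a biased random choice, or
            # keep both views under separate labels until more evidence arrives.
            new_item["conflicts"] = conflicts
        self.items.append(new_item)
        return conflicts

kb = KnowledgeBase()
kb.add({"concepts": {"cup", "liquid"}}, consistent=lambda a, b: True)
kb.add({"concepts": {"cup", "solid"}}, consistent=lambda a, b: False)

The point of the toy is just the trade-off: the check stays tractable because
it is local, but an inconsistency that never surfaces through a direct
interaction can slip through -- which is the source of error I mentioned.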


And I said ridiculously small chance advisedly:  if 10,000 previous
constraints apply to each new motivational idea, and if 9,900 of them
say 'Hey, this is inconsistent with what I think is a good thing to do',
then it doesn't have a snowball's chance in hell of getting accepted.
THIS is the deep potential well I keep referring to.


The problem, as I said, is posing a set of constraints that is both
loose enough to allow innovative new behaviors, and tight enough to
prevent the wrong behaviors...


I maintain that we can, during early experimental work, understand the
structure of the motivational system well enough to get it up to a
threshold of acceptably friendly behavior, and that beyond that point
its stability will be self-reinforcing, for the above reasons.


Well, I hope so ;-)

I don't rule out the possibility, but I don't feel you've argued for
it convincingly, 

[singularity] Fwd: After Life by Simon Funk

2006-10-29 Thread Ben Goertzel

FYI

-- Forwarded message --
From: Eliezer S. Yudkowsky [EMAIL PROTECTED]
Date: Oct 30, 2006 12:14 AM
Subject: After Life by Simon Funk
To: [EMAIL PROTECTED]


http://interstice.com/~simon/AfterLife/index.html

An online novella, with hardcopy purchaseable from Lulu.
Theme: Uploading.
Author: H/rationalist.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-28 Thread Ben Goertzel

Hi,


Do most in the field believe that only a war can advance technology to
the point of singularity-level events?
Any opinions would be helpful.


My view is that for technologies involving large investment in
manufacturing infrastructure, the US military is one very likely
source of funds.  But not the only one.  For instance, suppose that
computer manufacturers decide they need powerful nanotech in order to
build better and better processors: that would be a convincing
nonmilitary source for massive nanotech R&D funds.

OTOH for technologies like AGI where the main need is innovation
rather than expensive infrastructure, I think a key role for the
military is less likely.  I would expect the US military to be among
the leaders in robotics, because robotics is
costly-infrastructure-centric.  But not necessarily in robot
*cognition* (as opposed to hardware) because cognition R&D is more
innovation-centric.

Not that I'm saying the US military is incapable of innovation, just
that it seems to be more reliable as a source of development $$ for
technologies not yet mature enough to attract commercial investment,
than as a source for innovative ideas.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [singularity] Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Ben Goertzel

Hi,


The problem, Ben, is that your response amounts to I don't see why that
would work, but without any details.


The problem, Richard, is that you did not give any details as to why
you think your proposal will work (in the sense of delivering a
system whose Friendliness can be very confidently known)


The central claim was that because the behavior of the system is
constrained by a large number of connections that go from motivational
mechanism to thinking mechanism, the latter is tightly governed.


But this claim, as stated, seems not to be true...  The existence of
a large number of constraints does not intrinsically imply tight
governance.

Of course, though, one can posit the existence of a large number of
constraints that DOES provide tight governance.

But the question then becomes whether this set of constraints can
simultaneously provide

a) the tightness of governance needed to guarantee Friendliness

b) the flexibility of governance needed to permit general, broad-based learning

You don't present any argument as to why this is going to be the case

I just wonder if, in this sort of architecture you describe, it is
really possible to guarantee Friendliness without hampering creative
learning.  Maybe it is possible, but you don't give an argument re
this point.

Actually, I suspect that it probably **is** possible to make a
reasonably benevolent AGI according to the sort of NN architecture you
suggest ... (as well as according to a bunch of other sorts of
architectures)

However, your whole argument seems to assume an AGI with a fixed level
of intelligence, rather than a constantly self-modifying and improving
AGI.  If an AGI is rapidly increasing its hardware infrastructure and
its intelligence, then I maintain that guaranteeing its Friendliness
is probably impossible ... and your argument gives no way of getting
around this.

In a radically self-improving AGI built according to your
architecture, the set of constraints would constantly be increasing in
number and complexity ... in a pattern based on stimuli from the
environment as well as internal stimuli ... and it seems to me you
have no way to guarantee based on the smaller **initial** set of
constraints, that the eventual larger set of constraints is going to
preserve Friendliness or any other criterion.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


[singularity] Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel

Richard,

As I see it, in this long message you have given a conceptual sketch
of an AI design including a motivational subsystem and a cognitive
subsystem, connected via a complex network of continually adapting
connections.  You've discussed the way such a system can potentially
build up a self-model involving empathy and a high level of awareness,
and stability, etc.

All this makes sense, conceptually; though as you point out, the story
you give is short on details, and I'm not so sure you really know how
to cash it out in terms of mechanisms that will actually function
with adequate intelligence ... but that's another story...

However, you have given no argument as to why the failure of this kind
of architecture to be stably Friendly is so ASTOUNDINGLY UNLIKELY as
you claimed in your original email.  You have just argued why it's
plausible to believe such a system would probably have a stable goal
system.  As I see it, you did not come close to proving your original
claim, that


  The motivational system of some types of AI (the types you would
  classify as tainted by complexity) can be made so reliable that the
  likelihood of them becoming unfriendly would be similar to the
  likelihood of the molecules of an Ideal Gas suddenly deciding to split
  into two groups and head for opposite ends of their container.


I don't understand how this extreme level of reliability would be
achieved, in your design.

Rather, it seems to me that the reliance on complex, self-organizing
dynamics makes some degree of indeterminacy in the system almost
inevitable, thus making the system less than absolutely reliable.
Illustrating this point, humans (who are complex dynamical systems) are
certainly NOT reliable in terms of Friendliness or any other subtle
psychological property...

-- Ben G







On 10/25/06, Richard Loosemore [EMAIL PROTECTED] wrote:

Ben Goertzel wrote:
 Loosemore wrote:
  The motivational system of some types of AI (the types you would
  classify as tainted by complexity) can be made so reliable that the
  likelihood of them becoming unfriendly would be similar to the
  likelihood of the molecules of an Ideal Gas suddenly deciding to split
  into two groups and head for opposite ends of their container.

 Wow!  This is a very strong hypothesis  I really doubt this
 kind of certainty is possible for any AI with radically increasing
 intelligence ... let alone a complex-system-type AI with highly
 indeterminate internals...

 I don't expect you to have a proof for this assertion, but do you have
 an argument at all?

 ben

Ben,

You are being overdramatic here.

But since you ask, here is the argument/proof.

As usual, I am required to compress complex ideas into a terse piece of
text, but for anyone who can follow and fill in the gaps for themselves,
here it is.  Oh, and btw, for anyone who is scarified by the
psychological-sounding terms, don't worry:  these could all be cashed
out in mechanism-specific detail if I could be bothered  --  it is just
that for a cognitive AI person like myself, it is such a PITB to have to
avoid such language just for the sake of political correctness.

You can build such a motivational system by controlling the system's
agenda by diffuse connections into the thinking component that controls
what it wants to do.

This set of diffuse connections will govern the ways that the system
gets 'pleasure' --  and what this means is, the thinking mechanism is
driven by dynamic relaxation, and the 'direction' of that relaxation
pressure is what defines the things that the system considers
'pleasurable'.  There would likely be several sources of pleasure, not
just one, but the overall idea is that the system always tries to
maximize this pleasure, but the only way it can do this is to engage in
activities or thoughts that stimulate the diffuse channels that go back
from the thinking component to the motivational system.

[Here is a crude analogy:  the thinking part of the system is like a
table containing a complicated model landscape, on which a ball bearing
is rolling around (the attentional focus).  The motivational system
controls this situation, not by micromanaging the movements of the ball
bearing, but by tilting the table in one direction or another.  Need to
pee right now?  That's because the table is tilted in the direction of
thoughts about water, and urinary relief.  You are being flooded with
images of the pleasure you would get if you went for a visit, and also
the thoughts and actions that normally give you pleasure are being
disrupted and associated with unpleasant thoughts of future increased
bladder-agony.  You get the idea.]

The diffuse channels are set up in such a way that they grow from seed
concepts that are the basis of later concept building.  One of those
seed concepts is social attachment, or empathy, or imprinting -- the
idea of wanting to be part of, and approved by, a 'family' group.  By
the time the system is mature, it has well

Re: Re: [singularity] Defining the Singularity

2006-10-26 Thread Ben Goertzel

HI,

About hybrid/integrative architecturs, Michael Wilson said:

I'd agree that it looks good when you first start attacking the problem.
Classic ANNs have some demonstrated competencies, classic symbolic
AI has some different demonstrated competencies, as do humans and
existing non-AI software. I was all for hybridising various forms of
connectionism, fuzzy symbolic logic, genetic algorithms and more at one
point. It was only later that I began to realise that most if not all of
those mechanisms were neither optimal, adequate or even all that useful.


My own experience was along similar lines.

The Webmind AI Engine that I worked on in the late 90's was a hybrid
architecture, that incorporated learning/reasoning/etc. agents based
on a variety of existing AI methods, moderately lightly customized.

On the other hand, the various cognitive mechanisms in Novamente
mostly had their roots in standard AI techniques, but have been
modified, customized and re-thought so far that they are really
fundamentally different things by now.

So I did find that even when a standard narrow-AI technique sounds on
the surface like it should be good at playing some role within an AGI
architecture, in practice it generally doesn't work out that way.
Often there is **something vaguely like** that narrow-AI technique
that makes sense in an AGI architecture, but the path from the
narrow-AI method to the AGI-friendly relative can require years of
theoretical and experimental effort.

An example is the path from evolutionary learning to probabilistic
evolutionary learning of the type we've designed for Novamente (which
is hinted at in Moshe Looks' thesis work at www.metacog.org; but even
that stuff is only halfway there to the kind of prob. ev. learning
needed for Novamente AGI purposes; it hits some of the key points but
leaves some important things out too.  But a key point is that by
using probabilistic methods effectively it opens the door for deep
integration of evolutionary learning and probabilistic reasoning,
which is not really possible with standard evolutionary techniques...)
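
To give a sense of what "probabilistic evolutionary learning" means in the
generic case -- the sketch below is just the standard
estimation-of-distribution pattern, not the specific algorithm in Moshe's
thesis or in Novamente:

import random

def eda(fitness, n_bits=20, pop_size=100, top_frac=0.3, generations=50):
    # p[i] = current estimated probability that bit i should be 1
    p = [0.5] * n_bits
    pop = []
    for _ in range(generations):
        # Sample a population from the probabilistic model...
        pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        # ...select the fittest fraction...
        pop.sort(key=fitness, reverse=True)
        elite = pop[:max(1, int(top_frac * pop_size))]
        # ...and refit the model to it, in place of crossover and mutation.
        p = [sum(ind[i] for ind in elite) / len(elite) for i in range(n_bits)]
    return max(pop, key=fitness)

print(eda(fitness=sum))   # toy objective: maximize the number of 1 bits

The relevant feature is that each generation is drawn from an explicit
probabilistic model fit to the good candidates; once that model is explicit
and probabilistic, it is something a probabilistic reasoning engine can in
principle read from and write to, which is the door-opening referred to above.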

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [singularity] Defining the Singularity

2006-10-24 Thread Ben Goertzel

Loosemore wrote:

 The motivational system of some types of AI (the types you would
 classify as tainted by complexity) can be made so reliable that the
 likelihood of them becoming unfriendly would be similar to the
 likelihood of the molecules of an Ideal Gas suddenly deciding to split
 into two groups and head for opposite ends of their container.


Wow!  This is a very strong hypothesis  I really doubt this
kind of certainty is possible for any AI with radically increasing
intelligence ... let alone a complex-system-type AI with highly
indeterminate internals...

I don't expect you to have a proof for this assertion, but do you have
an argument at all?

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel

 Right - for the record when I use words like loony in this sort of
context I'm not commenting on how someone might come across face to face
(never having met him), nor on what a psychiatrist's report would read (not
being a psychiatrist) - I'm using the word in exactly the same way that I
would call someone loony for believing the Book of Revelations will
literally come true at the end of the Mayan calendar, that they've been
called to make a spiritual rendezvous with a flying saucer following a
comet, etc.


Ah, OK.  In that sense, I believe at least 90% of the world's
population is loony, because they believe in God -- which is a far
more fanciful and less probable notion than De Garis's artilect war
;-O

In fact, the US is a nation ruled by a combination of loonies and
loony-impersonators, since there are no (or nearly no) nationally
elected officials who are admitted atheists...


 Do you think that De Garis's scenario of a massive violent conflict
 between pro and anti Singularity forces is not plausible?


 When was the last time you saw ten geeks marching in formation, let alone
ten million? Seriously, there's a better chance of massive violent conflict
between likers of chocolate versus strawberry ice cream.


**Seriously**, I definitely don't agree with your last statement ;-)

And, I don't think you're trying very hard to understand how such a
war could viably come about.

Suppose that some clever scientist figures out how to construct
molecular assemblers with the capability to enable the construction of
massively powerful weapons  ... as well as all sorts of other nice
technologies...

Suppose Country A decides to ban this nanotech because of the dangers;
but Country B chooses not to ban it, because of the benefits...

Now, suppose A decides the presence of this nanotech in B poses a
danger to A ...

So, A decides to bomb B's molecular-assembler-construction facilities...

But, unknown to A, perhaps B has already engineered some nasty pathogens...

Etc.

I'm not talking about a situation of Geeks versus Ludds carrying out
hand-to-hand combat in the streets... and neither is De Garis,
really...

Looking at the political situation in the world today, regarding
weapons of mass destruction and nuclear proliferation and so forth, I
don't find this kind of scenario all that farfetched --- if one
assumes a soft takeoff...

It is definitely not in the category of the chocolate versus
strawberry ice cream wars [or, as in Dr. Seuss's Butter Battle Book,
the war between the butter-side-up and butter-side-down
bread-butterers ... ]

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
 Though I have remained often publicly opposed to emergence and 'fuzzy'
design since first realising what the true consequences (of the heavily
enhanced-GA-based system I was working on at the time) were, as far as
I know I haven't made that particular mistake again.

Whereas, my view is that it is precisely the effective combination of probabilistic logic with complex systems science (including the notion of emergence) that will lead to, finally, a coherent and useful theoretical framework for designing and analyzing AGI systems...

I am also interested in creating a fundamental theoretical framework for AGI, but am pursuing this on the backburner in parallel with practical work on Novamente (even tho I personally find theoretical work more fun...). I find that in working on the theoretical framework it is very helpful to proceed in the context of a well-fleshed-out practical design...
-- Ben G

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
 Japan, despite a lot of interest back in 5th Generation computer days,
seems to have a difficult time innovating in advanced software. I am
not sure why.

I talked recently, at an academic conference, with the guy who directs robotics research labs within ATR, the primary Japanese government research lab.  He said that at the moment the powers that be there are not interested in funding cognitive robotics.

 So how do we get you and your team the necessary funding ASAP to
complete your work? I don't know the legal issues involved but a bunch
of very interested fans of Singularity could quite possibly put
together the $5 million or so I think you last said you needed pretty
quickly. This was brought up quite some time ago, by me at least, and
at the time I think I recall you saying that the right structure wasn't
in place to accept such funding. What is that structure and what is in
the way of setting it up?

Well, $5M would be great and is a fair estimate of what I think it would take to create Singularity based on further developing the current Novamente technology and design.
However, it is quite likely sensible to take an incremental approach. For instance, if we were able to raise $500K right now, then during the course of a year we could develop rather impressive demonstrations of Novamente proto-AGI technology, which would make raising the rest of the money easier.
The structure is indeed in place to accept such funding: Novamente LLC, which is a Delaware corporation that owns the IP of the Novamente AI Engine, and is currently operating largely as an AI consulting company (with a handful of staff in Brazil, as well as me here in Maryland and Bruce Klein in San Francisco and Ari Heljakka in Finland). However, Novamente LLC is currently paying 
2.5 programmers to work full-time toward AGI (not counting the portion of my time that is thus expended). But alas, this is not enough to get us there very fast...  If for some reason a major funding source preferred to fund an AGI project in a nonprofit context, we also have AGIRI, a Delaware nonprofit corporation. I am not committed to doing the Novamente AI Engine in a for-profit context, although that currently seems to me to be the most rational choice. My current feeling is that I would only be willing to take it nonprofit in the context of a very significant donation (say $3M+, not just $500K), because of a fear that follow-up significant nonprofit donations might be difficult to come by, but this attitude may be subject to change.
Bruce Klein has been leading a fundraising effort for nearly a year now, with relatively little success. To be honest, we are at the point of putting fundraising explicitly for building AGI on the backburner now, and focusing on raising funds for commercial projects that will pay for the development of various components of the AGI, and that, if they succeed big-time, will make us rich enough to pay for development of the AGI in a more direct and focused way. Which is rather frustrating, because if we had a decent amount of funding we could progress much more rapidly and directly toward the end goal of an ethically positive AGI system based on the Novamente architecture.
The main issue that potential investors/donors seem to have may be summarized in the phrase perceived technology risk. In other words: We have not been able to convince anyone with a lot of money that there is a reasonable chance we can actually succeed in creating an AGI in less than a couple decades. Potential investors/donors see that we are a team of very smart people with some very sophisticated and complex ideas about AGI, and a strong knowledge of the AI, computer and cognitive science fields -- but they cannot understand the details of the Novamente system (which is not surprising since new Novamente team members take at least 6 months to really get it), and thus cannot make any real assessment of our odds of success, so they just assume our odds of success are low.
As an example, in a conversation over dinner with a wealthy individual
and potential investor in LA two weeks ago, I was asked:

Him: But still, I can't understand why you haven't found investment money
yet. I mean, it should be obvious to potential investors that, if you
succeed, the potential rewards are incredible.

Me: Yes, that's obvious to everyone.

Him: So the problem is that no one believes you can really do it.

Me: Yes. Their estimates of our odds of success are apparently very low.

Him: Well, how can I know if you yourself really believe that you can create an AGI in a feasible amount of time? You claim you can create a human-level AI in four years... but how can I believe you? How do I know you're not just making that up in order to get research money to play with?

My reply was: Well look, there are two aspects. There's engineering time, and then teaching time. Engineering time is easier to estimate. I'm quite confident that if I could just re-orient the Novamente LLC staff currently working on consulting projects to the AGI project, then we could finish engineering the Novamente system in 2-3 years time. It's complex, and 

Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Hi,

 I know you must be frustrated with fund raising, but investor
reluctance is understandable from the perspective that for decades now
there has always been someone who said we're N years from full-blown
AI, and then N years passed with nothing but narrow AI progress. Of
course, someone will end up being right at some point.

Sure ... and most of the time, the narrow AI progress achieved via AI-directed funding has not even been significant, or useful.  However, it seems to me that the degree of skepticism about AGI goes beyond what is rational. I attribute this to an unconscious reluctance on the part of most humans to conceive that **we**, the mighty and glorious human rulers of the Earth, could really be superseded by mere software programs created by mere mortal humans. Even humans who are willing to accept this theoretically don't want to accept this pragmatically, as something that may occur in the near term.
After all, there seems to be a lot more cash around for nanotech than for AGI, and that is quite unproven technology also -- and technology that is a hell of a lot riskier and more expensive to develop than AGI software. It is not the case that investors are across the board equally skeptical of all unproven technologies -- AI seems to be viewed with an extra, and undeserved, degree of skepticism. 
 For the record, at the same event, Peter Voss of Adaptive AI
(http://www.adaptiveai.com/) stated his company would have AGI in 2
years. I *think* he qualified it as being at the level of a 10-year-old
child. Help me out on that, if you remember.

I could help you out, but I won't, because I believe Peter asked those of us at that meeting **not** to publicly discuss the details of his presentation there (although, frankly, the details were pretty scanty). If he wants to chip in some more info himself, he is welcome to...

Peter has been more successful than Novamente has at fundraising, during the last couple years. I take my hat off to him for his marketing prowess. I also note that he is a lot more experienced than me on the business marketing side ... Novamente LLC is chock full of brilliant techie futurists, but we are not sufficiently staffed in terms of marketing and sales wizardry.
I have my disagreements with Peter's approach to AGI, inasmuch as I understand it (I know the general gist of his architecture but not the nitty-gritty details). However, I don't want to get into that in detail on this list, for fear of disclosing aspects of Peter's work that he may not want disclosed. My basic issue is that I do not, based on what I know of it, see why his architecture will be capable of representing and learning complex knowledge. I am afraid his knowledge representation and learning mechanisms may be overfitted, to an extent, to early-stage infantile type learning tasks. Novamente is more complex than his system, and thus getting it to master infantile learning may be a little trickier than with his system (this is one thing we're working on now ... and of course I can't make any confident comparisons because I have never worked with Peter's system and also what I do know about it is quite out-of-date), but Novamente is designed from the start to be able to deal with complex reasoning such as mathematics and science, and so once the infantile stage is surpassed, I expect progress to be EXTREMELY rapid.
Having summarized very briefly some of my technical concerns about Peter's approach, I must add that I respect his general thinking about AI very much, and admire his enthusiasm and focus at pursuing the AGI goal. I hope his approach **does** succeed, as I think he would be a responsible and competent AGI daddy -- however, based on what I know, I do think that Novamente has far higher odds of success...
-- Ben

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] i'm new

2006-10-09 Thread Ben Goertzel

Hi,

On 10/9/06, Bruce LaDuke [EMAIL PROTECTED] wrote:

Just a sidebar on the whole 2012 topic.

It's quite possible that singularity is **already here** as new knowledge
and that the only barrier is social acceptance.  Radical new knowledge is
historically created long before it is accepted by society or
institutionalized, and that often outside the boundaries of the academic
establishment.


Singularity requires realized technology, not just understanding.

As it happens, I think I do understand how to create a superhuman AI,
but even if I'm right (which you are free to doubt, of course) this
knowledge in itself is just a potential rather than actual
Singularity

If the high likelihood of the coming of a Singularity were widely
accepted, then the expected time till the Singularity comes would
decrease a lot, because of increased financial and attentional
resources paid to Singularity-enabling technologies...

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-25 Thread Ben Goertzel

Peter Voss wrote:

I have a more fundamental question though: Why in particular would we want
to convince people that the Singularity is coming? I see many disadvantages
to widely promoting these ideas prematurely.


If one's plan is to launch a Singularity quickly, before anyone else
notices, then I feel that promoting these ideas is basically
irrelevant...  It is unlikely that promotion will lead to such rapid
spread of the concepts as to create significant risk of
Singularity-enabling technologies being made illegal in the near
term...

OTOH, if the Singularity launch is to happen a little more slowly,
then it will be of value if a larger number of intelligent and
open-minded people have more thoroughly thought through the
Singularity and related ideas.  These sorts of ideas take a while to
sink in; and I think that people who have had the idea of the
Singularity in their minds for a while will be better able to grapple
with the reality when it comes about...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


[singularity] Excerpt from a work in progress by Eliezer Yudkowsky

2006-09-15 Thread Ben Goertzel

Hi,

Eliezer asked me to forward this to the Singularity list... it is an
excerpt from a work-in-progress of his and is relevant to some current
discussions on the list.

-- Ben G

-- Forwarded message --
From: Eliezer S. Yudkowsky [EMAIL PROTECTED]
Date: Sep 15, 2006 3:43 PM
Subject: Please fwd to Singularity list
To: Ben Goertzel [EMAIL PROTECTED]


Ben, please forward this to your Singularity list.

** Excerpts from a work in progress follow. **

Imagine that I'm visiting a distant city, and a local friend volunteers
to drive me to the airport.  I don't know the neighborhood.  Each time
my friend approaches a street intersection, I don't know whether my
friend will turn left, turn right, or continue straight ahead.  I can't
predict my friend's move even as we approach each individual
intersection - let alone, predict the whole sequence of moves in advance.

Yet I can predict the result of my friend's unpredictable actions: we
will arrive at the airport.  Even if my friend's house were located
elsewhere in the city, so that my friend made a completely different
sequence of turns, I would just as confidently predict our arrival at
the airport.  I can predict this long in advance, before I even get into
the car.  My flight departs soon, and there's no time to waste; I
wouldn't get into the car in the first place, if I couldn't confidently
predict that the car would travel to the airport along an unpredictable
pathway.

You cannot build Deep Blue by programming in a good chess move for every
possible chess position.  First of all, it is impossible to build a
chess player this way, because you don't know exactly which positions it
will encounter.  You would have to record a specific move for zillions
of positions, more than you could consider in a lifetime with your slow
neurons.  And second, even if you did this, the resulting program would
not play chess any better than you do.

This holds true on any level where an answer has to meet a sufficiently
high standard.  If you want any answer better than you could come up
with yourself, you necessarily sacrifice your ability to predict the
exact answer in advance.

But you don't sacrifice your ability to predict *everything*.  As my
coworker, Marcello Herreshoff, says:  "We never run a program unless we
know something about the output and we don't know the output."  Deep
Blue's programmers didn't know which moves Deep Blue would make, but
they must have known something about Deep Blue's output which
distinguished that output from the output of a pseudo-random move
generator.  After all, it would have been much simpler to create a
pseudo-random move generator; but instead the programmers felt obligated
to carefully craft the complex program that is Deep Blue.  In both
cases, the programmers wouldn't know the move - so what was the key
difference?  What was the fact that the programmers knew about Deep
Blue's output, if they didn't know the output?

They didn't know for certain that Deep Blue would win, but they knew
that it would try; they knew how to describe the compact target region
into which Deep Blue was trying to steer the future, as a fact about its
source code.

It is not possible to prove strong, non-probabilistic theorems about the
external world, because the state of the external world is not fully
known.  Even if we could perfectly observe every atom, there's a little
thing called the problem of induction.  If every swan ever observed
has been white, it doesn't mean that tomorrow you won't see a black
swan.  Just because every physical interaction ever observed has obeyed
conservation of momentum, doesn't mean that tomorrow the rules won't
change.  It's never happened before, but to paraphrase Richard Feynman,
you have to go with what your experiments tell you.  If tomorrow your
experiments start telling you that apples fall up, then that's what you
have to believe.

So you can't build an AI by specifying the exact action - the particular
chess move, the precise motor output - in advance.  It also seems that
it would be impossible to prove any statement about the real-world
consequences of the AI's actions.  The real world is not knowably
knowable.  Even if we possessed a model that was, in fact, complete and
correct, we could never have absolute confidence in that model.  So what
could possibly be a provably Friendly AI?

You can try to prove a theorem along the lines of:  "Providing that the
transistors in this computer chip behave the way they're supposed to,
the AI that runs on this chip will always *try* to be Friendly."  You're
going to prove a statement about the search the AI carries out to find
its actions.  To prove this formally, you would have to precisely define
"try to be Friendly": the complete criterion that the AI uses to choose
among its actions - including how the AI learns a model of reality from
experience, how the AI identifies the goal-valent aspects of the reality
it learns to model, and how the AI chooses actions

Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel

Hi,


Just for kicks - let's assume that AIXItl yields 1% more intelligent
results when provided 10^6 times the computational resources, when
compared to another algorithm X. Let's further assume that today the
cost associated with X for reaching a benefit of 1 will be 1, compared
to a cost of 10^6 for a benefit of 1.01 when using AIXItl. To
simplify, I will further assume that the cost of computational
resources will continue to halve every 12 months. In this scenario it
will be computationally cheaper to apply AIXItl in less than 20 years.


But this is nowhere near reality -- the level of inefficiency of
AIXItl is such that it will never be usable within the physical
universe, unless current theories of physics are very badly wrong.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Is Friendly AI Bunk?

2006-09-11 Thread Ben Goertzel

Thanks Ben, Russel et al for being so patient with me ;-) To
summarize: AIXItl's inefficiencies are so large and the additional
benefit it provides is so small that it will likely never be a logical
choice over other more efficient, less optimal algorithms.

Stefan


The additional benefit it *would* provide would be large; but the
inefficiencies are so large that the theoretical benefits are
irrelevant...

Basically, what AIXItl does is, before each action it takes, search
the set of all possible computer programs of length < l and runtime
< t, and figure out which of these programs should be allowed to
control the action (based on prior history).   But there are very,
very many computer programs of length < l and runtime < t to search
through, so this is a totally infeasible way to ever do things in
practice.

The reason AIXItl can do anything that any other program can do, is
that it searches through all other programs (subject to the length and
runtime requirements).  But the reason it is so slow is that it is
continually searching through a humongous space of possible programs.
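
To give a feel for the scale of that outer loop, here is a toy sketch
in plain Python. It is emphatically not AIXItl itself: the
"interpreter" and scoring function below are placeholders invented for
the sketch, and the real construction runs candidate programs on a
universal machine and weights them much more carefully. The only point
being illustrated is how many candidates must be examined per action.

from itertools import product

def enumerate_programs(l):
    # All bit strings of length 1 .. l-1 stand in for "programs of length < l".
    for length in range(1, l):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def run_with_budget(program, history, t):
    # Placeholder semantics (invented for this sketch): each bit costs one
    # step to "execute", and the program's output is the parity of the bits
    # it managed to execute within the budget of t steps.
    steps = min(len(program), t)
    return program[:steps].count("1") % 2

def score(output, history):
    # Placeholder scoring (invented for this sketch): how often the
    # program's output matched the observed history.
    return sum(1 for h in history if h == output)

def choose_action(history, l=16, t=100):
    # The outer loop: examine every candidate program before acting.
    best_action, best_score, n = 0, -1, 0
    for prog in enumerate_programs(l):
        n += 1
        out = run_with_budget(prog, history, t)
        s = score(out, history)
        if s > best_score:
            best_score, best_action = s, out
    print("searched", n, "candidate programs to choose one action")
    return best_action

if __name__ == "__main__":
    choose_action(history=[0, 1, 1, 0, 1], l=16)

Even at l = 16 that is 2^16 - 2 = 65,534 candidate programs per action;
at l = 40 it is over a trillion, and genuinely interesting programs are
far longer than 40 bits. Hence "totally infeasible".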

Juergen Schmidhuber has tried to partially work around these problems
in his OOPS AI program, but the attempt has not been very
successful...

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]