Re: [singularity] Vista/AGI

2008-04-24 Thread Samantha Atkins

J. Andrew Rogers wrote:


On Apr 6, 2008, at 9:38 AM, Ben Goertzel wrote:
That's surely part of it ... but investors have put big $$ into much LESS
mature projects in areas such as nanotech and quantum computing.



This is because nanotech and quantum computing can be readily and 
easily packaged as straightforward physical machinery technology, 
which a lot of people can readily conceptualize even if they do not 
actually understand it.
AGI too will run on physical machinery.  I dare think I am smarter than 
the average bear, but quantum computing makes my head hurt.  From what I 
have read about the field, I doubt we are much closer to workable general 
quantum computing than we are to AGI.  AGI makes a lot more conceptual 
and somewhat detailed sense to me.   Nanotech itself has difficulty 
getting many takers for achieving full molecular nanotech.  Sometimes I 
have the paranoid idea that the difference is that things that are too 
disruptive have a MUCH harder time getting funding. 

AGI is not a physical touchable technology in the same sense (or even 
software sense), which is further aggravated by the many irrational 
memes of woo-ness that surround the idea of consciousness, 
intelligence, spirituality that the vast majority of investors 
uncritically subscribe to.
As investors generally seem a hard-headed lot about investment dollars, I 
would be surprised if this is a large factor.   I do think there is a 
yuk-factor, or a xenophobia of the utterly unknown, at work when 
considering funding of highly disruptive, utterly game-changing 
technology.   I have been in conferences of futurists, no less, where over 
70% of the audience raise their hands to say they would likely not avail 
themselves of immortality if it were immediately available!  The 
conservative preservation of the known goes a lot deeper than we credit.


- samantha

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604id_secret=101816851-9a120b
Powered by Listbox: http://www.listbox.com


Re: [singularity] Vista/AGI

2008-04-24 Thread Eric B. Ramsay
Samantha Atkins wrote: I have been in conferences of futurists no less where over 70% of the audience raise their hands to say they would likely not avail themselves of immortality if it were immediately available! The conservative preservation of the known goes a lot deeper than we credit.

That's quite a percentage. I wonder what the number would be for the public at large. Did anyone ask this group of futurists what their major objection to immortality is? Religious reasons?

Eric B. Ramsay




  

  


  

  




Re: [singularity] Vista/AGI

2008-04-14 Thread MI
On Sun, Apr 13, 2008 at 10:27 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hi,


Just my personal opinion...but it appears that the exponential technology
   growth chart, which is used in many of the briefings, does not include
   AI/AGI. It is processing centric.  When you include AI/AGI the exponential
   technology curve flattens out in the coming years (5-7) and becomes part of
   a normal S curve of development.  While computer power and processing will
   increase exponentially (as nanotechnology grows) the area of AI will need
   more time to develop.
  
I would be interested in your thoughts.

  I think this is because progress toward general AI has been difficult
  to quantify
  in the past, and looks to remain difficult to quantify into the future...

  I am uncertain as to the extent to which this problem can be worked around,
  though.

  Let me introduce an analogy problem

  Understanding the operation of the brain better and better is to
  scanning the brain with higher and higher spatiotemporal accuracy,
  as Creating more and more powerful AGI is to what?

  ;-)

  The point is that understanding the brain is also a nebulous and
  hard-to-quantify goal, but we make charts for it by treating brain
  scan accuracy as a more easily quantifiable proxy variable.  What's a
  comparable proxy variable for AGI?

  Suggestions welcome!

Being able to abstract and then implement only those components and
mechanisms relevant to intelligence from all the data these better
brain scans provide?

If intelligence can be abstracted into layers (analogous to network
layers), establishing a set of performance indicators at each layer
and then increasing the values corresponding to these indicators
might provide a better measure of AGI's progress. Using that
model, increments of progress might then be much easier to identify,
verify and communicate, even for the smallest increments.
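The layered-indicator idea above can be sketched as a toy scoring model. This is purely illustrative: the layer names, indicator names, and scores below are invented assumptions, not measurements of any real system.

```python
# Toy sketch of the layered-indicator idea: each layer of a hypothetical
# AGI system gets its own performance indicators (scored in [0, 1]), and
# overall progress is an aggregate over layers. All names and numbers
# here are invented for illustration.

def layer_progress(indicators):
    """Average the indicator scores for one layer."""
    return sum(indicators.values()) / len(indicators)

def overall_progress(layers):
    """Aggregate per-layer progress into a single [0, 1] figure."""
    scores = [layer_progress(ind) for ind in layers.values()]
    return sum(scores) / len(scores)

# Hypothetical layers and indicators (the analogy is to network layers,
# where each layer can be measured and improved somewhat independently).
layers = {
    "perception": {"object_recognition": 0.6, "melody_recognition": 0.3},
    "concepts":   {"invariant_representation": 0.2},
    "reasoning":  {"analogy": 0.1, "transfer_learning": 0.05},
}

print(overall_progress(layers))
```

Even a crude model like this would make increments of progress concrete: improving one indicator in one layer moves the aggregate by a verifiable amount.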

Slawek



Re: [singularity] Vista/AGI

2008-04-14 Thread Ben Goertzel
Brain-scan accuracy is  a very crude proxy for understanding of brain
function; yet a much better proxy than anything existing for the case
of AGI...

On Sun, Apr 13, 2008 at 11:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Ben Goertzel wrote:

  Hi,
 
 
   Just my personal opinion...but it appears that the exponential technology
   growth chart, which is used in many of the briefings, does not include
   AI/AGI. It is processing centric.  When you include AI/AGI the exponential
   technology curve flattens out in the coming years (5-7) and becomes part of
   a normal S curve of development.  While computer power and processing will
   increase exponentially (as nanotechnology grows) the area of AI will need
   more time to develop.
  
I would be interested in your thoughts.
  
 
  I think this is because progress toward general AI has been difficult
  to quantify
  in the past, and looks to remain difficult to quantify into the future...
 
  I am uncertain as to the extent to which this problem can be worked around,
  though.
 
  Let me introduce an analogy problem
 
  Understanding the operation of the brain better and better is to
  scanning the brain with higher and higher spatiotemporal accuracy,
  as Creating more and more powerful AGI is to what?
 
  ;-)
 
  The point is that understanding the brain is also a nebulous and
  hard-to-quantify goal, but we make charts for it by treating brain
  scan accuracy as a more easily quantifiable proxy variable.  What's a
  comparable proxy variable for AGI?
 
  Suggestions welcome!
 

  Sadly, the analogy is a wee bit broken.

  Brain scan accuracy as a measure of progress in understanding the operation
 of the brain is a measure that some cognitive neuroscientists may subscribe
 to, but the majority of cognitive scientists outside of that area consider
 this to be a completely spurious idea.

  Doug Hofstadter said this eloquently in I Am A Strange Loop:  getting a
 complete atom-scan in the vicinity of a windmill doesn't mean that you are
 making progress toward understanding why the windmill goes around. It just
 gives you a data analysis problem that will keep you busy until everyone in
 the Hot Place is eating ice cream.




  Richard Loosemore







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-14 Thread Charles D Hixson

MI wrote:

...
Being able to abstract and then implement only those components and
mechanisms relevant to intelligence from all the data these better
brain scans provide?

If intelligence can be abstracted into layers (analogous to network
layers), establishing a set of performance indicators at each layer
and then increasing the values corresponding to these indicators
might probably provide a better measure of AGI's progress. Using that
model, increments of progress might then be much easier to identify,
verify and communicate even for the smallest increments.

Slawek
  

Abstracting away the non-central-to-AI parts of the brain isn't necessary.

Try it this way (a possible, if not plausible path to AI).
1) Artificial knee/hip joints
2) Artificial corneas
3) Artificial retinas
4) Artificial cochlea
5) Artificial vertebrae
6) Nerve welds to rejoin severed spinal nerves
7) Artificial nerves
8) Artificial nerve welds to repair severed optic/aural nerves
9) Artificial visual or audio cortex
10) Repair of stroke damaged nerves
11) Replacement of damaged portions of the brain with artificial 
replacements (Hippocampus, etc.)

12) Repair of damaged brains in infants (birth defects)
13) continue on with gradually more significant replacements...at some 
point you'll hit an AGI.


P.S.:  I think this is a workable approach, but one that will 
materialize too slowly to dominate.  Still, we're already working on 
steps 2, 3, 4, & 5.  Possibly also 6.





Re: [singularity] Vista/AGI

2008-04-13 Thread Samantha Atkins

Ben Goertzel wrote:

Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.

My view is a little different.

If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundreds of programmers to implement it.  The most I
can imagine throwing at it would be a couple dozen, and I think 10-20 is
the right number.

So if the Novamente design is basically correct, it would take a team of
10-20 programmers a period of 3-10 years to get to human-level AGI.
  
So you are talking on the order of $9M - $30M.  

Sadly, we do not have 10-20 dedicated programmers working on Novamente
(or associated OpenCog) AGI right now, but rather fractions of various peoples'
time (as Novamente LLC is working mainly on various commercial projects
that pay our salaries).  So my point is not to make a projection regarding our
progress (that depends too much on funding levels), just to address this issue
of ideal team size that has come up yet again...

  
You know, I am getting pretty tired of hearing this poor mouth crap.   
This is not that huge a sum to raise or get financed.  Hell, there are 
some very futuristic rich geeks who could finance this single-handed and 
would not really care that much whether they could somehow monetize the 
result.   I don't believe for a minute that there is no way to do 
this.   So exactly why are you singing this sad song year after year?



Even if my timing estimates are optimistic and it were to take 15 years, even
so, a team of thousands isn't gonna help things any.

If I had a billion dollars and the passion to use it to advance AGI, I would
throw amounts between $1M and $50M at various specific projects, I
wouldn't try to make one monolithic project.

  
From what you said above, $50M will do the entire job.   If that is all 
that is standing between us and AGI, then surely we can get on with it in 
all haste.   If it is a great deal more than this relatively small 
amount of money, then let's move on to talk about that instead of whining 
about lack of coin.


- samantha



Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner
Samantha: From what you said above, $50M will do the entire job.   If that is 
all that is standing between us and AGI then surely we can get on with it in 
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it is 
also a framework to end discussions for the moment.


1)  Given our general ignorance, everyone is, strictly, entitled to their 
opinions about the future of AGI. Ben is entitled to his view that it will 
only take $50M or thereabouts.


BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing different? And until you've solved one, you can hardly make 
*reasonable* predictions about how long it will take to solve the rest - 
predictions that anyone, including yourself should take seriously- 
especially if you've got any sense, any awareness of AI's long, ridiculous 
and incorrigible record of crazy predictions here, (and that's by Minsky's & 
Simon's as well as lesser lights) - by people also making predictions 
without having solved any of AGI's problems. All investors beware. Massive 
health & wealth warnings.


MEANWHILE

3) Others - and I'm not the only one here - take a view more like: the human 
brain/body is the most awesomely complex machine in the known universe, the 
product of billions of years of evolution.  To emulate it, or parallel its 
powers, is going to take not just trillions but zillions of 
dollars - many times global output, many, many Microsofts. Now right now 
that's a reasonable POV too.


But until you've solved one, just a measly one of AGI's problems, there's 
not a lot of point in further discussion, is there? Nobody's really gaining 
from it, are they? It's just masturbation, isn't it? 





Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Mike Tintner wrote:
Samantha: From what you said above, $50M will do the entire job.   If that 
is all that is standing between us and AGI then surely we can get on with it in 
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it 
is also a framework to end discussions for the moment.


1)  Given our general ignorance, everyone is, strictly, entitled to 
their opinions about the future of AGI. Ben is entitled to his view that 
it will only take $50M or thereabouts.


BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing different? And until you've solved one, you can hardly 
make *reasonable* predictions about how long it will take to solve the 
rest - predictions that anyone, including yourself should take 
seriously- especially if you've got any sense, any awareness of AI's 
long, ridiculous and incorrigible record of crazy predictions here, (and 
that's by Minsky's & Simon's as well as lesser lights) - by people also 
making predictions without having solved any of AGI's problems. All 
investors beware. Massive health & wealth warnings.


MEANWHILE

3) Others - and I'm not the only one here - take a view more like: the 
human brain/body is the most awesomely complex machine in the known 
universe, the product of billions of years of evolution.  To emulate it, 
or parallel its powers, is going to take not just trillions 
but zillions of dollars - many times global output, many, 
many Microsofts. Now right now that's a reasonable POV too.


But until you've solved one, just a measly one of AGI's problems, 
there's not a lot of point in further discussion, is there? Nobody's 
really gaining from it, are they? It's just masturbation, isn't it?


Mike,

Your comments are irresponsible.  Many problems of AGI have been solved. 
 If you disagree with that, specify exactly what you mean by a problem 
of AGI, and let us list them.  I have discovered the complex systems 
problem:  this is a major breakthrough.  You cannot understand it, or 
why it is a major breakthrough, but that makes no odds.


Everything you say in this post is based on your own ignorance of what 
AGI actually is.  What you are really saying is Nobody has been able to 
make me understand what AGI has achieved, so AGI is useless.


Sorry, but your posts are sounding more and more like incoherent rants.



Richard Loosemore



Re: [singularity] Vista/AGI

2008-04-13 Thread Eric B. Ramsay
Mike: 
   
  I am a novice to this AGI business and so I am not being cute with the 
following question: What, in your opinion, would be the first AGI problem to 
tackle? Perhaps these various problems can't be priority-ordered but 
nonetheless, which problem stands out for you? Thanks.
   
  Eric B. Ramsay

Mike Tintner [EMAIL PROTECTED] wrote:
Samantha: From what you said above, $50M will do the entire job. If that is 
all that is standing between us and AGI then surely we can get on with it in 
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it is 
also a framework to end discussions for the moment.

1) Given our general ignorance, everyone is, strictly, entitled to their 
opinions about the future of AGI. Ben is entitled to his view that it will 
only take $50M or thereabouts.

BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing different? And until you've solved one, you can hardly make 
*reasonable* predictions about how long it will take to solve the rest - 
predictions that anyone, including yourself should take seriously- 
especially if you've got any sense, any awareness of AI's long, ridiculous 
and incorrigible record of crazy predictions here, (and that's by Minsky's & 
Simon's as well as lesser lights) - by people also making predictions 
without having solved any of AGI's problems. All investors beware. Massive 
health & wealth warnings.

MEANWHILE

3) Others - and I'm not the only one here - take a view more like: the human 
brain/body is the most awesomely complex machine in the known universe, the 
product of billions of years of evolution. To emulate it, or parallel its 
powers, is going to take not just trillions but zillions of 
dollars - many times global output, many, many Microsofts. Now right now 
that's a reasonable POV too.

But until you've solved one, just a measly one of AGI's problems, there's 
not a lot of point in further discussion, is there? Nobody's really gaining 
from it, are they? It's just masturbation, isn't it? 




Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner



Mike,

Your comments are irresponsible.  Many problems of AGI have been solved. 
If you disagree with that, specify exactly what you mean by a problem of 
AGI, and let us list them.


1. General Problem Solving and Learning (independently learning/solving 
problems in a new domain)

2. Conceptualisation [Invariant Representation] - forming a concept of Madonna 
which can embrace a rich variety of different faces/photos of her

3. Visual Object Recognition

4. Aural Object Recognition [dunno proper term here - being able to 
recognize the same melody played in any form]

5. Analogy

6. Metaphor

7. Creativity

8. Narrative Visualisation - being able to imagine and create a visual 
scenario (a movie)   [just made this problem up - but it's a good one]

[By all means let's identify some more unsolved problems BTW..]

I think Ben & I more or less agreed that if he had really solved 1) - if his 
pet could really independently learn to play hide-and-seek after having been 
taught to fetch, it would constitute a major breakthrough, worthy of 
announcement to the world. And you can be sure it would be provoking a great 
deal of discussion.


As for your discoveries, fine, have all the self-confidence you want, but 
they have had neither public recognition nor, as I understand, publication 
or identification. Nor do you have a working machine. And if you're going to 
claim anyone in AI, like Hofstadter, has solved 5 or 6...puh-lease.


I don't think any reasonable person in AI or AGI will claim any of these 
have been solved. They may want to claim their method has promise, but not 
that it has actually solved any of them.


Which of the above, or any problem of AGI, period, do you claim to have been 
solved?





Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Samantha,

  You know, I am getting pretty tired of hearing this poor mouth crap.   This
 is not that huge a sum to raise or get financed.  Hell, there are some very
 futuristic rich geeks who could finance this single-handed and would not
 really care that much whether they could somehow monetize the result.   I
 don't believe for a minute that there is no way to do this.   So exactly
 why are you singing this sad song year after year?
...
  From what you said above $50M will do the entire job.   If that is all that
 is standing between us and AGI then surely we can get on with it in all
 haste.   If it is a great deal more than this relatively small amount of
 money then lets move on to talk about that instead of whining about lack of
 coin.


This is what I thought in 2001, and what Bruce Klein thought when he started
working with me in 2005.

In brief, what we thought is something like:


OK, so  ...

On the one hand, we have an AGI design that seems to its sane PhD-scientist
creator to have serious potential of leading to human-level AGI.  We have
a team of professional AI scientists and software engineers who are
a) knowledgeable about it, b) eager to work on it, c) in agreement that
it has a strong chance of leading to human-level AGI, although with
varying opinions on whether the timeline is, say, 7, 10, 15 or 20 years.
Furthermore, the individuals involved are at least thoughtful about issues
of AGI ethics and the social implications of their work.   Carefully-detailed
arguments as to why it is believed the AGI design will work exist, but,
these are complex, and furthermore do not comprise any sort of irrefutable
proof.

On the other hand, we have a number of wealthy transhumanists who would
love to see a beneficial human-level AGI come about, and who could
donate or invest some $$ to this cause without serious risk to their own
financial stability should the AGI effort fail.

Not only that, but there are a couple related factors

a) early non-AGI versions of some of the components of said AGI design
are already being used to help make biological discoveries relevant to
life extension (as documented in refereed publications)

b) very clear plans exist, including discussions with many specific potential
customers, regarding how to make $$ from incremental products along the
way to the human-level AGI, if this is the pathway desired


So, we talked to a load of wealthy futurists and the upshot is that it's really
really hard to get these folks to believe you have a chance at achieving
human-level AGI.  These guys don't have the background to spend 6 months
carefully studying the technical documentation, so they make a gut decision,
which is always (so far) that gee, you're a really smart guy, and your team
is great, and you're doing cool stuff, but technology just isn't there yet.

Novamente has gotten small (but much valued)
investments from some visionary folks, and SIAI has
had the vision to hire 1.6 folks to work on OpenCog, which is an
open-source sister project of the Novamente Cognition Engine project.

I could speculate about the reasons behind this situation, but the reason is NOT
that I suck at raising money ... I have been involved in fundraising
for commercial
software projects before and have been successful at it.

I believe that 10-15 years from now, one will be able to approach the exact
same people with the same sort of project, and get greeted with enthusiasm
rather than friendly dismissal.  Going against prevailing culture is
really hard,
even if you're dealing with people who **think** they're seeing beyond the
typical preconceptions of their culture.  Slowly though the idea that AGI is
possible and feasible is wending its way into the collective mind.

I stress, though, that if one had some kind of convincing, compelling **proof**
of being on the correct path to AGI, it would likely be possible to raise $$
for one's project.  This proof could be in several possible forms, e.g.

a) a mathematical proof, which was accepted by a substantial majority
of AI academics

b) a working software program that demonstrated human-child-like
functionality

c) a working robot that demonstrated full dog-like functionality

Also, if one had good enough personal connections with the right sort
of wealthy folks, one could raise the $$ -- based on their personal trust
in you rather than their trust in your ideas.

Or of course, being rich and funding your work yourself is always an
option (cf Jeff Hawkins)

This gets back to a milder version of an issue Richard Loosemore is
always raising; the complex systems problem.  My approach to AGI
is complex systems based, which means that the components are NOT
going to demonstrate any general intelligence -- the GI is intended
to come about as a holistic, whole-system phenomenon.  But not in any
kind of mysterious way: we have a detailed, specific theory of why
this will occur, in terms of the particular interactions between the
components.

But what 

Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
  I don't think any reasonable person in AI or AGI will claim any of these
 have been solved. They may want to claim their method has promise, but not
 that it has actually solved any of them.

Yes -- it is true, we have not created a human-level AGI yet.  No serious
researcher disagrees.  So why is it worth repeating the point?

Similarly, up till the moment when the first astronauts walked on the moon,
you could have run around yelping that no one has solved the problem of
how to make a person walk on the moon, all they've done is propose methods
that seem to have promise.

It's true -- theories and ideas can always be wrong, and empirical proof adds
a whole new level of understanding.  (Though, empirical proofs don't exist
in a theoretical vacuum, they do require theoretical interpretation.
For instance
physicists don't agree on which supposed top quark events really were
top quarks ... and some nuts still don't believe people walked on the moon,
just as even after human-level AGI is achieved some nuts still won't believe
it...)

Nevertheless, with something as complex as AGI you gotta build stuff based
on a theory.  And not everyone is going to believe the theory until the proof
is there.  And so it goes...

-- Ben G



RE: [singularity] Vista/AGI

2008-04-13 Thread Derek Zahn
Ben Goertzel: Yes -- it is true, we have not created a human-level AGI yet. No 
serious researcher disagrees. So why is it worth repeating the point?
Long ago I put Tintner in my killfile -- he's the only one there, and it's 
regrettable but it was either that or start taking blood pressure medicine... 
so *plonk*.  It's not necessarily that I disagree with most of his (usually 
rather obvious) points or think his own ideas (about image schemas or whatever) 
are worse than other stuff floating around, but his toxic personality makes the 
benefit not worth the cost.  Now I only have to suffer the collateral damage in 
responses.
 
However, I went to the archives to fetch this message.   I do think it would be 
nice to have tests or problems that one could point to as partial 
progress... but it's really hard.  Any such things have to be fairly rigorously 
specified (otherwise we'll argue all day about whether they are solved or not 
-- see Tintner's Creativity problem as an obvious example), and they need to 
not be AGI complete themselves, which is really hard.  For example, Tintner's 
Narrative Visualization task strikes me as needing all the machinery and a very 
large knowledge base so by the time a system could do a decent job of this in a 
general context it would already have demonstrably solved the whole thing.
 
The other common criticism of tests is that they can often be solved by 
Narrow-AI means (say, current face recognizers which are often better at this 
task than humans).  I don't necessarily think this is a disqualification 
though... if the solution is provided in the context of a particular 
architecture with a plausible argument for how the system could have produced 
the specifics itself, that seems like some sort of progress.
 
I sometimes wonder if a decent measurement of AGI progress might be to measure 
the ease with which the system can be adapted by its builders to solve narrow 
AI problems -- sort of a cognitive enhancement measurement.  Such an approach 
makes a decent programming language and development environment be a tangible 
early step toward AGI but maybe that's not all bad.
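This "cognitive enhancement measurement" could be sketched crudely as an effort ratio: how much task-specific work is needed to adapt the system to a narrow problem, versus solving that problem from scratch. The metric, the effort proxy (lines of code), and all figures below are invented assumptions, purely for illustration:

```python
# Crude sketch of a "cognitive enhancement" metric: the ratio of the
# effort needed to adapt the system to a narrow task to the effort of a
# from-scratch narrow-AI solution. Lower is better: near 0 means the
# system did most of the work; near 1 means it contributed nothing.
# The effort proxy (lines of code) and all figures are invented examples.

def enhancement_ratio(adaptation_effort, from_scratch_effort):
    """Adaptation effort as a fraction of from-scratch effort."""
    return adaptation_effort / from_scratch_effort

tasks = {
    # task name: (effort to adapt the system, effort from scratch)
    "face_recognition":   (500, 20000),
    "melody_recognition": (2000, 15000),
}

for name, (adapt, scratch) in tasks.items():
    print(name, enhancement_ratio(adapt, scratch))
```

Tracking such ratios over a growing set of narrow tasks would turn "ease of adaptation" into a number, though choosing an honest effort proxy is itself a hard problem.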
 
At any rate, if there were some clearly-specified tests that are not 
AGI-complete and yet not easily attackable with straightforward software 
engineering or Narrow AI techniques, that would be a huge boost in my opinion 
to this field.  I can't think of any though, and they might not exist.  If it 
is in fact impossible to find such tasks, what does that say about AGI as an 
endeavor?
 



Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Mike Tintner wrote:



Mike,

Your comments are irresponsible.  Many problems of AGI have been 
solved. If you disagree with that, specify exactly what you mean by a 
problem of AGI, and let us list them.


1.General Problem Solving and Learning (independently learning/solving 
problem in, a new domain)


2.Conceptualisation [Invariant Representation] -  forming concept of 
Madonna which can embrace rich variety of different faces/photos of her


3.Visual Object Recognition

4.Aural Object Recognition [dunno proper term here - being able to 
recognize same melody played in any form]


5.Analogy

6.Metaphor

7.Creativity

8.Narrative Visualisation - being able to imagine and create a visual 
scenario ( a movie)   [just made this problem up - but it's a good one]


In your ignorance, you named a set of targets, not a set of problems. 
 If you want to see these fully functioning, you will see them in the 
last year of a 10-year AGI project, but if we listened to you, the 
first nine years of that project would be condemned as a complete waste 
of time.


If, on the other hand, you want to see an *in* *principle* solution (an 
outline of how these can all be implemented), then these in principle 
solutions are all in existence.  It is just that you do not know them, 
and when we go to the trouble of pointing them out to you (or explaining 
them to you), you do not understand them for what they are.




[By all means let's identify some more unsolved problems BTW..]

I think Ben & I more or less agreed that if he had really solved 1) - if 
his pet could really independently learn to play hide-and-seek after 
having been taught to fetch, it would constitute a major breakthrough, 
worthy of announcement to the world. And you can be sure it would be 
provoking a great deal of discussion.


As for your "discoveries", fine, have all the self-confidence you want, 
but they have had neither public recognition nor, as I understand, 
publication 


Okay, stop right there.

This is a perfect example of the nonsense you utter on this list:  you 
know that I have published a paper on the complex systems problem 
because you told me recently that you have read the paper.


But even though you have read this published paper, all you can do when 
faced with the real achievement that it contains is to say that (a) you 
don't understand it, and (b) this published paper that you have already 
read  has not been published!


Are there no depths to which you will not stoop?



Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Derek Zahn wrote:

Ben Goertzel:

  Yes -- it is true, we have not created a human-level AGI yet. No serious
  researcher disagrees. So why is it worth repeating the point?

Long ago I put Tintner in my killfile -- he's the only one there, and 
it's regrettable but it was either that or start taking blood pressure 
medicine... so *plonk*.  It's not necessarily that I disagree with most 
of his (usually rather obvious) points or think his own ideas (about 
image schemas or whatever) are worse than other stuff floating around, 
but his toxic personality makes the benefit not worth the cost.  Now I 
only have to suffer the collateral damage in responses.


Yes, he was in my killfile as well for a long time, then I decided to 
give him a second chance.  Now I am regretting it, so back he goes ... 
*plonk*.


Mike:  the only reason I am now ignoring you is that you persistently 
refuse to educate yourself about the topics discussed on this list, and 
instead you just spout your amateur opinions as if they were fact.  Your 
inability to distinguish real science from your amateur opinion is why, 
finally, I have had enough.


I apologize to the list for engaging him.  I should have just ignored 
his ravings.




However, I went to the archives to fetch this message.   I do think it 
would be nice to have tests or problems that one could point to as 
partial progress... but it's really hard.  Any such things have to be 
fairly rigorously specified (otherwise we'll argue all day about whether 
they are solved or not -- see Tintner's Creativity problem as an 
obvious example), and they need to not be AGI complete themselves, 
which is really hard.  For example, Tintner's Narrative Visualization 
task strikes me as needing all the machinery and a very large knowledge 
base so by the time a system could do a decent job of this in a general 
context it would already have demonstrably solved the whole thing.


It looks like you, Ben and I have now all said exactly the same thing, 
so we have a strong consensus on this.



The other common criticism of tests is that they can often be solved 
by Narrow-AI means (say, current face recognizers which are often better 
at this task than humans).  I don't necessarily think this is a 
disqualification though... if the solution is provided in the context of 
a particular architecture with a plausible argument for how the system 
could have produced the specifics itself, that seems like some sort of 
progress.
 
I sometimes wonder if a decent measurement of AGI progress might be to 
measure the ease with which the system can be adapted by its builders to 
solve narrow AI problems -- sort of a cognitive enhancement 
measurement.  Such an approach makes a decent programming language and 
development environment be a tangible early step toward AGI but maybe 
that's not all bad.
 
At any rate, if there were some clearly-specified tests that are not 
AGI-complete and yet not easily attackable with straightforward software 
engineering or Narrow AI techniques, that would be a huge boost in my 
opinion to this field.  I can't think of any though, and they might not 
exist.  If it is in fact impossible to find such tasks, what does that 
say about AGI as an endeavor?


My own feeling about this is that when a set of ideas start to gel into 
one coherent approach to the subject, with a description of those ideas 
being assembled as a book-length manuscript, and when you read those 
ideas and they *feel* like progress, you will know that substantial 
progress is happening.


Until then, the only people who might get an advanced feeling that such 
a work is on the way are the people on the front lines, who see all the 
pieces coming together just before they are assembled for public 
consumption.


Whether or not someone could write down tests of progress ahead of that 
point, I do not know.





Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner
Ben: So why is it worth repeating the point? Similarly, up till the moment
when the first astronauts walked on the moon, you could have run around
yelping that no one has solved the problem of how to make a person walk on
the moon; all they've done is propose methods that seem to have promise.

I repeated the details because I was challenged. (And unlike Richard, I do 
answer challenges). The original point - a valid one, I think - is that until 
you've solved one AGI problem, you can't make any reasonable prediction as 
to WHEN the rest will be solved and how much it will cost in resources. And 
it's not worth much discussion.


AGI is different from moonwalking - that WAS successfully predicted by JFK 
because they did indeed have technology reasonably likely to bring it about.


I would compare AGI predictions with predicting when we will have a 
mind-reading machine, (except that personally, I think AGI is much harder). 
Yes, you can have a bit of interesting discussion about that to begin with, 
but then the subject, i.e. making predictions,  exhausts itself, because 
there are too many unknowns. Ditto here. No? 





Re: [singularity] Vista/AGI

2008-04-13 Thread Samantha Atkins

Mike Tintner wrote:
Samantha: From what you said above, $50M will do the entire job. If that is 
all that is standing between us and AGI then surely we can get on with it 
in all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest 
the following is a reasonable *framework* for any discussions - 
although it is also a framework to end discussions for the moment.
Sigh.  I *was* somewhat tedious in that I am getting more impatient year 
by year especially as I see more friends succumb to maladies that AGI or 
even more IA could solve and watch the world spin closer to chaos.   
However I do not believe you have any business proposing any framework 
or definitive solution as you do not have the knowledge or chops to do so. 



- samantha



Re: [singularity] Vista/AGI

2008-04-13 Thread Jean-paul Van Belle
Hi Mike

Your 1 consists of two separate challenges: (1) reasoning & (2) learning.
IMHO your 3 to 6 can be classified under (3) pattern recognition. I think 
perhaps even your 2 may flow out of pattern recognition.
Of course, the real challenge is to find an algorithmic way (or architecture) 
to do the above without bumping into exponential explosion, i.e. move the 
problem out of the NP-complete arena. (Else an AGI will never exceed human 
intelligence by a real margin.)

=Jean-Paul

 Mike Tintner [EMAIL PROTECTED] wrote:
 Your comments are irresponsible.  Many problems of AGI have been solved. 
 If you disagree with that, specify exactly what you mean by a problem of 
 AGI, and let us list them.
1.General Problem Solving and Learning (independently learning/solving  
problem in, a new domain)
 
 2.Conceptualisation [Invariant Representation] -  forming concept of Madonna 
 which can embrace rich variety of different faces/photos of her
 
 3.Visual Object Recognition
 
 4.Aural Object Recognition [dunno proper term here - being able to 
 recognize same melody played in any form]
 
 5.Analogy
 
 6.Metaphor
 
 7.Creativity
 
8.Narrative Visualisation - being able to imagine and create a visual 
scenario ( a movie)   [just made this problem up - but it's a good one]




Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner

Jean-Paul,

More or less yes to your points. (I was only tossing off something quickly). 
Actually I think there's a common core to 2)-7) and will be setting out 
something about that soon. But I don't think it's recognizing patterns - on 
the contrary, the common problem is partly that there ISN'T a pattern to be 
recognized. If you have to understand the metaphor "the dancing towers", 
there's no common pattern between human dancers and the skyscrapers referred 
to.


I also think that while there's a common core, each problem has its own 
complications. Maybe Hawkins is right that all the senses process inputs in 
basically the same hierarchical fashion - and any mechanical AGI's senses 
will have to do the same - but if you think about it, the senses evolved 
gradually, so there must be different reasons for that.


(And I would add another unsolved ( unrecognized) problem for AGI:

9)Common Sense Processing - being able to process an event in multiple 
sensory modalities, and switch between them to solve problems - for example, 
to be able to touch an object blindfolded, and then draw its outlines 
visually.  )



Jean-Paul: Your 1 consists of two separate challenges: (1) reasoning 
& (2) learning
IMHO your 3 to 6 can be classified under (3) pattern recognition. I think 
perhaps even your 2 may flow out of pattern recognition.
Of course, the real challenge is to find an algorithmic way (or 
architecture) to do the above without bumping into exponential explosion, 
i.e. move the problem out of the NP-complete arena. (Else an AGI will never 
exceed human intelligence by a real margin.)




Mike Tintner [EMAIL PROTECTED] wrote:

Your comments are irresponsible.  Many problems of AGI have been solved.
If you disagree with that, specify exactly what you mean by a problem of
AGI, and let us list them.

1.General Problem Solving and Learning (independently learning/solving
problem in, a new domain)

2.Conceptualisation [Invariant Representation] -  forming concept of 
Madonna

which can embrace rich variety of different faces/photos of her

3.Visual Object Recognition

4.Aural Object Recognition [dunno proper term here - being able to
recognize same melody played in any form]

5.Analogy

6.Metaphor

7.Creativity

8.Narrative Visualisation - being able to imagine and create a visual
scenario ( a movie)   [just made this problem up - but it's a good one]







Testing AGI (was RE: [singularity] Vista/AGI)

2008-04-13 Thread Matt Mahoney
--- Derek Zahn [EMAIL PROTECTED] wrote:

 At any rate, if there were some clearly-specified tests that are not
 AGI-complete and yet not easily attackable with straightforward software
 engineering or Narrow AI techniques, that would be a huge boost in my
 opinion to this field.  I can't think of any though, and they might not
 exist.  If it is in fact impossible to find such tasks, what does that say
 about AGI as an endeavor?

Text compression is one such test, as I argue in
http://cs.fit.edu/~mmahoney/compression/rationale.html

The test is only for language modeling.  Theoretically it could be extended to
vision or audio processing.  For example, to maximally compress video the
compressor must understand the physics of the scene (e.g. objects fall down),
which can be arbitrarily complex (e.g. a video of people engaging in
conversation about Newton's law of gravity).  Likewise, maximally compressing
music is equivalent to generating or recognizing music that people like.  The
problem is that the information content of video and audio is dominated by
incompressible noise that is nontrivial to remove -- noise being any part of
the signal that people fail to perceive.  Deciding which parts of the signal
are noise is itself AI-hard, so it requires a lossy compression test with
human judges making subjective decisions about quality.  This is not a big
problem for text because the noise level (different ways of expressing the
same meaning) is small, or at least does not overwhelm the signal.  Long term
memory has an information rate of a few bits per second, so any signal you
compress should not be many orders of magnitude higher.
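As a toy sketch of the metric itself (zlib here is only a weak stand-in for
the context-mixing compressors used on the actual benchmark; the function
name is illustrative):

```python
import zlib

def compression_ratio(text: bytes, level: int = 9) -> float:
    """Compressed size divided by original size: lower means the
    compressor captured more of the text's regularities."""
    return len(zlib.compress(text, level)) / len(text)

# Repetitive text models easily, so it compresses to a small fraction
# of its original size.
sample = b"the cat sat on the mat. " * 100
print(compression_ratio(sample))
```

Note that the published tests also count the size of the decompression
program, so a model cannot cheat by simply memorizing the corpus.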

A problem with text compression is the lack of adequate hardware.  There is a
3 way tradeoff between compression ratio, memory, and speed.  The top
compressor in http://cs.fit.edu/~mmahoney/compression/text.html uses 4.6 GB of
memory.  Many of the best algorithms could be drastically improved if only
they ran on a supercomputer with 100 GB or more.  The result is that most
compression gains come from speed and memory optimization rather than using
more intelligent models.  The best compressors use crude models of semantics
and grammar.  They preprocess the text by token substitution from a dictionary
that groups words by topic and grammatical role, then predict the token stream
using mixtures of fixed-offset context models.  It is roughly equivalent to
the ungrounded language model of a 2 or 3 year old child at best.

An alternative would be to reduce the size of the test set to reduce
computational requirements, as the Hutter prize did. http://prize.hutter1.net/
I did not because I believe the proper way to test an adult level language
model is to train it on the same amount of language that an average adult is
exposed to, about 1 GB.  I would be surprised if a 100 MB test progressed past
the level of a 3 year old child.  I believe the data set is too small to train
a model to learn arithmetic, logic, or high level reasoning.  Including these
capabilities would not improve compression.

Tests on small data sets could be used to gauge early progress.  But
ultimately, I think you are going to need hardware that supports AGI to test
it.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Vista/AGI

2008-04-13 Thread bfwible

Ben,


Good Afternoon.   I am a rather new addition to the AGI mailing list and 
just read your response concerning the future of AGI.  I agree with you. 
The funding is there.  The belief that AGI is right around the corner is 
not. From the people I talk with: they have read Kurzweil and understand 
the rate of growth of technology (the curve).  They also understand that 
the exponential growth in Kurzweil's graphs represents processing power and 
this dynamic will substantively increase as nanotechnology moves from MEMS to 
a smaller and smaller (atomic, possibly) operating environment.




What is difficult for people/investors to gauge is AI/AGI.  Businesses 
and/or government organizations (not including DARPA) need a strategic plan 
for large investments into future technologies. They understand risk but 
weigh it against current requirements and long term gain.  There are 
people/organizations ready to invest if a strong rational analysis on the 
timeline is developed and presented in language that they understand.  The 
latter comment is key.  Senior leaders (business, government and just very 
wealthy investors) are acutely aware of the hype cycle that occurs with all 
new technologies.  I have found that overselling is much worse than 
underselling.




In my previous position I served as a Deputy Chief of a Trends and 
Forecasting Center for the government.  My charter was to provide strategic 
assessments to corporate leadership for investment purposes.  Those 
investments could include people, funding or priorities of effort.  So, I am 
well versed in the interface between developers, customers, senior leaders 
and financial backers.




Just my personal opinion...but it appears that the exponential technology 
growth chart, which is used in many of the briefings, does not include 
AI/AGI. It is processing centric.  When you include AI/AGI the exponential 
technology curve flattens out in the coming years (5-7) and becomes part of 
a normal S curve of development.  While computer power and processing will 
increase exponentially (as nanotechnology grows) the area of AI will need 
more time to develop.


I would be interested in your thoughts.



Regards,

Ben



I am moving to a new position this summer and will be a visiting professor 
in academia for two years.














Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Hi,

  Just my personal opinion...but it appears that the exponential technology
 growth chart, which is used in many of the briefings, does not include
 AI/AGI. It is processing centric.  When you include AI/AGI the exponential
 technology curve flattens out in the coming years (5-7) and becomes part of
 a normal S curve of development.  While computer power and processing will
 increase exponentially (as nanotechnology grows) the area of AI will need
 more time to develop.

  I would be interested in your thoughts.

I think this is because progress toward general AI has been difficult
to quantify
in the past, and looks to remain difficult to quantify into the future...

I am uncertain as to the extent to which this problem can be worked around,
though.

Let me introduce an analogy problem:

"Understanding the operation of the brain better and better" is to
"scanning the brain with higher and higher spatiotemporal accuracy"
as "creating more and more powerful AGI" is to what?

;-)

The point is that understanding the brain is also a nebulous and
hard-to-quantify goal, but we make charts for it by treating brain
scan accuracy as a more easily quantifiable proxy variable.  What's a
comparable proxy variable for AGI?

Suggestions welcome!

-- Ben




RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 
 --- John G. Rose [EMAIL PROTECTED] wrote:
 
  
   There is no way to know if we are living in a nested simulation, or
 even
   in a
   single simulation.  However there is a mathematical model: enumerate
 all
   Turing machines to find one that simulates a universe with
 intelligent
   life.
  
 
  What if that nest of simulations loop around somehow? What was that
 idea
  where there is this new advanced microscope that can see smaller than
 ever
  before and you look into it and see an image of yourself looking into
 it...
 
 The simulations can't loop because the simulator needs at least as much
 memory
 as the machine being simulated.
 

You're making assumptions when you say that. Outside of a particular
simulation we don't know the rules. If this universe is simulated the
simulator's reality could be so drastically and unimaginably different from
the laws in this universe. Also there could be data buses between
simulations and the simulations could intersect, or a simulation may break
the constraints of its containing simulation somehow and tunnel out.

John




RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

  From: Matt Mahoney [mailto:[EMAIL PROTECTED]
  The simulations can't loop because the simulator needs at least as much
  memory
  as the machine being simulated.
  
 
 You're making assumptions when you say that. Outside of a particular
 simulation we don't know the rules. If this universe is simulated the
 simulator's reality could be so drastically and unimaginably different from
 the laws in this universe. Also there could be data busses between
 simulations and the simulations could intersect or, a simulation may break
 the constraints of its contained simulation somehow and tunnel out. 

I am assuming finite memory.  For the universe we observe, the Bekenstein
bound of the Hubble radius is 2pi^2 T^2 c^5/hG = 2.91 x 10^122 bits.  (T = age
of the universe = 13.7 billion years, c = speed of light, h = Planck's
constant, G = gravitational constant).  There is not enough material in the
universe to build a larger memory.  However, a universe up the hierarchy might
be simulated by a Turing machine with infinite memory or by a more powerful
machine such as one with real-valued registers.  In that case the restriction
does not apply.  For example, a real-valued function can contain nested copies
of itself infinitely deep.
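As a sanity check, the quoted figure follows from plugging standard SI
values into the formula above (a back-of-the-envelope sketch; the exact
result depends on the values taken for T and the other constants):

```python
import math

# Bekenstein bound of the observable universe, 2*pi^2*T^2*c^5/(hG).
T = 13.7e9 * 3.156e7   # age of the universe in seconds
c = 2.998e8            # speed of light, m/s
h = 6.626e-34          # Planck's constant, J*s
G = 6.674e-11          # gravitational constant, m^3/(kg*s^2)

bits = 2 * math.pi**2 * T**2 * c**5 / (h * G)
print(f"{bits:.2e}")   # on the order of 10^122 bits
```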


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:
 How do you resolve disagreements? 

This is a problem for all large databases and multiuser AI systems.  In my
design, messages are identified by source (not necessarily a person) and a
timestamp.  The network economy rewards those sources that provide the most
useful (correct) information. There is an incentive to produce reputation
managers which rank other sources and forward messages from highly ranked
sources, because those managers themselves become highly ranked.

Google handles this problem by using its PageRank algorithm, although I
believe that better (not perfect) solutions are possible in a distributed,
competitive environment.  I believe that these solutions will be deployed
early and be the subject of intense research because it is such a large
problem.  The network I described is vulnerable to spammers and hackers
deliberately injecting false or forged information.  The protocol can only do
so much.  I designed it to minimize these risks.  Thus, there is no procedure
to delete or alter messages once they are posted.  Message recipients are
responsible for verifying the identity and timestamps of senders and for
filtering spam and malicious messages at risk of having their own reputations
lowered if they fail.
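A minimal sketch of that ranking loop (the class, names, and the +1/-1
scoring rule are illustrative assumptions, not part of the protocol
described above):

```python
from collections import defaultdict

class ReputationManager:
    """Ranks message sources by how useful their past messages proved."""

    def __init__(self):
        self.score = defaultdict(float)

    def record(self, source: str, useful: bool) -> None:
        # Reward sources of correct information; penalize spam/forgeries.
        self.score[source] += 1.0 if useful else -1.0

    def ranked(self) -> list:
        # Highest-reputation sources first.
        return sorted(self.score, key=self.score.get, reverse=True)

rm = ReputationManager()
rm.record("alice", True)
rm.record("bob", False)
rm.record("alice", True)
print(rm.ranked())  # ['alice', 'bob']
```

In the full design such a manager would itself be a ranked source, so good
ranking behavior is rewarded by the same mechanism it implements.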


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-09 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

   Of course what I imagine emerging from the Internet bears little
 resemblance
   to Novamente.  It is simply too big to invest in directly, but it will
 present
   many opportunities.
 
 But the emergence of superhuman AGI's like a Novamente may eventually
 become,
 will both dramatically alter the nature of, and dramatically reduce
 the cost of, global
 brains such as you envision...

Yes, like the difference between writing a web browser and defining the HTTP
protocol, each costing a tiny fraction of the value of the Internet but with a
huge impact on its outcome.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Vista/AGI

2008-04-08 Thread Ben Goertzel
This is part of the idea underlying OpenCog (opencog.org), though it's
being done
in a nonprofit vein rather than commercially...

On Tue, Apr 8, 2008 at 1:55 AM, John G. Rose [EMAIL PROTECTED] wrote:
 Just a thought, maybe there are some commonalities across AGI designs where
  components could be built at a lower cost. An investor invests in the
  company that builds component x that is used by multiple AGI projects. Then
  you have your little AGI ecosystem of companies all competing yet
  cooperating. After all, we need to get the Singularity going ASAP so that we
  can upload before inevitable biologic death? I prefer not to become
  nano-dust; I'd rather keep this show a-rockin', capiche?

  So it's like this - need standards. Somebody go bust out an RFC. Or is there
  work done on this already like is there a CogML? I don't know if the
  Semantic Web is going to cut the mustard... and the name Semantic Web just
  doesn't have that ring to it. Kinda reminds me of the MBone - names really
  do matter. Then who's the numnutz that came up with Web 3 dot oh geezss!

  John



   -Original Message-
   From: Matt Mahoney [mailto:[EMAIL PROTECTED]
   Sent: Monday, April 07, 2008 7:07 PM
   To: singularity@v2.listbox.com


  Subject: Re: [singularity] Vista/AGI
  
   Perhaps the difficulty in finding investors in AGI is that among people
   most
   familiar with the technology (the people on this list and the AGI list),
   everyone has a different idea on how to solve the problem.  Why would I
   invest in someone else's idea when clearly my idea is better?
  
  
   -- Matt Mahoney, [EMAIL PROTECTED]
  





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



RE: [singularity] Vista/AGI

2008-04-08 Thread Eric B. Ramsay

John G. Rose [EMAIL PROTECTED] wrote:

If you look at the state of internet based intelligence now, all the data
and its structure, the potential for chain reaction or a sort of structural
vacuum exists and it is accumulating a potential at an increasing rate.
IMO...

So you see the arrival of a Tipping Point as per  Malcolm Gladwell. Whether I 
physically benefit from the arrival of the Singularity or not, I just want to 
see the damn thing. I would invest some modest sums in AGI if we could get a 
huge collection plate going around (these collection plate amounts add up!).

Eric B. Ramsay



RE: [singularity] Vista/AGI

2008-04-08 Thread John G. Rose
Tipping Point may not be the right word for it. I see it as sort of an
unraveling and then a remolding. Much of the internet is still coming out of
resource compression. It has to stretch out and reoptimize like seeking a
lower energy expenditure structure for higher complexity traffic, but the
lower energy structure has more inherent intelligence. Kind of: it's like it
needs to jump into another efficiency plateau, and there is an increasing
daily pressure for this to happen. And once it's reconfigured, more of it
will flow into other plateaus - or, I see them as harmonic sweet spots...
blah blah.

 

But a wise and savvy investor who has vision, remember much of investing is
hit or miss but a few investors know how to nail the bull's-eye more often
than their less informed counterparts, that wise investor can sense these
things. They know that something's going on and realize that now is the time
to take action, getting in early and gaining a foothold *wink*.

 

John

 

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 08, 2008 8:03 AM
To: singularity@v2.listbox.com
Subject: RE: [singularity] Vista/AGI

 


John G. Rose [EMAIL PROTECTED] wrote:

If you look at the state of internet based intelligence now, all the data
and its structure, the potential for chain reaction or a sort of structural
vacuum exists and it is accumulating a potential at an increasing rate.
IMO...

So you see the arrival of a Tipping Point as per  Malcolm Gladwell. Whether
I physically benefit from the arrival of the Singularity or not, I just want
to see the damn thing. I would invest some modest sums in AGI if we could
get a huge collection plate going around (these collection plate amounts add
up!).

Eric B. Ramsay



Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 
 John G. Rose [EMAIL PROTECTED] wrote:
 
 If you look at the state of internet based intelligence now, all the
 data
 and its structure, the potential for chain reaction or a sort of
 structural
 vacuum exists and it is accumulating a potential at an increasing
 rate.
 IMO...
 
 So you see the arrival of a Tipping Point as per  Malcolm Gladwell.
 Whether I physically benefit from the arrival of the Singularity or
 not, I just want to see the damn thing. I would invest some modest
 sums in AGI if we could get a huge collection plate going around
 (these collection plate amounts add up!).

You won't see a singularity.  As I explain in
http://www.mattmahoney.net/singularity.html an intelligent agent (you)
is not capable of recognizing agents of significantly greater
intelligence.  We don't know whether a singularity has already occurred
and the world we observe is the result.  It is consistent with the
possibility, e.g. it is finite, Turing computable, and obeys Occam's
Razor (AIXI).

As for AGI research, I believe the most viable path is a distributed
architecture that uses the billions of human brains and computers
already on the Internet.  What is needed is an infrastructure that
routes information to the right experts and an economy that rewards
intelligence and friendliness.  I described one such architecture in
http://www.mattmahoney.net/agi.html  It differs significantly from the
usual approach of trying to replicate a human mind.  I don't believe
that one person or a small group can solve the AGI problem faster than
the billions of people on the Internet are already doing.
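
Matt's routing idea can be sketched in miniature. The class names, example peers, topic sets, and word-overlap matching rule below are my own invented illustration, not anything specified in his actual proposal:

```python
import re

# Toy sketch of expertise routing (all names and topics invented):
# peers advertise topic words, and a router forwards each query to
# the peer whose advertised topics best overlap the query's words.

class Peer:
    def __init__(self, name, topics):
        self.name = name
        self.topics = set(topics)

def route(query, peers):
    """Return the peer whose advertised topics best match the query."""
    words = set(re.findall(r"[a-z]+", query.lower()))
    return max(peers, key=lambda p: len(p.topics & words))

peers = [Peer("oncology-expert", {"cancer", "patient", "diagnostic"}),
         Peer("aero-expert", {"helicopter", "rotor", "stable"})]

print(route("How likely is it that the patient has cancer?", peers).name)
print(route("Design me a stable helicopter with rotors on the bottom", peers).name)
```

A real infrastructure would of course need far richer matching than word overlap; the point is only that routing by advertised expertise needs no central planner.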


-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Derek Zahn
Matt Mahoney writes: As for AGI research, I believe the most viable path is a 
distributed architecture that uses the billions of human brains and computers 
already on the Internet. What is needed is an infrastructure that routes 
information to the right experts and an economy that rewards intelligence and 
friendliness. I described one such architecture in 
http://www.mattmahoney.net/agi.html It differs significantly from the usual 
approach of trying to replicate a human mind. I don't believe that one person 
or a small group can solve the AGI problem faster than the billions of people 
on the Internet are already doing.
I'm not sure I understand this.  Although a system that can respond well to 
commands of the following form:
 
Show me an existing document that best answers the question 'X'
 
is certainly useful, it is hardly 'general' in any sense we usually mean.  I 
would think a 'general' intelligence should be able to take a shot at answering:
 
Why are so many streets named after trees?
or
If the New York Giants played cricket against the New York Yankees, who would 
probably win?
or
Here are the results of some diagnostic tests.  How likely is it that the 
patient has cancer?  What test should we do next?
or
Design me a stable helicopter with the rotors on the bottom instead of the top
 
Super-google is nifty, but I don't see how it is AGI.
 



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 
 You won't see a singularity.  As I explain in
 http://www.mattmahoney.net/singularity.html an intelligent agent (you)
 is not capable of recognizing agents of significantly greater
 intelligence.  We don't know whether a singularity has already occurred
 and the world we observe is the result.  It is consistent with the
 possibility, e.g. it is finite, Turing computable, and obeys Occam's
 Razor (AIXI).
 

You should be able to see it coming. That's how people like Kurzweil make
their estimations based on technological rates of change. When it gets
really close though then you can only imagine how it will unfold. 

If a singularity has already occurred how do you know how many there have
been? Has somebody worked out the math on this? And if this universe is a
simulation is that simulation running within another simulation? Is there a
simulation forefront or is it just one simulation within another ad
infinitum? Simulation raises too many questions. Seems like simulation and
singularity would be easier to keep separate, except for uploading. But then
the whole concept of uploading is just ...too.. confusing... unless our
minds are complex systems like Richard Loosemore proposes and uploading
would only be a sort of echo of the original.

John



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Derek Zahn [EMAIL PROTECTED] wrote:

 Matt Mahoney writes: As for AGI research, I believe the most viable
 path is a distributed architecture that uses the billions of human
 brains and computers already on the Internet. What is needed is an
 infrastructure that routes information to the right experts and an
 economy that rewards intelligence and friendliness. I described one
 such architecture in http://www.mattmahoney.net/agi.html It differs
 significantly from the usual approach of trying to replicate a human
 mind. I don't believe that one person or a small group can solve the
 AGI problem faster than the billions of people on the Internet are
 already doing.
 I'm not sure I understand this.  Although a system that can respond
 well to commands of the following form:
  
 Show me an existing document that best answers the question 'X'
  
 is certainly useful, it is hardly 'general' in any sense we usually
 mean.  I would think a 'general' intelligence should be able to take
 a shot at answering:
  
 Why are so many streets named after trees?
 or
 If the New York Giants played cricket against the New York Yankees,
 who would probably win?
 or
 Here are the results of some diagnostic tests.  How likely is it
 that the patient has cancer?  What test should we do next?
 or
 Design me a stable helicopter with the rotors on the bottom instead
 of the top
  
 Super-google is nifty, but I don't see how it is AGI.

Because a super-google will answer these questions by routing them to
experts on these topics that will use natural language in their narrow
domains of expertise. All of this can be done with existing technology
and a lot of hard work. The work will be done because there is an
incentive to do it and because the AGI (in the system, not its
components) is so valuable. AGI will be an extension of the Internet
that nobody planned, nobody built, and nobody owns.




-- Matt Mahoney, [EMAIL PROTECTED]



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Derek Zahn
Matt Mahoney writes:  Super-google is nifty, but I don't see how it is AGI. 
 Because a super-google will answer these questions by routing them to 
experts on these topics that will use natural language in their narrow domains 
of expertise. All of this can be done with existing technology and a lot of 
hard work. 
 
Ok.  I have some doubts personally that lots of narrow intelligences add up to 
general intelligence, but it seems as reasonable as other ideas out there.  I'd 
certainly pay to use it... with the explosion of documents on the web 
Google-as-it-exists gets worse and worse at giving me results that make me 
happy.  I've even (gasp) started trying other search sites.  Ask.com is pretty 
good, often better than google.
 
 
 



RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread John G. Rose
 
 There is no way to know if we are living in a nested simulation, or even
 in a
 single simulation.  However there is a mathematical model: enumerate all
 Turing machines to find one that simulates a universe with intelligent
 life.
 

What if that nest of simulations loop around somehow? What was that idea
where there is this new advanced microscope that can see smaller than ever
before and you look into it and see an image of yourself looking into it... 

John



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Mike Tintner

Matt : a super-google will answer these questions by routing them to
experts on these topics that will use natural language in their narrow
domains of expertise.

And Santa will answer every child's request, and we'll all live happily ever 
after.  Amen.


Which are these areas of science, technology, arts, or indeed any area of 
human activity, period, where the experts all agree and are NOT in deep 
conflict?


And if that's too hard a question, which are the areas of AI or AGI, where 
the experts all agree and are not in deep conflict?





Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Mike Tintner

Matt : a super-google will answer these questions by routing them to
experts on these topics that will use natural language in their narrow
domains of expertise.

Another interesting question here is: on how many occasions are the majority 
of experts in any given field, wrong? I don't begin to know how to start 
assessing that. But there's a basic truth - which is that they are often 
wrong and in crucial areas - like politics, economics, investment, medicine 
etc etc.


You guys don't seem to have understood one of the basic functions of Google, 
which is precisely to enable you to get a 2nd, 3rd etc opinion - and NOT 
have to rely on the experts! 





RE: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- John G. Rose [EMAIL PROTECTED] wrote:

  
  There is no way to know if we are living in a nested simulation, or even
  in a
  single simulation.  However there is a mathematical model: enumerate all
  Turing machines to find one that simulates a universe with intelligent
  life.
  
 
 What if that nest of simulations loop around somehow? What was that idea
 where there is this new advanced microscope that can see smaller than ever
 before and you look into it and see an image of yourself looking into it... 

The simulations can't loop because the simulator needs at least as much memory
as the machine being simulated.
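
The memory argument can be illustrated with a toy model (my own sketch of the reasoning, not a formal proof): if each level of nesting can only host a guest no larger than itself, sizes are non-increasing down the chain, so the innermost level can never host the outermost:

```python
# Toy model of the no-loop argument: a host can only simulate a guest
# that fits in its memory, so memory is non-increasing down a chain of
# nested simulations, and a strictly smaller inner world can never
# re-simulate its larger ancestor.

def can_host(host_mem, guest_mem):
    return guest_mem <= host_mem

chain = [100, 60, 60, 30]  # memory at each nesting level, outermost first
nested_ok = all(can_host(outer, inner)
                for outer, inner in zip(chain, chain[1:]))
loops_back = can_host(chain[-1], chain[0])  # innermost hosting outermost?
print(nested_ok, loops_back)  # True False
```

Note the edge case: the argument as stated only rules out a loop when some level is strictly smaller; a chain of exactly equal-sized levels is not excluded by memory alone.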


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Mike Tintner [EMAIL PROTECTED] wrote:

 Matt : a super-google will answer these questions by routing them to
 experts on these topics that will use natural language in their narrow
 domains of expertise.
 
 And Santa will answer every child's request, and we'll all live happily ever
 after.  Amen.

If you have a legitimate criticism of the technology or its funding plan, I
would like to hear it.  I understand there will be doubts about a system I
expect to cost over $1 quadrillion and take 30 years to build.

The protocol specifies natural language.  This is not a hard problem in narrow
domains.  It dates back to the 1960's.  Even in broad domains, most of the
meaning of a message is independent of word order.  Google works on this
principle.

But this is beside the point.  The critical part of the design is an incentive
for peers to provide useful services in exchange for resources.  Peers that
appear most intelligent and useful (and least annoying) are most likely to
have their messages accepted and forwarded by other peers.  People will
develop domain experts and routers and put them on the net because they can
make money through highly targeted advertising.
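
The incentive mechanism described here, peers accepting messages only from senders who have proven useful, can be sketched as a toy reputation ledger; the scoring rule and threshold below are invented for illustration, not part of the actual design:

```python
# Toy reputation ledger: useful senders gain score, annoying ones lose
# it, and a peer only accepts messages from senders whose accumulated
# score meets its threshold.

class ReputationPeer:
    def __init__(self, threshold=0.0):
        self.scores = {}            # sender name -> accumulated score
        self.threshold = threshold

    def feedback(self, sender, useful):
        # Reward useful messages, penalize annoying ones.
        self.scores[sender] = self.scores.get(sender, 0.0) + (1.0 if useful else -1.0)

    def accepts(self, sender):
        return self.scores.get(sender, 0.0) >= self.threshold

peer = ReputationPeer()
for _ in range(3):
    peer.feedback("helpful-expert", useful=True)
peer.feedback("spammer", useful=False)
print(peer.accepts("helpful-expert"), peer.accepts("spammer"))  # True False
```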

Google would be a peer on the network with a high reputation.  But Google
controls only 0.1% of the computing power on the Internet.  It will have to
compete with a system that allows updates to be searched instantly, where
queries are persistent, and where a query or message can initiate
conversations with other people in real time.

 Which are these areas of science, technology, arts, or indeed any area of 
 human activity, period, where the experts all agree and are NOT in deep 
 conflict?
 
 And if that's too hard a question, which are the areas of AI or AGI, where 
 the experts all agree and are not in deep conflict?

I don't expect the experts to agree.  It is better that they don't.  There are
hard problems remaining to be solved in language modeling, vision, and
robotics.  We need to try many approaches with powerful hardware.  The network
will decide who the winners are.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Eric B. Ramsay
If I understand what I have read in this thread so far, there is Ben on the one 
hand suggesting $10 mil. with 10-30 people in 3 to 10 years, and on the other 
there is Matt saying $1 quadrillion, using a billion brains, in 30 years. I 
don't believe I have ever seen such a divergence of opinion before on what is 
required for a technological breakthrough (unless people are not being serious 
and I am being naive). I suppose this sort of non-consensus on such a scale 
could be part of investor reticence.

Eric B. Ramsay

Matt Mahoney [EMAIL PROTECTED] wrote: [...]


Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Ben Goertzel
Well, Matt and I are talking about building totally different kinds of
systems...

I believe the system he wants to build would cost a huge amount ...
but I don't think
it's the most interesting sorta thing to build ...

A decent analogue would be spaceships.  All sorts of designs exist, some orders
of magnitude more complex and expensive than others.  It's more
practical to build
the cheaper ones, esp. when they're also more powerful ;-p

ben

On Tue, Apr 8, 2008 at 10:56 PM, Eric B. Ramsay [EMAIL PROTECTED] wrote:
 [...]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Matt Mahoney

--- Eric B. Ramsay [EMAIL PROTECTED] wrote:

 If I understand what I have read in this thread so far, there is Ben on the
 one hand suggesting $10 mil. with 10-30 people in 3 to 10 years and on the
 other there is Matt saying $1quadrillion, using a billion brains in 30
 years. I don't believe I have ever seen such a divergence of opinion before
 on what is required  for a technological breakthrough (unless people are not
 being serious and I am being naive). I suppose  this sort of non-consensus
 on such a scale could be part of investor reticence.

I am serious about the $1 quadrillion price tag, which is the low end of my
estimate.  The value of the Internet is now in the tens of trillions and
doubling every few years.  The value of AGI will be a very large fraction of
the world economy, currently US $66 trillion per year and growing at 5% per
year. 
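
As a rough sanity check of that scale (my arithmetic, not part of Matt's post), summing $66 trillion per year of world output growing at 5% over 30 years gives on the order of $4.4 quadrillion, so $1 quadrillion is a large but not absurd fraction of the economy over that period:

```python
# Back-of-envelope check: cumulative world output over `years`,
# starting at `start_trillions` per year and compounding at
# `growth_rate` annually.

def cumulative_output(start_trillions, growth_rate, years):
    return sum(start_trillions * (1 + growth_rate) ** y for y in range(years))

total_trillions = cumulative_output(66, 0.05, 30)
print(round(total_trillions / 1000, 1))  # ~4.4 (quadrillions of dollars)
```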

Of course what I imagine emerging from the Internet bears little resemblance
to Novamente.  It is simply too big to invest in directly, but it will present
many opportunities.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Ben Goertzel
  Of course what I imagine emerging from the Internet bears little resemblance
  to Novamente.  It is simply too big to invest in directly, but it will 
 present
  many opportunities.

But the emergence of superhuman AGIs, like what a Novamente may eventually
become, will both dramatically alter the nature of, and dramatically reduce
the cost of, global brains such as you envision...

ben g



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Eric B. Ramsay
Sure, but Matt is also suggesting that his path is the most viable, so from 
the point of view of an investor, he/she is faced with very divergent opinions 
on the type of resources needed to reach AGI expeditiously. It's far easier to 
understand wide price swings in a spaceship to get from here to Mars (or 
wherever) depending on how extravagantly you want to travel, but if you define 
the problem as just getting there, I am confident the costs will not differ by 
a factor of 100 million.

Eric B. Ramsay

Ben Goertzel [EMAIL PROTECTED] wrote: [...]



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Stephen Reed
As described in my Texai roadmap, it might be possible to achieve AGI using 
primarily volunteer, no-cost human labor.  A precondition is a human/computer 
interface that can intelligently acquire knowledge and skills, and is 
compelling enough for early adopters to use it.  If the profit motive is 
removed (e.g. open source / open content) then on one hand volunteerism is 
encouraged, and on the other hand barriers to widespread utilization are 
reduced (e.g. like Wikipedia).

For me the tipping point will be the demonstration of an English dialog system 
that intelligently seeks to acquire more knowledge and skills, and is freely 
deployable in a distributed fashion to a multitude of peer-users as a virtual 
appliance.

I believe, without any supporting evidence beyond my own limited experience in 
our field, that only a small kernel of hand-written code is required to set 
this off.  What that code might be is the question!  For Wikipedia, it is 
MediaWiki.

-Steve
 
Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Eric B. Ramsay [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Tuesday, April 8, 2008 9:56:58 PM
Subject: Re: Promoting AGI (RE: [singularity] Vista/AGI)

 If I understand what I have read in this thread so far, there is Ben on the 
one hand suggesting $10 million with 10-30 people in 3 to 10 years, and on the 
other there is Matt saying $1 quadrillion, using a billion brains, in 30 years. I 
don't believe I have ever seen such a divergence of opinion on what is 
required for a technological breakthrough (unless people are not being serious 
and I am being naive). I suppose this sort of non-consensus on such a scale 
could be part of investor reticence.

Eric B. Ramsay










Re: [singularity] Vista/AGI

2008-04-07 Thread Mike Tintner

J.A.R.: Like I stated at the beginning, *most* models are at least
theoretically valid.

1. VALID MODELS/IDEAS. I am not aware of ONE model that has one valid or 
even interesting idea about how to produce general intelligence - how to 
get an agent to independently learn, or solve problems in, a new domain - to 
cross domains.


Which ones, and which ideas, are you thinking of?

1a. I am only aware of ONE thinker/system-builder who has even ADDRESSED the 
problem in any shape or form directly - IMO poorly - Baum, in a recent 
paper, in which he defines general intelligence practically as moving 
independently from one level of a computer game to another. But at least he 
made an attempt to address the problem. (The recent Swedish ACS robotic 
effort talks about the problem, but the robot only appears to tackle one 
task, rather than moving on from one to another.)


Are you aware of any others?

2. FLEDGED INVENTORS/ INNOVATORS

Are there any people in this discussion/group who have any proven record of 
inventing or innovating - e.g. creating a marketed new kind of program? 
Clearly there are many with an extensive professional background, but that's 
different.


IMO, while these groups are very constructive, helpful and friendly, they 
strikingly lack a true CREATIVE culture. Witness the number of people who 
insist that no great, revolutionary, creative ideas are needed for AGI. (In 
fact, I can't think of any AGI leader who doesn't take this position). You 
guys want to be Frankensteins - to create life - one of the greatest 
creative challenges of all time - a task that IMO requires at least a few 
Da Vincis/Turings and an army of Michelangelos/Edisons - but according to 
you guys it doesn't even require one big idea! (Does Steve Grand, BTW, take 
this position?)


That truly makes me weep and want to start pounding my head on the table.

But it might explain why would-be investors aren't excited?

I would strongly urge people to associate more with - and/or seek the 
opinions here of - fledged creatives like Hawkins. 





Re: [singularity] Vista/AGI

2008-04-07 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 6:55 PM, Ben Goertzel wrote:

I wonder why some people think there is one true path to AGI ... I
strongly suspect there are many...



Like I stated at the beginning, *most* models are at least theoretically 
valid.  Of course, tractable engineering of said models is another 
issue. :-)  Engineering tractability in the context of computer science 
and software engineering is almost purely an applied mathematics effort 
to the extent there is any theory to it, and science has a very 
limited capacity to inform it.


If someone could describe, specifically, how science is going to 
inform this process given the existing body of theoretical work, I would 
have no problem with the notion.  My objections were pragmatic.


Now hold on just a minute.

Yesterday you directed the following accusation at me:

 [Your assertion] Artificial Intelligence research does
 not have a credible science behind it ... [leads] me to
 believe that you either are ignorant of relevant literature
 (possible) or you do not understand all the relevant
 literature and simply assume it is not important.

You *vilified* the claim that I made, and implied that I could only say 
such a thing out of ignorance, so I challenged you to explain what 
exactly was the science behind artificial intelligence.


But instead of backing up your remarks, you make no response at all to 
the challenge, and then, in the comments to Ben above, you hint that you 
*agree* that there is no science behind AI ("... science has a very 
limited capacity to inform it"), it is just that you think there should 
not be, or does not need to be, any science behind it.


So let me summarize:

1)  I make a particular claim.

2)  You state that I can only say such a thing if I am ignorant.

3)  You refuse to provide any arguments against the claim.

4)  You then tacitly agree with the original claim.


Oh, and by the way, a small point of logic.  If someone makes a claim 
that "there is no science behind artificial intelligence", this is a 
claim about the *nonexistence* of something, so you cannot demand that 
the person produce evidence to support the nonexistence claim.  The onus 
is entirely on you to provide evidence that there is a science behind 
AI, if you believe that there is, not on me to demonstrate that there is 
none.




Richard Loosemore







Re : [singularity] Vista/AGI

2008-04-07 Thread Bruno Frandemiche
Hello, everyone.
Question: is there a science behind AGI?
My feeling and thinking: self-organisation, holism, the 
contextual-syntactic-semantic, and finally a basis in topos theory.
Response: our universe (multiverse, ...) is meta-meta-meta-mathematical.
Cordially yours,
Bruno







  



RE: [singularity] Vista/AGI

2008-04-07 Thread John G. Rose
Just a thought: maybe there are some commonalities across AGI designs where
components could be built at a lower cost. An investor invests in the
company that builds component X, which is used by multiple AGI projects. Then
you have your little AGI ecosystem of companies, all competing yet
cooperating. After all, we need to get the Singularity going ASAP so that we
can upload before inevitable biological death, right? I prefer not to become
nano-dust; I'd rather keep this show a-rockin', capiche?

So it's like this - we need standards. Somebody go bust out an RFC. Or is there
work done on this already - like, is there a CogML? I don't know if the
Semantic Web is going to cut the mustard... and the name Semantic Web just
doesn't have that ring to it. Kinda reminds me of the MBone - names really
do matter. Then who's the numnutz that came up with Web 3-dot-oh? Geez!
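As a sketch of what such a component standard might look like in practice: entirely hypothetical, since no such spec exists. The interface name `CogComponent` and its method are invented here; the point is only that two projects agreeing on a thin shared contract is enough for one vendor's component to plug into both.

```python
# Hypothetical illustration of the shared-component idea; "CogComponent"
# and all names below are invented here -- no such standard exists.
from abc import ABC, abstractmethod

class CogComponent(ABC):
    """Minimal interface a cross-project AGI component might implement."""

    @abstractmethod
    def process(self, percept):
        """Map a percept structure to this component's output structure."""

class EpisodicMemory(CogComponent):
    """Toy component: stores percepts and reports how many it has seen."""

    def __init__(self):
        self.episodes = []

    def process(self, percept):
        self.episodes.append(percept)
        return {"episode_count": len(self.episodes)}

# Any architecture that speaks the interface can host the component:
memory = EpisodicMemory()
print(memory.process({"seen": "cat"}))   # {'episode_count': 1}
print(memory.process({"seen": "dog"}))   # {'episode_count': 2}
```

A real standard would of course need agreed percept/action schemas (the hard part), which is exactly where something like an RFC would come in.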

John


 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Sent: Monday, April 07, 2008 7:07 PM
 To: singularity@v2.listbox.com
 Subject: Re: [singularity] Vista/AGI
 
 Perhaps the difficulty in finding investors in AGI is that among people
 most
 familiar with the technology (the people on this list and the AGI list),
 everyone has a different idea on how to solve the problem.  Why would I
 invest in someone else's idea when clearly my idea is better?
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 



Re: [singularity] Vista/AGI

2008-04-06 Thread Samantha Atkins
Arguably many of the problems of Vista including its legendary slippages 
were the direct result of having thousands of merely human programmers 
involved.   That complex monkey interaction is enough to kill almost 
anything interesting. shudder


- samantha

Panu Horsmalahti wrote:
Just because it takes thousands of programmers to create something as 
complex as Vista, does *not* mean that thousands of programmers are 
required to build an AGI, since one property of AGI is/can be that it 
will learn most of its complexity using algorithms programmed into it.








Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.

My view is a little different.

If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundreds of programmers to implement it.  The most I
can imagine throwing at it would be a couple dozen, and I think 10-20 is
the right number.

So if the Novamente design is basically correct, it would take a team of
10-20 programmers a period of 3-10 years to get to human-level AGI.

Sadly, we do not have 10-20 dedicated programmers working on Novamente
(or associated OpenCog) AGI right now, but rather fractions of various peoples'
time (as Novamente LLC is working mainly on various commercial projects
that pay our salaries).  So my point is not to make a projection regarding our
progress (that depends too much on funding levels), just to address this issue
of ideal team size that has come up yet again...

Even if my timing estimates are optimistic and it were to take 15 years, even
so, a team of thousands isn't gonna help things any.

If I had a billion dollars and the passion to use it to advance AGI, I would
throw amounts between $1M and $50M at various specific projects, I
wouldn't try to make one monolithic project.

This is based on my bias that AGI is best approached, at the current time,
by focusing on software not specialized hardware.

One of the things I like about AGI is that a single individual or a
small team CAN
just do it without need for massive capital investment in physical
infrastructure.

It's tempting to get into specialized hardware for AGI, and we may
want to at some
point, but I think it makes sense to defer that until we have a very
clear idea of
exactly what AGI design needs the hardware and strong prototype results of some
sort indicating why this AGI design will work on this hardware.  My
suspicion is that
we can get to human-level AGI without any special hardware, though
special hardware
will certainly be able to accelerate things after that.

-- Ben G




On Sun, Apr 6, 2008 at 7:22 AM, Samantha Atkins [EMAIL PROTECTED] wrote:
 Arguably many of the problems of Vista including its legendary slippages
 were the direct result of having thousands of merely human programmers
 involved.   That complex monkey interaction is enough to kill almost
 anything interesting. shudder

  - samantha

  Panu Horsmalahti wrote:

 
  Just because it takes thousands of programmers to create something as
 complex as Vista, does *not* mean that thousands of programmers are required
 to build an AGI, since one property of AGI is/can be that it will learn most
 of its complexity using algorithms programmed into it.
  
 
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
 If the concept behind Novamente is truly compelling enough, it
 should be no problem to make a successful pitch.

 Eric B. Ramsay

Gee ... you mean, I could pitch the idea of funding Novamente to
people with money??  I never thought of that!!  Thanks for the
advice ;-pp

Evidently, the concept behind Novamente is not truly compelling
enough to the casual observer,
as we have failed to attract big-bucks backers so far...

Many folks we've talked to are interested in what we're doing but
it seems we'll have to get further toward the end goal in order to
overcome their AGI skepticism...

Part of the issue is that the concepts underlying NM are both
complex and subtle, not lending themselves all that well to
elevator pitch treatment ... or even PPT summary treatment
(though there are summaries in both PPT and conference-paper
form).

If you think that's a mark against NM, consider this: What's your
elevator-pitch description of how the human brain works?  How
about the human body?  Businesspeople favor the simplistic, yet
the engineering of complex cognitive systems doesn't match well
with this bias

Please note that many successful inventors in history have had
huge trouble getting financial backing, although in hindsight
we find their ideas truly compelling.  (And, many failed inventors
with terrible ideas have also had huge trouble getting financial
backing...)

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

Eric B. Ramsay wrote:
If the Novamente design is able to produce an AGI with only 10-20 
programmers in 3 to 10 years at a cost of under $10 million, then this 
represents such a paltry expense to some companies (Google, for example) 
that it would seem to me that the thing to do is share the design with 
them and go for it (Google could R&D this with no impact to their 
shareholders even if it fails). The potential of an AGI is so enormous 
that the cost (risk)/benefit ratio swamps anything Google (or others) 
could possibly be working on. If the concept behind Novamente is truly 
compelling enough, it should be no problem to make a successful pitch.


Eric B. Ramsay


[WARNING!  Controversial comments.]


When you say If the concept behind Novamente is truly compelling 
enough, this is the point at which your suggestion hits a brick wall.


What could be compelling about a project? (Novamente or any other). 
Artificial Intelligence is not a field that rests on a firm theoretical 
basis, because there is no science that says this design should produce 
an intelligent machine because intelligence is KNOWN to be x and y and 
z, and this design unambiguously will produce something that satisfies x 
and y and z.


Every single AGI design in existence is a "suck it and see" design.  We 
will know if the design is correct if it is built and it works.  Before 
that, the best that any outside investor can do is use their gut 
instinct to decide whether they think that it will work.


Now, my own argument to investors is that the only situation in which we 
can do better than say My gut instinct says that my design will work 
is when we do actually base our work on a foundation that gives 
objective reasons for believing in it.  And the only situation that I 
know of that allows that kind of objective measure is by taking the 
design of a known intelligent system (the human cognitive system) and 
staying as close to it as possible.  That is precisely what I am trying 
to do, and I know of no other project that is trying to do that 
(including the neural emulation projects like Blue Brain, which are not 
pitched at the cognitive level and therefore have many handicaps).


I have other, much more compelling reasons for staying close to human 
cognition (namely the complex systems problem and the problem of 
guaranteeing friendliness), but this objective-validation factor is one 
of the most important.


My pleas that more people do what I am doing fall on deaf ears, 
unfortunately, because the AI community is heavily biased against the 
messy empiricism of psychology.  Interesting situation:  the personal 
psychology of AI researchers may be what is keeping the field in "dead 
stop" mode.





Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 8:38 AM, Eric B. Ramsay wrote:
If the Novamente design is able to produce an AGI with only 10-20  
programmers in 3 to 10 years at a cost of under $10 million, then  
this represents such a paltry expense to some companies (Google, for  
example) that it would seem to me that the thing to do is share the  
design with them and go for it (Google could R&D this with no impact  
to their shareholders even if it fails). The potential of an AGI is  
so enormous that the cost (risk)/benefit ratio swamps anything  
Google (or others) could possibly be working on.



You just used the Pascal's Wager fallacy in the context of AGI,  
congratulations.  The cost of investing in AGI is well above zero,  
investment resources are most assuredly finite, and the risk of  
investing in a failure is extremely high -- and many billions of  
dollars have already been invested despite this.


Or to look at it another way, you are also using a variant of the  
infamous (and also fallacious) 5% market share argument.



If the concept behind Novamente is truly compelling enough, it  
should be no problem to make a successful pitch.



The above statement leads me to believe you have little experience  
with funding speculative technology ventures of the scale being  
discussed here.  The dynamic is considerably, and rightly, more  
complicated than this.  A truly compelling concept and a dollar will  
buy you a cup of coffee.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 12:21 PM, Eric B. Ramsay [EMAIL PROTECTED] wrote:
 Ben:
 I may be mistaken, but it seems to me that AGI today in 2008 is in the air
 again after 50 years.

Yes

You are not trying to present a completely novel and
 unheard of idea and with today's crowd of sophisticated angel investors I am
 surprised that no one bites given the modest sums involved. BTW I was not
 trying to give needless advice, just finishing my thoughts. I already took
 it as a given that you look for funding. I am trying to understand why no
 one bites. It's not as if there are a hundred different AGI efforts out
 there to choose from.

I don't fully understand it myself, but it's a fact.

To be clear: I understand why VC's and big companies don't want to fund
NM.

VC's are in a different sort of business ...

and big companies are either focused
on the short term, or else have their own
research groups who don't want a bunch of upstart outsiders to get
their research
$$ ...

But what vexes me a bit is that none of the many wealthy futurists out
there have been
interested in funding NM extensively, either on an angel investment
basis, or on a
pure nonprofit donation basis (and we have considered doing NM as a nonprofit
before, though right now that's not our focus as the virtual-pets biz
opp seems so
grand...)

I know personally (and have met with) a number of folks who

-- could invest a couple million $$ in NM without it impacting their
lives at all

-- are deeply into the Singularity and AGI and related concepts

-- appear to personally like and respect me and other in the NM team

But, after spending about 1.5 years courting these sorts of folks,
Bruce and I largely
gave up and decided to focus on other avenues.

I have some psychocultural theories as to why things are this way, but
nothing too
solid...

I am surprised that the reason may only be that the
 project isn't far enough along (too immature) given the historical
 precedents of what investors have ponied up money for before.

That's surely part of it ... but investors have put big $$ into much LESS
mature projects in areas such as nanotech and quantum computing.

AGI arouses an irrational amount of skepticism, compared to these other
futurist technologies, it seems to me.  I suppose this partly is
because there have
been more false starts toward AI in the past.

-- Ben



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 8:46 AM, Ben Goertzel wrote:

Part of the issue is that the concepts underlying NM are both
complex and subtle, not lending themselves all that well to
elevator pitch treatment ... or even PPT summary treatment
(though there are summaries in both PPT and conference-paper
form).

If you think that's a mark against NM, consider this: What's your
elevator-pitch description of how the human brain works?  How
about the human body?  Businesspeople favor the simplistic, yet
the engineering of complex cognitive systems doesn't match well
with this bias



Yes, and this happens far more often than just with AGI.  Many venture  
concepts, particularly speculative technology ventures, are extremely  
difficult to package into an elevator pitch because the minimum amount  
of material required for even the above average investor exceeds the  
bandwidth of an elevator pitch or slide deck.


In my experience, this is best framed as a problem of education.  More  
education of the investor required before the pitch indicates an  
exponential drop-off in the probability of being funded.  One of the  
reasons this is true is that not only does the person you are dealing  
with need to be educated, they have to be able to successfully educate  
*their* associates before investment is an option as a practical  
matter.  If the education required is complex and nuanced, this second  
stage will almost certainly be a failure.


Ben already knows this, but I will elaborate for the peanut gallery  
unfamiliar with venture finance.  The trick to dealing with this  
problem is to repackage the venture concept solely for the purpose of  
minimizing the amount of education required to raise money, which in  
the case of AGI means that you are selling a graspable product far  
removed from AGI per se.  The danger of this is that you end up going  
down a road where there is no AGI left in the venture.  Investors need  
to be able to wrap their heads around the venture (any venture), which  
given their limited resources means that the person with the idea  
needs to frame the desired result in terms that require the very  
minimum of education on the part of the investor to be compelling.   
People invest in products, not ideas, and the products must be  
concrete and obvious.  For something like AGI, packaging the  
technology into a fundable venture is an extraordinarily difficult task.



I would go as far as to say that funding speculative technology  
ventures is largely a problem of eliminating the apparent amount of  
education required so that it no longer appears particularly  
speculative but instead obvious when no concrete example exists.   
Successfully doing this is far, far more difficult than I suspect most  
people who have not tried believe.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 9:38 AM, Ben Goertzel wrote:
That's surely part of it ... but investors have put big $$ into much  
LESS

mature projects in areas such as nanotech and quantum computing.



This is because nanotech and quantum computing can be readily and  
easily packaged as straightforward physical machinery technology,  
which a lot of people can readily conceptualize even if they do not  
actually understand it.  AGI is not a physical touchable technology in  
the same sense (or even software sense), which is further aggravated  
by the many irrational memes of woo-ness that surround the idea of  
consciousness, intelligence, spirituality that the vast majority of  
investors uncritically subscribe to.  Indeed, many view the poor track  
record of AI as validation of their nutty beliefs. There have been  
some technically ridiculous AI projects that got substantial funding  
because they appealed to the biases of the investors.


If AGI was merely a function of hardware design, I suspect it would be  
much easier to sell because many investors would much more easily  
delude themselves into thinking they understand it, or at least  
conceptualize it in a way that comports with reality.  Over the years  
I have slowly come to believe that the long track record of failure in  
AI is a minor contributor to the relative dearth of funding for bold  
AI ventures -- the problem has never been a lack of people willing to  
take a risk per se.


J. Andrew Rogers






Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote:
What could be compelling about a project? (Novamente or any  
other). Artificial Intelligence is not a field that rests on a firm  
theoretical basis, because there is no science that says this  
design should produce an intelligent machine because intelligence is  
KNOWN to be x and y and z, and this design unambiguously will  
produce something that satisfies x and y and z.


Every single AGI design in existence is a "suck it and see" design.   
We will know if the design is correct if it is built and it works.   
Before that, the best that any outside investor can do is use their  
gut instinct to decide whether they think that it will work.



Even if every single AGI design in existence is fundamentally broken  
(and I would argue that a fair amount of AGI design is theoretically  
correct and merely unavoidably intractable), this is a false  
characterization.  And at a minimum, it should be "no mathematics"  
rather than "no science".


Mathematical proof of validity of a new technology is largely  
superfluous with respect to whether or not a venture gets funded.   
Investors are not mathematicians, at least not in the sense that  
mathematical certainty of the correctness of the model would be  
compelling.  If they trust the person enough to invest in them, they  
will generally trust that the esoteric mathematics behind the venture  
are correct as well.  No one actually tries to understand the mathematics, even though they will give them a cursory glance -- that is your job.



Having had to sell breakthroughs in theoretical computer science  
before (unrelated to AGI), I would make the observation that investors  
in speculative technology do not really put much weight on what you  
know about the technology.  After all, who are they going to ask if  
you are the presumptive leading authority in that field? They will  
verify that the current limitations you claim to be addressing exist  
and will want concise qualitative answers as to how these are being  
addressed that comport with their model of reality, but no one is  
going to dig through the mathematics and derive the result for  
themselves.  Or at least, I am not familiar with cases that worked  
differently than this.  The real problem is that most AGI designers  
cannot answer these basic questions in a satisfactory manner, which  
may or may not reflect what they know.



J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 8:55 AM, Richard Loosemore wrote:
What could be compelling about a project? (Novamente or any other). 
Artificial Intelligence is not a field that rests on a firm 
theoretical basis, because there is no science that says "this design should produce an intelligent machine, because intelligence is KNOWN to be x and y and z, and this design unambiguously will produce something that satisfies x and y and z."


Every single AGI design in existence is a "Suck It And See" design.  We
will know if the design is correct if it is built and it works.  
Before that, the best that any outside investor can do is use their 
gut instinct to decide whether they think that it will work.



Even if every single AGI design in existence is fundamentally broken 
(and I would argue that a fair amount of AGI design is theoretically 
correct and merely unavoidably intractable), this is a false 
characterization.  And at a minimum, it should be "no mathematics" rather than "no science".


Mathematical proof of validity of a new technology is largely 
superfluous with respect to whether or not a venture gets funded.  
Investors are not mathematicians, at least not in the sense that 
mathematical certainty of the correctness of the model would be 
compelling.  If they trust the person enough to invest in them, they 
will generally trust that the esoteric mathematics behind the venture 
are correct as well.  No one actually tries to understand the mathematics, even though they will give them a cursory glance -- that is your job.



Having had to sell breakthroughs in theoretical computer science before 
(unrelated to AGI), I would make the observation that investors in 
speculative technology do not really put much weight on what you know 
about the technology.  After all, who are they going to ask if you are 
the presumptive leading authority in that field? They will verify that 
the current limitations you claim to be addressing exist and will want 
concise qualitative answers as to how these are being addressed that 
comport with their model of reality, but no one is going to dig through 
the mathematics and derive the result for themselves.  Or at least, I am 
not familiar with cases that worked differently than this.  The real 
problem is that most AGI designers cannot answer these basic questions 
in a satisfactory manner, which may or may not reflect what they know.


You are addressing (interesting and valid) issues that lie well above 
the level at which I was making my argument, so unfortunately they miss 
the point.


I was arguing that whenever a project claims to be doing engineering 
there is always a background reference that is some kind of science or 
mathematics or prescription that justifies what the project is trying to 
achieve:


1)  Want to build a system to manage the baggage handling in a large 
airport?  Background prescription = a set of requirements that the flow 
of baggage should satisfy.


2)  Want to build an aircraft wing? Background science =  the physics of 
air flow first, along with specific criteria that must be satisfied.


3)  Want to send people on an optimal trip around a set of cities? 
Background mathematics = a precise statement of the travelling salesman 
problem.
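To make case 3 concrete (an illustrative addition, not part of the original message): once the problem is stated precisely -- find the minimum-cost closed tour that visits every city exactly once -- even a naive exact solver follows mechanically from that statement.  A minimal sketch in Python, with made-up distances:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact travelling-salesman solver by exhaustive search.

    dist: square matrix where dist[i][j] is the cost of going from
    city i to city j.  Returns (best_cost, best_tour) over all tours
    that start and end at city 0.
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Four hypothetical cities with symmetric distances.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(tsp_brute_force(dist))  # -> (18, (0, 1, 3, 2, 0))
```

The point is exactly Loosemore's: the engineering (here, a solver) is derivable because the background mathematics pins down what "correct" means -- which AGI, on his account, lacks.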


No matter how many other cases you care to list, there is always some 
credible science or mathematics or common sense prescription lying at 
the back of the engineering project.


Here, for contrast, is an example of an engineering project behind which 
there was NO credible science or mathematics or prescription:


4*)  Find an alchemical process that will lead to the philosophers' stone.

Alchemists knew what they wanted - kind of - but there was no credible 
science behind what they did.  They were just hacking.


Artificial Intelligence research does not have a credible science behind 
it.  There is no clear definition of what intelligence is, there is only 
the living example of the human mind that tells us that some things are 
intelligent.


This is not about mathematical proof, it is about having a credible, 
accepted framework that allows us to say that we have already come to an 
agreement that intelligence is X, and so, starting from that position we 
are able to do some engineering to build a system that satisfies the 
criteria inherent in X, so we can build an intelligence.


Instead what we have are AI researchers who have gut instincts about 
what intelligence is, and from that gut instinct they proceed to hack.


They are, in short, alchemists.

And in case you are tempted to do what (e.g.) Russell and Norvig do in 
their textbook, and claim that the Rational Agents framework plus 
logical reasoning is the scientific framework on which an idealized 
intelligent system can be designed, I should point out that this concept 
is completely rejected by most cognitive psychologists:  they point out 
that the intelligence to be found in the only example of an 
intelligent 

Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote:
Artificial Intelligence research does not have a credible science  
behind it.  There is no clear definition of what intelligence is,  
there is only the living example of the human mind that tells us  
that some things are intelligent.



The fact that the vast majority of AGI theory is pulled out of /dev/ass 
notwithstanding, your above characterization would appear to  
reflect your limitations which you have chosen to project onto the  
broader field of AGI research.  Just because most AI researchers are  
misguided fools and you do not fully understand all the relevant  
theory does not imply that this is a universal (even if it were).



This is not about mathematical proof, it is about having a credible,  
accepted framework that allows us to say that we have already come  
to an agreement that intelligence is X, and so, starting from that  
position we are able to do some engineering to build a system that  
satisfies the criteria inherent in X, so we can build an intelligence.



I do not need anyone's agreement to prove that system Y will have  
property X, nor do I have to accommodate pet theories to do so.  AGI  
is mathematics, not science.  Plenty of people can agree on what X is  
and are satisfied with the rigor of whatever derivations were  
required.  There are even multiple X out there depending on the  
criteria you are looking to satisfy -- the label of AI is immaterial.


What seems to have escaped you is that there is nothing about an  
agreement on X that prescribes a real-world engineering design.  We  
have many examples of tightly defined Xs in theory that took many  
decades of R&D to reduce to practice or which in some cases have never  
been reduced to real-world practice even though we can very strictly  
characterize them in the mathematical abstract.  There are many AI  
researchers who could be accurately described as having no rigorous  
framework or foundation for their implementation work, but conflating  
this group with those stuck solving the implementation theory problems  
of a well-specified X is a category error.


There are two unrelated difficult problems in AGI: choosing a rigorous  
X with satisfactory theoretical properties and designing a real-world  
system implementation that expresses X with satisfactory properties.   
There was a time when most credible AGI research was stuck working on  
the former, but today an argument could be made that most credible AGI  
research is stuck working on the latter.  I would question the  
credibility of opinions offered by people who cannot discern the  
difference.



And in case you are tempted to do what (e.g.) Russell and Norvig do  
in their textbook...



I'm not interested in lame classical AI, so this is essentially a  
strawman.  To the extent I am personally in a theory camp, I have  
been in the broader algorithmic information theory camp since before  
it was on anyone's radar.
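For readers unfamiliar with the "algorithmic information theory camp" mentioned above, one published example of a formal "X" from that camp (an editorial addition to this archive; Legg and Hutter, 2007) scores an agent pi by its expected value across all computable environments mu, weighted toward simple environments:

```latex
% Universal intelligence (Legg & Hutter): the value V achieved by policy \pi
% in environment \mu, weighted by 2^{-K(\mu)}, where K is Kolmogorov complexity.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

This is precisely the kind of rigorously specified X whose real-world implementation is intractable -- the second of the two problems distinguished earlier in this message.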



It is not that these investors understand the abstract ideas I just  
described, it is that they have a gut feel for the rate of progress  
and the signs of progress and the type of talk that they should be  
encountering if AGI had mature science behind it.  Instead, what  
they get is a feeling from AGI researchers that each one is doing  
the following:


1)  Resorting to a bottom line that amounts to "I have a really good  
personal feeling that my project really will get there", and


2)  Examples of progress that look like an attempt to dress a  
doughnut up as a wedding cake.



Sure, but what does this have to do with the topic at hand?  The  
problem is that investors lack any ability to discern a doughnut from  
a wedding cake.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Rolf Nelson
On Sun, Apr 6, 2008 at 4:42 PM, Derek Zahn [EMAIL PROTECTED] wrote:
 As to why sympathetic rich people are
 apparently not willing to toss this consideration aside, it doesn't make
 much sense to me unless they simply don't think specific approaches are
 feasible -- although there's also a disconnect between sympathies and
checkbooks, which is why we have cliche phrases like "put your money where
your mouth is" and "talk is cheap".

Sympathetic rich people often want to keep their money for the same
reasons that sympathetic poor people want to keep their money, and
sympathetic G7 middle-class people (who are rich compared with the
median person in the world, and are filthy rich compared with the
average person who's lived throughout history) want to keep their
money. There's almost always someone richer and more successful than
you who you can use as an excuse to shirk, if you're the shirking
type.

As to why many people prefer saving whales to fighting malaria, and
fighting malaria to building an FAI, well, that's more complicated,
and any answer I give would be long and would almost certainly be
wrong in some minor detail.

-Rolf



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

J. Andrew Rogers wrote:


On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote:
Artificial Intelligence research does not have a credible science 
behind it.  There is no clear definition of what intelligence is, 
there is only the living example of the human mind that tells us that 
some things are intelligent.



The fact that the vast majority of AGI theory is pulled out of /dev/ass 
notwithstanding, your above characterization would appear to reflect 
your limitations which you have chosen to project onto the broader field 
of AGI research.  Just because most AI researchers are misguided fools 
and you do not fully understand all the relevant theory does not imply 
that this is a universal (even if it were).


Ad hominem.  Shameful.


This is not about mathematical proof, it is about having a credible, 
accepted framework that allows us to say that we have already come to 
an agreement that intelligence is X, and so, starting from that 
position we are able to do some engineering to build a system that 
satisfies the criteria inherent in X, so we can build an intelligence.



I do not need anyone's agreement to prove that system Y will have 
property X, nor do I have to accommodate pet theories to do so.  AGI is 
mathematics, not science.


AGI *is* mathematics?

Oh dear.

I'm sorry, but if you can make a statement such as this, and if you are 
already starting to reply to points of debate by resorting to ad 
hominems, then it would be a waste of my time to engage.


I will just note that if this point of view is at all widespread - if 
there really are large numbers of people who agree that "AGI is 
mathematics, not science" - then this is a perfect illustration of 
just why no progress is being made in the field.



Richard Loosemore


Plenty of people can agree on what X is and 
are satisfied with the rigor of whatever derivations were required.  
There are even multiple X out there depending on the criteria you are 
looking to satisfy -- the label of AI is immaterial.


What seems to have escaped you is that there is nothing about an 
agreement on X that prescribes a real-world engineering design.  We have 
many examples of tightly defined Xs in theory that took many decades of 
R&D to reduce to practice or which in some cases have never been reduced 
to real-world practice even though we can very strictly characterize 
them in the mathematical abstract.  There are many AI researchers who 
could be accurately described as having no rigorous framework or 
foundation for their implementation work, but conflating this group with 
those stuck solving the implementation theory problems of a 
well-specified X is a category error.


There are two unrelated difficult problems in AGI: choosing a rigorous X 
with satisfactory theoretical properties and designing a real-world 
system implementation that expresses X with satisfactory properties.  
There was a time when most credible AGI research was stuck working on 
the former, but today an argument could be made that most credible AGI 
research is stuck working on the latter.  I would question the 
credibility of opinions offered by people who cannot discern the 
difference.



And in case you are tempted to do what (e.g.) Russell and Norvig do in 
their textbook...



I'm not interested in lame classical AI, so this is essentially a 
strawman.  To the extent I am personally in a theory camp, I have been 
in the broader algorithmic information theory camp since before it was 
on anyone's radar.



It is not that these investors understand the abstract ideas I just 
described, it is that they have a gut feel for the rate of progress 
and the signs of progress and the type of talk that they should be 
encountering if AGI had mature science behind it.  Instead, what they 
get is a feeling from AGI researchers that each one is doing the 
following:


1)  Resorting to a bottom line that amounts to "I have a really good 
personal feeling that my project really will get there", and


2)  Examples of progress that look like an attempt to dress a doughnut 
up as a wedding cake.



Sure, but what does this have to do with the topic at hand?  The problem 
is that investors lack any ability to discern a doughnut from a wedding 
cake.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 4:42 PM, Derek Zahn [EMAIL PROTECTED] wrote:


  I would think an investor would want a believable specific answer to the
 following question:

  When and how will I get my money back?

  It can be uncertain (risk is part of the game), but you can't just wave
 your hands around on that point.

This is not the problem ... regarding Novamente, we have an extremely
specific business plan and details regarding how we would provide return
on investment.

The problem is that investors are generally pretty unwilling to eat  perceived
technology risk.  Exceptions arise all the time, and AGI has not yet been one.

It is an illusion that VC or angel investors are fond of risk ...
actually they are
quite risk-averse in nearly all cases...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 4:46 PM, Richard Loosemore wrote:

J. Andrew Rogers wrote:
The fact that the vast majority of AGI theory is pulled out of /dev/ass 
notwithstanding, your above characterization would appear to  
reflect your limitations which you have chosen to project onto the  
broader field of AGI research.  Just because most AI researchers  
are misguided fools and you do not fully understand all the  
relevant theory does not imply that this is a universal (even if it  
were).


Ad hominem.  Shameful.



Ad hominem?  Well, of sorts I suppose, but in this case it is the  
substance of the argument so it is a reasonable device.  I think I  
have met more AI cranks with hare-brained pet obsessions with respect  
to the topic or academics that are beating a horse that died thirty  
years ago than AI researchers that are actually keeping current with  
the subject matter.  Pointing out the embarrassing foolishness of the  
vast number of those that claim to be "AI researchers" and how it  
colors the credibility of the entire field is germane to the discussion.


As for you specifically, assertions like "Artificial Intelligence  
research does not have a credible science behind it" in the absence of  
substantive support (now or in the past) can only lead me to believe  
that you either are ignorant of relevant literature (possible) or you  
do not understand all the relevant literature and simply assume it is  
not important.   As far as I have ever been able to tell, theoretical  
psychology re-heats a very old idea while essentially ignoring or  
dismissing out of hand more recent literature that could provide  
considerable context when (re-)evaluating the notion.  This is a fine  
example of part of the problem we are talking about.




AGI *is* mathematics?



Yes, applied mathematics.  Is there some other kind of non-computational 
AI?  The mathematical nature of the problem does not  
disappear when you wrap it in fuzzy abstractions; it just gets, well,  
fuzzy.  At best the science can inform your mathematical model, but in  
this case the relevant mathematics is ahead of the science for most  
purposes and the relevant science is largely working out the specific  
badly implemented wetware mapping to said mathematics.



I'm sorry, but if you can make a statement such as this, and if you  
are already starting to reply to points of debate by resorting to ad  
hominems, then it would be a waste of my time to engage.



Probably a waste of my time as well if you think this is primarily a  
science problem in the absence of a discernible reason to characterize  
it as such.



I will just note that if this point of view is at all widespread -  
if there really are large numbers of people who agree that "AGI is  
mathematics, not science" - then this is a perfect illustration of  
just why no progress is being made in the field.



Assertions do not manufacture fact.

J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Funny dispute ... is AGI about mathematics or science?

I would guess there are some approaches to AGI that are only minimally
mathematical in their design concepts (though of course math could be
used to explain their behavior)

Then there are some approaches, like Novamente, that mix mathematics
with less rigorous ideas in an integrative design...

And then there are more purely mathematical approaches -- I haven't
seen any that are well enough fleshed out to constitute pragmatic AGI
designs... but I can't deny the possibility

I wonder why some people think there is one true path to AGI ... I
strongly suspect there are many...

-- Ben


On Sun, Apr 6, 2008 at 9:16 PM, J. Andrew Rogers
[EMAIL PROTECTED] wrote:

  On Apr 6, 2008, at 4:46 PM, Richard Loosemore wrote:


  J. Andrew Rogers wrote:
 
   The fact that the vast majority of AGI theory is pulled out of /dev/ass
 notwithstanding, your above characterization would appear to reflect your
 limitations which you have chosen to project onto the broader field of AGI
 research.  Just because most AI researchers are misguided fools and you do
 not fully understand all the relevant theory does not imply that this is a
 universal (even if it were).
  
 
  Ad hominem.  Shameful.
 


  Ad hominem?  Well, of sorts I suppose, but in this case it is the substance
 of the argument so it is a reasonable device.  I think I have met more AI
 cranks with hare-brained pet obsessions with respect to the topic or
 academics that are beating a horse that died thirty years ago than AI
 researchers that are actually keeping current with the subject matter.
 Pointing out the embarrassing foolishness of the vast number of those that
 claim to be AI researchers and how it colors the credibility of the entire
 field is germane to the discussion.

  As for you specifically, assertions like "Artificial Intelligence research
 does not have a credible science behind it" in the absence of substantive
 support (now or in the past) can only lead me to believe that you either are
 ignorant of relevant literature (possible) or you do not understand all the
 relevant literature and simply assume it is not important.   As far as I
 have ever been able to tell, theoretical psychology re-heats a very old idea
 while essentially ignoring or dismissing out of hand more recent literature
 that could provide considerable context when (re-)evaluating the notion.
 This is a fine example of part of the problem we are talking about.



  AGI *is* mathematics?
 


  Yes, applied mathematics.  Is there some other kind of non-computational
 AI?  The mathematical nature of the problem does not disappear when you wrap
 it in fuzzy abstractions it just gets, well, fuzzy.  At best the science can
 inform your mathematical model, but in this case the relevant mathematics is
 ahead of the science for most purposes and the relevant science is largely
 working out the specific badly implemented wetware mapping to said
 mathematics.




  I'm sorry, but if you can make a statement such as this, and if you are
 already starting to reply to points of debate by resorting to ad hominems,
 then it would be a waste of my time to engage.
 


  Probably a waste of my time as well if you think this is primarily a
 science problem in the absence of a discernible reason to characterize it as
 such.




  I will just note that if this point of view is at all widespread - if
 there really are large numbers of people who agree that "AGI is mathematics,
 not science" - then this is a perfect illustration of just why no progress
 is being made in the field.
 


  Assertions do not manufacture fact.


  J. Andrew Rogers





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-06 Thread Richard Loosemore

Ben Goertzel wrote:

Funny dispute ... is AGI about mathematics or science

I would guess there are some approaches to AGI that are only minimally
mathematical in their design concepts (though of course math could be
used to explain their behavior)

Then there are some approaches, like Novamente, that mix mathematics
with less rigorous ideas in an integrative design...

And then there are more purely mathematical approaches -- I haven't
seen any that are well enough fleshed out to constitute pragmatic AGI
designs... but I can't deny the possibility

I wonder why some people think there is one true path to AGI ... I
strongly suspect there are many...


Actually, the discussion had nothing to do with the rather bizarre 
interpretation you put on it above.




Richard Loosemore






Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 5:26 PM, Ben Goertzel wrote:
The problem is that investors are generally pretty unwilling to eat   
perceived
technology risk.  Exceptions arise all the time, and AGI has not yet  
been one.



There have been exceptions, just ill-advised ones.  :-)

But yes, most investors are actually looking for a Killer Demo(tm) or  
unimpeachable credibility, the latter not to be construed as referring  
to anyone with an academic AI background in this particular case.



Absent a Killer Demo, my observation is that people with  
unimpeachable credibility in this case and the genuine technical  
ability to plausibly produce results are essentially sets that very  
rarely intersect for these purposes.  No one on the investment side is  
really looking for an AI academic of any type per se when they  
consider investing in these kinds of things, but there are few others  
in the field (discounting cranks).  For better or worse, you need to  
be a J. Hawkins or similar.  Such is the world we live in.


Cheers,

J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-04-06 Thread J. Andrew Rogers


On Apr 6, 2008, at 6:55 PM, Ben Goertzel wrote:

I wonder why some people think there is one true path to AGI ... I
strongly suspect there are many...



Like I stated at the beginning, *most* models are at least  
theoretically valid.  Of course, tractable engineering of said models  
is another issue. :-)  Engineering tractability in the context of  
computer science and software engineering is almost purely an applied  
mathematics effort to the extent there is any theory to it, and  
science has a very limited capacity to inform it.


If someone could describe, specifically, how science is going to  
inform this process given the existing body of theoretical work, I  
would have no problem with the notion.  My objections were pragmatic.


Cheers,

J. Andrew Rogers





RE: [singularity] Vista/AGI

2008-03-17 Thread John G. Rose
The payoff on AGI justifies investment. The problem is that the probability
of success is in question. But spinoff technologies developed along the way
could have value.

 

I think, though, that particular proofs of concept may not need more than a
few people. Putting it all together would require more than a few. Then the
resources needed to make it interact with various systems in the world would
make the number of people needed grow exponentially.

 

John

 

From: Eric B. Ramsay [mailto:[EMAIL PROTECTED] 
Sent: Sunday, March 16, 2008 10:14 AM
To: singularity@v2.listbox.com
Subject: [singularity] Vista/AGI

 

It took Microsoft over 1000 engineers, $6 Billion and several years to make
Vista.  Will building an AGI be any less formidable? If the AGI effort is
comparable, how can the relatively small efforts of Ben (comparatively
speaking) and others possibly succeed? If the effort to build an AGI is not
comparable, why not? Perhaps a consortium (non-governmental) should be
created specifically for the building of an AGI. Ben talks about a Manhattan
style project. A consortium could pool all resources currently available
(people and hardware), actively seek private funds on a  continuing basis
and give coherence to the effort.

Eric B. Ramsay


 



Re: [singularity] Vista/AGI

2008-03-17 Thread Vladimir Nesov
On Mon, Mar 17, 2008 at 4:48 PM, John G. Rose [EMAIL PROTECTED] wrote:

 I think though that particular proof of concepts may not need more than a
 few people. Putting it all together would require more than a few. Then the
 resources needed to make it interact with various systems in the world would
 make the number of people needed grow exponentially.


Then what's the point? We have this problem with existing software
already, and it's precisely the magic bullet of AGI that should allow a
free lunch of automatic interfacing with real-world issues...

-- 
Vladimir Nesov
[EMAIL PROTECTED]



RE: [singularity] Vista/AGI

2008-03-17 Thread John G. Rose
 From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
 On Mon, Mar 17, 2008 at 4:48 PM, John G. Rose [EMAIL PROTECTED]
 wrote:
 
  I think though that particular proof of concepts may not need more
 than a
  few people. Putting it all together would require more than a few.
 Then the
  resources needed to make it interact with various systems in the world
 would
  make the number of people needed grow exponentially.
 
 
 Then what's the point? We have this problem with existing software
 already, and it's precisely the magic bullet of AGI that should allow
 free lunch of automatic interfacing with real-world issues...
 

The assumed value of AGI is blanket magic bullets. There'll be quite a bit
of automatic interfacing. There will be quite a bit of prevented and
controlled automatic interfacing. But in the beginning, think about it, it's
not instantaneous super-intelligence.

John



Re: [singularity] Vista/AGI

2008-03-16 Thread BillK
On Sun, Mar 16, 2008 at 4:14 PM, Eric B. Ramsay wrote:
 It took Microsoft over 1000 engineers, $6 Billion and several years to make
 Vista.  Will building an AGI be any less formidable? If the AGI effort is
 comparable, how can the relatively small efforts of Ben (comparatively
 speaking) and others possibly succeed? If the effort to build an AGI is not
 comparable, why not? Perhaps a consortium (non-governmental) should be
 created specifically for the building of an AGI. Ben talks about a Manhattan
 style project. A consortium could pool all resources currently available
 (people and hardware), actively seek private funds on a  continuing basis
 and give coherence to the effort.


Oohh!   Flamebait, yummy!

The building of Vista was a total shambles and produced a worse OS than XP.
The initial project was scrapped and bits that mostly worked were
flung together to get something out the door, so the mugs (sorry,
customers) could continue pouring money into Microsoft.
Don't use that as a basis for estimates of any kind.


BillK



Re: [singularity] Vista/AGI

2008-03-16 Thread Thomas McCabe
On 3/16/08, Eric B. Ramsay [EMAIL PROTECTED] wrote:

 It took Microsoft over 1000 engineers, $6 Billion and several years to
 make Vista.  Will building an AGI be any less formidable? If the AGI effort
 is comparable, how can the relatively small efforts of Ben (comparatively
 speaking) and others possibly succeed? If the effort to build an AGI is not
 comparable, why not? Perhaps a consortium (non-governmental) should be
 created specifically for the building of an AGI. Ben talks about a Manhattan
 style project. A consortium could pool all resources currently available
 (people and hardware), actively seek private funds on a  continuing basis
 and give coherence to the effort.

 Eric B. Ramsay




Big companies are really, really lousy at writing software, in terms of
useful software produced/resources expended. That's why startups can make so
much money, even when they start off as two guys in a garage.

-- 
- Tom
http://www.acceleratingfuture.com/tom



Re: [singularity] Vista/AGI

2008-03-16 Thread Thomas McCabe
On 3/16/08, Eric B. Ramsay [EMAIL PROTECTED] wrote:

 Two guys in a garage would never have built the bomb. The question is
 whether or not the two efforts are indeed comparable.

 Eric B. Ramsay


You're right that software engineering is more amenable to startups than
other kinds of work, but AGI *is* mostly software engineering (and math).

-- 
- Tom
http://www.acceleratingfuture.com/tom



Re: [singularity] Vista/AGI

2008-03-16 Thread Panu Horsmalahti
Just because it takes thousands of programmers to create something as
complex as Vista does *not* mean that thousands of programmers are required
to build an AGI, since one property of an AGI is (or can be) that it will
learn most of its complexity using the algorithms programmed into it.



Re: [singularity] Vista/AGI

2008-03-16 Thread J. Andrew Rogers


On Mar 16, 2008, at 9:14 AM, Eric B. Ramsay wrote:
It took Microsoft over 1000 engineers, $6 Billion and several years  
to make Vista.  Will building an AGI be any less formidable? If the  
AGI effort is comparable, how can the relatively small efforts of  
Ben (comparatively speaking) and others possibly succeed? If the  
effort to build an AGI is not comparable, why not?



Yeah, what kind of fool would believe something as complex and  
interesting as a tree could grow from an insignificant and  
unremarkable-looking seed? There is no evidence that AGI is a complex  
problem per se.


Few people would define the development task as hiring hundreds of  
engineers to do things like write device drivers and apps for  
defective Chinese silicon so that little Billy's stuffed purple  
dinosaur with a USB cable coming out its ass can dance along with  
Hannah Montana music videos being streamed from YouTube with built-in  
DRM as a heroic last ditch effort to contain the spread of that  
insipid music while your email-client-and-dishwashing-machine  
forwards your porn collection to everyone in your address book in the  
background because a Russian hacker^H^H^H^H^H^H programmer might find  
that funny^H^H^H^H^H useful.


All very necessary if you are building a Microsoft operating system  
product, but superfluous to the development of AGI or even operating  
systems generally.  A lot of functional operating systems have been  
developed by a single individual, and most have traditionally been  
written by small teams.


J. Andrew Rogers



Re: [singularity] Vista/AGI

2008-03-16 Thread Eric B. Ramsay
Lol. Calm down fella. You are going to give yourself a stroke.

Eric B. Ramsay

J. Andrew Rogers [EMAIL PROTECTED] wrote:
Few people would define the development task as hiring hundreds of  
engineers to do things like write device drivers and apps for  
defective Chinese silicon so that little Billy's stuffed purple  
dinosaur with a USB cable coming out its ass can dance along with  
Hannah Montana music videos being streamed from YouTube with built-in  
DRM as a heroic last ditch effort to contain the spread of that  
insipid music while your email-client-and-dishwashing-machine  
forwards your porn collection to everyone in your address book in the  
background because a Russian hacker^H^H^H^H^H^H programmer might find  
that funny^H^H^H^H^H use



Re: [singularity] Vista/AGI

2008-03-16 Thread Nathan Cravens
Hi Matt,

Great topic here.

Remember, the Manhattan Project didn't come about until everyone believed a
global catastrophe was afoot. That kind of mentality seems to help bring
people together to make amazing stuff, in that case explosive stuff. As
narrow AI and robotics become more ubiquitous, the pressure to form an AGI
Manhattan Project will increase. Simple technologies like narrow AI
(software) and robotics are weeding out labor, reforming the economic
playing field (however slowly) into a laborless society. The signs of this
are slight, but striking. It seems that only those in the
hypertechnology/Singularity field see where it's going economically, however
scantily. Some examples: major unemployment in management positions second
to industrial loss, the failure of the debt market, increased hoarding by
the rich, and price inflation that began to catapult in the mid-1970s,
progressing to this day.

Continued automation of service and expert systems fused with robotics will
break the old economic dinosaur sooner or later. Like AGI research,
heterodox economic research isn't profitable, and will remain so until the
glass underneath us thins and shatters. I see one of two likely pathways
approaching before Manhattan Project activity ensues: (1) a great economic
collapse, or (2) the formation of a new friendly opposition that acts to
even things out using big-stick political means. Either of these movements
will require capable AGI.

Microsoft could use a Human Waste Management department to go with the
infinitude of other departments it currently has, not to mention a Human
Waste Management department for the Human Waste Management department.
Perhaps that would be too costly?

It would be wise for the AGI collective to write an AGI Roadmap to present
to the public once working or theoretical architectures are firmly in place.
That would help promote AGI and potentially obviate the need for an AGI
Manhattan Project.

Nathan



Re: [singularity] Vista/AGI

2008-03-16 Thread Richard Loosemore

[EMAIL PROTECTED] wrote:
You have to be careful with the phrase 'Manhattan-style project'.  


You are right.

On previous occasions when this subject has come up I, at least, have 
referred to the idea as an Apollo Project, not a Manhattan Project.




Richard Loosemore





 That was a military project with military aims, and a 'benevolent' 
dictator management structure.  No input from researchers concerning things 
like applicability of the project output, delivery systems, timeframes, 
social issues, nothing.   Compartmentalization, not open overview, would 
be the general tenor.   Similarly, with a consortium, you have the 
necessary economic incentive struggles and tensions.   The only real chance 
would be the lone wolf, in my opinion, more like what you might call the 
Tesla model.


Not that I really think AGI is something possible or desirable.

~Robert S.
-- Original message from Eric B. Ramsay [EMAIL PROTECTED]: -- 


It took Microsoft over 1000 engineers, $6 Billion and several years to make 
Vista.  Will building an AGI be any less formidable? If the AGI effort is 
comparable, how can the relatively small efforts of Ben (comparatively 
speaking) and others possibly succeed? If the effort to build an AGI is not 
comparable, why not? Perhaps a consortium (non-governmental) should be created 
specifically for the building of an AGI. Ben talks about a Manhattan style 
project. A consortium could pool all resources currently available (people and 
hardware), actively seek private funds on a  continuing basis and give 
coherence to the effort.

Eric B. Ramsay




