Re: [singularity] Vista/AGI

2008-04-13 Thread Samantha Atkins

Ben Goertzel wrote:

Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.

My view is a little different.

If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundreds of programmers to implement it.  The most I
can imagine throwing at it would be a couple dozen, and I think 10-20 is
the right number.

So if the Novamente design is basically correct, it would take a team of
10-20 programmers a period of 3-10 years to get to human-level AGI.
  
So you are talking on the order of $9M - $30M.  

Sadly, we do not have 10-20 dedicated programmers working on Novamente
(or associated OpenCog) AGI right now, but rather fractions of various peoples'
time (as Novamente LLC is working mainly on various commercial projects
that pay our salaries).  So my point is not to make a projection regarding our
progress (that depends too much on funding levels), just to address this issue
of ideal team size that has come up yet again...

  
You know, I am getting pretty tired of hearing this poor-mouth crap.   
This is not that huge a sum to raise or get financed.  Hell, there are 
some very futuristic rich geeks who could finance this single-handed and 
would not really care that much whether they could somehow monetize the 
result.   I don't believe for a minute that there is no way to do 
this.  So exactly why are you singing this sad song year after year?



Even if my timing estimates are optimistic and it were to take 15 years, even
so, a team of thousands isn't gonna help things any.

If I had a billion dollars and the passion to use it to advance AGI, I would
throw amounts between $1M and $50M at various specific projects; I
wouldn't try to make one monolithic project.

  
From what you said above, $50M will do the entire job.   If that is all 
that is standing between us and AGI then surely we can get on with it in 
all haste.   If it is a great deal more than this relatively small 
amount of money then let's move on to talk about that instead of whining 
about lack of coin.


- samantha

---
singularity
Archives: http://www.listbox.com/member/archive/11983/=now
RSS Feed: http://www.listbox.com/member/archive/rss/11983/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=4007604id_secret=98631122-712fa4
Powered by Listbox: http://www.listbox.com


Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner
Samantha: From what you said above $50M will do the entire job.   If that is all
that is standing between us and AGI then surely we can get on with it in
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it is 
also a framework to end discussions for the moment.


1)  Given our general ignorance, everyone is, strictly, entitled to their 
opinions about the future of AGI. Ben is entitled to his view that it will 
only take $50M or thereabouts.


BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing different? And until you've solved one, you can hardly make 
*reasonable* predictions about how long it will take to solve the rest - 
predictions that anyone, including yourself should take seriously- 
especially if you've got any sense, any awareness of AI's long, ridiculous 
and incorrigible record of crazy predictions here (and that's by Minsky's & 
Simon's as well as lesser lights) - by people also making predictions 
without having solved any of AGI's problems. All investors beware. Massive 
health & wealth warnings.


MEANWHILE

3) Others - and I'm not the only one here - take a view more like: the human 
brain/body is the most awesomely complex machine in the known universe, the 
product of billions of years of evolution.  To emulate it, or parallel its 
powers, is going to take not just trillions but zillions of 
dollars - many times global output, many, many Microsofts. Now right now 
that's a reasonable POV too.


But until you've solved one, just a measly one of AGI's problems, there's 
not a lot of point in further discussion, is there? Nobody's really gaining 
from it, are they? It's just masturbation, isn't it? 





Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Mike Tintner wrote:
Samantha: From what you said above $50M will do the entire job.   If that 
is all

that is standing between us and AGI then surely we can get on with it in
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it 
is also a framework to end discussions for the moment.


1)  Given our general ignorance, everyone is, strictly, entitled to 
their opinions about the future of AGI. Ben is entitled to his view that 
it will only take $50M or thereabouts.


BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing different? And until you've solved one, you can hardly 
make *reasonable* predictions about how long it will take to solve the 
rest - predictions that anyone, including yourself should take 
seriously- especially if you've got any sense, any awareness of AI's 
long, ridiculous and incorrigible record of crazy predictions here, (and 
that's by Minsky's & Simon's as well as lesser lights) - by people also 
making predictions without having solved any of AGI's problems. All 
investors beware. Massive health & wealth warnings.


MEANWHILE

3) Others - and I'm not the only one here - take a view more like: the 
human brain/body is the most awesomely complex machine in the known 
universe, the product of billions of years of evolution.  To emulate it, 
or parallel its powers, is going to take not just 
trillions but zillions of dollars - many times global output, many, 
many Microsofts. Now right now that's a reasonable POV too.


But until you've solved one, just a measly one of AGI's problems, 
there's not a lot of point in further discussion, is there? Nobody's 
really gaining from it, are they? It's just masturbation, isn't it?


Mike,

Your comments are irresponsible.  Many problems of AGI have been solved. 
 If you disagree with that, specify exactly what you mean by a "problem 
of AGI", and let us list them.  I have discovered the complex systems 
problem: this is a major breakthrough.  You cannot understand it, or 
why it is a major breakthrough, but that makes no odds.


Everything you say in this post is based on your own ignorance of what 
AGI actually is.  What you are really saying is "Nobody has been able to 
make me understand what AGI has achieved, so AGI is useless."


Sorry, but your posts are sounding more and more like incoherent rants.



Richard Loosemore



Re: [singularity] Vista/AGI

2008-04-13 Thread Eric B. Ramsay
Mike: 
   
  I am a novice to this AGI business and so I am not being cute with the 
following question: What, in your opinion, would be the first AGI problem to 
tackle? Perhaps these various problems can't be priority ordered but 
nonetheless, which problem stands out for you? Thanks.
   
  Eric B. Ramsay

Mike Tintner [EMAIL PROTECTED] wrote:
Samantha: From what you said above $50M will do the entire job. If that is 
all
that is standing between us and AGI then surely we can get on with it in
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest the 
following is a reasonable *framework* for any discussions - although it is 
also a framework to end discussions for the moment.

1) Given our general ignorance, everyone is, strictly, entitled to their 
opinions about the future of AGI. Ben is entitled to his view that it will 
only take $50M or thereabouts.

BUT

2) Not a SINGLE problem of AGI has been solved yet. Not a damn one. Is 
anyone arguing different? And until you've solved one, you can hardly make 
*reasonable* predictions about how long it will take to solve the rest - 
predictions that anyone, including yourself should take seriously- 
especially if you've got any sense, any awareness of AI's long, ridiculous 
and incorrigible record of crazy predictions here (and that's by Minsky's & 
Simon's as well as lesser lights) - by people also making predictions 
without having solved any of AGI's problems. All investors beware. Massive 
health & wealth warnings.

MEANWHILE

3) Others - and I'm not the only one here - take a view more like: the human 
brain/body is the most awesomely complex machine in the known universe, the 
product of billions of years of evolution. To emulate it, or parallel its 
powers, is going to take not just trillions but zillions of 
dollars - many times global output, many, many Microsofts. Now right now 
that's a reasonable POV too.

But until you've solved one, just a measly one of AGI's problems, there's 
not a lot of point in further discussion, is there? Nobody's really gaining 
from it, are they? It's just masturbation, isn't it? 




Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner



Mike,

Your comments are irresponsible.  Many problems of AGI have been solved. 
If you disagree with that, specify exactly what you mean by a problem of 
AGI, and let us list them.


1. General Problem Solving and Learning (independently learning/solving 
problems in a new domain)

2. Conceptualisation [Invariant Representation] - forming a concept of Madonna 
which can embrace a rich variety of different faces/photos of her

3. Visual Object Recognition

4. Aural Object Recognition [dunno the proper term here - being able to 
recognize the same melody played in any form]

5. Analogy

6. Metaphor

7. Creativity

8. Narrative Visualisation - being able to imagine and create a visual 
scenario (a movie)   [just made this problem up - but it's a good one]
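The invariance demanded by item 4 can be made concrete with a toy sketch (my own illustration, not from the thread): a melody transposed to another key is "the same" melody, and comparing interval sequences rather than absolute pitches captures one narrow slice of that invariance.

```python
# Toy illustration (not from the thread) of transposition invariance:
# a melody shifted to another key keeps its sequence of semitone steps,
# so intervals form a transposition-invariant signature.
def intervals(pitches):
    """Successive semitone steps between notes (MIDI pitch numbers)."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

ode_to_joy_c = [64, 64, 65, 67, 67, 65, 64, 62]   # opening notes, C major
ode_to_joy_d = [p + 2 for p in ode_to_joy_c]      # transposed up a whole tone

print(intervals(ode_to_joy_c) == intervals(ode_to_joy_d))  # prints True
```

Of course this only handles exact transposition; the hard part of the problem is recognizing the melody under tempo changes, ornamentation, different instruments, and noise, which no such simple signature covers.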


[By all means let's identify some more unsolved problems BTW..]

I think Ben  I more or less agreed that if he had really solved 1) - if his 
pet could really independently learn to play hide-and-seek after having been 
taught to fetch, it would constitute a major breakthrough, worthy of 
announcement to the world. And you can be sure it would be provoking a great 
deal of discussion.


As for your discoveries, fine, have all the self-confidence you want, but 
they have had neither public recognition nor, as I understand, publication 
or identification. Nor do you have a working machine. And if you're going to 
claim anyone in AI, like Hofstadter, has solved 5 or 6... puh-lease.


I don't think any reasonable person in AI or AGI will claim any of these 
have been solved. They may want to claim their method has promise, but not 
that it has actually solved any of them.


Which of the above, or any problem of AGI, period, do you claim to have been 
solved?





Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Samantha,

  You know, I am getting pretty tired of hearing this poor mouth crap.   This
 is not that huge a sum to raise or get financed.  Hell, there are some very
 futuristic rich geeks who could finance this single-handed and would not
 really care that much whether they could somehow monetize the result.   I
 don't believe for a minute that there is no way to do this.So exactly
 why are you singing this sad song year after year?
...
  From what you said above $50M will do the entire job.   If that is all that
 is standing between us and AGI then surely we can get on with it in all
 haste.   If it is a great deal more than this relatively small amount of
 money then lets move on to talk about that instead of whining about lack of
 coin.


This is what I thought in 2001, and what Bruce Klein thought when he started
working with me in 2005.

In brief, what we thought is something like:


OK, so  ...

On the one hand, we have an AGI design that seems to its sane PhD-scientist
creator to have serious potential of leading to human-level AGI.  We have
a team of professional AI scientists and software engineers who are
a) knowledgeable about it, b) eager to work on it, c) in agreement that
it has a strong chance of leading to human-level AGI, although with
varying opinions on whether the timeline is, say, 7, 10, 15 or 20 years.
Furthermore, the individuals involved are at least thoughtful about issues
of AGI ethics and the social implications of their work.   Carefully-detailed
arguments as to why it is believed the AGI design will work exist, but,
these are complex, and furthermore do not comprise any sort of irrefutable
proof.

On the other hand, we have a number of wealthy transhumanists who would
love to see a beneficial human-level AGI come about, and who could
donate or invest some $$ to this cause without serious risk to their own
financial stability should the AGI effort fail.

Not only that, but there are a couple related factors

a) early non-AGI versions of some of the components of said AGI design
are already being used to help make biological discoveries relevant
to life extension (as documented in refereed publications)

b) very clear plans exist, including discussions with many specific potential
customers, regarding how to make $$ from incremental products along the
way to the human-level AGI, if this is the pathway desired


So, we talked to a load of wealthy futurists and the upshot is that it's really
really hard to get these folks to believe you have a chance at achieving
human-level AGI.  These guys don't have the background to spend 6 months
carefully studying the technical documentation, so they make a gut decision,
which is always (so far) that "gee, you're a really smart guy, and your team
is great, and you're doing cool stuff, but the technology just isn't there yet."

Novamente has gotten small (but much valued)
investments from some visionary folks, and SIAI has
had the vision to hire 1.6 folks to work on OpenCog, which is an
open-source sister project of the Novamente Cognition Engine project.

I could speculate about the reasons behind this situation, but the reason is NOT
that I suck at raising money ... I have been involved in fundraising for
commercial software projects before and have been successful at it.

I believe that 10-15 years from now, one will be able to approach the exact
same people with the same sort of project, and get greeted with enthusiasm
rather than friendly dismissal.  Going against prevailing culture is really
hard, even if you're dealing with people who **think** they're seeing beyond
the typical preconceptions of their culture.  Slowly, though, the idea that
AGI is possible and feasible is wending its way into the collective mind.

I stress, though, that if one had some kind of convincing, compelling **proof**
of being on the correct path to AGI, it would likely be possible to raise $$
for one's project.  This proof could be in several possible forms, e.g.

a) a mathematical proof, which was accepted by a substantial majority
of AI academics

b) a working software program that demonstrated human-child-like
functionality

c) a working robot that demonstrated full dog-like functionality

Also, if one had good enough personal connections with the right sort
of wealthy folks, one could raise the $$ -- based on their personal trust
in you rather than their trust in your ideas.

Or of course, being rich and funding your work yourself is always an
option (cf Jeff Hawkins)

This gets back to a milder version of an issue Richard Loosemore is
always raising: the complex systems problem.  My approach to AGI
is complex systems based, which means that the components are NOT
going to demonstrate any general intelligence -- the GI is intended
to come about as a holistic, whole-system phenomenon.  But not in any
kind of mysterious way: we have a detailed, specific theory of why
this will occur, in terms of the particular interactions between the
components.

But what 

Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
  I don't think any reasonable person in AI or AGI will claim any of these
 have been solved. They may want to claim their method has promise, but not
 that it has actually solved any of them.

Yes -- it is true, we have not created a human-level AGI yet.  No serious
researcher disagrees.  So why is it worth repeating the point?

Similarly, up till the moment when the first astronauts walked on the moon,
you could have run around yelping that "no one has solved the problem of
how to make a person walk on the moon, all they've done is propose methods
that seem to have promise."

It's true -- theories and ideas can always be wrong, and empirical proof adds
a whole new level of understanding.  (Though, empirical proofs don't exist
in a theoretical vacuum, they do require theoretical interpretation.
For instance
physicists don't agree on which supposed top quark events really were
top quarks ... and some nuts still don't believe people walked on the moon,
just as even after human-level AGI is achieved some nuts still won't believe
it...)

Nevertheless, with something as complex as AGI you gotta build stuff based
on a theory.  And not everyone is going to believe the theory until the proof
is there.  And so it goes...

-- Ben G



RE: [singularity] Vista/AGI

2008-04-13 Thread Derek Zahn
Ben Goertzel: Yes -- it is true, we have not created a human-level AGI yet. No 
serious researcher disagrees. So why is it worth repeating the point?
Long ago I put Tintner in my killfile -- he's the only one there, and it's 
regrettable but it was either that or start taking blood pressure medicine... 
so *plonk*.  It's not necessarily that I disagree with most of his (usually 
rather obvious) points or think his own ideas (about image schemas or whatever) 
are worse than other stuff floating around, but his toxic personality makes the 
benefit not worth the cost.  Now I only have to suffer the collateral damage in 
responses.
 
However, I went to the archives to fetch this message.   I do think it would be 
nice to have tests or problems that one could point to as "partial 
progress"... but it's really hard.  Any such things have to be fairly rigorously 
specified (otherwise we'll argue all day about whether they are solved or not 
-- see Tintner's "Creativity" problem as an obvious example), and they need to 
not be AGI-complete themselves, which is really hard.  For example, Tintner's 
"Narrative Visualization" task strikes me as needing all the machinery and a very 
large knowledge base, so by the time a system could do a decent job of this in a 
general context it would already have demonstrably solved the whole thing.
 
The other common criticism of tests is that they can often be solved by 
Narrow-AI means (say, current face recognizers which are often better at this 
task than humans).  I don't necessarily think this is a disqualification 
though... if the solution is provided in the context of a particular 
architecture with a plausible argument for how the system could have produced 
the specifics itself, that seems like some sort of progress.
 
I sometimes wonder if a decent measurement of AGI progress might be to measure 
the ease with which the system can be adapted by its builders to solve narrow 
AI problems -- sort of a "cognitive enhancement" measurement.  Such an approach 
makes a decent programming language and development environment a tangible 
early step toward AGI, but maybe that's not all bad.
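Derek's "cognitive enhancement measurement" could be sketched roughly as follows; all names and the choice of effort metric (lines changed per narrow task) are my own illustrative assumptions, not part of his proposal:

```python
# Hedged sketch of a "cognitive enhancement measurement": score a system
# by how cheaply its builders can retarget it to narrow-AI tasks.
# The effort metric (lines changed) is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Adaptation:
    task: str            # narrow-AI task, e.g. "face recognition"
    lines_changed: int   # code the builders had to add or modify
    accuracy: float      # resulting task performance in [0, 1]

def enhancement_score(adaptations):
    """Mean accuracy per 100 lines of adaptation effort (higher = easier)."""
    if not adaptations:
        return 0.0
    return sum(a.accuracy / max(a.lines_changed, 1) * 100
               for a in adaptations) / len(adaptations)

runs = [Adaptation("face recognition", 200, 0.9),
        Adaptation("melody matching", 50, 0.8)]
print(enhancement_score(runs))
```

The point of the sketch is only that the metric rewards generality: a system needing little builder effort per new narrow task scores higher than one that must be substantially rebuilt each time.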
 
At any rate, if there were some clearly-specified tests that are not 
AGI-complete and yet not easily attackable with straightforward software 
engineering or Narrow AI techniques, that would be a huge boost in my opinion 
to this field.  I can't think of any though, and they might not exist.  If it 
is in fact impossible to find such tasks, what does that say about AGI as an 
endeavor?
 



Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Mike Tintner wrote:



Mike,

Your comments are irresponsible.  Many problems of AGI have been 
solved. If you disagree with that, specify exactly what you mean by a 
problem of AGI, and let us list them.


1. General Problem Solving and Learning (independently learning/solving 
problems in a new domain)


2.Conceptualisation [Invariant Representation] -  forming concept of 
Madonna which can embrace rich variety of different faces/photos of her


3.Visual Object Recognition

4.Aural Object Recognition [dunno proper term here - being able to 
recognize same melody played in any form]


5.Analogy

6.Metaphor

7.Creativity

8. Narrative Visualisation - being able to imagine and create a visual 
scenario (a movie)   [just made this problem up - but it's a good one]


In your ignorance, you named a set of targets, not a set of problems. 
 If you want to see these fully functioning, you will see them in the 
last year of a 10-year AGI project, but if we listened to you, the 
first nine years of that project would be condemned as a complete waste 
of time.


If, on the other hand, you want to see an *in* *principle* solution (an 
outline of how these can all be implemented), then these in principle 
solutions are all in existence.  It is just that you do not know them, 
and when we go to the trouble of pointing them out to you (or explaining 
them to you), you do not understand them for what they are.




[By all means let's identify some more unsolved problems BTW..]

I think Ben  I more or less agreed that if he had really solved 1) - if 
his pet could really independently learn to play hide-and-seek after 
having been taught to fetch, it would constitute a major breakthrough, 
worthy of announcement to the world. And you can be sure it would be 
provoking a great deal of discussion.


As for your discoveries, fine, have all the self-confidence you want, 
but they have had neither public recognition nor, as I understand, 
publication 


Okay, stop right there.

This is a perfect example of the nonsense you utter on this list:  you 
know that I have published a paper on the complex systems problem 
because you told me recently that you have read the paper.


But even though you have read this published paper, all you can do when 
faced with the real achievement that it contains is to say that (a) you 
don't understand it, and (b) this published paper that you have already 
read has not been published!


Are there no depths to which you will not stoop?



Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-13 Thread Richard Loosemore

Derek Zahn wrote:

Ben Goertzel:

  Yes -- it is true, we have not created a human-level AGI yet. No serious
  researcher disagrees. So why is it worth repeating the point?

Long ago I put Tintner in my killfile -- he's the only one there, and 
it's regrettable but it was either that or start taking blood pressure 
medicine... so *plonk*.  It's not necessarily that I disagree with most 
of his (usually rather obvious) points or think his own ideas (about 
image schemas or whatever) are worse than other stuff floating around, 
but his toxic personality makes the benefit not worth the cost.  Now I 
only have to suffer the collateral damage in responses.


Yes, he was in my killfile as well for a long time, then I decided to 
give him a second chance.  Now I am regretting it, so back he goes ... 
*plonk*.


Mike:  the only reason I am now ignoring you is that you persistently 
refuse to educate yourself about the topics discussed on this list, and 
instead you just spout your amateur opinions as if they were fact.  Your 
inability to distinguish real science from your amateur opinion is why, 
finally, I have had enough.


I apologize to the list for engaging him.  I should have just ignored 
his ravings.




However, I went to the archives to fetch this message.   I do think it 
would be nice to have tests or problems that one could point to as 
partial progress... but it's really hard.  Any such things have to be 
fairly rigorously specified (otherwise we'll argue all day about whether 
they are solved or not -- see Tintner's Creativity problem as an 
obvious example), and they need to not be AGI complete themselves, 
which is really hard.  For example, Tintner's Narrative Visualization 
task strikes me as needing all the machinery and a very large knowledge 
base so by the time a system could do a decent job of this in a general 
context it would already have demonstrably solved the whole thing.


It looks like you, Ben and I have now all said exactly the same thing, 
so we have a strong consensus on this.



The other common criticism of tests is that they can often be solved 
by Narrow-AI means (say, current face recognizers which are often better 
at this task than humans).  I don't necessarily think this is a 
disqualification though... if the solution is provided in the context of 
a particular architecture with a plausible argument for how the system 
could have produced the specifics itself, that seems like some sort of 
progress.
 
I sometimes wonder if a decent measurement of AGI progress might be to 
measure the ease with which the system can be adapted by its builders to 
solve narrow AI problems -- sort of a cognitive enhancement 
measurement.  Such an approach makes a decent programming language and 
development environment be a tangible early step toward AGI but maybe 
that's not all bad.
 
At any rate, if there were some clearly-specified tests that are not 
AGI-complete and yet not easily attackable with straightforward software 
engineering or Narrow AI techniques, that would be a huge boost in my 
opinion to this field.  I can't think of any though, and they might not 
exist.  If it is in fact impossible to find such tasks, what does that 
say about AGI as an endeavor?


My own feeling about this is that when a set of ideas start to gel into 
one coherent approach to the subject, with a description of those ideas 
being assembled as a book-length manuscript, and when you read those 
ideas and they *feel* like progress, you will know that substantial 
progress is happening.


Until then, the only people who might get an advance feeling that such 
a work is on the way are the people on the front lines, who see all the 
pieces coming together just before they are assembled for public 
consumption.


Whether or not someone could write down tests of progress ahead of that 
point, I do not know.





Richard Loosemore





Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner
Ben: So why is it worth repeating the point? Similarly, up till the moment 
when the first astronauts walked on the moon, you could have run around 
yelping that no one has solved the problem of how to make a person walk on 
the moon, all they've done is propose methods that seem to have promise.

I repeated the details because I was challenged. (And unlike Richard, I do 
answer challenges). The original point -  a valid one, I think - is until 
you've solved one AGI problem, you can't make any reasonable prediction as 
to WHEN the rest will be solved and how much it will cost in resources. And 
it's not worth much discussion.


AGI is different from moonwalking - that WAS successfully predicted by JFK 
because they did indeed have technology reasonably likely to bring it about.


I would compare AGI predictions with predicting when we will have a 
mind-reading machine, (except that personally, I think AGI is much harder). 
Yes, you can have a bit of interesting discussion about that to begin with, 
but then the subject, i.e. making predictions,  exhausts itself, because 
there are too many unknowns. Ditto here. No? 





Re: [singularity] Vista/AGI

2008-04-13 Thread Samantha Atkins

Mike Tintner wrote:
Samantha: From what you said above $50M will do the entire job.   If 
that is all

that is standing between us and AGI then surely we can get on with it in
all haste.

Oh for gawdsake, this is such a tedious discussion. I would suggest 
the following is a reasonable *framework* for any discussions - 
although it is also a framework to end discussions for the moment.
Sigh.  I *was* somewhat tedious, in that I am getting more impatient year 
by year, especially as I see more friends succumb to maladies that AGI or 
even more IA could solve, and watch the world spin closer to chaos. 
However, I do not believe you have any business proposing any framework 
or definitive solution, as you do not have the knowledge or chops to do so.



- samantha



Re: [singularity] Vista/AGI

2008-04-13 Thread Jean-paul Van Belle
Hi Mike

Your 1 consists of two separate challenges: (1) reasoning & (2) learning.
IMHO your 3 to 6 can be classified under (3) pattern recognition. I think 
perhaps even your 2 may flow out of pattern recognition.
Of course, the real challenge is to find an algorithmic way (or architecture) 
to do the above without bumping into exponential explosion, i.e. move the problem 
out of the NP-complete arena. (Else an AGI will never exceed human intelligence 
by a real margin.)
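The exponential explosion Jean-Paul mentions can be illustrated with a toy count (my own example, not from the thread): a brute-force learner that entertains every binary assignment over n features faces 2**n candidates, which is exactly the enumeration a workable architecture must avoid.

```python
# Toy illustration of combinatorial explosion in naive concept search:
# over n binary features there are 2**n feature assignments, so exhaustive
# enumeration doubles in cost with every feature added.
from itertools import product

def count_assignments(n_features):
    """Count all binary assignments over n features by brute enumeration."""
    return sum(1 for _ in product([0, 1], repeat=n_features))

for n in (4, 8, 16):
    print(n, count_assignments(n))  # grows as 2**n
```

Moving "out of the NP-complete arena", as he puts it, means replacing this kind of enumeration with structure (heuristics, factored representations) that prunes the space rather than walking it.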

=Jean-Paul

 Mike Tintner [EMAIL PROTECTED] wrote:
 Your comments are irresponsible.  Many problems of AGI have been solved. 
 If you disagree with that, specify exactly what you mean by a problem of 
 AGI, and let us list them.
1. General Problem Solving and Learning (independently learning/solving 
problems in a new domain)
 
2. Conceptualisation [Invariant Representation] - forming a concept of Madonna 
which can embrace a rich variety of different faces/photos of her
 
3. Visual Object Recognition
 
4. Aural Object Recognition [dunno proper term here - being able to 
recognize the same melody played in any form]
 
5. Analogy
 
6. Metaphor
 
7. Creativity
 
8. Narrative Visualisation - being able to imagine and create a visual 
scenario (a movie)   [just made this problem up - but it's a good one]




Re: [singularity] Vista/AGI

2008-04-13 Thread Mike Tintner

Jean-Paul,

More or less yes to your points. (I was only tossing off something quickly.) 
Actually I think there's a common core to 2)-7) and will be setting out 
something about that soon. But I don't think it's recognizing patterns - on 
the contrary, the common problem is partly that there ISN'T a pattern to be 
recognized. If you have to understand the metaphor "the dancing towers", 
there's no common pattern between human dancers and the skyscrapers referred 
to.


I also think that while there's a common core, each problem has its own 
complications. Maybe Hawkins is right that all the senses process inputs in 
basically the same hierarchical fashion - and any mechanical AGI's senses 
will have to do the same - but if you think about it, the senses evolved 
gradually, so there must be different reasons for that.


(And I would add another unsolved (and unrecognized) problem for AGI:

9) Common Sense Processing - being able to process an event in multiple 
sensory modalities, and switch between them to solve problems - for example, 
to be able to touch an object blindfolded, and then draw its outlines 
visually.)






Testing AGI (was RE: [singularity] Vista/AGI)

2008-04-13 Thread Matt Mahoney
--- Derek Zahn [EMAIL PROTECTED] wrote:

 At any rate, if there were some clearly-specified tests that are not
 AGI-complete and yet not easily attackable with straightforward software
 engineering or Narrow AI techniques, that would be a huge boost in my
 opinion to this field.  I can't think of any though, and they might not
 exist.  If it is in fact impossible to find such tasks, what does that say
 about AGI as an endeavor?

Text compression is one such test, as I argue in
http://cs.fit.edu/~mmahoney/compression/rationale.html

The test is only for language modeling.  Theoretically it could be extended to
vision or audio processing.  For example, to maximally compress video the
compressor must understand the physics of the scene (e.g. objects fall down),
which can be arbitrarily complex (e.g. a video of people engaging in
conversation about Newton's law of gravity).  Likewise, maximally compressing
music is equivalent to generating or recognizing music that people like.  The
problem is that the information content of video and audio is dominated by
incompressible noise that is nontrivial to remove -- noise being any part of
the signal that people fail to perceive.  Deciding which parts of the signal
are noise is itself AI-hard, so it requires a lossy compression test with
human judges making subjective decisions about quality.  This is not a big
problem for text because the noise level (different ways of expressing the
same meaning) is small, or at least does not overwhelm the signal.  Long term
memory has an information rate of a few bits per second, so any signal you
compress should not be many orders of magnitude higher.
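Matt's point that incompressible noise swamps the signal is easy to demonstrate with a lossless compressor (a rough sketch using Python's zlib; the two corpora below are stand-ins I made up, not his test data): redundant, structured text shrinks dramatically, while pseudo-random "noise" barely compresses at all.

```python
import random
import zlib

def ratio(data: bytes) -> float:
    """Compressed size divided by original size under zlib (level 9)."""
    return len(zlib.compress(data, 9)) / len(data)

# A structured "signal": highly redundant English-like text.
signal = b"objects fall down when you drop them. " * 500

# Incompressible "noise": pseudo-random bytes (seeded for repeatability).
rng = random.Random(0)
noise = bytes(rng.randrange(256) for _ in range(len(signal)))

print(f"signal ratio: {ratio(signal):.3f}")   # far below 1.0
print(f"noise  ratio: {ratio(noise):.3f}")    # close to 1.0
```

For video or audio, where the noise component dominates, a lossless ratio like this says almost nothing about how well the model understands the content -- hence the need for lossy tests with human judges.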

A problem with text compression is the lack of adequate hardware.  There is a
3 way tradeoff between compression ratio, memory, and speed.  The top
compressor in http://cs.fit.edu/~mmahoney/compression/text.html uses 4.6 GB of
memory.  Many of the best algorithms could be drastically improved if only
they ran on a supercomputer with 100 GB or more.  The result is that most
compression gains come from speed and memory optimization rather than using
more intelligent models.  The best compressors use crude models of semantics
and grammar.  They preprocess the text by token substitution from a dictionary
that groups words by topic and grammatical role, then predict the token stream
using mixtures of fixed-offset context models.  It is roughly equivalent to
the ungrounded language model of a 2 or 3 year old child at best.
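The flavor of compressor Matt describes can be caricatured in a few lines (a toy adaptive order-2 character model with add-one smoothing -- my own simplification, not any real PAQ-class compressor or its dictionary preprocessing): the better the model predicts the next character from its context, the fewer bits an ideal arithmetic coder would emit.

```python
import math
from collections import defaultdict

def ideal_code_length(text: str, order: int = 2) -> float:
    """Bits an ideal arithmetic coder would emit for `text` under an
    adaptive order-n character model with add-one smoothing."""
    counts = defaultdict(lambda: defaultdict(int))  # context -> char -> count
    totals = defaultdict(int)                       # context -> total count
    alphabet = 256
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]             # last `order` characters
        p = (counts[ctx][ch] + 1) / (totals[ctx] + alphabet)
        bits += -math.log2(p)                       # ideal code length for ch
        counts[ctx][ch] += 1                        # adapt the model as we go
        totals[ctx] += 1
    return bits

text = "the cat sat on the mat. " * 40
print(ideal_code_length(text) / len(text), "bits/char")
```

Redundant text codes at only a few bits per character once the model adapts; a stronger (more intelligent) model would push that figure lower still, which is the premise of compression as an AI test.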

An alternative would be to reduce the size of the test set to reduce
computational requirements, as the Hutter prize did. http://prize.hutter1.net/
I did not because I believe the proper way to test an adult level language
model is to train it on the same amount of language that an average adult is
exposed to, about 1 GB.  I would be surprised if a 100 MB test progressed past
the level of a 3 year old child.  I believe the data set is too small to train
a model to learn arithmetic, logic, or high level reasoning.  Including these
capabilities would not improve compression.

Tests on small data sets could be used to gauge early progress.  But
ultimately, I think you are going to need hardware that supports AGI to test
it.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Vista/AGI

2008-04-13 Thread bfwible

Ben,


Good Afternoon.  I am a rather new addition to the AGI mailing list and 
just read your response concerning the future of AGI.  I agree with you. 
The funding is there.  The belief that AGI is right around the corner is 
not.  The people I talk with have read Kurzweil and understand the rate of 
growth of technology (the curve).  They also understand that the exponential 
growth in Kurzweil's graphs represents processing power, and that this 
dynamic will substantively increase as nanotechnology moves from MEMS to a 
smaller and smaller (atomic, possibly) operating environment.




What is difficult for people/investors to gauge is AI/AGI.  Businesses 
and/or government organizations (not including DARPA) need a strategic plan 
for large investments into future technologies. They understand risk but 
weigh it against current requirements and long term gain.  There are 
people/organizations ready to invest if a strong rational analysis on the 
timeline is developed and presented in language that they understand.  The 
latter comment is key.  Senior leaders (business, government and just very 
wealthy investors) are acutely aware of the hype cycle that occurs with all 
new technologies.  I have found that overselling is much worse than 
underselling.




In my previous position I served as a Deputy Chief of a Trends and 
Forecasting Center for the government.  My charter was to provide strategic 
assessments to corporate leadership for investment purposes.  Those 
investments could include people, funding or priorities of effort.  So, I am 
well versed in the interface between developers, customers, senior leaders 
and financial backers.




Just my personal opinion...but it appears that the exponential technology 
growth chart, which is used in many of the briefings, does not include 
AI/AGI. It is processing centric.  When you include AI/AGI the exponential 
technology curve flattens out in the coming years (5-7) and becomes part of 
a normal S curve of development.  While computer power and processing will 
increase exponentially (as nanotechnology grows) the area of AI will need 
more time to develop.


I would be interested in your thoughts.



Regards,

Ben



I am moving to a new position this summer and will be a visiting professor 
in academia for two years.














Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Hi,

  Just my personal opinion...but it appears that the exponential technology
 growth chart, which is used in many of the briefings, does not include
 AI/AGI. It is processing centric.  When you include AI/AGI the exponential
 technology curve flattens out in the coming years (5-7) and becomes part of
 a normal S curve of development.  While computer power and processing will
 increase exponentially (as nanotechnology grows) the area of AI will need
 more time to develop.

  I would be interested in your thoughts.

I think this is because progress toward general AI has been difficult
to quantify
in the past, and looks to remain difficult to quantify into the future...

I am uncertain as to the extent to which this problem can be worked around,
though.

Let me introduce an analogy problem:

Understanding the operation of the brain better and better is to
scanning the brain with higher and higher spatiotemporal accuracy,
as creating more and more powerful AGI is to what?

;-)

The point is that understanding the brain is also a nebulous and
hard-to-quantify goal, but we make charts for it by treating brain
scan accuracy as a more easily quantifiable proxy variable.  What's a
comparable proxy variable for AGI?

Suggestions welcome!

-- Ben




[singularity] A more accessible summary of the CSP

2008-04-13 Thread Richard Loosemore


Since I am making an effort to get a good chunk of stuff written this 
week and next, I want to let y'all know when I put out new stuff...


I have written a short, accessible summary of the CSP argument on my 
blog, as a preparation for the next phase tomorrow.


Hopefully this one will not be as demanding as the last (a few hundred 
words instead of 4,200).





Richard Loosemore
