Re[4]: [agi] Do we need massive computational capabilities?

2007-12-11 Thread Dennis Gorelik
Matt,

 You can feed it with text. Then AGI would simply parse text [and
 optionally - Google it].
 
 No need for massive computational capabilities.

 Not when you can just use Google's 10^6 CPU cluster and its database with 10^9
 human contributors.

That's one of my points: our current civilization gives an AGI researcher
the ability to build an AGI prototype on a single PC, using the existing
civilization's achievements.

A human being cannot be intelligent without a surrounding society anyway.
We would all lose our minds in less than 10 years if we were totally
separated from other intelligent systems.

Intelligence simply cannot function fully independently.


Bottom line: when building AGI, we should focus on building a member
of our current civilization, not a fully independent intelligent
system.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=75096267-b51b43


RE: [agi] Do we need massive computational capabilities?

2007-12-09 Thread Gary Miller
The leading software packages in high-speed facial recognition are based upon
feature extraction.

If the face is analyzed into, let's say, 30 features, then 30 processes
could analyze the photo for these features in parallel.

After that the 30 features are just looked up in a relational database
against all the other features from the thousands of other images it has
predigested.
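The pipeline described above — independent extraction of ~30 features in parallel, then a lookup against thousands of pre-digested images — can be sketched roughly as follows. The feature "detector" and the nearest-neighbour search standing in for the relational database are illustrative stand-ins, not a real recognition algorithm:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

NUM_FEATURES = 30  # the assumed feature count from the post

def extract_feature(args):
    photo, index = args
    # Placeholder for a real detector (eye spacing, nose width, ...);
    # here we just derive a deterministic number per feature index.
    return float((np.mean(photo) * (index + 1)) % 256)

def extract_all(photo):
    # The 30 features are independent, so they can be computed in parallel.
    with ThreadPoolExecutor(max_workers=NUM_FEATURES) as pool:
        return np.array(list(pool.map(extract_feature,
                                      [(photo, i) for i in range(NUM_FEATURES)])))

def match(query, database):
    # Nearest neighbour under Euclidean distance stands in for the
    # relational lookup against all pre-digested feature vectors.
    dists = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(dists))
```

The same structure applies to the fingerprint case mentioned below; only `extract_feature` changes.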

This is almost exactly the same methodology used to match fingerprints today
by law enforcement agencies but of course the feature extraction is a lot
more complicated.

The government is pouring large amounts of money into this research for
usage in terrorist identification at airports or other locations.

Searching on "High Speed Facial Recognition" yields several companies
already competing in this marketspace.


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 3:26 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do we need massive computational capabilities?



Matt,

First of all, we are, I take it, discussing how the brain or a computer can
recognize an individual face from a video -  obviously the brain cannot
match a face to a selection of a  billion other faces.

Hawkins' answer to your point that the brain runs masses of neurons in
parallel in order to accomplish facial recognition is:

"If I have many millions of neurons working together, isn't that like a
parallel computer? Not really. Brains operate in parallel and parallel
computers operate in parallel, but that's the only thing they have in
common."

His basic point, as I understand, is that no matter how many levels of brain
are working on this problem of facial recognition, they are each still only
going to be able to perform about ONE HUNDRED steps each in that half
second.  Let's assume there are levels for recognising the invariant
identity of this face, different features, colours, shape, motion  etc -
each of those levels is still going to have to reach its conclusions
EXTREMELY rapidly in a very few steps.

And all this, as I said, I would have thought all you guys should be able to
calculate within a very rough ballpark figure. Neurons only transmit signals
at relatively slow speeds, right? Roughly five million times slower than
computers. There must be a definite limit to how many neurons can be
activated and how many operations they can perform to deal with a facial
recognition problem, from the time the light hits the retina to a half
second later? This is the sort of thing you all love to calculate and is
really important - but where are you when one really needs you?
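The ballpark figure being asked for can be roughed out in a few lines; the ~5 ms per synaptic hop and the 10^9 instructions/sec CPU rate are assumed round numbers, not measurements:

```python
# Assumed round numbers: ~5 ms per neuron-to-neuron hop, 0.5 s to recognize.
synaptic_delay_s = 0.005
recognition_time_s = 0.5

max_chain_length = recognition_time_s / synaptic_delay_s
print(max_chain_length)   # ~100 -- Hawkins' "one hundred steps"

# A 2007-era CPU running ~10^9 serial instructions/sec gets, in that window:
cpu_serial_steps = 1e9 * recognition_time_s
print(cpu_serial_steps)   # ~5e8 serial steps -- a very different budget
```

The point the calculation makes is about serial depth, not total work: the brain's budget of ~100 sequential steps is fixed regardless of how many neurons fire at each step.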

Hawkins' point as to how the brain can decide in a hundred steps what takes
a computer a million or billion steps (usually without much success) is:

The answer is the brain doesn't 'compute' the answers; it retrieves the
answers from memory. In essence, the answers were stored in memory a long
time ago. It only takes a few steps to retrieve something from memory. Slow
neurons are not only fast enough to do this, but they constitute the memory
themselves. The entire cortex is a memory system. It isn't a computer at
all. [On Intelligence, chapter on Memory]
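Hawkins' compute-versus-retrieve distinction can be illustrated with a toy contrast (purely illustrative; nothing here models cortex):

```python
# "Computing" an answer: derive it from scratch on every query.
def compute_square(n):
    total = 0
    for _ in range(n):   # O(n) work per query
        total += n
    return total

# "Retrieving" an answer: it was stored in memory long ago.
memory = {n: n * n for n in range(10_000)}  # built once, like learned cortex

def recall_square(n):
    return memory[n]     # a few steps per query, regardless of n
```

Both return the same answer; only the second does it in a near-constant number of steps, which is the shape of Hawkins' claim about cortical recognition.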

I was v. crudely arguing something like this in a discussion with Richard
about massive parallel computation.  If Hawkins is right, and I think he's
at least warm, you guys have surely got it all wrong (although you might
still argue, like Ben, that you can do it your way, not the brain's - but
hell, the difference in efficiency is so vast it surely ought to break your
engineering heart).


Matt/ MT:
 Thanks. And I repeat my question elsewhere : you don't think that the 
 human brain which does this in say half a second, (right?), is using 
 massive computation to recognize that face?

So if I give you a video clip then you can match the person in the video to
the correct photo out of 10^9 choices on the Internet in 0.5 seconds, and
this will all run on your PC?  Let me know when your program is finished so
I can try it out.
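A rough feasibility check of this challenge, with all constants assumed (per-comparison cost, sustained PC throughput):

```python
# All constants are rough assumptions, not measurements.
candidates = 1e9             # photos on the Internet to match against
ops_per_comparison = 1e4     # compare two small feature vectors
pc_ops_per_second = 1e9      # ~1 GFLOP/s sustained on a 2007 PC

seconds = candidates * ops_per_comparison / pc_ops_per_second
hours = seconds / 3600
print(seconds, hours)        # 1e4 s, ~2.8 h -- nowhere near 0.5 s
```

Even with an optimistic 10^4 operations per comparison, brute force against 10^9 candidates is hours, not half a second, on a single machine.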

 You guys with all your mathematical calculations re the brain's total 
 neurons and speed of processing surely should be able to put ball-park 
 figures on the maximum amount of processing that the brain can do here.

 Hawkins argues:

 neurons are slow, so in that half a second, the information entering 
 your brain can only traverse a chain ONE HUNDRED neurons long. ..the 
 brain 'computes' solutions to problems like this in one hundred steps 
 or fewer, regardless of how many total neurons might be involved. From 
 the moment light enters your eye to the time you [recognize the 
 image], a chain no longer than one hundred neurons could be involved. 
 A digital computer attempting to solve the same problem would take 
 BILLIONS of steps. One hundred computer instructions are barely enough 
 to move a single character on the computer's display, let alone do
something interesting.

Which is why the human brain is so bad at arithmetic and other tasks

Re[4]: [agi] Do we need massive computational capabilities?

2007-12-08 Thread Dennis Gorelik
Mike,

What you describe is a set of AGI nodes.
An AGI prototype is just one such node.
An AGI researcher doesn't have to develop the whole set at once. It's quite
sufficient to develop only one AGI node. Such a node will be able to
run on a single PC.


 I believe Matt's proposal is not as much about the exposure to
 memory or sheer computational horsepower - it's about access to
 learning experience. 

 Matt's proposed network enables IO to us [existing examples of
 intelligence/teachers].  Maybe these nodes can ask questions: "What
 does my owner know of A?" - the answer becomes part of its local
 KB.  Hundreds of distributed agents are now able to query Matt's
 node about A (clearly Matt does not have time to answer 500 queries
 on topic A)





Re: Re[4]: [agi] Do we need massive computational capabilities?

2007-12-08 Thread Mike Dougherty
On Dec 8, 2007 5:33 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:

 What you describe is a set of AGI nodes.
 An AGI prototype is just one such node.
 An AGI researcher doesn't have to develop the whole set at once. It's quite
 sufficient to develop only one AGI node. Such a node will be able to
 run on a single PC.


Then I'd like to quantify terminology.  What is the sum of N AGI nodes
where N > 1?  Is that a community of discrete AGIs, or a single multi-nodal
entity?

I don't imagine that a single node is initially much more than a narrow-AI
data miner.  The twist that separates this from any commercially available
OLAP cube processor is the infrastructure for acquiring new information from
distributed nodes.  In this sense, I imagine that the internode
communications and transaction record contain the 'complexity' (from
another recent thread) that allows interesting behaviors to emerge - if not
AGI, then at least a novelty worth pursuing.

If the node that was a PC on the internet is a CPU in a supercomputer (or a
PC in a Beowulf cluster) is it more or less a part of the whole?
Semantically I'm not sure you can say "this node is an AGI" any more than
you can say "this neuron contains the intelligence".

I do agree with you that any intelligence that is capable of
asking/answering a question can be considered a 'node' in distributed AGI.
But this high level of agreement makes many assumptions about shared
definitions of important terms.  I would like to investigate those
definitions without the typical bickering about who is right or wrong
because (imo) there are only different perspectives.  The first team to
produce AGI will not thereby have disproven all the other strategies.


Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Jean-Paul Van Belle
Sounds like the worst-case scenario: computations that need between, say, 20 and
100 PCs. Too big to run on a very souped-up server (4-way quad processor with
128GB RAM), but scaling up to a 100-PC Beowulf cluster typically means a factor-of-10
slow-down due to communications (unless it's a
local-data/computation-intensive algorithm), so you actually haven't gained much
in the process. {Except your AGI is now ready for a distributed computing
environment, which luckily I believe Novamente was explicitly designed for.}
:)
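The factor-of-10 estimate is consistent with a simple Amdahl-style model in which a fraction of each step is spent waiting on the network; the 10% figure below is an assumption chosen to illustrate that regime, not a measured value:

```python
# Toy scaling model: a cluster only pays off when the per-step
# communication overhead is small. Constants are illustrative assumptions.
def cluster_speedup(n_nodes, comm_fraction):
    """Amdahl-style estimate: comm_fraction of each step is spent on
    communication and does not parallelise across nodes."""
    return n_nodes / (1 + comm_fraction * (n_nodes - 1))

print(cluster_speedup(100, 0.0))   # ideal: 100x
print(cluster_speedup(100, 0.1))   # ~9.2x -- roughly the "factor 10
                                   # slow-down" relative to ideal scaling
```

A local-data/computation-intensive algorithm corresponds to `comm_fraction` near zero, which is why it escapes the penalty.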
 
=Jean-Paul
 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21

 Benjamin Goertzel [EMAIL PROTECTED] 2007/12/07 15:06 
I don't think we need more than hundreds of PCs to deal with these things,
but we need more than a current PC, according to the behavior of our
current algorithms.


Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 7:09 AM, Mike Tintner [EMAIL PROTECTED] wrote:


  Matt,:AGI research needs
  special hardware with massive computational capabilities.
 

 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - in
 fact, I would argue, essential - to ground these discussions.

Problems that would likely go beyond the capability of a current PC to solve
in a realistic amount of time, in the
current NM architecture, would include for instance:

-- Learning a new type of linguistic relationship (in the context of
link grammar, this would mean e.g. learning a new grammatical link type)

-- Learning a new truth value formula for a probabilistic inference rule

-- Recognizing objects in a complex, rapidly-changing visual scene

(Not that we have written the code to let the system solve these particular
problems yet ... but the architecture should allow it...)

I don't think we need more than hundreds of PCs to deal with these things,
but we need more than a current PC, according to the behavior of our
current algorithms.

-- Ben G



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Bob Mottram
If I had 100 of the highest specification PCs on my desktop today (and
it would be a big desk!) linked via a high speed network this wouldn't
help me all that much.  Provided that I had the right knowledge I
think I could produce a proof of concept type AGI on a single PC
today, even if it ran like a tortoise.  It's the knowledge which is
mainly lacking I think.

Although I do a lot of stuff with computer vision I find myself not
being all that restricted by computational limitations.  This
certainly wasn't the case a few years ago.  Generally even the lowest
end hardware these days has enough compute power to do some pretty
sophisticated stuff, especially if you include the GPU.



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
On Dec 7, 2007 10:21 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 If I had 100 of the highest specification PCs on my desktop today (and
 it would be a big desk!) linked via a high speed network this wouldn't
 help me all that much.  Provided that I had the right knowledge I
 think I could produce a proof of concept type AGI on a single PC
 today, even if it ran like a tortoise.  It's the knowledge which is
 mainly lacking I think.

I agree that at the moment hardware is NOT the bottleneck.

This is why, while we've instrumented the Novamente system to
be straightforwardly extensible to a distributed implementation, we
haven't done much actual distributed processing implementation yet.

We have built commercial systems incorporating the NCE in simple
distributed architectures, but haven't gone in the distributed-AGI direction
yet in practice -- because, as you say, it seems likely that the key
AGI problems can be
worked out on a single machine, and you can then scale up afterwards.

-- Ben



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner

Matt: AGI research needs
special hardware with massive computational capabilities.

Could you give an example or two of the kind of problems that your AGI 
system(s) will need such massive capabilities to solve? It's so good - in 
fact, I would argue, essential - to ground these discussions. 





Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner
Thanks. And I repeat my question elsewhere : you don't think that the human 
brain which does this in say half a second, (right?), is using massive 
computation to recognize that face?


You guys with all your mathematical calculations re the brain's total 
neurons and speed of processing surely should be able to put ball-park 
figures on the maximum amount of processing that the brain can do here.


Hawkins argues:

neurons are slow, so in that half a second, the information entering your 
brain can only traverse a chain ONE HUNDRED neurons long. ..the brain 
'computes' solutions to problems like this in one hundred steps or fewer, 
regardless of how many total neurons might be involved. From the moment 
light enters your eye to the time you [recognize the image], a chain no 
longer than one hundred neurons could be involved. A digital computer 
attempting to solve the same problem would take BILLIONS of steps. One 
hundred computer instructions are barely enough to move a single character 
on the computer's display, let alone do something interesting.


IOW, if that's true, the massive computational approach is surely 
RIDICULOUS - a grotesque travesty of engineering principles of economy, no? 
Like using an entire superindustry of people to make a single nut? And, of 
course, it still doesn't work. Because you just don't understand how 
perception works in the first place.


Oh right... so let's make our computational capabilities even more massive, 
right?  Really, really massive. No, no, even bigger than that?




 Matt,:AGI research needs
 special hardware with massive computational capabilities.


Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good - in
fact, I would argue, essential - to ground these discussions.


For example, I ask the computer "who is this?" and attach a video clip from
my security camera.






Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney

--- Mike Tintner [EMAIL PROTECTED] wrote:

 Thanks. And I repeat my question elsewhere : you don't think that the human 
 brain which does this in say half a second, (right?), is using massive 
 computation to recognize that face?

So if I give you a video clip then you can match the person in the video to
the correct photo out of 10^9 choices on the Internet in 0.5 seconds, and this
will all run on your PC?  Let me know when your program is finished so I can
try it out.

 You guys with all your mathematical calculations re the brain's total 
 neurons and speed of processing surely should be able to put ball-park 
 figures on the maximum amount of processing that the brain can do here.
 
 Hawkins argues:
 
 neurons are slow, so in that half a second, the information entering your 
 brain can only traverse a chain ONE HUNDRED neurons long. ..the brain 
 'computes' solutions to problems like this in one hundred steps or fewer, 
 regardless of how many total neurons might be involved. From the moment 
 light enters your eye to the time you [recognize the image], a chain no 
 longer than one hundred neurons could be involved. A digital computer 
 attempting to solve the same problem would take BILLIONS of steps. One 
 hundred computer instructions are barely enough to move a single character 
 on the computer's display, let alone do something interesting.

Which is why the human brain is so bad at arithmetic and other tasks that
require long chains of sequential steps.  But somehow it can match a face to a
name in 0.5 seconds.  Neurons run in PARALLEL.  Your PC does not.  Your brain
performs 10^11 weighted sums of 10^15 values in 0.1 seconds.  Your PC will
not.
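Matt's throughput figures imply, taking one multiply-add per synapse:

```python
# Matt's figures: 10^11 weighted sums over 10^15 values in 0.1 s.
synapses = 1e15       # ~10^11 neurons x ~10^4 synapses each
window_s = 0.1

ops_per_second = synapses / window_s   # one multiply-add per synapse
print(ops_per_second)  # ~1e16 -- ten petaops, vs ~1e9 for a 2007 PC
```

That seven-orders-of-magnitude gap is the core of the parallelism argument: the brain's serial chain is short, but its aggregate throughput dwarfs a single PC's.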


 
 IOW, if that's true, the massive computational approach is surely 
 RIDICULOUS - a grotesque travesty of engineering principles of economy, no? 
 Like using an entire superindustry of people to make a single nut? And, of 
 course, it still doesn't work. Because you just don't understand how 
 perception works in the first place.
 
 Oh right... so let's make our computational capabilities even more massive, 
 right?  Really, really massive. No, no, even bigger than that?
 
 
   Matt,:AGI research needs
   special hardware with massive computational capabilities.
  
 
  Could you give an example or two of the kind of problems that your AGI
  system(s) will need such massive capabilities to solve? It's so good - in
  fact, I would argue, essential - to ground these discussions.
 
 For example, I ask the computer "who is this?" and attach a video clip from
 my security camera.
 


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Dougherty
On Dec 7, 2007 7:41 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:

  No, my proposal requires lots of regular PCs with regular network
 connections.

 A properly connected set of regular PCs would usually have way more
 power than a regular PC.
 That makes your hardware request special.
 My point is: AGI can successfully run on a single regular PC.
 Special hardware would be required later, when you try to scale
 out a working AGI prototype.


I believe Matt's proposal is not as much about the exposure to memory or
sheer computational horsepower - it's about access to learning experience.
A supercomputer atop an ivory tower (or in the deepest government
sub-basement) has an immense memory and speed (and dense mesh of
interconnects, etc., etc.) - but without interaction from outside itself,
it's really just a powerful navel-gazer.

Trees do not first grow a thick trunk and deep roots, then change to growing
leaves to capture sunlight.  As I see it, each node in Matt's proposed
network enables IO to us [existing examples of intelligence/teachers].
Maybe these nodes can ask questions: "What does my owner know of A?" - the
answer becomes part of its local KB.  Hundreds of distributed agents are now
able to query Matt's node about A (clearly Matt does not have time to answer
500 queries on topic A).  During the course of processing the local KB on
topic A, there is a reference to topic B.  Matt's node automatically queries
every node that previously asked about topic A (seeking first the likely
authority on the inference) - my node asks me, "What do you know of B?  Is
A->B?"  I contribute to my node's local KB, and it weights the inference for
A->B.  This answer is returned to Matt's node (among potentially hundreds of
other relative weights) and Matt's node strengthens the A->B inference based
on the received responses.  At this point, the weights for A->B
are spread all over the network, depending on the local KB of each node and
the historical traffic of query/answer flow.   After some time, I ask my node
about topic C.  It knows nothing of topic C, so it asks me directly to
deposit information to the local KB (initial context) - through the course
of 'conversation' with other nodes, my answer comes back as the aggregate of
the P2P knowledge within a query radius.  On a simple question I may only
allow 1 hour of think time, for a deeper research project that radius of
query may be allowed to extend 2 weeks of interconnect.  During my research,
my node will necessarily become interested in topic C - and will likely
become known among the network as the local expert.  (local expert for a
topic would be a useful designation to weigh each node for primary query
targets as well as 'trusting' the weight of the answers from each node)

I don't think this is vastly different from how people (as working examples
of intelligence nodes) gather knowledge from peers.
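The query flow described above can be condensed into a toy node class; the message format, the cache-and-become-expert rule, and the `ttl` standing in for the "query radius" are invented for illustration:

```python
class Node:
    """Toy AGI node: a local KB plus a record of who asked about what."""
    def __init__(self, name):
        self.name = name
        self.kb = {}             # topic -> answer
        self.interested = {}     # topic -> set of nodes that queried us
        self.peers = []

    def ask(self, topic, ttl=2):
        # Answer locally if possible; otherwise query peers within a
        # "query radius" (ttl), like the 1-hour vs 2-week think time.
        if topic in self.kb:
            return self.kb[topic]
        if ttl == 0:
            return None
        for peer in self.peers:
            # Record interest, so later answers about this topic can be
            # pushed back to prior askers, as in the A->B example.
            peer.interested.setdefault(topic, set()).add(self)
            answer = peer.ask(topic, ttl - 1)
            if answer is not None:
                self.kb[topic] = answer  # cache: node becomes a "local expert"
                return answer
        return None
```

After one successful query, the asking node holds the answer locally, which is the mechanism by which a node "will likely become known among the network as the local expert."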

Perhaps this approach to intelligence is not an "absolute definition" as
much as a "best effort / most useful answer to date" intention.  Even if this
schema does not extend to emergent AGI, it builds a useful infrastructure
that can be utilized by currently existing intelligences as well as whatever
AGI does eventually come into existence.

Matt, is this coherent with your view or am I off base?


RE: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Ed Porter
Mike Tintner # Yes, I understood that (though sure, I'm capable of
misunderstanding anything here!) 

ED PORTER # Great, I am glad you understood this.  Part of what you
said indicated you did.  BTW, we are all capable of misunderstanding things.

 

Mike Tintner # Hawkins' basic point that the brain "isn't a computer
at all" - which I think can be read less controversially as "is a machine
that works on very fundamentally different principles from those of currently
programmed computers" - especially when perceiving objects - holds.

 

You're not dealing with that basic point, and I find it incredibly difficult
to get anyone here squarely to face it. People retreat into numbers and
millions.

 

ED PORTER # I think most of us understand that and are not disputing
it.  A Novamente-like approach to AGI is actually quite similar to Hawkins'
in many ways.  For example, it uses hierarchical representation. So few of us
are talking about Old-Fashioned AI as the major architecture for our systems
(although OFAI has its uses in certain areas).

 

Mike Tintner # P.S. You also don't answer my question re: how many
neurons  in total *can* be activated within a half second, or given period,
to work on a given problem - given their relative slowness of communication?
Is it indeed possible for hundreds of millions of messages about that one
subject to be passed among millions of neurons in that short space
(dunno-just asking)? Or did you pluck that figure out of the air?

 

ED PORTER # I was not aware I had been asked this question.  

 

If you are asking where I got the
it-probably-takes-hundreds-of-millions-of-steps-to-recognize-a-face figure, I
was sort of picking it out of the air, but I don't think it is an unreasonable
pick.  I was counting each synaptic transmission as a step.  Assume the
average neuron has roughly 1K active synapses (some people say several
thousand, some say only about 100), and let's say an active cell fires at
least ten times during the 100-step process; since you assume 100 levels
of activation, that would only be assuming an average of 100 neurons
activated on average at each of your 100 levels, which is not a terribly
broad search.  If a face were focused on, so that it took up just the size
of your thumbnail, with your thumb sticking up and your arm extended fully in
front of you, it would activate a portion of your foveated retina having a
resolution of roughly 10K pixels (if I recollect correctly from a
conversation with Tomaso Poggio).  Presumably this would include 3 color
inputs and a BW input, with magnocellular and parvocellular inputs from each
eye, so you may well be talking about 100K neurons activated at just the V1
level.  If each has 1K synapses firing 10 times, that's 10K x 100K, on the
order of a billion synaptic firings right there, in just one of your 100
steps.  Now some of that activation might be filtered out by the thalamus,
but then you would have to include all its activations used for such
filtering, which according to Stephen Grossberg involves multi-level
activations in the cortico-thalamic feedback loop, and probably would require
roughly at least 100M synaptic activations.  And when you recognize a face
you normally are seeing it substantially larger than your thumbnail at its
furthest extension from your face.  If you saw it as large as the length of
your entire thumb, rather than just your thumbnail, it would project onto
about 10 times as many neurons in your thalamus and V1.  So, yes, I was
guessing, but I think hundreds of millions of steps, a.k.a. synaptic
activations, was a pretty safe guess.
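Ed's count can be checked directly using his own stated assumptions:

```python
# Reproducing Ed Porter's back-of-envelope count (his assumptions):
v1_neurons = 1e5           # ~100K neurons active at just the V1 level
synapses_per_neuron = 1e3  # ~1K active synapses per neuron
firings = 10               # each active cell fires ~10 times

synaptic_events = v1_neurons * synapses_per_neuron * firings
print(synaptic_events)     # 1e9 -- on the order of a billion firings
                           # in just one of the 100 "steps"
```

So the "hundreds of millions of steps" figure is, if anything, conservative under these assumptions.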

 

Ed Porter

 

 

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 5:08 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Do we need massive computational capabilities?

 

ED PORTER # When you say It only takes a few steps to retrieve
something from memory. I hope you realize that depending how you count
steps, it actually probably takes hundreds of millions of steps or more.  It
is just that millions of them are performed in parallel, such that the
longest sequence of any one causal path among such steps is no longer than
100 steps.  That is a very, repeat very, different thing than suggesting
that only 100 separate actions were taken.

 

Ed,

 

Yes, I understood that (though sure, I'm capable of misunderstanding
anything here!) But let's try and make it simple and as concrete as possible
- another way of putting Hawkins' point, as I understand,  is that at any
given level, if the brain is recognising a given feature of the face, it can
only compare it with very few comparable features in that half second with
its 100 operations  - whereas a computer will compare that same feature with
vast numbers of others.

 

And actually ditto, for that useful Hofstadter example you quoted, of
proceeding from aabc: aabd  to jjkl: ???  (although this is a somewhat more
complex operation which may take a couple of seconds

RE: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Ed Porter
Bob,

I agree.  I think we should be able to make PC-based AGIs.  With only about
50 million atoms they really wouldn't be able to have much world knowledge,
but they should be able to understand, say, the world of a simple video game,
such as Pong or Pac-Man.

As Richard Loosemore and I have just discussed in our last several emails on
the "Evidence complexity can be controlled by guiding hands" thread, to
achieve powerful AGIs we will need very large complex systems, and we need
to start experimenting with how to control the complexity of such larger
systems.

So building AGIs on a PC is a good start, which will hopefully start
happening after OpenCog comes out, but we also need to start building and
exploring larger systems.  It is my very rough guess that human-level
AGI will need within several orders of magnitude of 10 TBytes of RAM (or
approximately-as-fast memory), 10T random RAM accesses/sec, and a global
cross-sectional bandwidth of 100G 64-byte messages/sec.  So you won't have
that on your desktop any time soon.

But in twenty years you might.
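For scale, the gap between these estimates and a 2007 desktop can be put in orders of magnitude; the desktop figures below are assumptions, not benchmarks:

```python
import math

# Ed's human-level AGI estimates:
needed_ram_bytes = 10e12   # 10 TBytes of RAM
needed_accesses = 10e12    # 10T random RAM accesses/sec
# Assumed 2007 desktop figures:
pc_ram_bytes = 4e9         # 4 GB of RAM
pc_accesses = 1e7          # ~10M random DRAM accesses/sec

ram_gap = math.log10(needed_ram_bytes / pc_ram_bytes)
access_gap = math.log10(needed_accesses / pc_accesses)
print(ram_gap, access_gap)  # ~3.4 and ~6 orders of magnitude
```

Memory capacity is the smaller gap; random-access rate is the one that makes "in twenty years" plausible only with architectural change, not just density scaling.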

We should be exploring ever bigger machines between a PC-based AGI and
human-level AGI to learn more and more about the problem of
scaling up large systems.

Ed Porter

-Original Message-
From: Bob Mottram [mailto:[EMAIL PROTECTED] 
Sent: Friday, December 07, 2007 10:21 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Do we need massive computational capabilities?

If I had 100 of the highest specification PCs on my desktop today (and
it would be a big desk!) linked via a high speed network this wouldn't
help me all that much.  Provided that I had the right knowledge I
think I could produce a proof of concept type AGI on a single PC
today, even if it ran like a tortoise.  It's the knowledge which is
mainly lacking I think.

Although I do a lot of stuff with computer vision I find myself not
being all that restricted by computational limitations.  This
certainly wasn't the case a few years ago.  Generally even the lowest
end hardware these days has enough compute power to do some pretty
sophisticated stuff, especially if you include the GPU.



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney

--- Dennis Gorelik [EMAIL PROTECTED] wrote:

 Matt,
 
  For example, I disagree with Matt's claim that AGI research needs
  special hardware with massive computational capabilities.
 
  I don't claim you need special hardware.
 
 But you claim that you need massive computational capabilities
 [considerably above capabilities of regular modern PC], right?
 That means special.

No, my proposal requires lots of regular PCs with regular network connections.
 It is a purely software approach.  But more hardware is always better. 
http://www.mattmahoney.net/agi.html


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Benjamin Goertzel
 Clearly the brain works VASTLY differently and more efficiently than current
 computers - are you seriously disputing that?

It is very clear that in many respects the brain is much less efficient than
current digital computers and software.

It is more energy-efficient by and large, as Read Montague has argued ...
but OTOH sometimes it is way less algorithmically efficient.

For instance, in spite of its generally high energy efficiency, my brain wastes
a lot more energy calculating 969695775755/ 8884 than my computer
does.
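For concreteness, Ben's example division is a single built-in operation for the computer (the variable names below are just illustrative):

```python
# The exact division Ben mentions: one divmod call, done in well under a
# microsecond, versus laborious long division for a human brain.
quotient, remainder = divmod(969695775755, 8884)
print(quotient, remainder)  # 109150807 6367
```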

And e.g. visual cortex, while energy-efficient, is horribly algorithmically
inefficient, involving e.g. masses of highly erroneous motion-sensing neurons
whose results are averaged together to give reasonably accurate values.
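That averaging trick is just the standard error of the mean at work; a minimal sketch, with every number a hypothetical stand-in:

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

TRUE_MOTION = 3.7   # hypothetical "true" motion value
NOISE_SD = 2.0      # each individual "neuron" is very noisy

def noisy_reading():
    # One motion-sensing neuron: the true value plus large Gaussian error.
    return random.gauss(TRUE_MOTION, NOISE_SD)

# Averaging 10,000 erroneous sensors shrinks the expected error by roughly
# sqrt(10000) = 100x, giving a reasonably accurate value.
averaged = statistics.fmean(noisy_reading() for _ in range(10_000))
print(round(averaged, 2))
```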

-- Ben



Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Dennis Gorelik
Matt,

 No, my proposal requires lots of regular PCs with regular network connections.

A properly connected set of regular PCs would usually have far more
power than a single regular PC.
That makes your hardware request special.
My point is: AGI can successfully run on a single regular PC.
Special hardware would be required later, when you try to scale
out a working AGI prototype.

  It is a purely software approach.  But more hardware is always better.

Not always.
More hardware costs money and requires more maintenance.

 http://www.mattmahoney.net/agi.html





Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Bryan Bishop
On Friday 07 December 2007, Mike Tintner wrote:
 P.S. You also don't answer my question re: how many neurons  in total
 *can* be activated within a half second, or given period, to work on
 a given problem - given their relative slowness of communication? Is
 it indeed possible for hundreds of millions of messages about that
 one subject to be passed among millions of neurons in that short
 space (dunno-just asking)? Or did you pluck that figure out of the
 air?

I suppose that the number of neurons working on a problem at any given 
moment will have to expand exponentially, based on the number of synaptic 
connections per neuron as well as the number of hits/misses per receiving 
neuron, viewed as an expanding light-cone-like sphere in the brain (it is, 
of course, a cone/sphere of neural activity, not light). I am sure this 
rate can be made into a model. 
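The rate Bryan describes can indeed be sketched as a model; all parameters below are hypothetical round numbers, not measurements:

```python
# Toy model of the expanding "activity cone": at each synaptic step every
# active neuron signals FANOUT others, of which a HIT_RATE fraction fire.
# Growth is exponential (x10 per step here) until the pool saturates.
TOTAL_NEURONS = 100_000_000_000   # ~1e11 neurons in a human brain
FANOUT = 1_000                    # neurons signalled per active neuron
HIT_RATE = 0.01                   # fraction of signalled neurons that fire

def active_after(steps, start=1):
    active = start
    for _ in range(steps):
        active = min(TOTAL_NEURONS, int(active * FANOUT * HIT_RATE))
    return active

print(active_after(3))    # 1000
print(active_after(12))   # saturated: all 1e11 neurons
```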

- Bryan


Re[2]: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Dennis Gorelik
Matt,

  Matt,:AGI research needs
  special hardware with massive computational capabilities.
 
 
 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - in
 fact, I would argue, essential - to ground these discussions. 

 For example, I ask the computer who is this? and attach a video clip from my
 security camera.


Why do you need image recognition in your AGI prototype?
You can feed it with text. Then AGI would simply parse text [and
optionally - Google it].

No need for massive computational capabilities.



RE: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Ed Porter
Mike,

MIKE TINTNER # Hawkins' point as to how the brain can decide in a
hundred steps what takes a computer a million or billion steps (usually
without much success) is:

"The answer is the brain doesn't 'compute' the answers; it retrieves the 
answers from memory. In essence, the answers were stored in memory a long 
time ago. It only takes a few steps to retrieve something from memory. Slow 
neurons are not only fast enough to do this, but they constitute the memory 
themselves. The entire cortex is a memory system. It isn't a computer at 
all." [On Intelligence, chapter on memory]

ED PORTER # When you say "It only takes a few steps to retrieve
something from memory," I hope you realize that, depending on how you count
steps, it actually probably takes hundreds of millions of steps or more.  It
is just that millions of them are performed in parallel, such that the
longest sequence along any one causal path among those steps is no longer
than 100 steps.  That is a very, repeat very, different thing from
suggesting that only 100 separate actions were taken.

You may already know and mean this, but from a quick read of your argument
it was not clear that you did.

So I don't know which side of the "Do we need massive computational
capabilities?" question you are on, but we do need massive computational
capabilities.  That 100-step task you referred to, which often involves
recognizing a person at a different scale, angle, body position, facial
expression, and lighting than we have seen them in before, would probably
require many hundreds of millions of neuron-to-neuron messages in the brain,
and many hundreds of millions of computations in a computer.

I hope you realize that Hawkins' theory of hierarchical memory means that
images are not stored as anything approaching photographs or drawings.  They
are stored as distributed hierarchical representations, in which a match
would often require parallel computing involving matching and selection at
multiple different representational levels.  The answer is not retrieved
from memory by any simple process, like vectoring into a look-up table and
hopping to an address where the matching image is simply retrieved like a
jpg file.  The quoted "retrieval" is a massively parallel operation.
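A toy sketch of that distinction (the names and "features" below are invented for illustration): recognition scores distributed, multi-level representations against every stored pattern, rather than hopping to one address.

```python
# Hypothetical multi-level face representations: each identity is stored
# as feature sets at several hierarchy levels, not as one retrievable blob.
STORED = {
    "alice": {"edges": {"arc", "line"}, "parts": {"eye", "nose"}, "shape": {"oval"}},
    "bob":   {"edges": {"line"},        "parts": {"eye", "chin"}, "shape": {"square"}},
}

def match_score(observed, stored):
    # Each level is scored independently, so all levels (and all stored
    # candidates) could be matched in parallel, as Ed describes.
    return sum(len(observed[level] & stored[level]) for level in stored)

def recognize(observed):
    # "Retrieval" = picking the best-scoring distributed representation.
    return max(STORED, key=lambda name: match_score(observed, STORED[name]))

observed = {"edges": {"arc", "line"}, "parts": {"eye", "nose"}, "shape": {"oval"}}
print(recognize(observed))  # alice
```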

You may already understand all of this, but it was not obvious from your
post below.  Some parts of it seemed to reflect the correct understanding;
others didn't, at least on my quick read.

Ed Porter

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner



Matt,

First of all, we are, I take it, discussing how the brain or a computer can 
recognize an individual face from a video -  obviously the brain cannot 
match a face to a selection of a  billion other faces.


Hawkins' answer to your point that the brain runs masses of neurons in 
parallel in order to accomplish facial recognition is:


"If I have many millions of neurons working together, isn't that like a 
parallel computer? Not really. Brains operate in parallel and parallel 
computers operate in parallel, but that's the only thing they have in 
common."


His basic point, as I understand, is that no matter how many levels of brain 
are working on this problem of facial recognition, they are each still only 
going to be able to perform about ONE HUNDRED steps each in that half 
second.  Let's assume there are levels for recognising the invariant 
identity of this face, different features, colours, shape, motion  etc - 
each of those levels is still going to have to reach its conclusions 
EXTREMELY rapidly in a very few steps.


And all this, as I said, I would have thought all you guys should be able to 
calculate within a very rough ballpark figure. Neurons only transmit signals 
at relatively slow speeds, right? Roughly five million times slower than 
computers. There must be a definite limit to how many neurons can be 
activated and how many operations they can perform to deal with a facial 
recognition problem, from the time the light hits the retina to a half 
second later? This is the sort of thing you all love to calculate and is 
really important - but where are you when one really needs you?
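A rough version of the ballpark Mike asks for, using round textbook figures (all of them order-of-magnitude assumptions, not measurements):

```python
# Back-of-envelope limits for a half-second recognition task.
RECOGNITION_TIME_S = 0.5   # time to recognize a face
STEP_TIME_S = 0.005        # ~5 ms per neuron-to-neuron hop (assumed)
NEURONS = 1e11             # ~number of neurons in a human brain
FIRING_RATE_HZ = 10        # typical sustained firing rate (assumed)

# Longest serial chain that fits in the window: Hawkins's "one hundred steps".
chain_length = RECOGNITION_TIME_S / STEP_TIME_S

# Upper bound on spikes the whole brain could emit in the same window,
# i.e. how much parallel messaging can hide behind that short chain.
max_spikes = NEURONS * FIRING_RATE_HZ * RECOGNITION_TIME_S

print(int(chain_length))    # 100
print(f"{max_spikes:.0e}")  # 5e+11
```

So hundreds of millions of parallel messages in half a second is well within this crude bound, even though no single causal chain exceeds about 100 hops.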


Hawkins' point as to how the brain can decide in a hundred steps what takes 
a computer a million or billion steps (usually without much success) is:


"The answer is the brain doesn't 'compute' the answers; it retrieves the 
answers from memory. In essence, the answers were stored in memory a long 
time ago. It only takes a few steps to retrieve something from memory. Slow 
neurons are not only fast enough to do this, but they constitute the memory 
themselves. The entire cortex is a memory system. It isn't a computer at 
all." [On Intelligence, chapter on memory]


I was v. crudely arguing something like this in a discussion with Richard 
about massive parallel computation.  If Hawkins is right, and I think he's 
at least warm, you guys have surely got it all wrong (although you might 
still argue, like Ben, that you can do it your way, not the brain's - but 
hell, the difference in efficiency is so vast it surely ought to break your 
engineering heart).



Matt/ MT:
Thanks. And I repeat my question elsewhere : you don't think that the 
human

brain which does this in say half a second, (right?), is using massive
computation to recognize that face?


So if I give you a video clip then you can match the person in the video to
the correct photo out of 10^9 choices on the Internet in 0.5 seconds, and 
this

will all run on your PC?  Let me know when your program is finished so I can
try it out.


You guys with all your mathematical calculations re the brain's total
neurons and speed of processing surely should be able to put ball-park
figures on the maximum amount of processing that the brain can do here.

Hawkins argues:

"Neurons are slow, so in that half a second, the information entering your
brain can only traverse a chain ONE HUNDRED neurons long. ... The brain
'computes' solutions to problems like this in one hundred steps or fewer,
regardless of how many total neurons might be involved. From the moment
light enters your eye to the time you [recognize the image], a chain no
longer than one hundred neurons could be involved. A digital computer
attempting to solve the same problem would take BILLIONS of steps. One
hundred computer instructions are barely enough to move a single character
on the computer's display, let alone do something interesting."


Which is why the human brain is so bad at arithmetic and other tasks that
require long chains of sequential steps.  But somehow it can match a face to 
a
name in 0.5 seconds.  Neurons run in PARALLEL.  Your PC does not.  Your 
brain

performs 10^11 weighted sums of 10^15 values in 0.1 seconds.  Your PC will
not.
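Matt's figures, run through as arithmetic (his brain numbers; the PC figure is an assumed order of magnitude for a single 2007-era machine):

```python
# The brain-vs-PC throughput gap implied by Matt's numbers.
BRAIN_VALUES = 1e15   # synaptic values summed per 0.1 s window (Matt's figure)
WINDOWS_PER_S = 10    # ten 0.1 s windows per second
PC_OPS_PER_S = 1e9    # assumed order of magnitude for a 2007-era PC

brain_ops_per_s = BRAIN_VALUES * WINDOWS_PER_S  # multiply-adds per second
gap = brain_ops_per_s / PC_OPS_PER_S

print(f"{brain_ops_per_s:.0e}")  # 1e+16
print(f"{gap:.0e}")              # 1e+07
```

On these assumptions the brain's effective parallel throughput exceeds a single PC's by some seven orders of magnitude, which is the point Matt is making.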




IOW, if that's true, the massive computational approach is surely
RIDICULOUS - a grotesque travesty of engineering principles of economy, 
no?

Like using an entire superindustry of people to make a single nut? And, of
course, it still doesn't work. Because you just don't understand how
perception works in the first place.

Oh right... so let's make our computational capabilities even more 
massive,

right?  Really, really massive. No, no, even bigger than that?


  Matt,:AGI research needs
  special hardware with massive computational capabilities.
 

 Could you give an example or two of the kind of problems that your AGI
 system(s) will need such massive capabilities to solve? It's so good - 
 in

 fact, I would argue, essential - 

Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:

 
 
  Matt,:AGI research needs
  special hardware with massive computational capabilities.
 
 
 Could you give an example or two of the kind of problems that your AGI 
 system(s) will need such massive capabilities to solve? It's so good - in 
 fact, I would argue, essential - to ground these discussions. 

For example, I ask the computer who is this? and attach a video clip from my
security camera.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Do we need massive computational capabilities?

2007-12-07 Thread Mike Tintner
ED PORTER # When you say "It only takes a few steps to retrieve something 
from memory," I hope you realize that, depending on how you count steps, it 
actually probably takes hundreds of millions of steps or more.  It is just 
that millions of them are performed in parallel, such that the longest 
sequence along any one causal path among those steps is no longer than 100 
steps.  That is a very, repeat very, different thing from suggesting that 
only 100 separate actions were taken.  

Ed,

Yes, I understood that (though sure, I'm capable of misunderstanding anything 
here!). But let's try to make it simple and as concrete as possible. Another 
way of putting Hawkins' point, as I understand it, is that at any given level, 
if the brain is recognising a given feature of the face, it can only compare it 
with very few comparable features in that half second, with its 100 operations, 
whereas a computer will compare that same feature with vast numbers of others.

And actually ditto for that useful Hofstadter example you quoted, of 
proceeding from aabc : aabd to jjkl : ??? (although this is a somewhat more 
complex operation, which may take a couple of seconds for the brain); again, a 
typical intelligent brain will almost certainly consider v. few options, 
compared with the vast numbers of options considered by that computer.

Ditto, for godsake, a human chessplayer like Kasparov considers an 
infinitesimal percentage of the moves considered by Deep Blue in any given 
period - and yet can still win (occasionally), because of course his brain is 
working on radically different principles.

Hawkins' basic point that the brain "isn't a computer at all" (which I think 
can be read, less controversially, as "is a machine that works on very 
fundamentally different principles from those of currently programmed 
computers"), especially when perceiving objects, holds.

You're not dealing with that basic point, and I find it incredibly difficult to 
get anyone here to face it squarely. People retreat into numbers and millions.

Clearly the brain works VASTLY differently and more efficiently than current 
computers - are you seriously disputing that?

P.S. You also don't answer my question re: how many neurons  in total *can* be 
activated within a half second, or given period, to work on a given problem - 
given their relative slowness of communication? Is it indeed possible for 
hundreds of millions of messages about that one subject to be passed among 
millions of neurons in that short space (dunno-just asking)? Or did you pluck 
that figure out of the air?

P.P.S. A recent book by Read Montague on neuroeconomics makes much the same 
point from a v. different angle, highlighting that computers have a vastly 
wasteful search heritage which he argues has its roots in Turing and Bletchley 
Park's attempts to decode Enigma.
