Re: Re[2]: [agi] Self-building AGI

2007-12-03 Thread Bob Mottram
On 02/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
 I currently think there are some human-level intelligences who know
 how to build most of an AGI, at least enough to get up and running systems
 that would solve many aspects of the AGI problem and help us better
 understand what, if any other aspects of the problem needed to be solved.  I
 think the Novamente team is one example.


I think you may be right, although there have been many people in the
past who believed they knew how to create an intelligent system, but
who made little progress on the problem.



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Bryan Bishop
On Sunday 02 December 2007, John G. Rose wrote:
 Building up parse trees and word sense models, let's say that would
 be a first step. And then say after a while this was accomplished and
 running on some peers. What would the next theoretical step be?

I am not sure what the next step would be. The first step might be 
enough for the moment. When you have the network functioning at all, 
expose an API so that other programmers can come in and try to utilize 
sentence analysis (and other functions) as if the network is just 
another lobe of the brain or another component for AI. This would allow 
others who are possibly more creative than us to take advantage of what 
looks to be interesting work.
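
As a minimal sketch of what exposing such an API might look like, in
Python (the HTTP framing and the analyze() function are hypothetical
stand-ins for illustration, not part of any existing system):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def analyze(sentence):
    # Stand-in for whatever parse-tree and word-sense machinery the
    # peers actually compute; returns a placeholder analysis.
    return {"tokens": sentence.split(), "senses": {}, "parse": None}

class AnalysisHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the posted sentence and return its analysis as JSON.
        length = int(self.headers.get("Content-Length", 0))
        sentence = self.rfile.read(length).decode("utf-8")
        body = json.dumps(analyze(sentence)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A client can then treat the network as "another lobe": POST a
    # sentence to http://localhost:8080/ and get JSON analysis back.
    HTTPServer(("localhost", 8080), AnalysisHandler).serve_forever()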

- Bryan



[agi] AGI first mention on NPR!

2007-12-03 Thread Richard Loosemore


Yesterday I heard the phrase "Artificial General Intelligence" on the 
radio for the first time ever:


http://www.npr.org/templates/story/story.php?storyId=16816185


Weekend Edition Sunday, December 2, 2007 · The idea of what Artificial 
Intelligence should be has evolved over the past 50 years — from solving 
puzzles and playing chess to emulating the abilities of a child: 
walking, recognizing objects. A recent conference brought together those 
who invent the future.


A recent Singularity Summit brought together those who imagine — and 
invent — the future.




Unfortunately, most of the report was filled with sound bites that were, 
to my mind, ridiculously naive extrapolations and speculations, like:


Paul Saffo: "The optimistic scenario is they will treat us like pets"

most of which were calculated to horrify the audience.





Richard Loosemore



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Richard Loosemore

Ed Porter wrote:

 Once you build up good models for parsing and word sense, then you read
 large amounts of text and start building up models of the realities described
 and generalizations from them.

Assuming this is a continuation of the discussion of an AGI-at-home P2P
system, you are going to be very limited by the lack of bandwidth,
particularly for attacking the high dimensional problem of seeking to
understand the meaning of text, which often involves multiple levels of
implication, which would normally be accomplished by some sort of search of
a large semantic space, which is going to be difficult with limited
bandwidth.

But a large amount of text with appropriate parsing and word sense labeling
would still provide a valuable aid for web and text search and for many
forms of automatic learning.  And the level of understanding that such a P2P
system could derive from reading huge amounts of text could be a valuable
initial source of one component of world knowledge for use by AGI.


I know you always find it tedious when I express scepticism, so I will 
preface my remarks with:  take this advice or ignore it, your choice.


This description of how to get AGI done reminds me of my childhood 
project to build a Mars-bound spacecraft modeled after James Blish's 
book Welcome to Mars.  I knew that I could build it in time for the 
next conjunction of Mars, but I hadn't quite gotten the anti-gravity 
drive sorted out, so instead I collected all the other materials 
described in the book, so everything would be ready when the AG drive 
started working...


The reason it reminds me of this episode is that you are calmly talking 
here about the high dimensional problem of seeking to understand the 
meaning of text, which often involves multiple levels of implication, 
which would normally be accomplished by some sort of search of a large 
semantic space ... this is your equivalent of the anti-gravity 
drive.  This is the part that needs extremely detailed knowledge of AI 
and psychology, just to understand the nature of the problem (never 
mind to solve it).  If you had any idea about how to solve this part of 
the problem, everything else would drop into your lap.  You wouldn't 
need a P2P AGI-at-home system, because with this solution in hand you 
would have people beating down your door to give you a supercomputer.


Meanwhile, unfortunately, solving all those other issues like making 
parsers and trying to do word-sense disambiguation would not help one 
whit to get the real theoretical task done.


I am not being negative, I am just relaying the standard understanding 
of priorities in the AGI field as a whole.  Send complaints addressed to 
AGI Community, not to me, please.




Richard Loosemore




RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter
Once you build up good models for parsing and word sense, then you read
large amounts of text and start building up models of the realities described
and generalizations from them.

Assuming this is a continuation of the discussion of an AGI-at-home P2P
system, you are going to be very limited by the lack of bandwidth,
particularly for attacking the high dimensional problem of seeking to
understand the meaning of text, which often involves multiple levels of
implication, which would normally be accomplished by some sort of search of
a large semantic space, which is going to be difficult with limited
bandwidth.

But a large amount of text with appropriate parsing and word sense labeling
would still provide a valuable aid for web and text search and for many
forms of automatic learning.  And the level of understanding that such a P2P
system could derive from reading huge amounts of text could be a valuable
initial source of one component of world knowledge for use by AGI.

Ed Porter

-Original Message-
From: Bryan Bishop [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 7:33 AM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

On Sunday 02 December 2007, John G. Rose wrote:
 Building up parse trees and word sense models, let's say that would
 be a first step. And then say after a while this was accomplished and
 running on some peers. What would the next theoretical step be?

I am not sure what the next step would be. The first step might be 
enough for the moment. When you have the network functioning at all, 
expose an API so that other programmers can come in and try to utilize 
sentence analysis (and other functions) as if the network is just 
another lobe of the brain or another component for AI. This would allow 
others who are possibly more creative than us to take advantage of what 
looks to be interesting work.

- Bryan



Re: [agi] AGI first mention on NPR!

2007-12-03 Thread Richard Loosemore

Bob Mottram wrote:

Perhaps a good word of warning is that it will be really easy to
satirise/lampoon/misrepresent AGI and its proponents until such time
as one is actually created.


The problem is that these two activities - denigrating AGI, and actually 
building one - are not two independent things.


The first could have a serious effect on the second.


Richard Loosemore



Re: [agi] AGI first mention on NPR!

2007-12-03 Thread Bob Mottram
Perhaps a good word of warning is that it will be really easy to
satirise/lampoon/misrepresent AGI and its proponents until such time
as one is actually created.




On 03/12/2007, Richard Loosemore [EMAIL PROTECTED] wrote:

 Yesterday I heard the phrase "Artificial General Intelligence" on the
 radio for the first time ever:

 http://www.npr.org/templates/story/story.php?storyId=16816185

 
 Weekend Edition Sunday, December 2, 2007 · The idea of what Artificial
 Intelligence should be has evolved over the past 50 years — from solving
 puzzles and playing chess to emulating the abilities of a child:
 walking, recognizing objects. A recent conference brought together those
 who invent the future.

 A recent Singularity Summit brought together those who imagine — and
 invent — the future.
 


 Unfortunately, most of the report was filled with sound bites that were,
 to my mind, ridiculously naive extrapolations and speculations, like:

 Paul Saffo: "The optimistic scenario is they will treat us like pets"

 most of which were calculated to horrify the audience.





 Richard Loosemore




RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 The reason it reminds me of this episode is that you are calmly talking
 here about the high dimensional problem of seeking to understand the
 meaning of text, which often involves multiple levels of implication,
 which would normally be accomplished by some sort of search of a large
 semantic space ... this is your equivalent of the anti-gravity
 drive.  This is the part that needs extremely detailed knowledge of AI
 and psychology, just to understand the nature of the problem (never
 mind to solve it).  If you had any idea about how to solve this part of
 the problem, everything else would drop into your lap.  You wouldn't
 need a P2P AGI-at-home system, because with this solution in hand you
 would have people beating down your door to give you a supercomputer.


This is naïve. It almost never works this way; resources do not just come
knocking at the door because someone has a solution to a well-known
unsolved engineering problem.

 
 Meanwhile, unfortunately, solving all those other issues like making
 parsers and trying to do word-sense disambiguation would not help one
 whit to get the real theoretical task done.

This is impractical. ... 
 
 I am not being negative, I am just relaying the standard understanding
 of priorities in the AGI field as a whole.  Send complaints addressed to
 AGI Community, not to me, please.

You are being negative! And since when have the priorities of understandings
in the AGI field been standardized? Perhaps that is part of the limiting factor
and self-defeating narrow-mindedness.

John



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:
 Meanwhile, unfortunately, solving all those other issues like making 
 parsers and trying to do word-sense disambiguation would not help one 
 whit to get the real theoretical task done.

I agree.  AI has a long history of doing the easy part of the problem first:
solving the mathematics or logic of a word problem, and deferring the hard
part, which is extracting the right formal statement from the natural language
input.  This is the opposite order of how children learn.  The proper order
is: lexical rules first, then semantics, then grammar, and then the problem
solving.  The whole point of using massive parallel computation is to do the
hard part of the problem.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Tintner

Matt::  The whole point of using massive parallel computation is to do the
hard part of the problem.

I get it: you and most other AI-ers are equating "hard" with "very, very 
complex", right?  But you don't seriously think that the human mind 
successfully deals with language by massive parallel computation, do you? 
Isn't it obvious that the brain is able to understand the wealth of language 
by relatively few computations - quite intricate, hierarchical, 
multi-levelled processing, yes, (in order to understand, for example, any of 
the sentences you or I are writing here), but only a tiny fraction of the 
operations that computers currently perform?


The whole idea of massive parallel computation here, surely has to be wrong. 
And yet none of you seem able to face this to my mind obvious truth.


I only saw this term recently - perhaps it's v. familiar to you (?) - that 
the human brain works by "look-up" rather than "search".  Hard problems can 
have relatively simple but ingenious solutions.





RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread John G. Rose
 From: Bryan Bishop [mailto:[EMAIL PROTECTED]
 I am not sure what the next step would be. The first step might be
 enough for the moment. When you have the network functioning at all,
 expose an API so that other programmers can come in and try to utilize
 sentence analysis (and other functions) as if the network is just
 another lobe of the brain or another component for AI. This would allow
 others who are possibly more creative than us to take advantage of what
 looks to be interesting work.
 

This is true and a way to get utility out of it. And getting the first step
accomplished is quite a bit of work as is maintaining it. Having just a few
basic baby steps actually materialize in front of you eliminates some of the
complexity so that the larger problem may appear just a bit less daunting.
Also communal developer feedback is a constructive motivator.

John



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread John G. Rose
 From: Ed Porter [mailto:[EMAIL PROTECTED]
 Once you build up good models for parsing and word sense, then you read
 large amounts of text and start building up models of the realities
 described and generalizations from them.
 
 Assuming this is a continuation of the discussion of an AGI-at-home P2P
 system, you are going to be very limited by the lack of bandwidth,
 particularly for attacking the high dimensional problem of seeking to
 understand the meaning of text, which often involves multiple levels of
 implication, which would normally be accomplished by some sort of search
 of a large semantic space, which is going to be difficult with limited
 bandwidth.
 
 But a large amount of text with appropriate parsing and word sense
 labeling would still provide a valuable aid for web and text search and
 for many forms of automatic learning.  And the level of understanding
 that such a P2P system could derive from reading huge amounts of text
 could be a valuable initial source of one component of world knowledge
 for use by AGI.


I don't see the small bandwidth between (most) individual nodes as a
limiting factor, because sets of nodes act as temporary single group
entities. IOW the BW between one set of 50 nodes and another set of 50
nodes is actually quite large, and individual nodes' data access would
depend on indexes of indexes to minimize their individual BW
requirements (a rough back-of-envelope sketch follows).
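
As a rough back-of-envelope sketch of that point, in Python (every
figure below is an assumption for the sake of argument, not a
measurement):

# Aggregate bandwidth between two groups of peers. The per-node figure
# is an assumed 2007-era home upload speed.
nodes_per_group = 50
per_node_upstream_mbps = 1.0
aggregate_mbps = nodes_per_group * per_node_upstream_mbps
print(f"group-to-group aggregate: {aggregate_mbps:.0f} Mbps")
# With indexes of indexes, each node first fetches a small index and
# then only the data it needs, so its own BW requirement stays near
# per_node_upstream_mbps while the group moves data at aggregate_mbps.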

Does this not apply to your model?

John




Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

[snip]

I am not being negative, I am just relaying the standard understanding
of priorities in the AGI field as a whole.  Send complaints addressed to
AGI Community, not to me, please.


You are being negative! And since when have the priorities of understandings
in the AGI field been standardized? Perhaps that is part of the limiting factor
and self-defeating narrow-mindedness.


It is easy for a research field to agree that certain problems are 
really serious and unsolved.


A hundred years ago, the results of the Michelson-Morley experiments 
were a big unsolved problem, and pretty serious for the foundations of 
physics.  I don't think it would have been self-defeating 
narrow-mindedness for someone to have pointed to that problem and said 
"this is a serious problem."




Richard Loosemore



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Richard Loosemore

Mike Tintner wrote:

Matt::  The whole point of using massive parallel computation is to do the
hard part of the problem.

I get it: you and most other AI-ers are equating "hard" with "very, 
very complex", right?  But you don't seriously think that the human mind 
successfully deals with language by massive parallel computation, do 
you? Isn't it obvious that the brain is able to understand the wealth of 
language by relatively few computations - quite intricate, hierarchical, 
multi-levelled processing, yes, (in order to understand, for example, 
any of the sentences you or I are writing here), but only a tiny 
fraction of the operations that computers currently perform?


The whole idea of massive parallel computation here, surely has to be 
wrong. And yet none of you seem able to face this to my mind obvious truth.


I only saw this term recently - perhaps it's v. familiar to you (?) - 
that the human brain works by "look-up" rather than "search".  Hard 
problems can have relatively simple but ingenious solutions.


You need to check the psychology data:  it emphatically disagrees with 
your position here.


One thing that can be easily measured is the activation of lexical 
items related in various ways to a presented word (i.e. show the subject 
the word "Doctor" and test to see if the word "Nurse" gets activated).


It turns out that within an extremely short time of the first word being 
seen, a very large number of other words have their activations raised 
significantly.  Now, whichever way you interpret these (so-called 
"priming") results, one thing is not in doubt:  there is massively 
parallel activation of lexical units going on during language processing.
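
For illustration only, a toy spreading-activation sketch of this effect
in Python; the association graph and weights are invented, not
psychological data:

# Presenting one word raises the activation of many associated words in
# parallel, without any serial search. Graph and weights are made up.
associations = {
    "doctor": {"nurse": 0.8, "hospital": 0.7, "patient": 0.6, "medicine": 0.5},
    "nurse":  {"doctor": 0.8, "hospital": 0.6},
}

def prime(word, decay=0.5, steps=2):
    activation = {word: 1.0}
    frontier = {word: 1.0}
    for _ in range(steps):
        spread = {}
        for src, act in frontier.items():
            for tgt, weight in associations.get(src, {}).items():
                spread[tgt] = spread.get(tgt, 0.0) + act * weight * decay
        for tgt, act in spread.items():
            activation[tgt] = activation.get(tgt, 0.0) + act
        frontier = spread
    return activation

print(prime("doctor"))  # "nurse" and others light up in one parallel step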




Richard Loosemore



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Richard Loosemore

Ed Porter wrote:

Richard,

It is false to imply that knowledge of how to draw implications from a
series of statements by some sort of search mechanism is as unknown as
knowledge of how to make an anti-gravity drive -- if by anti-gravity drive you
mean some totally unknown form of physics, rather than just anything, such
as human legs, that can push against gravity.  


It is unfair because there is a fair amount of knowledge about how to draw
implications from sequences of statements.  For example, see Shastri's
www.icsi.berkeley.edu/~shastri/psfiles/cogsci00.ps.  Also Ben Goertzel has
demonstrated a program that draws implications from statements contained in
different medical texts.

Ed Porter 


P.S., I have enclosed an inexact, but, at least to me, useful drawing I made
of the type of search involved in understanding the multiple implications
contained in the series of statements contained in Shastri's John fell in
the Hallway. Tom had cleaned it.  He was hurt example.  Of course, what is
most missing from this drawing are all the other, dead end, implications
which do not provide a likely implication.  Only one of such dead end is
shown (the implication between fall and trip).  As a result you don't sense
how many dead ends have to be searched to find the implications which best
explain the statements.   EWP


Well, bear in mind that I was not meaning the analogy to be *that* 
exact, or I would have given up on AGI long ago - I'm sure you know that 
I don't believe that getting an understanding system working is as 
impossible as getting an AG drive built.


The purpose of my comment was to point to a huge gap in understanding, 
and the mistaken strategy of dealing with all the peripheral issues 
before having a clear idea how to solve the central problem.


I cannot even begin to do justice, here, to the issues involved in 
solving the high dimensional problem of seeking to understand the 
meaning of text, which often involves multiple levels of implication, 
which would normally be accomplished by some sort of search of a large 
semantic space


You talk as if an extension of some current strategy will solve this ... 
but it is not at all clear that any current strategy for solving this 
problem actually does scale up to a full solution to the problem.  I 
don't care how many toy examples you come up with, you have to show a 
strategy for dealing with some of the core issues, AND reasons to 
believe that those strategies really will work (other than "I find them 
quite promising").


Not only that, but there are at least some people (to wit, myself) who 
believe there are positive reasons to think that the current 
strategies will *not* scale up.




Richard Loosemore




-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 10:07 AM

To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed Porter wrote:

Once you build up good models for parsing and word sense, then you read
large amounts of text and start building up models of the realities
described and generalizations from them.

Assuming this is a continuation of the discussion of an AGI-at-home P2P
system, you are going to be very limited by the lack of bandwidth,
particularly for attacking the high dimensional problem of seeking to
understand the meaning of text, which often involves multiple levels of
implication, which would normally be accomplished by some sort of search
of a large semantic space, which is going to be difficult with limited
bandwidth.

But a large amount of text with appropriate parsing and word sense
labeling would still provide a valuable aid for web and text search and
for many forms of automatic learning.  And the level of understanding
that such a P2P system could derive from reading huge amounts of text
could be a valuable initial source of one component of world knowledge
for use by AGI.


I know you always find it tedious when I express scepticism, so I will 
preface my remarks with:  take this advice or ignore it, your choice.


This description of how to get AGI done reminds me of my childhood 
project to build a Mars-bound spacecraft modeled after James Blish's 
book Welcome to Mars.  I knew that I could build it in time for the 
next conjunction of Mars, but I hadn't quite gotten the anti-gravity 
drive sorted out, so instead I collected all the other materials 
described in the book, so everything would be ready when the AG drive 
started working...


The reason it reminds me of this episode is that you are calmly talking 
here about the high dimensional problem of seeking to understand the 
meaning of text, which often involves multiple levels of implication, 
which would normally be accomplished by some sort of search of a large 
semantic space ... this is your equivalent of the anti-gravity 
drive.  This is the part that needs extremely detailed knowledge of AI 
and psychology, just to be 

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 It is easy for a research field to agree that certain problems are
 really serious and unsolved.
 
 A hundred years ago, the results of the Michelson-Morley experiments
 were a big unsolved problem, and pretty serious for the foundations of
 physics.  I don't think it would have been self-defeating
 narrow-mindedness for someone to have pointed to that problem and said
 "this is a serious problem."
 

Well the definition of problems and the approaches to solving the problems
can be narrow-minded or looked at with a narrow-human-psychological AI
perspective.

Most of these problems boil down to engineering problems and the theory
already exists in some other form; it is a matter of putting things together
IMO.

But not having been in the cog sci world for that long, and thinking of
AGI only in terms of computers, math and AI, I am unaware of the details of some
of the particular AGI unsolved mysteries that are talked about. Not to say I
haven't thought about them from my own narrow-human-psychological AI
perspective :)

John



[agi] What are the real unsolved issues in AGI [WAS Re: Hacker intelligence

2007-12-03 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]
It is easy for a research field to agree that certain problems are
really serious and unsolved.

A hundred years ago, the results of the Michelson-Morley experiments
were a big unsolved problem, and pretty serious for the foundations of
physics.  I don't think it would have been self-defeating
narrow-mindedness for someone to have pointed to that problem and said
this is a serious problem.



Well the definition of problems and the approaches to solving the problems
can be narrow-minded or looked at with a narrow-human-psychological AI
perspective.

Most of these problems boil down to engineering problems and the theory
already exists in some other form; it is a matter of putting things together
IMO.


I think this is a very important issue in AGI, which is why I felt 
compelled to say something.


As you know, I keep trying to get meaningful debate to happen on the 
subject of *methodology* in AGI.  That is what my claims about the 
complex systems problem are all about:  the very serious possibility 
that the existing AGI/AI methodology is so seriously broken that 
virtually everything going on right now will be written up by future 
historians as a complete waste of effort.


In that context - where there is something of an agreement about what 
the big unsolved problems are, and where I have raised questions about 
the very foundations of today's AGI methodology - it is truly 
astonishing to hear people talking about issues being more or less 
solved, bar the shouting.




Richard Loosemore

P.S.  BTW, it isn't really anything to do with taking a cognitive 
science perspective.  Don't forget that I come from a hybrid background: 
I am not a cognitive scientist encroaching on hard-science AI and 
computing, I have done both sides in equal measure.




[agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Ed Porter

My suggestion, criticized below (criticism can be valuable), was for just
one of many possible uses of an open-source P2P AGI-at-home type system.  I
am totally willing to hear other proposals.  Considering how little time I
spent coming up with the one being criticized, I have a relatively low ego
investment in it and I assume there will be better suggestions from others.

I think the hard part of AGI will be difficult to address on a P2P system
with low interconnect bandwidth.  I say this because I believe the hard part of
AGI will be learning appropriate dynamic controls for massively parallel
systems computing over massive amounts of data, and the creation of
automatically self organizing knowledge bases derived from computing over
such massive amounts of knowledge in a highly non-localized way.  For
progress on these fronts at any reasonable speed you need massive bandwidth,
which a current P2P system would lack, according to the previous
communications on this thread.  So a current P2P system on the web is not
going to be a good test bed for anything approaching human-level AGI.

But interesting things could be learned with P2P AGI-at-Home networks.  In
the NL example I proposed, the word senses and parsing were all to be
learned with generalized AGI learning algorithms (although bootstrapped with
some narrow AI tools).  I think they could be a good test bed for AGI
learning of self organizing gen-comp hierarchies because the training data
is plentiful and easy to get, many of the patterns in the gen-comp hierarchy
that would be formed would be ones that we humans could understand, and the
capabilities of the system would be ones we could compare to human level
performance in a somewhat intuitive manner.  

With regard to the statement that "The proper order is: lexical rules first,
then semantics, then grammar, and then the problem solving.  The whole point
of using massive parallel computation is to do the hard part of the problem"
I have the following two comments:  

(1) As I have said before, the truly hard part of AGI is almost certainly
going to be beyond a P2P network of PCs.  

And (2) with regard to the order of NL learning, I think a child actually
learns semantics first (words associated with sets of experience), since
most young children I have met start communicating first in single word
statements.  The word sense experts I proposed in the P2P system would be
focusing on this level of knowledge.  Unfortunately, they would be largely
limited to experience in the form of a textual context, resulting in a quite
limited form of experiential grounding. 


The type of generalized AGI learning algorithm I proposed would address
lexical rules and grammar as part of both its study of grammar and word
senses.  I have only separated out different forms of expertise because each
PC can only contain a relatively small amount of information, so there has
to be some attempt to separate the P2P's AGI representation into regions
with the highest locality of reference.  In an ideal world this should be
done automatically, but to do this well automatically would tend to require
high bandwidth, which the P2P system wouldn't have.  So at least initially
it probably makes sense to have humans decide what the various fields of
expertise are (although such decisions could be based on AGI derived data,
such as that obtained from data access patterns on single PC AGI prototypes,
or even on an initial networked system).

Also, I think we should take advantage of some of the narrow AI tools we
have, such as parsers, WordNet, dictionaries, and word-sense guessers, to
bootstrap the system so that we could get more deeply into the more
interesting aspects of AGI, such as semantic understanding, faster.  These
narrow AI tools could be used in conjunction with AGI learning.  For
example, the output of a narrow AI parser or word sense labeler could be
used to provide initial data to train up AGI models, which could then
replace or run in conjunction with the narrow AI tools in a set of EM
cycles, with the AGI models hopefully providing more consistent labeling as
time progresses, and increasingly getting more weight relative to the narrow
AI tools.  
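
As an illustrative sketch of such an EM-style bootstrapping cycle, in
Python; every component here (the narrow labeler, the learned model,
the trust schedule) is an invented stand-in, not an existing tool:

def narrow_labeler(sentence):
    # Stand-in for a narrow-AI tagger: crude, but good enough to seed.
    return ["NOUN" if w[0].isupper() else "WORD" for w in sentence.split()]

class LearnedModel:
    def __init__(self):
        self.lexicon = {}
    def train(self, corpus, labels):
        # M-step: refit the model on the current labeling.
        for sent, tags in zip(corpus, labels):
            for word, tag in zip(sent.split(), tags):
                self.lexicon[word] = tag
    def predict(self, sentence):
        return [self.lexicon.get(w, "WORD") for w in sentence.split()]

def bootstrap(corpus, cycles=4):
    model = LearnedModel()
    for cycle in range(cycles):
        trust = cycle / cycles  # weight shifts from narrow AI to the model
        labels = [model.predict(s) if trust > 0.5 else narrow_labeler(s)
                  for s in corpus]
        model.train(corpus, labels)
    return model

model = bootstrap(["John fell in the Hallway", "Tom had cleaned it"])
print(model.predict("John fell"))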

Perhaps one aspect of the AGI-at-home project would be to develop a good
generalized architecture for wedding various classes of narrow AI and AGI in
such a learning environment.  Narrow AI's are often very efficient, but they
have severe limitations which AGI can often overcome.  Perhaps learning how to
optimally wed the two could create systems that had the best features of
both AGI and narrow AI, greatly increasing the efficiency of AGI.

But there are all sorts of other interesting things that could be done with
an AGI-at-home P2P system. I am claiming no special expertise as to what is
the best use of it.  

For example, I think it would be interesting to see what sort of AGI's could
be built on current PCs with up to 4 GB of RAM.  It would be interesting to
see just what 

Re: [agi] What are the real unsolved issues in AGI [WAS Re: Hacker intelligence

2007-12-03 Thread Bob Mottram
On 03/12/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
 it is truly
 astonishing to hear people talking about issues being more or less
 solved, bar the shouting.


You'll usually find that such people never trouble themselves with
implementational details.  Intuitive notions about how easy some task
ought to be are a notoriously poor guide in this area.



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter

MIKE TINTNER: Isn't it obvious that the brain is able to understand the
wealth of language by relatively few computations - quite intricate,
hierarchical, multi-levelled processing,

ED PORTER: How do you find the right set of relatively few computations
and/or models that are appropriate in a complex context without massive
computation?  

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 12:12 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Matt::  The whole point of using massive parallel computation is to do the
hard part of the problem.

I get it: you and most other AI-ers are equating "hard" with "very, very 
complex", right?  But you don't seriously think that the human mind 
successfully deals with language by massive parallel computation, do you? 
Isn't it obvious that the brain is able to understand the wealth of language 
by relatively few computations - quite intricate, hierarchical, 
multi-levelled processing, yes, (in order to understand, for example, any of 
the sentences you or I are writing here), but only a tiny fraction of the 
operations that computers currently perform?

The whole idea of massive parallel computation here, surely has to be wrong. 
And yet none of you seem able to face this to my mind obvious truth.

I only saw this term recently - perhaps it's v. familiar to you (?) - that 
the human brain works by "look-up" rather than "search".  Hard problems can 
have relatively simple but ingenious solutions.




RE: [agi] What are the real unsolved issues in AGI [WAS Re: Hacker intelligence

2007-12-03 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 I think this is a very important issue in AGI, which is why I felt
 compelled to say something.
 
 As you know, I keep trying to get meaningful debate to happen on the
 subject of *methodology* in AGI.  That is what my claims about the
 complex systems problem are all about:  the very serious possibility
 that the existing AGI/AI methodology is so seriously broken that
 virtually everything going on right now will be written up by future
 historians as a complete waste of effort.

I don't think that will happen; sometimes a lot of energy expenditure needs
to be made just to move ahead an inch. Also there is some spinning of wheels
going on as other technologies mature, which is happening quite well BTW. And
there has been an awful lot of directly applicable and related theoretical
work accomplished and proliferated over the last few decades.

 In that context - where there is something of an agreement about what
 the big unsolved problems are, and where I have raised questions about
 the very foundations of today's AGI methodology - it is truly
 astonishing to hear people talking about issues being more or less
 solved, bar the shouting.

Excuse my ignorance - top 3 unsolved problems are? - NLP, and what else? And
then from what I have gathered on this email list you favor a complex
systems emergent approach? But you somehow don't agree with mathematical
models. That's an immediate turn-off for implementationalists so it's hard
to gain acceptance. Could you give a one-liner (or more) description of your
theory again if you don't mind, or a URL - my interest is somewhat
captivated.

John







RE: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread John G. Rose
For some lucky cable folks the BW is getting ready to increase soon:

http://arstechnica.com/news.ars/post/20071130-docsis-3-0-possible-100mbps-speeds-coming-to-some-comcast-users-in-2008.html

I have yet to fully understand the limitations of a P2P based AGI design or the
augmentational ability of a public P2P network on a private P2P network
constructed for AGI. I wouldn't count out P2P AGI so quickly.

John

_
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 12:20 PM
To: agi@v2.listbox.com
Subject: [agi] RE:P2P and/or communal AGI development [WAS
Hacker intelligence level...]



My suggestion, criticized below (criticism can be valuable),
was for just one of many possible uses of an open-source P2P AGI-at-home
type system.  I am totally willing to hear other proposals.  Considering how
little time I spent coming up with the one being criticized, I have a
relatively low ego investment in it and I assume there will be better
suggestions from others.

[snip]

Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread J. Andrew Rogers


On Dec 3, 2007, at 12:52 PM, John G. Rose wrote:

For some lucky cable folks the BW is getting ready to increase soon:

http://arstechnica.com/news.ars/post/20071130-docsis-3-0-possible-100mbps-speeds-coming-to-some-comcast-users-in-2008.html

I have yet to fully understand the limitations of a P2P based AGI
design or the augmentational ability of a public P2P network on a
private P2P network constructed for AGI. I wouldn't count out P2P AGI
so quickly.



Distributed algorithms tend to be far more sensitive to latency than to  
bandwidth, except to the extent that low bandwidth induces latency.   
As a practical matter, the latency floor of P2P is so high that most  
algorithms would run far faster on a small number of local machines  
than on a large number of geographically distributed machines.


There is a reason people interested in high-performance computing tend  
to spend more on their interconnect than their compute nodes.
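
A rough model of that point, in Python (the latency and bandwidth
figures are illustrative assumptions): per-message transfer time is
roughly round-trip latency plus size over bandwidth, so small messages
are latency-bound:

def transfer_ms(message_bytes, latency_ms, bandwidth_mbps):
    # The latency term dominates when messages are small.
    return latency_ms + (message_bytes * 8) / (bandwidth_mbps * 1000.0)

for name, latency_ms in [("local cluster", 0.1), ("P2P over WAN", 80.0)]:
    t = transfer_ms(1024, latency_ms, bandwidth_mbps=10)
    print(f"{name}: {t:.2f} ms per 1 KB message")
# Prints roughly 0.92 ms locally vs 80.82 ms over the WAN: an ~88x gap
# that extra bandwidth cannot close.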


J. Andrew Rogers



RE: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread John G. Rose
 From: J. Andrew Rogers [mailto:[EMAIL PROTECTED]
 
 Distributed algorithms tend to be far more sensitivity to latency than
 bandwidth, except to the extent that low bandwidth induces latency.
 As a practical matter, the latency floor of P2P is so high that most
 algorithms would run far faster on a small number of local machines
 than a large number of geographically distributed machines.
 
 There is a reason people interested in high-performance computing tend
 to spend more on their interconnect than their compute nodes.

The P2P public network is not homogeneous. Lower quality nodes far outnumber
high quality nodes, but high quality nodes do exist. High quality meaning
both low latency and high bandwidth (for example, a 3 ms ping at 44 Mbit/s).

For human-equivalent AGI a private P2P network MIGHT be required; an
inexpensive option would be 3 ms ping on a gig-E clustered segment. Pricier
setups could require an external switched fabric of, say, around a 5+ gigabit
interconnect cluster.

Lazy processing on low quality P2P - exactly how valuable is that?
Distributed P2P computing for AGI needs to be self-organizing, detect and
adapt to resource conditions. It's not a perfect world.

John




Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Matt Mahoney
--- Ed Porter [EMAIL PROTECTED] wrote:
 And (2) with regard to the order of NL learning, I think a child actually
 learns semantics first

Actually Jusczyk showed that babies learn the rules for segmenting continuous
speech at 7-10 months.  I did some experiments in 1999 following the work of
Hutchens and Alder showing that it is possible to learn the rules for
segmenting text without spaces using only the simple character n-gram
statistics of the input.  The word boundaries occur where the mutual
information across the boundary is lowest.
http://cs.fit.edu/~mmahoney/dissertation/lex1.html
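
As a toy illustration of that boundary idea, in Python (this is not the
dissertation code; the corpus, the repetition trick, and the threshold
are invented for the demonstration):

import math
from collections import Counter

# Tiny artificial corpus with spaces removed; repetition sharpens the
# character statistics enough for the demo.
corpus = ("thecatsatonthemat" + "thedogsatonthelog" + "thecatandthedog") * 3
unigrams = Counter(corpus)
bigrams = Counter(corpus[i:i + 2] for i in range(len(corpus) - 1))
n = len(corpus)

def pmi(a, b):
    # Pointwise mutual information between adjacent characters.
    p_ab = bigrams[a + b] / (n - 1)
    if p_ab == 0.0:
        return float("-inf")
    return math.log(p_ab / ((unigrams[a] / n) * (unigrams[b] / n)))

text = "thecatsatonthemat"
scores = [pmi(text[i], text[i + 1]) for i in range(len(text) - 1)]
# Propose a boundary wherever adjacent-character PMI dips below an
# (arbitrarily chosen) low-tercile threshold.
threshold = sorted(scores)[len(scores) // 3]
print("".join(ch + ("|" if i < len(scores) and scores[i] <= threshold else "")
              for i, ch in enumerate(text)))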

Children begin learning the meanings of words around 12 months, and start
forming simple sentences around age 2-3.

 For example, I think it would be interesting to see what sort of AGI's could
 be built on current PCs with up to 4 GB of RAM.

I did something like that with language models, up to 2 GB.  So far, my
research suggests you need a LOT more memory.
http://cs.fit.edu/~mmahoney/compression/text.html

With regard to distributed AI, I believe the protocol should be natural
language at the top level (perhaps on top of HTTP), because I think it is
essential that live humans can participate.  The idea is that each node in the
P2P network might be relatively stupid, but would be an expert on some narrow
topic, and know how to find other experts on related topics.  A node would
scan queries for keywords and ignore the messages it doesn't understand (which
would be most of them).  Overall the network would appear intelligent because
*somebody* would know.

When a user asks a question or posts information, the message would be
broadcast to many nodes, which could choose to ignore it or relay it to
other nodes that they believe would find the message more relevant.  Eventually
the message gets to a number of experts, who then reply to the message.  The
source and destination nodes would then update their links to each other,
replacing the least recently used links.
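
A compact sketch of that routing behavior in Python; the message
format, the relevance score, and the link-table size are assumptions
for illustration, not the thesis design:

from collections import OrderedDict

class Node:
    def __init__(self, name, expertise, max_links=8):
        self.name = name
        self.expertise = set(expertise)   # keywords this node "knows"
        self.links = OrderedDict()        # peer name -> Node, in LRU order
        self.max_links = max_links
        self.seen = set()                 # avoid answering a message twice

    def relevance(self, message):
        # Crude keyword overlap between the message and our expertise.
        return len(self.expertise & set(message.lower().split()))

    def link_to(self, peer):
        self.links[peer.name] = peer
        self.links.move_to_end(peer.name)       # most recently used last
        while len(self.links) > self.max_links:
            self.links.popitem(last=False)      # evict least recently used

    def receive(self, message, sender=None, ttl=3):
        if message in self.seen:
            return
        self.seen.add(message)
        if sender is not None:
            self.link_to(sender)                # update links to the source
        if self.relevance(message) == 0:
            return                              # ignore what we don't know
        print(f"{self.name} answers: {message!r}")
        if ttl > 0:
            for peer in list(self.links.values()):
                peer.receive(message, sender=self, ttl=ttl - 1)

alice = Node("alice", ["chess", "openings"])
bob = Node("bob", ["chess", "endgames"])
carol = Node("carol", ["cooking"])
alice.link_to(bob); alice.link_to(carol)
alice.receive("good chess openings advice")  # alice and bob answer; carol ignores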

The system would be essentially a file sharing or message posting service with
a distributed search engine.  It would make no distinctions between queries
and updates, because asking a question about a topic indicates knowledge of
related topics.  Every message you post becomes a permanent part of this
gigantic distributed database, tagged with your name (or anonymous ID) and a
time stamp.

I wrote my thesis on the question of whether such a system would scale to a
large, unreliable network.  (Short answer: yes).
http://cs.fit.edu/~mmahoney/thesis.html

Implementation detail: how to make a P2P client useful enough that people will
want to install it?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Mike Dougherty
On Dec 3, 2007 5:07 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 When a user asks a question or posts information, the message would be
 broadcast to many nodes, which could choose to ignore it or relay it to
 other nodes that they believe would find the message more relevant.  Eventually
 the message gets to a number of experts, who then reply to the message.  The
 source and destination nodes would then update their links to each other,
 replacing the least recently used links.

 I wrote my thesis on the question of whether such a system would scale to a
 large, unreliable network.  (Short answer: yes).
 http://cs.fit.edu/~mmahoney/thesis.html

 Implementation detail: how to make a P2P client useful enough that people will
 want to install it?

That sounds almost word-for-word like something I was visualizing
(though not producing as a thesis).

I believe the next step of such a system is to become an abstraction
between the user and the network they're using.  So if you can hook
into your P2P network via a Firefox extension (consider StumbleUpon
or Greasemonkey) so that the agent can passively monitor your web
interaction, then it could learn to screen emails (for example),
pre-chew your first 10 Google hits, or summarize the next 100
for relevance.  I have been told that by the time you have an agent
doing this well, you'd already have AGI - but I can't believe this
kind of data mining is beyond narrow AI (or requires fully general
adaptive intelligence).

Maybe when I get around to the Science part of my BS degree (after the
Arts filler) I will explore this to a greater depth for a thesis.



Re: [agi] What are the real unsolved issues in AGI [WAS Re: Hacker intelligence

2007-12-03 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]
I think this is a very important issue in AGI, which is why I felt
compelled to say something.

As you know, I keep trying to get meaningful debate to happen on the
subject of *methodology* in AGI.  That is what my claims about the
complex systems problem are all about:  the very serious possibility
that the existing AGI/AI methodology is so seriously broken that
virtually everything going on right now will be written up by future
historians as a complete waste of effort.


I don't think that will happen; sometimes a lot of energy expenditure needs
to be made just to move ahead an inch. Also there is some spinning of wheels
going on as other technologies mature, which is happening quite well BTW. And
there has been an awful lot of directly applicable and related theoretical
work accomplished and proliferated over the last few decades.


In that context - where there is something of an agreement about what
the big unsolved problems are, and where I have raised questions about
the very foundations of today's AGI methodology - it is truly
astonishing to hear people talking about issues being more or less
solved, bar the shouting.


Excuse my ignorance - top 3 unsolved problems are? - NLP, and what else? And
then from what I have gathered on this email list you favor a complex
systems emergent approach? But you somehow don't agree with mathematical
models. That's an immediate turn-off for implementationalists so it's hard
to gain acceptance. Could you give a one-liner (or more) description of your
theory again if you don't mind, or a URL - my interest is somewhat
captivated.


Top three?  I don't know if anyone ranks them.

Try:

1) Grounding Problem (the *real* one, not the cheap substitute that 
everyone usually thinks of as the symbol grounding problem).


2) The problem of designing an inference control engine whose behavior is 
predictable/governable etc.


3) A way to represent things - and in particular, uncertainty - without 
getting buried up to the eyeballs in (e.g.) temporal logics that nobody 
believes in.


Take this with a pinch of salt:  I am sure there are plenty of others. 
But if you came up with a *principled* solution to these issues, I'd be 
impressed.


A one-liner description of my theory?  I'll think about it.


Richard Loosemore



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Dougherty
On Dec 3, 2007 12:12 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 I get it: you and most other AI-ers are equating "hard" with "very, very
 complex", right?  But you don't seriously think that the human mind
 successfully deals with language by massive parallel computation, do you?

"Very, very complex" tends to exceed one's ability to properly model and
especially predict.  Even if the human mind invokes some special kind
of magical cleverness, do you think you (judging from your writing)
have some unique ability to isolate that function (noun) without
simultaneously using that function (verb)?   I often imagine that I
understand the working of my own mind almost perfectly.  Those that
claim to have grasped the quintessential bit typically end up so far
over the edge that they are unable to express it in meaningful or
useful terms.

 Isn't it obvious that the brain is able to understand the wealth of language
 by relatively few computations - quite intricate, hierarchical,
 multi-levelled processing, yes, (in order to understand, for example, any of
 the sentences you or I are writing here), but only a tiny fraction of the
 operations that computers currently perform?

I believe you are making that statement because you wish it to be
true.  I see no basis for anything to be obvious - especially the
formalism required to define what the term means.  This is due
primarily to the complexity associated with recursive self-reflection.

 The whole idea of massive parallel computation here, surely has to be wrong.
 And yet none of you seem able to face this to my mind obvious truth.

We each continue to persist in our delusions.  Yours may be no
different in the end. :)

 I only saw this term recently - perhaps it's v. familiar to you (?) - that
 the human brain works by "look-up" rather than "search".  Hard problems can
 have relatively simple but ingenious solutions.

How is the look-up table built?  Usually by experience.  When we have
enough similar experiences to look up a solution to general adaptive
intelligence, we will have likely been close enough to it for so long
that (probably) nobody will be surprised.



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Tintner

RL: One thing that can be easily measured is the activation of lexical
items related in various ways to a presented word (i.e. show the subject
the word Doctor and test to see if the word Nurse gets activated).
It turns out that within an extremely short time of the first word being
seen, a very large number of other words have their activations raised
significantly.  Now, whichever way you interpret these (so called
priming) results, one thing is not in doubt:  there is massively
parallel activation of lexical units going on during language processing.

Thanks for reply. How many associations are activated? How do we know 
neuroscientifically they are associations to the words being processed and 
not something else entirely? Out of interest, can you give me a ball park 
estimate of how many associations you personally think are activated, say, 
in a few seconds, in processing sentences like:


The doctor made a move on the nurse.
Relationships between staff in health organizations are fraught with 
complexities


No, I'm not trying to be ridiculously demanding or asking you to be 
ridiculously exact. As you probably know by now, I see the processing of 
sentences as involving several levels, especially for the second sentence, 
but I don't see the number of associations as that many. Let's be generous 
and guess hundreds for the items in the above sentences. But a computer 
program, as I understand, will be typically searching through anywhere 
between thousands, millions and way upwards.


On the one hand, we can perhaps agree that one of the brain's glories is 
that it can very rapidly draw analogies - that I can quickly produce a 
string of associations like, say,  snake, rope, chain, spaghetti 
strand, - and you may quickly be able to continue that string with further 
associations, (like string). I believe that power is mainly based on 
look-up - literally finding matching shapes at speed. But I don't see the 
brain as checking through huge numbers of such shapes. (It would be 
enormously demanding on resources, given that these are complex pictures, 
no?).


As evidence, I'd point to what happens if you try to keep producing further 
analogies. The brain rapidly slows down. It gets harder and harder. And yet 
you will be able to keep producing further examples from memory virtually 
for ever - just slower and slower. Relevant images/ concepts are there, but 
it's not easy to access them. That's why copywriters get well paid to, in 
effect, keep searching for similar analogies (as cool/refreshing as...). 
It's hard work. If that many relevant shapes were being unconsciously 
activated as you seem to be suggesting, it shouldn't be such protracted 
work.


The brain can literally connect any thing to any other thing with, so to 
speak, 6 degrees of separation - but I don't think it can connect that many 
things at once.


I accept that this is still neuroscientifically an open issue, ( I'd be 
grateful for  pointers to the research you're referring to). But I would 
have thought it obvious that the brain has massively inferior search 
capabilities to those of computers - that, surely, is a major reason why we 
invented computers in the first place - they're a massive extension of our 
powers.


And yet the brain can draw analogies, and basically, with minor exceptions, 
computers still can't. I think it's clear that computers won't catch up here 
by quantitatively increasing their powers still further. If you're digging a 
hole in the wrong place, digging further and quicker won't help. (I'm arguing 
a variant of your own argument against Edward P!). But of course when your 
education and technology dispose you to dig in just those places, it's 
extremely hard to change your ways - or even believe, pace Edward, that 
change is necessary at all. After all, look at the size of those holes.. 
surely, we'll hit the Promised Land anytime now.


P.S. In general, the brain is hugely irrational - it can only maintain a 
reflective, concentrated train of thought for literally seconds, not minutes 
before going off at tangents. It continually and necessarily jumps to 
conclusions. Such irrationality is highly adaptive in a fast-moving world 
where you can't hang around thinking about things for long.  The idea that 
this same brain is systematically, thoroughly searching through, let's say, 
thousands or millions of variants on ideas, seems to me seriously at odds 
with this irrationality. (But I'm interested in all relevant research).




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71651016-b43e51


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney
--- Mike Tintner [EMAIL PROTECTED] wrote:
 On the one hand, we can perhaps agree that one of the brain's glories is 
 that it can very rapidly draw analogies - that I can quickly produce a 
 string of associations like, say,  snake, rope, chain, spaghetti 
 strand, - and you may quickly be able to continue that string with further 
 associations, (like string). I believe that power is mainly based on 
 look-up - literally finding matching shapes at speed. But I don't see the 
 brain as checking through huge numbers of such shapes. (It would be 
 enormously demanding on resources, given that these are complex pictures, 
 no?).

Semantic models learn associations by proximity in the training text.  The
degree to which you associate snake and rope depends on how often these
words appear near each other.  You can create an association matrix A, e.g.
A[snake][rope] is the degree of association between these words.

Among the most successful of these models is latent semantic analysis (LSA),
where A is factored as A = USV^T by singular value decomposition (SVD), such that
U and V are orthonormal and S is diagonal, and then discard all but the
largest elements of S.  In a typical LSA model, A is 20K by 20K, and S is
reduced to about 200.  This approximates A by two 20K by 200 matrices, using
about 2% as much space (2 x 20K x 200 = 8M entries versus 400M).

One effect of lossy compression by LSA is to derive associations by the
transitive property of semantics.  For example, if snake is associated with
rope and rope with chain, then the LSA approximation will derive an
association of snake with chain even if it was not seen in the training
data.
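
To make that concrete, here is a minimal Python sketch of the effect; the
words, counts, and rank k are illustrative assumptions, not from any real
corpus:

import numpy as np

words = ["snake", "rope", "chain"]
# Word-by-context counts: snake occurs only in context 0, chain only in
# context 1, and rope in both -- snake and chain never co-occur.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 1                                 # keep only the largest singular value
A_k = (U[:, :k] * S[:k]) @ Vt[:k, :]  # lossy rank-k reconstruction

snake, chain = A_k[0], A_k[2]
print(snake @ chain)  # 0.5 > 0: a snake-chain association inferred via rope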

SVD has an efficient parallel implementation.  It is most easily visualized as
a 20K by 200 by 20K 3-layer linear neural network [1].  But this really should
not be surprising, because natural language evolved to be processed
efficiently on a slow but highly parallel computer.

1. Gorrell, Genevieve (2006), “Generalized Hebbian Algorithm for Incremental
Singular Value Decomposition in Natural Language Processing”, Proceedings of
EACL 2006, Trento, Italy.
http://www.aclweb.org/anthology-new/E/E06/E06-1013.pdf


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71675396-27fd0e


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter
RICHARD LOOSEMORE I cannot even begin to do justice, here, to the issues
involved in solving the high dimensional problem of seeking to understand
the meaning of text, which often involve multiple levels of implication,
which would normally be accomplished by some sort of search of a large
semantic space

You talk as if an extension of some current strategy will solve this ... but
it is not at all clear that any current strategy for solving this 
problem actually does scale up to a full solution to the problem.  I don't
care how many toy examples you come up with, you have to show a 
strategy for dealing with some of the core issues, AND reasons to believe
that those strategies really will work (other than I find them 
quite promising).

Not only that, but there are at least some people (to wit, myself) who believe
there are positive reasons to believe that the current 
strategies *will* not scale up.

ED PORTER  I don't know if you read the Shastri paper I linked to or
not, but it shows we do know how to do many of the types of implication
which are used in NL.  What he shows needs some extensions, so it is more
generalized, but it and other known inference schemes explain a lot of how
text understanding could be done.  

With regard to the scaling issue, it is a real issue.  But there are
multiple reasons to believe the scaling problems can be overcome.  Not
proofs, Richard, so you are entitled to your doubts.  But open your mind to
the possibilities they present.  They include:

-1- the likely availability of roughly brain level representational,
computational, and interconnect capacities within the several hundred
thousand to 1 million dollar range in seven to ten years.

-2- the fact that human experience and representation does not
explode combinatorially.  Instead it is quite finite.  It fits inside our
heads.  

Thus, although you are dealing with extremely high dimensional spaces, most
of that space is empty.  There are known ways to deal with extremely high
dimensional spaces while avoiding the exponential explosion made possible by
such high dimensionality.  

Take the well known Growing Neural Gas (GNG) algorithm.  It automatically
creates a relatively compact representation of a possibly infinite dimensional
space, by allocating nodes to only those parts of the high dimensional space
where there is stuff, or, if resources are more limited, where the most stuff
is.
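
As a rough illustration of that allocate-nodes-where-the-data-is idea, here is
a highly simplified Python sketch.  It does plain online vector quantization
rather than full GNG (no edge aging or node insertion), and the data and
parameters are invented:

import numpy as np

rng = np.random.default_rng(0)
# The data occupies two small clusters in a large 2-D space;
# almost all of the space is empty.
centers = np.array([[0.0, 0.0], [100.0, 100.0]])
data = centers[rng.integers(2, size=1000)] + rng.normal(size=(1000, 2))

nodes = rng.uniform(-10, 110, size=(6, 2))  # codebook scattered at random
for x in data:
    w = np.argmin(np.linalg.norm(nodes - x, axis=1))  # best-matching node
    nodes[w] += 0.1 * (x - nodes[w])   # move the winner toward the sample

# Nodes that win samples end up inside the two occupied clusters;
# the empty bulk of the space gets no representation.
print(np.round(nodes))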

Or take indexing: it takes one only to places in the hyperspace where
something actually occurred or was thought about.  One can have
probabilistically selected hierarchical indexing (something like John Rose
suggested) which makes indexing much more efficient.

-3- experiential computers focus most learning, most models, and most
search on things that actually have happened in the past or on things that
in many ways are similar to what has happened in the past.  This tends to
greatly reduce representational and search spaces.

When such a system synthesizes or perceives new patterns that have never
happened before the system will normally have to explore large search
spaces, but because of the capacity of brain level hardware it will have
considerable capability to do so.  The type of hardware that will be
available for human-level agi in the next decade will probably have
sustainable cross sectional bandwidths of 10G to 1T messages/sec with 64Byte
payloads/msg.  With branching tree activations and the fact that many
messages will be regional, the total amount of messaging could well be 100G
to 100T such msg/sec.

Let's assume our hardware has 10T msg/sec and that we want to read 10 words a
second.  That would allow 1T msg/word.  With a dumb spreading activation
rule that would allow you to: activate the 30K most probable implications; and
for each of them the 3K most probable implications; and for each of them the
300 most probable implications; and for each of them the 30 most probable
implications.  As dumb as this method of inferencing would be, it actually
would make a high percent of the appropriate multi-step inferences,
particularly when you consider that the probability of activation at the
successive stages would be guided by probabilities from other activations in
the current context.
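
A quick back-of-envelope check of that budget (my arithmetic, using the
fan-out figures above):

# Four-level fan-out of implications: 30K, then 3K, then 300, then 30.
fanout = [30_000, 3_000, 300, 30]
total, level = 0, 1
for f in fanout:
    level *= f     # activations at this depth
    total += level
print(f"{total:.2e} messages per word")  # ~8.4e11, inside the 1T/word budget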

Of course there are much more intelligent ways to guide activation than
this.

Also it is important to understand that at every level in many of the
searches or explorations in such a system there will be guidance and
limitations provided by similar models from past experience, greatly
reducing the amount of or the number of explorations that are required to
produce reasonable results.

-4- Michael Collins a few years ago had what many AI researchers
considered to be the best grammatical parser, which used the kernel trick to
effectively match parse trees in, I think it was, 500K dimensions.  By use
of the kernel trick the actual computation usually was performed in a small
subset of these dimensions and the parser was 

[agi] Priming of associates [WAS Re: Hacker intelligence level]

2007-12-03 Thread Richard Loosemore

Mike Tintner wrote:

RL: One thing that can be easily measured is the activation of lexical
items related in various ways to a presented word (i.e. show the subject
the word Doctor and test to see if the word Nurse gets activated).
It turns out that within an extremely short time of the first word being
seen, a very large number of other words have their activations raised
significantly.  Now, whichever way you interpret these (so called
priming) results, one thing is not in doubt:  there is massively
parallel activation of lexical units going on during language processing.

Thanks for reply. How many associations are activated? How do we know 
neuroscientifically they are associations to the words being processed 
and not something else entirely? Out of interest, can you give me a ball 
park estimate of how many associations you personally think are 
activated, say, in a few seconds, in processing sentences like:


The doctor made a move on the nurse.
Relationships between staff in health organizations are fraught with 
complexities


No, I'm not trying to be ridiculously demanding or asking you to be 
ridiculously exact. As you probably know by now, I see the processing of 
sentences as involving several levels, especially for the second 
sentence, but I don't see the number of associations as that many. Let's 
be generous and guess hundreds for the items in the above sentences. But 
a computer program, as I understand, will be typically searching through 
anywhere between thousands, millions and way upwards.


I am not sure how many, but my understanding of the literature is that 
very large numbers show priming, and that it is proportional to 
association strength or semantic relatedness, measured some other way.


The speed is also interesting:  the effect can occur within about 140 ms 
of the word being shown.  At the brain's clock speed, that would be 
maybe 300 clocks.  Not much time for anything except parallel processing 
in that short a time.


On the one hand, we can perhaps agree that one of the brain's glories is 
that it can very rapidly draw analogies - that I can quickly produce a 
string of associations like, say,  snake, rope, chain, spaghetti 
strand, - and you may quickly be able to continue that string with 
further associations, (like string). I believe that power is mainly 
based on look-up - literally finding matching shapes at speed. But I 
don't see the brain as checking through huge numbers of such shapes. (It 
would be enormously demanding on resources, given that these are complex 
pictures, no?).


It would be a problem if it were checking pictures.  The standard model 
is that there are links already established between concepts, as a 
result of experience, and all it is doing is propagating activation 
along links, in parallel.


It does depend on whether these are analogies or just associations. 
(related, of course).



As evidence, I'd point to what happens if you try to keep producing 
further analogies. The brain rapidly slows down. It gets harder and 
harder. And yet you will be able to keep producing further examples from 
memory virtually for ever - just slower and slower. Relevant images/ 
concepts are there, but it's not easy to access them. That's why 
copywriters get well paid to, in effect, keep searching for similar 
analogies (as cool/refreshing as...). It's hard work. If that many 
relevant shapes were being unconsciously activated as you seem to be 
suggesting, it shouldn't be such protracted work.


Generating analogies of that sort would not be the same effect.  I make 
no claims for that specific thing, only for the activation of 
semantically related or associated concepts.


The brain can literally connect any thing to any other thing with, so to 
speak, 6 degrees of separation - but I don't think it can connect that 
many things at once.


That is just low-level (neuron connectivity).  That doesn't speak to 
higher level systems.


I accept that this is still neuroscientifically an open issue, ( I'd be 
grateful for  pointers to the research you're referring to). But I would 
have thought it obvious that the brain has massively inferior search 
capabilities to those of computers - that, surely, is a major reason why 
we invented computers in the first place - they're a massive extension 
of our powers.


Too many imponderables here, but in general, no:  the brain may still 
have the edge for some types of parallel search.


And yet the brain can draw analogies, and basically, with minor 
exceptions, computers still can't.


Now you skip to a different issue:  we don't know the *mechanism* involved 
in analogy finding.  That is why computers cannot do it.  It is not that 
computers lack the processing power or connectivity.


I think it's clear that computers
won't catch up here by quantitatively increasing their powers still 
further. If you're digging a hole in the wrong place, digging further and
quicker won't help. (I'm arguing a variant of your own 

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Tintner

MIKE TINTNER Isn't it obvious that the brain is able to understand the
wealth of language by relatively few computations - quite intricate,
hierarchical, multi-levelled processing,

ED PORTER How do you find the right set of relatively few computations
and/or models that are appropriate in a complex context without massive
computation?

Ed, Contrary to my PM, maybe I should answer this in more precise detail. My 
hypothesis is as follows: the brain does most of its thinking, and 
particularly adaptive thinking, by look-up, not by blind search.


How can you or I deal with :

Get that box out of this house now..

How is it, say, that I will be able to think of a series of ideas like get 
ten men to carry it, get a fork-lift truck to move it, use large 
levers,  get hold of some heavy ropes ... etc etc. straight off the top 
of my head in well under a minute?


All of those ideas are derived from visual/sensory images/ schemas of large 
objects being moved.  The brain does not, I suggest, consult digital/ verbal 
lists or networks of verbal ideas about moving boxes out of houses or any 
similar set of verbal concepts, (except v. occasionally).


How then does the brain rapidly pull relevant large-object-moving shapes out 
of  memory? (There are obviously more operations involved here than just 
shape search, but that's what I want to concentrate on).  Now this is where 
I confess again to being a general techno-idiot (although I suspect that in 
this particular area most of you may be, too). My confused idea is that if 
you have a stack of shapes, there are ways to pull out/ spot the relevant 
ones quickly without sorting through the stack one by one. I think Hawkins 
suggests something like this in On Intelligence. Maybe you can have thoughts 
about this.


(Alternatively, the again confused idea occurs that certain neuronal areas, 
when stimulated with a certain shape, may be able to remember similar shapes 
that have been there before -  v. loosely as certain metals when heated, can 
remember/ resume old forms)


Whatever, I am increasingly confident  that the brain does work v. 
extensively by matching shapes physically, (rather than by first converting 
them into digital/symbolic form). And I recommend here Sandra Blakeslee's 
latest book on body maps -  the opening Ramachandran quote -


When a reporter asked the famous biologist JBS Haldane what his biological 
studies had taught about God, Haldane replied: The creator, if he exists, must 
have an inordinate fondness for beetles, since there are more species of 
beetle than any other group of living creatures. By the same token, a 
neurologist might conclude that God is a cartographer. He must have an 
inordinate fondness for maps, for everywhere you look in the brain, maps 
abound.


If I'm headed even loosely in the right direction here,  only analog 
computation will be able to handle the kind of rapid shape matching and 
searches I'm talking about, as opposed to the inordinately long, blind 
symbolic searches of digital computation. And you're going to need a whole 
new kind of computer. But none of you guys are prepared to even contemplate 
that.


P.S. One important feature of shape searches by contrast with digital, 
symbolic searches is that you don't make mistakes.  IOW when we think 
about a problem like getting the box out of a house, all our ideas, I 
suggest, will be to some extent relevant. They may not totally solve the 
problem, but they will fit some of the requirements, precisely because they 
have been derived by shape comparison. When a computer blindly searches 
lists of symbols by contrast, most of them of course are totally irrelevant.




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71680486-77dd12


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Richard Loosemore

Ed Porter wrote:

RICHARD LOOSEMORE I cannot even begin to do justice, here, to the issues
involved in solving the high dimensional problem of seeking to understand
the meaning of text, which often involve multiple levels of implication,
which would normally be accomplished by some sort of search of a large
semantic space

You talk as if an extension of some current strategy will solve this ... but
it is not at all clear that any current strategy for solving this 
problem actually does scale up to a full solution to the problem.  I don't
care how many toy examples you come up with, you have to show a 
strategy for dealing with some of the core issues, AND reasons to believe
that those strategies really will work (other than I find them 
quite promising).


Not only that, but there are at least some people (to wit, myself) who believe
there are positive reasons to believe that the current 
strategies *will* not scale up.


ED PORTER  I don't know if you read the Shastri paper I linked to or
not, but it shows we do know how to do many of the types of implication
which are used in NL.  What he shows needs some extensions, so it is more
generalized, but it and other known inference schemes explain a lot of how
text understanding could be done.  


With regard to the scaling issue, it is a real issue.  But there are
multiple reasons to believe the scaling problems can be overcome.  Not
proofs, Richard, so you are entitled to your doubts.  But open your mind to
the possibilities they present.  They include:

-1- the likely availability of roughly brain level representational,
computational, and interconnect capacities within the several hundred
thousand to 1 million dollar range in seven to ten years.

-2- the fact that human experience and representation does not
explode combinatorially.  Instead it is quite finite.  It fits inside our
heads.  


Thus, although you are dealing with extremely high dimensional spaces, most
of that space is empty.  There are known ways to deal with extremely high
dimensional spaces while avoiding the exponential explosion made possible by
such high dimensionality.  


Take the well known Growing Neural Gas (GNG) algorithm.  It automatically
creates a relatively compact representation of a possibly infinite dimensional
space, by allocating nodes to only those parts of the high dimensional space
where there is stuff, or, if resources are more limited, where the most stuff
is.

Or take indexing: it takes one only to places in the hyperspace where
something actually occurred or was thought about.  One can have
probabilistically selected hierarchical indexing (something like John Rose
suggested) which makes indexing much more efficient.


I'm sorry, but this is not addressing the actual issues involved.

You are implicitly assuming a certain framework for solving the problem 
of representing knowledge ... and then all your discussion is about 
whether or not it is feasible to implement that framework (to overcome 
various issues to do with searches that have to be done within that 
framework).


But I am not challenging the implementation issues, I am challenging the 
viability of the framework itself.


My mind is completely open.  But right now I raised one issue, and this 
is not answered.


I am talking about issues that could prevent that framework from ever 
working no matter how much computing power is available.


You must be able to see this:  you are familiar with the fact that it is 
possible to frame a solution to certain problems in such a way that the 
proposed solution is KNOWN to not converge on an answer?  An answer can 
be perfectly findable IF you use a different representation, but there 
are some ways of representing the problem that lead to a type of 
solution that is completely incomputable.


This is an analogy:  I suggest to you that the framework you have in 
mind when you discuss the solution of the AGI problem is like those 
broken representations.




-3- experiential computers focus most learning, most models, and most
search on things that actually have happened in the past or on things that
in many ways are similar to what has happened in the past.  This tends to
greatly reduce representational and search spaces.

When such a system synthesizes or perceives new patterns that have never
happened before the system will normally have to explore large search
spaces, but because of the capacity of brain level hardware it will have
considerable capability to do so.  The type of hardware that will be
available for human-level agi in the next decade will probably have
sustainable cross sectional bandwidths of 10G to 1T messages/sec with 64Byte
payloads/msg.  With branching tree activations and the fact that many
messages will be regional, the total amount of messaging could well be 100G
to 100T such msg/sec.

Let's assume our hardware has 10T msg/sec and that we want to read 10 words a
second.  That would allow 1T msg/word.  With a dumb spreading 


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Tintner
Matt: Semantic models learn associations by proximity in the training text. 
The degree to which you associate snake and rope depends on how often these 
words appear near each other

Correct me - but it's the old, old problem here, isn't it? Those semantic 
models/programs  won't be able to form any *new* analogies, will they? Or 
understand newly minted analogies in texts?  And I'm v. dubious about their 
powers to even form valid associations of much value in the ways you 
describe from existing texts.


You're saying that there's a semantic model/program that can answer, if 
asked,:


yes - 'snake, chain, rope, spaghetti strand'  is a legitimate/ valid series 
of associations/ yes, they fit together  (based on previous textual 
analysis) ?


or:  the odd one out in 'snake/ chain/ cigarette/ rope  is 'cigarette'?

I have yet to find or be given a single useful analogy drawn by computers 
(despite asking many times). The only kind of analogy I can remember here is 
Ed, I think, pointing to Hofstadter's analogies along the lines of  xxyy 
is  like .  Not exactly a big deal. No doubt there must be more, 
but my impression is that in general computers are still pathetic here. 



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71683316-d0bd3c


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney
--- Ed Porter [EMAIL PROTECTED] wrote:
 We do not know the number and width of the spreading activation that is
 necessary for human level reasoning over world knowledge.  Thus, we really
 don't know how much interconnect is needed and thus how large of a P2P net
 would be needed for impressive AGI.  But I think it would have to be larger
 than say 10K nodes.

In complex systems on the boundary between stability and chaos, the degree of
interconnectedness per node is constant.  Complex systems always evolve to
this boundary because stable systems aren't complex and chaotic systems can't
be incrementally updated.

In my thesis ( http://cs.fit.edu/~mmahoney/thesis.html ) I did not estimate
the communication bandwidth.  But it is O(n log n) because the distance
between nodes grows as O(log n).  For each message sent or received, a node
must also relay O(log n) messages.
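
As a rough illustration of that growth rate (my assumptions: each of n nodes
originates one message per second, and each message crosses about log2(n)
hops):

import math

def relay_load(n, m=1.0):
    # Total relays/sec: n originators x m msgs/sec x ~log2(n) hops each.
    return n * m * math.log2(n)

for n in (10_000, 1_000_000, 100_000_000):
    print(f"n={n:>11,}  relays/sec = {relay_load(n):,.0f}")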

If the communication protocol is natural language text, then I am pretty sure
our existing networks can handle it.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71684400-910726


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Matt Mahoney

--- Mike Tintner [EMAIL PROTECTED] wrote:

 Matt: Semantic models learn associations by proximity in the training text. 
 The degree to which you associate snake and rope depends on how often these
 words appear near each other
 
 Correct me - but it's the old, old problem here, isn't it? Those semantic 
 models/programs  won't be able to form any *new* analogies, will they? Or 
 understand newly minted analogies in texts?  And I'm v. dubious about their 
 powers to even form valid associations of much value in the ways you 
 describe from existing texts.
 
 You're saying that there's a semantic model/program that can answer, if 
 asked,:
 yes - 'snake, chain, rope, spaghetti strand'  is a legitimate/ valid series
 of associations/ yes, they fit together  (based on previous textual 
 analysis) ?

Yes, because each adjacent pair of words has a high frequency of co-occurrence
in a corpus of training text.

 or:  the odd one out in 'snake/ chain/ cigarette/ rope  is 'cigarette'?

Yes, because cigarette does not have a high co-occurrence with the other
words.
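
For instance, a minimal Python sketch of that odd-one-out test; the
co-occurrence counts are invented for illustration:

# Invented co-occurrence counts from some training corpus.
pairs = {
    ("snake", "chain"): 3, ("snake", "rope"): 4, ("snake", "cigarette"): 0,
    ("chain", "rope"): 5, ("chain", "cigarette"): 1, ("rope", "cigarette"): 0,
}
cooc = {frozenset(p): c for p, c in pairs.items()}
words = ["snake", "chain", "cigarette", "rope"]

def score(w):
    # Total association of w with the other three words.
    return sum(cooc[frozenset((w, v))] for v in words if v != w)

print(min(words, key=score))  # -> cigarette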

 I have yet to find or be given a single useful analogy drawn by computers 
 (despite asking many times). The only kind of analogy I can remember here is
 Ed, I think,  pointing to Hofstader's analogies along the lines of  xxyy 
 is  like .  Not exactly a big deal. No doubt there must be more, 
 but my impression is that in general computers are still pathetic here.

This simplistic vector space model I described has been used to pass the word
analogy section of the SAT exams.  See: 

Turney, P., Human Level Performance on Word Analogy Questions by Latent
Relational Analysis (2004), National Research Council of Canada,
http://iit-iti.nrc-cnrc.gc.ca/iit-publications-iti/docs/NRC-47422.pdf


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71685861-05fe0f


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter
Mike

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 8:25 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

MIKE TINTNER Isn't it obvious that the brain is able to understand
the
wealth of language by relatively few computations - quite intricate,
hierarchical, multi-levelled processing,

ED PORTER How do you find the right set of relatively few
computations
and/or models that are appropriate in a complex context without massive
computation?

MIKE TINTNER How then does the brain rapidly pull relevant
large-object-moving shapes out 
of  memory? (There are obviously more operations involved here than just 
shape search, but that's what I want to concentrate on).  Now this is where 
I confess again to being a general techno-idiot (although I suspect that in 
this particular area most of you may be, too). My confused idea is that if 
you have a stack of shapes, there are ways to pull out/ spot the relevant 
ones quickly without sorting through the stack one by one. I think Hawkins 
suggests something like this in On Intelligence. Maybe you can have thoughts

about this.

ED One way is by indexing something by its features, but this is a form
of search, which if done completely activates each occurrence of each
feature searched for, and then selects the one or more patterns with the best
activation score.  Others on the list can probably name other methods
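
A minimal sketch of that first method; the patterns and features here are
invented:

from collections import defaultdict

# Invented stored patterns, each described by a set of features.
patterns = {
    "fork-lift": {"heavy", "lift", "machine"},
    "ten-men":   {"heavy", "lift", "people"},
    "lever":     {"heavy", "tool"},
}

index = defaultdict(set)  # inverted index: feature -> patterns containing it
for name, feats in patterns.items():
    for f in feats:
        index[f].add(name)

def best_match(query):
    scores = defaultdict(int)
    for f in query:               # each feature activates its patterns
        for name in index[f]:
            scores[name] += 1
    return max(scores, key=scores.get)  # best activation score wins

print(best_match({"heavy", "lift", "machine"}))  # -> fork-lift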

Another used in perception is to hierarchically match inputs against
patterns that represent given shapes under different conditions.

MIKE TINTNER (Alternatively, the again confused idea occurs that
certain neuronal areas, 
when stimulated with a certain shape, may be able to remember similar shapes

that have been there before -  v. loosely as certain metals when heated, can

remember/ resume old forms)

Whatever, I am increasingly confident  that the brain does work v. 
extensively by matching shapes physically, (rather than by first converting 
them into digital/symbolic form). And I recommend here Sandra Blakeslee's 
latest book on body maps -  the opening Ramachandran quote -

ED there clearly is some shape matching in the brain.

MIKE TINTNER P.S. One important feature of shape searches by contrast
with digital, 
symbolic searches is that you don't make mistakes.  IOW when we think 
about a problem like getting the box out of a house, all our ideas, I 
suggest, will be to some extent relevant. They may not totally solve the 
problem, but they will fit some of the requirements, precisely because they 
have been derived by shape comparison. When a computer blindly searches 
lists of symbols by contrast, most of them of course are totally irrelevant.


ED Yes, but there are a lot of types of thinking that cannot be done by
shape alone, and shape is actually much more complicated than the word
suggests.  There is shape, and shape distorted by perspective, and shape
changed by bending, and shape changed by size.  There is shape of objects,
shape of trajectories, 2d shapes, 3d shapes.  There are visual memories, where
we don't really remember all the shapes, but instead remember the types of
things that were there and fill in most of the actual shapes.  In sum, it's
a lot more complicated than just finding a matching photograph.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71691780-efaeb1

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter



RICHARD LOOSEMORE= I'm sorry, but this is not addressing the actual
issues involved.

You are implicitly assuming a certain framework for solving the problem 
of representing knowledge ... and then all your discussion is about 
whether or not it is feasible to implement that framework (to overcome 
various issues to do with searches that have to be done within that 
framework).

But I am not challenging the implementation issues, I am challenging the 
viability of the framework itself.


ED PORTER= So what is wrong with my framework?  What is wrong with a
system of recording patterns, and a method for developing compositions and
generalities from those patterns, in multiple hierarchical levels, and for
indicating the probabilities of certain patterns given certain other
patterns, etc.?  

I know it doesn't genuflect before the altar of complexity.  But what is
wrong with the framework other than the fact that it is at a high level and
thus does not explain every little detail of how to actually make an AGI
work?



RICHARD LOOSEMORE= These models you are talking about are trivial
exercises in public 
relations, designed to look really impressive, and filled with hype 
designed to attract funding, which actually accomplish very little.

Please, Ed, don't do this to me. Please don't try to imply that I need 
to open my mind any more.  The implication seems to be that I do not 
understand the issues in enough depth, and need to do some more work to 
understand your points.  I can assure you this is not the case.



ED PORTER= Shastri's Shruti is a major piece of work.  Although it is
a highly simplified system, for its degree of simplification it is amazingly
powerful.  It has been very helpful to my thinking about AGI.  Please give
me some excuse for calling it a trivial exercise in public relations.  I
certainly have not published anything as important.  Have you?

The same for Mike Collins's parsers which, at least several years ago I was
told by multiple people at MIT was considered one of the most accurate NL
parsers around.  Is that just a trivial exercise in public relations?  

With regard to Hecht-Nielsen's work, if it does half of what he says it does
it is pretty damned impressive.  It is also a work I think about often when
thinking how to deal with certain AI problems.  

Richard if you insultingly dismiss such valid work as trivial exercises in
public relations it sure as hell seems as if either you are quite lacking
in certain important understandings -- or you have a closed mind -- or both.



Ed Porter

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71696956-846847

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter

Richard Loosemore= None of the above is relevant.  The issue is not
whether toy problems 
set within the current paradigm can be done with this or that search 
algorithm, it is whether the current paradigm can be made to converge at 
all for non-toy problems.

Ed Porter= Richard, I wouldn't call a state of the art NL parser that
matches parse trees in 500K dimensions a toy problem.  Yes, it is much less
than a complete human brain, but it is not a toy problem.

With regard to Hecht-Nielsen's sentence completion program, it is arguably a
toy problem, but it operates extremely efficiently (i.e., converges) in an
astronomically large search space, with a significant portion of that search
space having some arguable activation.  The fact that there is such
efficient convergence in such a large search space is meaningful, and the
fact that you just dismissed it in your last email as a trivial publicity
stunt is also meaningful.

Ed Porter


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71705619-d121f2

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter
Matt,

In my Mon 12/3/2007 8:17 PM post to John Rose, from which you are probably
quoting below, I discussed the bandwidth issues.  I am assuming nodes
directly talk to each other, which is probably overly optimistic, but still
are limited by the fact that each node can only receive somewhere roughly
around 100 128-byte messages a second.  Unless you have a really big P2P
system, that just isn't going to give you much bandwidth.  If you had 100
million P2P nodes it would.  Thus, a key issue is how many participants an
AGI-at-Home P2P system is going to get.  
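
A rough aggregate check of those figures (my arithmetic, using the per-node
limit above):

msgs_per_node = 100   # messages/sec a node can receive
msg_bytes = 128
for nodes in (10_000, 100_000_000):
    agg_gb = nodes * msgs_per_node * msg_bytes / 1e9
    print(f"{nodes:>11,} nodes -> {agg_gb:,.2f} GB/s aggregate")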

I mean, what would motivate the average American, or even the average
computer geek, to turn over part of his computer to it?  It might not be an
easy sell for more than several hundred or several thousand people, at least
until it could do something cool, like index their videos for them, be a
funny chat bot, or something like that.

Ed Porter

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 8:51 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

--- Ed Porter [EMAIL PROTECTED] wrote:
 We do not know the number and width of the spreading activation that is
 necessary for human level reasoning over world knowledge.  Thus, we really
 don't know how much interconnect is needed and thus how large of a P2P net
 would be needed for impressive AGI.  But I think it would have to be
larger
 than say 10K nodes.

In complex systems on the boundary between stability and chaos, the degree
of
interconnectedness per node is constant.  Complex systems always evolve to
this boundary because stable systems aren't complex and chaotic systems
can't
be incrementally updated.

In my thesis ( http://cs.fit.edu/~mmahoney/thesis.html ) I did not estimate
the communication bandwidth.  But it is O(n log n) because the distance
between nodes grows as O(log n).  For each message sent or received, a node
must also relay O(log n) messages.

If the communication protocol is natural language text, then I am pretty
sure
our existing networks can handle it.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71708450-da8cab

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Ed Porter
Matt,

In addition to my last email, I don't understand what you were saying below
about complexity.  Are you saying that as a system becomes bigger it
naturally becomes unstable, or what?

Ed Porter 

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 8:51 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

--- Ed Porter [EMAIL PROTECTED] wrote:
 We do not know the number and width of the spreading activation that is
 necessary for human level reasoning over world knowledge.  Thus, we really
 don't know how much interconnect is needed and thus how large of a P2P net
 would be needed for impressive AGI.  But I think it would have to be
larger
 than say 10K nodes.

In complex systems on the boundary between stability and chaos, the degree
of
interconnectedness per node is constant.  Complex systems always evolve to
this boundary because stable systems aren't complex and chaotic systems
can't
be incrementally updated.

In my thesis ( http://cs.fit.edu/~mmahoney/thesis.html ) I did not estimate
the communication bandwidth.  But it is O(n log n) because the distance
between nodes grows as O(log n).  For each message sent or received, a node
must also relay O(log n) messages.

If the communication protocol is natural language text, then I am pretty
sure
our existing networks can handle it.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71710422-50e2fa

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Bryan Bishop
On Thursday 29 November 2007, Ed Porter wrote:
 Somebody (I think it was David Hart) told me there is a shareware
 distributed web crawler already available, but I don't know the
 details, such as how good or fast it is.

http://grub.org/
Previous owner went by the name of 'kordless'. I found him on Slashdot.

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71712384-417a60


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Richard Loosemore

Ed Porter wrote:

Richard Loosemore= None of the above is relevant.  The issue is not
whether toy problems 
set within the current paradigm can be done with this or that search 
algorithm, it is whether the current paradigm can be made to converge at 
all for non-toy problems.


Ed Porter= Richard, I wouldn't call a state of the art NL parser that
matches parse trees in 500K dimensions a toy problem.  Yes, it is much less
than a complete human brain, but it is not a toy problem.


This is a toy problem.

Parsing is a deep problem?  Do you understand the relationship between 
parsing NL and extracting semantics?  Do you understand what this great 
NL parser would do if confronted with a syntactically incorrect but 
contextually meaningful sentence?  Has it been analysed to see what its 
behavior is on ambiguous sentences?  Could it learn to cope with someone 
speaking a pidgin version of NL, or would someone have to write an 
entire grammar for the language before the system could even start 
parsing it?  Can it generate syntactically correct sentences that 
express an idea?  Can it cope with speech errors, recognising the nature 
of the error and backfilling, or does it just collapse with no viable 
parse?  Would the parser have to be completely rewritten in the future 
when someone else finally solves the problem of representing the 
semantics of language?


Finally, if you are impressed by the claim about 500K dimensions then 
what can I say?  Can you explain to me in what sense it matches parse 
trees in 500K dimensions, and why that is so impressive?


Perhaps I am being unnecessarily hard on you, Ed.  I don't mean to be 
personally rude, you know, but it is sometimes exhausting to have 
someone trying to teach you how to suck eggs



Richard Loosemore



With regard to Hecht-Nielsen's sentence completion program, it is arguably a
toy problem, but it operates extremely efficiently (i.e., converges) in an
astronomically large search space, with a significant portion of that search
space having some arguable activation.  The fact that there is such
efficient convergence in such a large search space is meaningful, and the
fact that you just dismissed it in your last email as a trivial publicity
stunt is also meaningful.

Ed Porter


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71714474-5576ff


Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-03 Thread Bryan Bishop
On Monday 03 December 2007, Mike Dougherty wrote:
 I believe the next step of such a system is to become an abstraction
 between the user and the network they're using.  So if you can hook
 into your P2P network via a firefox extension, (consider StumbleUpon
 or Greasemonkey) so it (the agent) can passively monitor your web
 interaction - then it could be learn to screen emails (for example)
 or pre-chew either your first 10 google hits or summarize the next
 100 for relevance.  I have been told that by the time you have an
 agent doing this well, you'd already have AGI - but i can't believe
 this kind of data mining is beyond narrow AI (or requires fully
 general adaptive intelligence)

Another method of doing search agents, in the mean time, might be to 
take neural tissue samples (or simple scanning of the brain) and try to 
simulate a patch of neurons via computers so that when the simulated 
neurons send good signals, the search agent knows that there has been a 
good match that excites the neurons, and then tells the wetware human 
what has been found. The problem that immediately comes to mind is that 
neurons for such searching are probably somewhere deep in the 
prefrontal cortex ... does anybody have any references to studies done 
with fMRI on people forming Google queries?

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=71715011-399ee5

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Richard Loosemore

Ed Porter wrote:




RICHARD LOOSEMORE= I'm sorry, but this is not addressing the actual

issues involved.

You are implicitly assuming a certain framework for solving the problem 
of representing knowledge ... and then all your discussion is about 
whether or not it is feasible to implement that framework (to overcome 
various issues to do with searches that have to be done within that 
framework).


But I am not challenging the implementation issues, I am challenging the 
viability of the framework itself.



ED PORTER= So what is wrong with my framework?  What is wrong with a
system of recording patterns, and a method for developing compositions and
generalities from those patterns, in multiple hierarchical levels, and for
indicating the probabilities of certain patterns given certain other
patterns, etc.?  


I know it doesn't genuflect before the altar of complexity.  But what is
wrong with the framework other than the fact that it is at a high level and
thus does not explain every little detail of how to actually make an AGI
work?




RICHARD LOOSEMORE= These models you are talking about are trivial
exercises in public 
relations, designed to look really impressive, and filled with hype 
designed to attract funding, which actually accomplish very little.


Please, Ed, don't do this to me. Please don't try to imply that I need 
to open my mind any more.  The implication seems to be that I do not 
understand the issues in enough depth, and need to do some more work to 
understand your points.  I can assure you this is not the case.




ED PORTER= Shastri's Shruti is a major piece of work.  Although it is
a highly simplified system, for its degree of simplification it is amazingly
powerful.  It has been very helpful to my thinking about AGI.  Please give
me some excuse for calling it a trivial exercise in public relations.  I
certainly have not published anything as important.  Have you?

The same for Mike Collins's parsers which, at least several years ago I was
told by multiple people at MIT was considered one of the most accurate NL
parsers around.  Is that just a trivial exercise in public relations?  


With regard to Hecht-Nielsen's work, if it does half of what he says it does
it is pretty damned impressive.  It is also a work I think about often when
thinking how to deal with certain AI problems.  


Richard if you insultingly dismiss such valid work as trivial exercises in
public relations it sure as hell seems as if either you are quite lacking
in certain important understandings -- or you have a closed mind -- or both.


Ed,

You have no idea of the context in which I made that sweeping dismissal. 
 If you have enough experience of research in this area you will know 
that it is filled with bandwagons, hype and publicity-seeking.  Trivial 
models are presented as if they are fabulous achievements when, in fact, 
they are just engineered to look very impressive but actually solve an 
easy problem.  Have you had experience of such models?  Have you been 
around long enough to have seen something promoted as a great 
breakthrough even though it strikes you as just a trivial exercise in 
public relations, and then watched history unfold as the great 
breakthrough leads to absolutely nothing at all, and is then 
exaggeration and retreat, exaggeration and retreat.  You are familiar 
with this process, yes?


This entire discussion baffles me.  Does it matter at all to you that I 
have been working in this field for decades?  Would you go up to someone 
at your local university and tell them how to do their job?  Would you 
listen to what they had to say about issues that arise in their field of 
expertise, or would you consider your own opinion entirely equal to 
theirs, with only a tiny fraction of their experience?




Richard Loosemore




Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread Mike Tintner
ED: Yes, but there are a lot of types of thinking that cannot be done by shape 
alone, and shape is actually much more complicated than it sounds. There is shape, 
and shape distorted by perspective, and shape changed by bending, and shape changed 
by size. There is the shape of objects, the shape of trajectories, 2d shapes, 3d 
shapes. There are visual memories, where we don't really remember all the 
shapes, but instead remember the types of things that were there and fill in 
most of the actual shapes. In sum, it's a lot more complicated than just 
finding a matching photograph.

Ed,

I am not suggesting that shape matching is everything, merely that it is 
central to a great many of the brain's operations - and to its ability to 
search rapidly and briefly and locate analogical ideas (and if that's true, as 
I believe it is, then, sorry, AGI's stuckness is going to continue for a long 
time yet).

The reason I'm replying, though, is that a further thought occurred to me. Essentially 
I've been suggesting that the brain has some means to locate matching shapes 
quickly, in very few operations, where a digital computer laboriously searches 
through long lists or networks of symbols in a great many operations. One v. 
crude idea for the mechanism I suggested was that neuronal areas somehow 
retain memories of shapes, which can be stimulated by similar incoming shapes - 
so that analogies can be drawn with extreme rapidity, more or less on the spot.

It's occurred to me that this may well happen over and over throughout the body 
and related brain areas.  The same body areas that today feel stiff/expanded/ 
cold, felt loose/contracted/warm yesterday. The same hand that was a ball, 
and many other shapes, is now a fist. So perhaps these memories are all somehow 
laid on top of each other in the same brain areas... Map upon map upon map. Just an 
extremely rough idea, but I think it does go some way to showing how shape 
matching could indeed be extremely rapid and effective in the brain, by 
contrast with computers' blind, disembodied search. 
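
To caricature the contrast in code - a toy of my own in Python, with made-up 
feature vectors, claiming nothing about how the brain actually encodes shape - 
if stored shapes are addressed by a coarse signature of their own features, 
then recalling similar shapes takes one bucket lookup rather than a scan of 
everything stored:

from collections import defaultdict

def signature(features, step=0.25):
    """Quantize a feature vector into a coarse, content-derived address."""
    return tuple(round(f / step) for f in features)

store = defaultdict(list)  # signature -> names of shapes stored under it

def remember(name, features):
    store[signature(features)].append(name)

def recall_similar(features):
    return store.get(signature(features), [])

remember("snake", [0.9, 0.1, 0.8])  # elongation, compactness, curviness
remember("rope",  [0.9, 0.1, 0.7])
remember("fist",  [0.2, 0.9, 0.3])
print(recall_similar([0.88, 0.12, 0.75]))  # -> ['snake', 'rope']

A real scheme would need multiple overlapping signatures so that near-misses 
on bucket boundaries still match, but the point stands: the cost of recall 
does not grow with the number of shapes stored.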

It follows, BTW, re your points above, that the same brain areas will also retain 
many morphic variations on the same basic shapes - objects/cups seen, say, 
moving, from different angles, zooming in and out, etc. 

And if it's true, as I believe, that the brain uses loose, highly flexible 
templates for visual object perception - then that too should mean that it will 
easily and rapidly be able to connect closely related shapes as in snake/ 
chain/ rope/ spaghetti strand. Analogies and perception are interwoven for the 
brain. Blakeslee makes a good deal of the brain using flexible, morphic body 
maps. 

Thanks for your reply. Further thoughts re mechanisms welcome. As Blakeslee 
points out, this whole area is just beginning to open up.


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-03 Thread John G. Rose
Ed,

Well it'd be nice having a supercomputer but P2P is a poor man's
supercomputer and beggars can't be choosy.

Honestly, the type of AGI that I have been formulating in my mind has not
been at all closely related to simulating neural activity through
orchestrating partial and mass activations at low frequencies, and I had been
avoiding those contagious cog-sci memes on purpose. But your exposé on the
subject is quite interesting, and I wasn't aware that this is how things
have been done.

But getting more than a few thousand P2P nodes is difficult. Going from 10K
to 20K nodes and up, it gets more difficult still, to the point of being
prohibitively expensive, if not impossible without extreme luck.  There are
ways to do it, but according to your calculations the supercomputer may be
the wiser choice, as going out and scrounging up funding for that would be
easier.

Still, though (besides working on my group-theory-heavy design), exploring how
the activation model you are talking about could be crafted and chiseled to fit
the P2P network could be fruitful. I feel that through a number of up-front and
unfortunately complicated design changes/adaptations, the activation
orchestrations could be improved, bringing down the message-rate
requirements and reducing activation requirements, depths, and frequencies,
through a sort of computational-resource-topology consumption and
self-organizational design molding.
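
As a toy example of what I mean by reducing depths and frequencies - my own
Python sketch, nothing resembling a real P2P implementation - depth- and
threshold-limited spreading activation gives you explicit knobs on the
message rate:

def spread(graph, start, decay=0.5, threshold=0.1, max_depth=3):
    """Propagate activation from `start`, counting internode messages."""
    activation = {start: 1.0}
    frontier = [(start, 1.0, 0)]
    messages = 0
    while frontier:
        node, act, depth = frontier.pop()
        if depth >= max_depth:
            continue
        for nbr, weight in graph.get(node, []):
            a = act * weight * decay  # activation decays per hop
            messages += 1             # one internode message per edge crossed
            if a > threshold and a > activation.get(nbr, 0.0):
                activation[nbr] = a
                frontier.append((nbr, a, depth + 1))
    return activation, messages

# Example: A fans out to B and C; B fans out to D.
graph = {"A": [("B", 1.0), ("C", 0.8)], "B": [("D", 0.9)]}
acts, msgs = spread(graph, "A")  # 3 messages for this tiny graph

Tightening threshold or max_depth cuts the message count roughly
geometrically, which is the kind of design molding I have in mind.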

You do indicate some dynamic resource adaptation and things like intelligent
inference-guiding schemes in your description, but the design doesn't seem to
melt far enough into the resource space. Then again, having a design be less
static risks excessive complications...

A major problem, though, with P2P and the activation methodology is that there
are so many variances in latency and availability that serious
synchronicity/simultaneity issues would arise, and even more messaging might
be required. Since there are so many variables in public P2P, empirical data
would also be necessary to get a gander at feasibility.

I still feel strongly that the way to do AGI P2P (with public P2P as the core,
not as an augmentation) is to understand the grid, and build the AGI design
based on that and on what it will be in a few years, instead of taking a design
and morphing it to the resource space. That said, only finitely many designs
will work, so the number of choices is few.

John


_
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 6:17 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi]
Funding AGI research]


John, 

You raised some good points.  The problem is that the total
number of messages/sec that can be received is relatively small.  It is not
as if you are dealing with a multidimensional grid or toroidal net in which
spreading tree activation can take advantage of the fact that the total
parallel bandwidth for regional messaging can be much greater than the
cross-sectional bandwidth.  

In a system where each node is a server-class node with
multiple processors and 32 or 64 GBytes of RAM, much of which is allocable to
representation, sending messages to local indices on each machine could
fairly efficiently activate all occurrences of something in a 32 to 64 TByte
knowledge base with a max of 1K internode messages, if there were only 1K
nodes.
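
(As a minimal sketch of that broadcast-plus-local-index idea - Python, with
purely illustrative names:)

class Node:
    """One server node holding a local inverted index (sketch only)."""
    def __init__(self):
        self.index = {}  # pattern name -> list of local occurrences

    def activate(self, pattern):
        return self.index.get(pattern, [])

def activate_everywhere(nodes, pattern):
    """One message per node activates every stored occurrence of a pattern."""
    hits, messages = [], 0
    for node in nodes:  # in a real cluster these are 1K parallel sends
        messages += 1
        hits.extend(node.activate(pattern))
    return hits, messages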

But in a PC-based P2P system the ratio of nodes to
representation space is high and the total number of 128-byte messages/sec
that can be received is limited to about 100, so neither method of trying
to increase the number of patterns that can be activated with the given
interconnect of the network buys you as much.
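
(To put rough numbers on that limit - this is just arithmetic on the
figures above:)

MSGS_PER_NODE_PER_SEC = 100  # receivable per public P2P peer, as above
MSG_BYTES = 128

for nodes in (1_000, 10_000, 100_000):
    msgs = nodes * MSGS_PER_NODE_PER_SEC
    print(f"{nodes:>7} nodes: {msgs:>10,} msgs/s aggregate "
          f"({msgs * MSG_BYTES / 1e6:.1f} MB/s)")

Even at 100K nodes the aggregate is only 10M short messages/sec, and each
individual node still sees just its own 100.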

Human level context sensitivity arises because a large
number of things that can depend on a large number of things in the current
context are made aware of those dependencies.  This takes a lot of
messaging, and I don't see how a P2P system where each node can only receive
about 100 relatively short messages a second is going to make this possible
unless you have a huge number of nodes. As Richard Loosemore said in his Mon
12/3/2007 12:57 PM post:

    It turns out that within an extremely short time of the first word
    being seen, a very large number of other words have their activations
    raised significantly.  Now, whichever way you interpret these
    (so-called priming) results, one thing is not in doubt: there is
    massively parallel activation of lexical units going on during
    language processing.

With special software, a $10M supercomputer cluster
with 1K nodes, 32 TBytes of RAM, and a dual-ported 20Mb infiniband
interconnect send about 1 

RE: [agi] What are the real unsolved issues in AGI [WAS Re: Hacker intelligence

2007-12-03 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 Top three?  I don't know if anyone ranks them.
 
 Try:
 
 1) Grounding Problem (the *real* one, not the cheap substitute that
 everyone usually thinks of as the symbol grounding problem).
 
 2) The problem of designing an inference control engine whose behavior is
   predictable/governable etc.
 
 3) A way to represent things - and in particular, uncertainty - without
 getting buried up to the eyeballs in (e.g.) temporal logics that nobody
 believes in.
 
 Take this with a pinch of salt:  I am sure there are plenty of others.
 But if you came up with a *principled* solution to these issues, I'd be
 impressed.
 

Thanks, Richard, for listing these. I have thought about 1 and 3 more than 2,
but I am not sure I understand any of them fully enough to comment. I can
tell from some of your other emails that many people have made speculations
and declarations in guerrilla-warfare fashion and then disappeared into the
jungle, causing frustration and anxiety to those who have truly dived deep
into the conundrums.

I think you have some emails on the Grounding Problem from way back.

Also I feel that I have some good ideas on #3... including analogies. But
inference control engines can be tough. That is more of a #1 I think.

John
