For some lucky cable folks the BW is getting ready to increase soon:

http://arstechnica.com/news.ars/post/20071130-docsis-3-0-possible-100mbps-sp
eeds-coming-to-some-comcast-users-in-2008.html

I'm yet to fully understand the limitations of a P2P based AGI design or the
augmentational ability of a public P2P network on a private P2P network
constructed for AGI. I would count out P2P AGI so quickly.

John

                _____________________________________________
                From: Ed Porter [mailto:[EMAIL PROTECTED] 
                Sent: Monday, December 03, 2007 12:20 PM
                To: [email protected]
                Subject: [agi] RE:P2P and/or communal AGI development [WAS
Hacker intelligence level...]
                

                
                My suggestion, criticized below (criticism can be valuable),
was for just one of many possible uses of an open-source P2P AGI-at-home
type system.  I am totally willing to hear other proposals.  Considering how
little time I spent coming up with the one being criticized, I have a
relatively low ego investment in it and I assume there will be better
suggestions from others.

                I think the hard part of AGI will be difficult to address on
a P2P system with low interconnect bandwidth.  I do because I believe the
hard part of AGI will be learning appropriate dynamic controls for massively
parallel systems computing over massive amounts of data, and the creation of
automatically self organizing knowledge bases derived from computing from
such massive amounts of knowledge in a highly non-localized way.  For
progress on these fronts at any reasonable speed you need massive bandwidth,
which a current P2P system would lack, according to the previous
communications on this thread.  So a current P2P system on the web is not
going to be a good test bed for anything approaching human-level AGI.

                But interesting things could be learned with P2P AGI-at-Home
networks.  In the NL example I proposed, the word senses and parsing were
all to be learned with generalized AGI learning algorithms (although
bootstrapped with some narrow AI tools)  I think they could be a good test
bed for AGI learning of self organizing gen-comp hierarchies because the
training data is plentiful and easy to get, many of the gen-comp hierarchy
of patterns that would be formed would be ones that we humans could
understand, and the capabilities of the system would be ones we could
compare to human level performance in a somewhat intuitive manner.  

                With regard to the statement that "The proper order is:
lexical rules first, then semantics, then grammar, and then the problem
solving.  The whole point of using massive parallel computation is to do the
hard part of the problem" I have the following two comments:  

                (1) As I have said before, the truly hard part of AGI is
almost certainly going to be beyond a P2P network of PCs.  

                And (2) with regard to the order of NL learning, I think a
child actually learns semantics first (words associated with sets of
experience), since most young children I have met start communicating first
in single word statements.  The word sense experts I proposed in the P2P
system would be focusing on this level of knowledge.  Unfortunately, they
would be largely limited to experience in the form of a textual context,
resulting in a quite limited form of experiential grounding. 


                The type of generalized AGI learning algorithm I proposed
would address lexical rules and grammar as part of both its study of grammar
and word senses.  I have only separated out different forms of expertise
because each PC can only contain a relatively small amount of information,
so there has to be some attempt to separate the P2P's AGI representation
into regions with the highest locality of reference.  In and ideal world
this should be done automatically, but to do this well automatically would
tend to require high bandwidth, which the P2P system wouldn't have.  So at
least initially it probably makes sense to have humans decide what the
various fields of expertise are (although such decisions could be based on
AGI derived data, such as that obtained from data access patterns on singe
PC AGI prototypes, or even on an initial networked system).

                Also, I think we should take advantage of some of the narrow
AI tools we have, such as parsers, WordNet, dictionaries, and word-sense
quessers, to bootstrap the system so that we could get more deeply into the
more interesting aspect of AGI such as semantic understanding faster.  These
narrow AI tools could be used in conjunction with AGI learning.  For
example, the output of a narrow AI parser or word sense labeler could be
used to provide initial data used to train up AGI models, which could then
replace or run in conjunction with the narrow AGI tools in a set of EM
cycles, with the AGI models hopefully providing more consistent labeling at
time progresses, and increasingly getting more weight relative to the narrow
AI tools.  

                Perhaps one aspect of the AGI-at-home project would be to
develop a good generalized architecture for wedding various classes of
narrow AI and AGI in such a learning environment.  Narrow AI's are often
very efficient, but they have very limitations which AGI can often overcome.
Perhaps learning how to optimally wed the two could create systems that had
the best features of both AGI and narrow AI, greatly increasing the
efficiency of AGI.

                But there are all sorts of other interesting things that
could be done with an AGI-at-home P2P system. I am claiming no special
expertise as to what is the best use of it.  

                For example, I think it would be interesting to see what
sort of AGI's could be built on current PCs with up to 4G or RAM.  It would
be interesting to see just what capability they could have.  If such systems
were built, one could use multiples of them to more rapidly optimize the
tuning of their various internal parameters for various tasks.  

                Once such PC level AGI's were built, it would be interesting
to see what communities of them would be able to do.  

                It would also be interested to see what forms of useful AGI
could be built using multiple machines on a P2P network, given the low
bandwidth connecting them.

                It has been suggested that OpenCog be developed on
Launchpad.com, which seems pretty cool.  It is an open source website for
cooperative software development.  It has multiple different levels at which
members of a community can collaborate, including at the
what-do-we-want-to-do level.  It least initially that will probably be one
of the most interesting levels of the site.  Hopefully over time people will
come back with more and more working modules, and such feedback can better
inform the high level discussions about what direction in which to go.  The
system is designed to let project have different branches that go in
different directions, and to let people see which branches currently are
being most used and attended to.

                So Richard Loosemore will be free to try to develop a group
to explore complexity, another group will be able to focus on individual PC
level AGI, some will be able to specialize on generalization, others on
inference control, etc.  

                It seems to me the key variables will be how much
participation from talented people the project will be able to attract and
how easy its initial code base is to work with.
                
                Ed Porter
                
                -----Original Message-----
                From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
                Sent: Monday, December 03, 2007 11:39 AM
                To: [email protected]
                Subject: Re: Hacker intelligence level [WAS Re: [agi]
Funding AGI research]

                --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
                > Menawhile, unfortunately, solving all those other issues
like making 
                > parsers and trying to do word-sense disambiguation would
not help one 
                > whit to get the real theoretical task done.

                I agree.  AI has a long history of doing the easy part of
the problem first:
                solving the mathematics or logic of a word problem, and
deferring the hard
                part, which is extracting the right formal statement from
the natural language
                input.  This is the opposite order of how children learn.
The proper order
                is: lexical rules first, then semantics, then grammar, and
then the problem
                solving.  The whole point of using massive parallel
computation is to do the
                hard part of the problem.


                -- Matt Mahoney, [EMAIL PROTECTED]

                -----
                This list is sponsored by AGIRI: http://www.agiri.org/email
                To unsubscribe or change your options, please go to:
        
http://v2.listbox.com/member/?&;

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71598426-4dad67

Reply via email to