James,

 

I read your paper.  Your project seems right on the mark.  It provides a
domain-limited example of the general type of learning algorithm that will
probably be the central learning algorithm of AGI, i.e., finding patterns,
and hierarchies of patterns, in the AGI's experience in a largely
unsupervised manner.

 

Applying this type of learning algorithm to text makes sense because, with
the web, text is one of the easiest types of experience to obtain in large
volumes.  It is very much the type of project I have been advocating for
years.  When I first heard of the Google project to put millions of books
into digital form, I assumed it was for exactly such purposes, and told
multiple people so.  (Ditto for the CMU Million Book Project.)  The
conventional wisdom seems to be that Google is not using its vast resources
for such an obvious purpose, but I wouldn't be so sure.

 

It seems to me that fiction books, at an estimated average length of 300
pages at 300 words/page, would only have about 100K words each, so that 600
of them would only be about 60 million words, which is amazingly small by
the standards of learning-from-corpora studies.  That you were able to learn
so much from so little is encouraging, but it would be really interesting to
see such a project done on very large corpora, of tens or hundreds of
billions of words.  It would be interesting to see how much of human common
sense (and expertise) such systems could, and could not, derive.
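The back-of-envelope figures above work out as follows (the exact products
round to the ~100K and ~60 million quoted):

```python
# Corpus-size estimate from the figures in the paragraph above.
pages_per_book = 300
words_per_page = 300
words_per_book = pages_per_book * words_per_page   # 90,000 -- "about 100K"
books = 600
corpus_words = words_per_book * books              # 54,000,000 -- "about 60 million"
print(f"{words_per_book:,} words/book, {corpus_words:,} words in the corpus")
```

For comparison, the "very large corpora" scale mentioned would be three to
four orders of magnitude bigger than this.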

 

Ed Porter

-----Original Message-----
From: James Ratcliff [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 11, 2007 11:26 AM
To: [email protected]
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

 

Here's a basic abstract I did last year I think:

http://www.falazar.com/AI/AAAI05_Student_Abtract_James_Ratcliff.pdf

I would like to work with others on a full-fledged representation system
that could use these kinds of techniques.... 
I hacked this together by myself, so I know a real team could put this kind
of stuff to much better use.

James


Ed Porter <[EMAIL PROTECTED]> wrote:

James,

 

Do you have any description or examples of your results?

 

This is something I have been telling people for years: that you should be
able to extract a significant amount (though probably far from all) of world
knowledge by scanning large corpora of text.  I would love to see how well
it actually works for a given corpus size, and for a given level of
algorithmic sophistication.

 

Ed Porter

 

-----Original Message-----
From: James Ratcliff [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 4:51 PM
To: [email protected]
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

 

Richard,
  What is your specific complaint about the 'viability of the framework'?


Ed,
  This line of data gathering is very interesting to me as well, though I
quickly found that using all web sources devolved into insanity.
By using scanned text novels, I was able to extract lots of relational
information on a range of topics. 
   With a well-defined ontology system, and some human overview, a large
amount of information can be extracted and many probabilities learned.
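To make the idea concrete, here is a toy sketch of that kind of relational
extraction (my own illustration, not James's actual system): pull crude
"X verb Y" triples from raw text with a surface pattern, then tally how
often each triple occurs relative to its verb.  The sample text and the
verb list are made up for the example; a real system would use a parser and
an ontology rather than a regex.

```python
import re
from collections import Counter

# Made-up sample text standing in for scanned novel text.
TEXT = """John opened the door. Mary opened the window.
John closed the door. The dog chased the cat."""

# Hypothetical, very naive surface pattern: word, known verb, object.
triple_re = re.compile(r"(\w+) (opened|closed|chased) (?:the )?(\w+)")

counts = Counter(triple_re.findall(TEXT))

# Tally per-verb totals so each triple gets a conditional probability.
verb_totals = Counter()
for (subj, verb, obj), n in counts.items():
    verb_totals[verb] += n

for (subj, verb, obj), n in counts.items():
    print(subj, verb, obj, "P =", n / verb_totals[verb])
```

Even this crude version learns that "opened" splits its probability mass
between two objects while "chased" is deterministic in the sample, which is
the flavor of probability learning described above.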

James


Ed Porter <[EMAIL PROTECTED]> wrote:


>RICHARD LOOSEMORE=====>
You are implicitly assuming a certain framework for solving the problem of
representing knowledge ... and then all your discussion is about whether or
not it is feasible to implement that framework (to overcome various issues
to do with searches that have to be done within that framework).

But I am not challenging the implementation issues, I am challenging the
viability of the framework itself.

JAMES---> What e


ED PORTER=====> So what is wrong with my framework? What is wrong with a
system of recording patterns, and a method for developing compositions and
generalities from those patterns, in multiple hierarchical levels, and for
indicating the probabilities of certain patterns given certain other
patterns, etc.? 

I know it doesn't genuflect before the altar of complexity. But what is
wrong with the framework other than the fact that it is at a high level and
thus does not explain every little detail of how to actually make an AGI
work?



>RICHARD LOOSEMORE=====> These models you are talking about are trivial
exercises in public 
relations, designed to look really impressive, and filled with hype 
designed to attract funding, which actually accomplish very little.

Please, Ed, don't do this to me. Please don't try to imply that I need 
to open my mind any more. The implication seems to be that I do not 
understand the issues in enough depth, and need to do some more work to 
understand your points. I can assure you this is not the case.



ED PORTER=====> Shastri's Shruti is a major piece of work. Although it is
a highly simplified system, for its degree of simplification it is amazingly
powerful. It has been very helpful to my thinking about AGI. Please give
me some excuse for calling it a "trivial exercise in public relations." I
certainly have not published anything as important. Have you?

The same goes for Mike Collins's parser, which, at least several years ago,
I was told by multiple people at MIT was considered one of the most accurate
NL parsers around. Is that just a "trivial exercise in public relations"? 

With regard to Hecht-Nielsen's work, if it does half of what he says it
does, it is pretty damned impressive. It is also a work I think about often
when considering how to deal with certain AI problems. 

Richard if you insultingly dismiss such valid work as "trivial exercises in
public relations" it sure as hell seems as if either you are quite lacking
in certain important understandings -- or you have a closed mind -- or both.



Ed Porter

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;





_______________________________________
James Ratcliff - http://falazar.com
Looking for something...

  





