On Wed, Dec 4, 2013 at 4:59 PM, Bill Hibbard <[email protected]> wrote:
> Matt,
>
> Do you have any journal, conference or on-line
> publications that model text prediction accuracy
> as a function of computing power?

No, just the two graphs below the main table in
http://mattmahoney.net/dc/text.html

During my dissertation proposal in 1999 I made a similar graph.
http://cs.fit.edu/~mmahoney/dissertation/
But I had to change my dissertation topic to get funding. After I
graduated I took up data compression as a hobby for several years
before I got a job in the field and could continue research.

The original data suggests you need 1 GB of text to train a language
model to pass the Turing test. The newer data is harder to interpret;
I can't tell from it how much computing power is needed, except that
we are far short of it. The best compressors are a long way from what
I would call AI. I'm hoping to see new winners using more powerful
computers, but we may have to wait.
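For what it's worth, the 1 GB figure lines up with a simple Shannon-style estimate. This is my paraphrase of the usual argument, and the specific numbers (1 bit per character for English, Landauer's memory estimate) are assumptions, not measurements:

```python
# Rough arithmetic behind the "1 GB of text" figure (assumed numbers).

text_chars = 1e9              # 1 GB of ASCII text ~ 10^9 characters
entropy_bits_per_char = 1.0   # Shannon's estimate for English, ~1 bit/char
information_bits = text_chars * entropy_bits_per_char   # ~10^9 bits

# Landauer-style estimates put human long-term memory near 10^9 bits,
# so 1 GB of text carries roughly as much information as a person
# accumulates over a lifetime of language exposure.
print(f"information in 1 GB of text: ~{information_bits:.0e} bits")
```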

So I have to use other methods. We can estimate the computing power of
the human brain and the information content of our DNA to get some
idea. Again, computers are not even close. Getting human knowledge
into computers is a big problem because of the slow rate at which we
can communicate.
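To make those two estimates concrete, here is a back-of-envelope sketch. The figures (synapse count, average firing rate, genome size) are commonly cited order-of-magnitude values, not measurements:

```python
# Back-of-envelope estimates (rough, commonly cited figures).

SYNAPSES = 1e15           # ~10^14..10^15 synapses in the human brain
FIRING_RATE_HZ = 10       # ~10 Hz average firing rate
brain_ops_per_sec = SYNAPSES * FIRING_RATE_HZ   # ~10^16 synaptic ops/sec

BASE_PAIRS = 3e9          # haploid human genome, ~3 x 10^9 base pairs
BITS_PER_BASE = 2         # 4 possible bases -> 2 bits each, uncompressed
dna_bits = BASE_PAIRS * BITS_PER_BASE
dna_megabytes = dna_bits / 8 / 1e6              # ~750 MB upper bound

print(f"brain: ~{brain_ops_per_sec:.0e} synaptic ops/sec")
print(f"DNA:   ~{dna_megabytes:.0f} MB uncompressed upper bound")
```

Either way you slice it, current hardware and training data are orders of magnitude away.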


-- Matt Mahoney, [email protected]


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now