On Wed, Dec 26, 2012 at 6:23 PM, Lotzi Bölöni <lotzi.bol...@gmail.com> wrote:
> Matt,
>
> Just to add a note to what Ben was saying about funding.
>
> If you think that Ben is exceptional in having difficulty finding funding
> for AGI, I can tell you that he is comparatively more successful than
> most. It is basically impossible to get AGI funding from places like NSF:
> you need to pass a panel with 4-5 reviews, and if one person ranks it
> low, you are out.

It hardly seems worth the effort. I once applied to NSF to hold a text
compression contest with a $50,000 prize to promote research in
language modeling. They turned it down, as they do 90% of proposals. I
created a benchmark anyway, but without the prize money.

> So the things you are saying represent the current "common wisdom" in the AI
> community.

"Common wisdom" is that lossless text compression has nothing to do
with AI. I see that attitude even here. I explain that compression
measures prediction, and that predicting text dialogue with human
level accuracy is equivalent to generating a distribution
indistinguishable from a human source, and therefore equivalent to
passing the Turing test.
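
To make that concrete, here is a toy sketch in Python. The adaptive
order-0 byte model and the sample string are mine, standing in for any
real predictor; a real compressor uses far better models. The point is
that the size an arithmetic coder can achieve is just the summed log
loss of the model's predictions:

  import math

  def code_length_bits(text):
      # Adaptive order-0 model: P(c) = (count(c) + 1) / (total + 256).
      # An arithmetic coder compresses to within a few bits of this
      # sum, so compressed size directly measures prediction accuracy.
      counts = [1] * 256         # Laplace smoothing: every byte starts at 1
      total = 256
      bits = 0.0
      for c in text.encode():
          p = counts[c] / total  # model's prediction for the next byte
          bits += -math.log2(p)  # ideal code length for that prediction
          counts[c] += 1         # update the model after seeing the byte
          total += 1
      return bits

  text = "the quick brown fox jumps over the lazy dog"
  print(code_length_bits(text) / len(text), "bits per character")

A better model assigns higher probability to what actually comes next,
so the sum shrinks. Compression ratio and prediction accuracy are the
same number.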

"But the brain doesn't compress losslessly", the argument goes. I
explain that the brain doesn't decompress. That requires an arithmetic
coder and a model that you can reset to an earlier state in order to
make the exact same prediction sequence. That's hard for a brain, but
easy for a computer.
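
Here is that requirement in miniature, using the same kind of toy
Python model as above (hypothetical, not any real coder's model). The
decompressor can only undo the coding step if it reproduces the
compressor's predictions exactly, which a deterministic program does
trivially and a brain does not:

  class Order0Model:
      # Deterministic: same start state + same input bytes
      # = same prediction sequence, bit for bit.
      def __init__(self):
          self.counts = [1] * 256
          self.total = 256
      def predict(self, c):
          return self.counts[c] / self.total
      def update(self, c):
          self.counts[c] += 1
          self.total += 1

  enc, dec = Order0Model(), Order0Model()
  for c in b"hello world":
      # In a real codec the decoder learns c by decoding it; here we
      # just check that both models agree at every step.
      assert enc.predict(c) == dec.predict(c)
      enc.update(c)
      dec.update(c)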

> One can say a lot of things about Cyc, but at least they have made a
> thorough exploration of one particular avenue towards AGI.

Cyc did a great service by showing us that the obvious approach won't
work. AGI is much harder than anyone expected. I realize that. That is
why I am not trying to solve the problem. Minsky and Kurzweil aren't
trying to solve it either. My goal is to accurately estimate the cost.
Nobody at Cyc knew in 1984 how many rules would be needed, and they
still don't know. When people have an idea for AGI, they can never say
how much computing power they will need, how much training data, or
how many lines of code. With text compression, you can measure these
things: you can measure prediction accuracy, compare it with human
performance, and observe a trend.
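
For example, here is a crude Python measurement using stock
compressors (zlib and bz2 from the standard library; "enwik8" is a
placeholder for whatever test file you use). Each compressor's output
size is an upper bound on the cross entropy of the text. Shannon's
1951 experiments put human prediction of English at roughly 0.6 to 1.3
bits per character, so the gap between a compressor's number and that
range is a measurable distance to human-level prediction:

  import bz2, zlib

  with open("enwik8", "rb") as f:  # placeholder test corpus
      data = f.read()

  # 8 * compressed bytes / original bytes = bits per byte, which for
  # plain ASCII text is roughly bits per character.
  for name, packed in [("zlib", zlib.compress(data, 9)),
                       ("bz2", bz2.compress(data, 9))]:
      print(name, 8 * len(packed) / len(data), "bits per character")

Rerun the same measurement as models improve and you get the trend.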

--
-- Matt Mahoney, mattmahone...@gmail.com

