--- On Thu, 9/4/08, Valentina Poletti <[EMAIL PROTECTED]> wrote:
>Ppl like Ben argue that the concept/engineering aspect of intelligence is
>independent of the type of environment. That is, given you understand how
>to make it in a virtual environment you can then transpose that concept
>into a real environment more safely.
>
>Some other ppl on the other hand believe intelligence is a property of
>humans only. So you have to simulate every detail about humans to get
>that intelligence. I'd say that among the two approaches the first one
>(Ben's) is safer and more realistic.

The issue is not what intelligence is, but what you want to create. In order 
for machines to do more work for us, they may need language and vision, which 
we associate with human intelligence. But building artificial humans is not 
necessarily useful. We already know how to create humans, and we are doing so 
at an unsustainable rate.

I suggest that instead of the imitation game (Turing test) for AI, we should 
use a preference test. If you prefer to talk to a machine vs. a human, then the 
machine passes the test.

Prediction is central to intelligence. If you can predict a text stream, then 
for any question Q and any answer A, you can compute the probability 
distribution P(A|Q) = P(QA)/P(Q). This passes the Turing test. More 
importantly, it allows you to output arg max_A P(QA), the most likely answer 
from a group of humans. This passes the preference test because a group is 
usually more accurate than any individual member. (It may fail a Turing test by 
giving too few wrong answers, a problem Turing was aware of in 1950 when he 
gave an example of a computer incorrectly answering an arithmetic problem.)
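The answer-ranking idea above can be sketched in a few lines. This is a toy
illustration, not a serious language model: it trains a character bigram model
with add-one smoothing on a made-up corpus, then ranks candidate answers A by
log P(QA) - log P(Q) = log P(A|Q). The corpus, question, and candidates are all
invented for the example.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Return a function computing log2 P(text) under a character
    bigram model with add-one smoothing."""
    bigrams = Counter(zip(corpus, corpus[1:]))
    unigrams = Counter(corpus)
    vocab = len(set(corpus))

    def logp(text):
        lp = 0.0
        for a, b in zip(text, text[1:]):
            lp += math.log2((bigrams[(a, b)] + 1) / (unigrams[a] + vocab))
        return lp

    return logp

corpus = "the cat sat on the mat. the cat ate the rat."
logp = train_bigram(corpus)

Q = "the cat "
candidates = ["sat", "xqz"]
# log2 P(A|Q) = log2 P(QA) - log2 P(Q); rank candidate answers by it
best = max(candidates, key=lambda A: logp(Q + A) - logp(Q))
print(best)  # prints "sat": the model prefers the answer seen in training
```

A real predictor would of course use a far better model than bigrams, but the
ranking rule P(A|Q) = P(QA)/P(Q) is exactly the one in the text.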

Text compression is equivalent to AI because the coding problem is already 
solved. Given P(x) for a string x, we know how to code x optimally and 
efficiently in log_2(1/P(x)) bits (e.g. with arithmetic coding). Text 
compression has an advantage over the Turing or preference tests in that 
incremental progress in modeling can be measured precisely, and the test is 
repeatable and verifiable.
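The log_2(1/P(x)) bound is easy to compute for a simple model. As a minimal
sketch (the string and the order-0 character model are my own choices, not from
the text), the ideal code length under a model is just the sum of log2(1/p)
over the symbols; an arithmetic coder approaches this bound to within about two
bits:

```python
import math
from collections import Counter

def ideal_code_length_bits(text):
    """Ideal code length in bits for `text` under its own order-0
    character model: sum over symbols of log2(1/p(symbol)).
    A better model assigns higher P(x), hence fewer bits."""
    counts = Counter(text)
    n = len(text)
    return sum(c * math.log2(n / c) for c in counts.values())

bits = ideal_code_length_bits("abracadabra")
print(round(bits, 2))  # about 22.44 bits, vs 88 bits as raw 8-bit bytes
```

This is why compression is a precise, repeatable benchmark: any improvement in
the model P shows up directly as fewer bits.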

If I want to test a text compressor, it is important to use real data (human-
generated text) rather than simulated data, i.e. text generated by a program. 
Otherwise I already know there is a concise code for the input data, namely the 
program that generated it. When you don't understand the source distribution 
(i.e. the human brain), the problem is much harder, and you have a legitimate 
test.

I understand that Ben is developing AI for virtual worlds. This might produce 
interesting results, but I wouldn't call it AGI. The value of AGI is on the 
order of US $1 quadrillion. It is a global economic system running on a smarter 
internet. I believe that any attempt to develop AGI on a budget of $1 million 
or $1 billion or $1 trillion is just wishful thinking.

-- Matt Mahoney, [EMAIL PROTECTED]




-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com