On Tue, Feb 4, 2020, 8:26 AM stefan.reich.maker.of.eye via AGI <
[email protected]> wrote:

> > It means predicting text as well as a human.
>
> So you would basically design blank-space test (a text with missing
> words), check how well humans do on that one and then do the same with
> computers.
>
> Are any of those tests online?
>

I haven't done any tests like this, but it would probably be easier to
compare prediction accuracy directly this way than to estimate entropy. In
the 1970s, Cover and King tried to refine Shannon's 1950 estimate by having
people set odds and bet on the next character of a text, which yields a
probability distribution over characters. Their estimate was 1.3 to 1.7
bits per character for individuals and 1.3 bits collectively.
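A minimal sketch of how a bet-based test like Cover and King's turns into an
entropy estimate (the function name and toy numbers are mine, not from their
paper): the estimate is just the average of -log2(p) over the probabilities
the subject assigned to the characters that actually occurred.

```python
import math

def estimated_bits_per_char(assigned_probs):
    """Average surprisal, in bits, of the probabilities a subject
    assigned to the characters that actually appeared next.
    assigned_probs: list of probabilities in (0, 1]."""
    total_bits = sum(-math.log2(p) for p in assigned_probs)
    return total_bits / len(assigned_probs)

# Toy example: a subject who always gives the true next character
# probability 0.5 is estimated at exactly 1 bit per character.
print(estimated_bits_per_char([0.5, 0.5, 0.5, 0.5]))  # 1.0
```

A subject who bets more confidently (and correctly) drives the estimate
down; a miscalibrated subject inflates it, which is one reason the human
estimates came out high.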

We now know this estimate is too high, because the best text compressors
achieve about 1.0 bpc. People are bad at assigning probabilities, and the
test is very slow, about 5 hours for one sentence. I'm not aware of any
more recent tests.
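For comparison, measuring a compressor's bits per character is quick: divide
the compressed size in bits by the input length in characters. A sketch using
Python's stdlib zlib (note: zlib is far from the best compressors, which reach
~1.0 bpc on large text benchmarks; this only illustrates the measurement):

```python
import zlib

def compressor_bpc(text: str) -> float:
    """Bits per character achieved by zlib on the given text."""
    data = text.encode("utf-8")
    compressed = zlib.compress(data, level=9)
    return 8 * len(compressed) / len(data)

# Highly repetitive text compresses to well under 1 bpc;
# natural English text with zlib lands much higher.
sample = "the quick brown fox jumps over the lazy dog " * 200
print(round(compressor_bpc(sample), 3))
```

Unlike the gambling test, this gives a strict upper bound on the entropy of
the source, which is why compressor results undercut the human estimates.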


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T409fc28ec41e6e3a-Mad23cc59e7788a688f91a148
Delivery options: https://agi.topicbox.com/groups/agi/subscription
