Also see the links below about perplexity evaluation for AI! As I said, lossless-compression evaluation in the Hutter Prize is *the best*, and you can see it really is the same thing: prediction accuracy. Except perplexity allows errors.

https://planspace.org/2013/09/23/perplexity-what-it-is-and-what-yours-is/

https://www.youtube.com/watch?v=BAN3NB_SNHY

Hmm. I assume they take words or sentences, check whether the prediction is 
close/exact, then carry on. With lossless compression, the arithmetic coder 
stores the encoded decimal of the probability, and the resulting file size 
reflects the probability error over the whole file, no matter whether your 
predictor did poorly on some parts or not, just like perplexity. However, 
they don't consider the neural network's size; it could just copy the data. 
That's why they use a test set after/during training. The goal is the same, 
though: make a good neural network predictor. The test set and compression 
are also very similar; both measure how well the model understands the data 
without copying it directly.
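To see the connection concretely, here's a minimal sketch (my own illustration, not from the linked pages, with made-up probabilities): the same per-token probabilities that a predictor assigns determine both the ideal arithmetic-coded file size and the perplexity, so the two metrics measure the same underlying prediction quality.

```python
import math

# Hypothetical probabilities a predictor assigned to the actual next
# tokens of some text (higher = better prediction).
probs = [0.5, 0.25, 0.8, 0.1, 0.6]

# Ideal arithmetic-coded size in bits: -log2 of the product of the
# probabilities, i.e. the sum of per-token surprisals.
bits = -sum(math.log2(p) for p in probs)

# Cross-entropy (bits per token) and perplexity are just rescalings
# of that same compressed size.
bits_per_token = bits / len(probs)
perplexity = 2 ** bits_per_token

print(f"compressed size ≈ {bits:.2f} bits")       # ≈ 7.38 bits
print(f"perplexity = {perplexity:.2f}")           # ≈ 2.78
```

A better predictor shrinks the compressed file and lowers the perplexity at the same time, since both come from the same `-log2(p)` terms; the only real difference is that the Hutter Prize also counts the compressor's own size.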

So which is better, perplexity or lossless compression? I'm not sure now.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T2a0cd9d392f9ff94-Mdd8c32dae7701a14c4a1485d
Delivery options: https://agi.topicbox.com/groups/agi/subscription
