OK, I got it!

Here is the perplexity evaluation to score the predictions, below. It could be
more compact; I only just wrote it up. Here is how it works: see the values a, b,
c below? Each one is the probability you assigned to a correct prediction, i.e.
how right/wrong you got it, across 3 sets of 256-letter predictions you made.
You may want to append them to a list first and then evaluate at the end of the
file. In this case you predicted each correct letter with probability 0.9, and
you get a low score around 1.1, which is really good in the AI field. Now see
the line of code at the bottom? Each letter you should have predicted
contributes one term, a, b, c; you can math.log them before adding them to the
list. Also, see the 1/3? The 3 is the number of items in the list (a, b, c, so
3 in this case); make it 1/N to adapt to the length of the list. This gives the
final score, invariant to dataset size. I will add it to mine and see what my
score is today. As you can see, it just sums up the log probability of each
prediction you should have made (preferably predicted fully, with probability
1), then norms it to dataset size; it is a simple sum of all prediction error.
Lower score is better.

import math
a = 0.9  # probability assigned to the 1st correct letter
b = 0.9  # 2nd
c = 0.9  # 3rd
# perplexity = exp(-(1/N) * sum of log probabilities), with N = 3 here
print(math.exp(1/3 * -(math.log(a) + math.log(b) + math.log(c))))
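The "append to a list first, then use 1/N" version described above can be sketched like this (a minimal generalization of the one-liner; the function name and the example data are my own):

```python
import math

def perplexity(probs):
    """exp of the mean negative log probability of the correct answers; lower is better."""
    logs = [math.log(p) for p in probs]      # log each probability as you collect it
    return math.exp(-sum(logs) / len(logs))  # 1/N instead of the hardcoded 1/3

# e.g. 3 sets of 256-letter predictions, each correct letter predicted
# with probability 0.9 (hypothetical data, just to illustrate)
probs = [0.9] * (3 * 256)
print(perplexity(probs))  # same as 1/0.9, about 1.1111, regardless of dataset size
```

Note the score stays about 1.1111 whether the list holds 3 items or 768, which is exactly the dataset-size invariance the 1/N term buys you.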

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Te74ace9de14080e4-M94a19c2d82527b4c0748683f
