My context-mixing model, combining high-order modelling (orders 0-20) with online learning, losslessly compressed the 100 MB down to 24 MB in 34 minutes. Written in C++. That puts me 9 MB away from the world record. I had to come up with my own mixing formula; it uses an exponential curve, though I'm unsure whether that's what others use the curve for. I can probably get it lower by fine-tuning the settings further; so far I've only set them by hand over 2 hours.
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T409fc28ec41e6e3a-M78900341401c5061cd3048b1