The compression scheme I described above can be massively improved by doing Deep 
Translation (commonsense reasoning). That covers a lot, and one example is a 
context like the following, which requires a prediction at the end (to do 
compression):

"Those witches who were spotted at the house left in a hurry to see the monk in 
the cave near the canyon, and there was the pot of gold they left, and when they 
returned they knew where to go if they wanted it back. They knew the keeper now 
owned it, and if they waited too long then he would own it forever from now on."
Who owns what?
Possible answers: witches own monk / witches own canyon / monk owns house / 
monk owns cave / monk owns gold / cave owns pot / there was pot / he owns it

The answer is easy. Can any of you answer it below? This is part of Deep 
Translation. Matching on neighboring-context frequency, instead of requiring 
exact matches, gives extra prediction accuracy. I'm going to code it up after I 
finish other improvements, but it would be nice to chat about it too!
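As a rough sketch of what "neighboring frequency instead of exact matches" could look like, here is a toy next-word predictor that backs off from the full context to shorter, partially matching contexts when no exact match exists. The function names, the backoff strategy, and the training text are all my own illustrative assumptions, not the author's implementation:

```python
from collections import Counter, defaultdict

def train(tokens, n=3):
    # Assumed setup: count next-word frequencies for every
    # context of length 1..n seen in the token stream.
    model = defaultdict(Counter)
    for i in range(len(tokens)):
        for k in range(1, n + 1):
            if i >= k:
                ctx = tuple(tokens[i - k:i])
                model[ctx][tokens[i]] += 1
    return model

def predict(model, context, n=3):
    # Back off from the full context to shorter "neighboring"
    # contexts until one is found, instead of requiring an
    # exact n-gram match.
    context = tuple(context[-n:])
    for k in range(len(context), 0, -1):
        ctx = context[-k:]
        if ctx in model:
            return model[ctx].most_common(1)[0][0]
    return None

# Hypothetical training text loosely echoing the riddle above.
text = ("the witches left the pot of gold and the monk now owned the pot "
        "of gold so the witches returned for the gold").split()
model = train(text)
print(predict(model, ["pot", "of"]))    # exact context match -> gold
print(predict(model, ["shiny", "of"]))  # backs off to "of" -> gold
```

The second call never saw the context "shiny of" during training, yet the backoff to the neighboring shorter context still yields a confident prediction, which is the gain over exact matching.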
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tcfc4df5e57c62b43-M100ecada9e1029f1cf2370af