A problem with NNs is that they don't distinguish lies from truth. They 
simply learn all the input->output pairs uncritically, perhaps with some 
generalization magic on top.

To detect lies, one approach may be to build a symbolic model of the stories 
being told. Feeding statements in one by one, we can check whether each new 
statement contradicts the already accepted ones. Of course, many different 
combinations of statements could turn out to be the true ones, but each 
candidate combination must be internally non-contradictory (in the 
theorem-proving sense).

When a contradiction is detected, a further problem is deciding whether to 
keep the current theory and reject the new statement, or to start building a 
new theory around the new statement.
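
One naive policy for that choice, sketched in the same hypothetical setup as 
above (the name TheoryPool is mine): keep every existing theory that can 
absorb the new statement, and fork a fresh theory seeded by the statement 
only when all of them reject it, so no statement is silently discarded:

    class TheoryPool:
        """Maintains several candidate theories in parallel. A statement
        rejected by every current theory seeds a new one instead of
        being thrown away."""
        def __init__(self):
            self.theories = [KnowledgeBase()]

        def tell(self, clause):
            accepted_somewhere = any(t.tell(clause) for t in self.theories)
            if not accepted_somewhere:
                # Every current theory contradicts the statement:
                # start a new theory based on it.
                fresh = KnowledgeBase()
                fresh.tell(clause)
                self.theories.append(fresh)

Note that any(...) here still offers the statement to theories lazily; a 
stricter variant would offer it to all theories before deciding to fork.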

I believe this is a missing piece required to create an AGI: the ability to 
form a non-contradictory model of the world.
