Nanograte: I do not want to be rude, but if you have interpretable AI networks 
figured out, then just apply those rules to an actual AI program.  I cannot 
tell precisely what you are saying, but the graph or model is never mature 
(in a constrained, graph-theoretic sense) once you introduce hypergraphs, 
which are necessary.  It is not only Gödelian self-referential step-state 
contradictions; there is an extensive range of automated referential logic 
issues (such as circular reasoning) that would create complications needing 
to be managed.  I have circular reasoning "all figured out", but there are 
other, more subtle problems that cannot be foreseen by the programmer at a 
purely abstract level.  I am not claiming to be an expert (at graph theory, 
hypergraphs, and so on), but I believe I have thought about these issues at 
a more intuitive level.
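
For what it's worth, here is a minimal sketch of the mechanical side of 
"managing" circular reasoning, under the assumption that the inference 
structure is modeled as a plain directed graph (not a hypergraph): a 
depth-first search that reports one cycle of claims that ultimately depend 
on themselves. The node names and edges are illustrative, not anyone's 
actual system.

```python
# Illustrative sketch only: detect circular reasoning in an inference
# structure modeled as a directed graph (node -> list of nodes it
# depends on). Hypergraphs would need a richer representation.

def find_cycle(graph):
    """Return one cycle as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / done
    color = {node: WHITE for node in graph}
    parent = {}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
            elif color.get(v) == GRAY:  # back edge: v is on the path -> cycle
                cycle, node = [v], u
                while node != v:        # walk parents back to v
                    cycle.append(node)
                    node = parent[node]
                cycle.append(v)
                cycle.reverse()
                return cycle
        color[u] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# A claim that ultimately justifies itself: A -> B -> C -> A
beliefs = {"A": ["B"], "B": ["C"], "C": ["A"], "D": []}
print(find_cycle(beliefs))  # -> ['A', 'B', 'C', 'A']
```

The subtler problems mentioned above (self-reference across levels, 
hyperedges relating whole sets of nodes) are exactly what this kind of 
flat check does not catch.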
------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T1675e049c274c867-Ma1e5122b12a75539dacd7068
Delivery options: https://agi.topicbox.com/groups/agi/subscription
