Most interesting, thanks for sharing. From the little I understand of this 
large body of work, it makes sense to me. However, I would contend that 
adopting what some call a network structure (closing loops in a 3-entity 
structure) would lead to confusing results.

For example, one cannot reliably infer a vertex from that, which may then skew 
the rest of the structural results. I think it's a classic "cop-out" in 
systems design: when in doubt, close the loop to open the associative 
option, i.e., A => B and A => C, and B => C. The result: A indirectly causes C 
(via B), but it was already inferred that A directly caused C. Did it, or 
didn't it?
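To make the triangle concrete, here is a minimal sketch of the 3-entity 
structure as a noiseless linear structural causal model (all coefficient 
values are illustrative assumptions, not from the thread). It shows how the 
direct edge A => C and the indirect path A => B => C both contribute to the 
total effect of A on C:

```python
# Linear SCM for the triangle: A => B, A => C, and B => C.
# Coefficients are arbitrary illustrative values.

def simulate(a, w_ab=0.5, w_ac=0.3, w_bc=0.7):
    """Return (B, C) generated from A in a noiseless linear SCM."""
    b = w_ab * a              # A => B
    c = w_ac * a + w_bc * b   # A => C (direct) plus B => C (indirect)
    return b, c

# Total effect of a unit change in A on C combines both paths:
# direct (w_ac) + indirect (w_ab * w_bc) = 0.3 + 0.35 = 0.65
_, c1 = simulate(1.0)
_, c0 = simulate(0.0)
total_effect = c1 - c0
```

In Pearl's framework the two contributions are kept distinct (direct vs. 
mediated effect), which is exactly the distinction the question above turns on.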

This would present as a self-made paradox, not so?


________________________________
From: Robert Levy via AGI <[email protected]>
Sent: Thursday, 13 September 2018 10:08 PM
To: AGI
Subject: [agi] Judea Pearl on AGI

I don't think I've seen a discussion on this mailing list yet about Pearl's 
hypothesis that causal inference is the key to AGI.  His breakthroughs on 
causation have been in use for almost 2 decades.  The new Book of Why, other 
than being the most accessible presentation of these ideas to a broader 
audience, is interesting in that it expressly goes into applying causal 
calculus to AGI.
Artificial General Intelligence List<https://agi.topicbox.com/latest> / AGI / 
see discussions<https://agi.topicbox.com/groups/agi> + 
participants<https://agi.topicbox.com/groups/agi/members> + delivery 
options<https://agi.topicbox.com/groups/agi/subscription> 
Permalink<https://agi.topicbox.com/groups/agi/T0f9fecad94e3ce7e-M9e6c354c9f8ac56c414a651f>
