On Tue, Aug 5, 2008 at 10:45 AM, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
> Also, one can "splice" new predicates into old rules, thus eliminating
> them. So strictly speaking, new predicates do not add new content,
> but they may make machine learning easier as heuristics (just
> speculation). Oh wait... the exception is that new predicates are
> necessary for defining recursive predicates. So yes, predicate
> invention is necessary if the resulting logic program needs to contain
> recursive predicates.
>
> As for functions... ILP is complex enough with function-free FOL
> already. Researchers usually use a "flattening" technique to
> eliminate functions, so I don't know much about function creation.
>
>> So, you are not trying to create your own new probabilistic logic; you
>> are just trying to develop first-order Bayesian networks further?
>
> Yeah, I'm trying to distribute probabilities over fuzziness, with
> first-order Bayes nets. This is already quite complex, and even this
> seems unable to model some subtleties of commonsense concepts... so I
> need to think about it more.
>
> YKY
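The "flattening" technique YKY mentions can be sketched concretely. The idea (this is a generic illustration, not YKY's code; the term representation and the `_rel` naming convention are my assumptions) is to eliminate a function symbol like f(X) by introducing a fresh variable V and an extra literal f_rel(X, V) that asserts V = f(X), leaving a function-free literal behind:

```python
# Sketch of ILP "flattening": replace each functional term f(X) inside a
# literal with a fresh variable V, and emit an extra literal f_rel(X, V)
# that defines V as the value of f(X). The representation is hypothetical:
# a literal is (predicate_name, argument_list); a functional term is a
# tuple ("f", arg); variables and constants are plain strings.

def flatten_literal(pred, args, counter=None):
    """Rewrite one literal into function-free form.

    Returns (rewritten_literal, extra_defining_literals)."""
    if counter is None:
        counter = [0]
    new_args, extra = [], []
    for a in args:
        if isinstance(a, tuple):                     # functional term f(X)
            f, inner = a
            counter[0] += 1
            v = "V%d" % counter[0]                   # fresh variable
            extra.append((f + "_rel", [inner, v]))   # defining literal f_rel(X, V)
            new_args.append(v)
        else:                                        # variable or constant
            new_args.append(a)
    return (pred, new_args), extra

# Example: p(f(X), Y)  becomes  p(V1, Y)  plus the literal  f_rel(X, V1)
lit, extras = flatten_literal("p", [("f", "X"), "Y"])
```

This also illustrates why recursion is the exception YKY notes: an invented but non-recursive predicate can always be unfolded back into the clauses that use it, whereas a recursive one (e.g. ancestor defined via parent and ancestor itself) has no finite unfolding, so it genuinely adds expressive power.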
You made some remarks (I did not keep a record of them) that sounded similar to some of the problems of conceptual complexity (or complicatedness) that I am interested in. Can you describe what you are working on in a little more detail, in a way that makes it easy for me to understand?

To start with, what does "distribute probabilities over fuzziness" mean exactly? Are you trying to use first-order Bayes nets to examine different distribution patterns? Do first-order Bayes nets refer to something similar to the inductive logical probability that you titled this thread after?

Jim Bromer
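One possible reading of "distribute probabilities over fuzziness" (this is my guess at what YKY means, pending his answer to Jim's question, not his actual formulation) is second-order uncertainty: instead of assigning a concept a single fuzzy membership degree, keep a probability distribution over degrees in [0, 1], and summarize it by its expectation:

```python
# Hypothetical sketch: model the fuzzy membership degree of a concept as a
# random variable. A distribution maps degree -> probability; the expected
# degree is one simple summary of the combined fuzzy/probabilistic value.

def expected_membership(dist):
    """dist: {fuzzy_degree: probability}; returns the expected degree."""
    total = sum(dist.values())
    assert abs(total - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(degree * p for degree, p in dist.items())

# Example: someone who is probably quite tall, with some chance of being
# only middling -- probabilities distributed over fuzzy degrees of "tall".
tallness = {0.9: 0.6, 0.6: 0.3, 0.3: 0.1}
e = expected_membership(tallness)   # 0.9*0.6 + 0.6*0.3 + 0.3*0.1 = 0.75
```

The point of such a scheme would be to separate two kinds of uncertainty that a single number conflates: vagueness of the concept (the fuzzy degree) and ignorance about the case at hand (the probability weights) — which hints at why YKY finds even this "quite complex."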
