On 8/5/08, Abram Demski <[EMAIL PROTECTED]> wrote:
> As I understand it, FOL is only Turing complete when
> predicates/relations/functions beyond the ones in the data are
> allowed. Would PLN naturally invent predicates, or would it need to be
> told to specifically? Is this what "concept creation" does? More
> concretely: if I gave PLN a series of data, and asked it to guess what
> the next item in the series would be, what sort of process would it
> employ?
Prolog (and logic programming in general) is Turing complete, but FOL by itself is not a programming language, so I'm not sure. Predicate invention is an aspect of ILP that has received research attention, and it seems important to AGI. I'd guess "concept creation" involves inventing a predicate and then finding its definition via logical rules.

The problem of ILP is: given a set of examples, define a concept (with logical rules) that "covers" all positive examples and excludes all negative examples (or approximately so). After that, you can use those concepts / rules to make predictions. To make predictions on a series, one may use ILP to induce rules that relate S_{n+1} to S_n, or some such. This may be different from the predictive learning algorithm you have in mind. I'm not so familiar with other machine learning paradigms...

Also, note that the KR I'm using is actually called "first-order Bayesian networks". I use FOL in discussions because it's easier to understand; if I said "first-order Bayes net", more people would be scratching their heads.

YKY

-------------------------------------------
agi Archives: https://www.listbox.com/member/archive/303/=now
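To make the ILP setting above concrete, here is a minimal sketch (not how PLN or any real ILP system such as FOIL or Progol works; those search over first-order clauses, and all names and data below are hypothetical). It greedily builds a conjunction of attribute=value literals that covers all positive examples and excludes all negatives, plus a toy version of inducing a rule relating S_{n+1} to S_n:

```python
def covers(rule, example):
    """A rule (dict of attribute -> value) covers an example if
    every literal in the rule holds in the example."""
    return all(example.get(a) == v for a, v in rule.items())

def learn_rule(positives, negatives):
    """Greedy cover-set sketch: add literals true of every positive
    example until no negative example is covered. Returns None if
    the candidate literals cannot exclude all negatives."""
    rule = {}
    # Candidate literals: attribute/value pairs shared by all positives.
    candidates = [(a, v) for a, v in positives[0].items()
                  if all(p.get(a) == v for p in positives)]
    for a, v in candidates:
        if any(covers(rule, n) for n in negatives):
            rule[a] = v  # still too general; specialize further
    return rule if not any(covers(rule, n) for n in negatives) else None

def induce_step(series):
    """Series prediction in the same spirit: induce a rule relating
    s[n+1] to s[n] -- here, a single constant difference."""
    diffs = {series[i + 1] - series[i] for i in range(len(series) - 1)}
    return diffs.pop() if len(diffs) == 1 else None

# Hypothetical data: learn a "penguin-like" concept.
positives = [{"bird": True, "flies": False, "swims": True},
             {"bird": True, "flies": False, "swims": True}]
negatives = [{"bird": True, "flies": True, "swims": False},
             {"bird": False, "flies": False, "swims": True}]

print(learn_rule(positives, negatives))
print(induce_step([2, 4, 6, 8]))  # constant step; next item = last + step
```

The learned rule is just a conjunction; real ILP systems also handle variables, background knowledge, and noisy (approximate) coverage.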
