Jeffrey, I am sorry if I wasn't clear enough when posing my problem. Yes, there are many methods capable of learning symbolic rules from data: greedy schemes, genetic algorithms, connectionist approaches, and so on. The point is that I am not interested in a particular learning method, but in a test methodology able to assess the performance of a specific method based on the "resemblance" between the rules output by the algorithm at hand and the rules that were used to generate the training set (always assuming the availability of a software generator that produces a training set from a set of seed "if-then" rules).
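
To make the kind of resemblance measure I have in mind concrete, here is a
minimal sketch in Python. Every name in it is hypothetical (nothing comes
from an existing library): a rule is represented as a pair (antecedent,
consequent), with the antecedent a frozenset of (attribute, label) pairs,
and two rule sets are scored by symmetric best-match overlap.

    # Hypothetical sketch -- a rule is (antecedent, consequent), where the
    # antecedent is a frozenset of (attribute, label) pairs and the
    # consequent a single (attribute, label) pair, e.g.
    #   (frozenset({("temp", "high"), ("humidity", "low")}),
    #    ("risk", "severe"))

    def rule_similarity(r1, r2):
        """Jaccard overlap of the antecedents, gated on equal consequents."""
        (ant1, con1), (ant2, con2) = r1, r2
        if con1 != con2:
            return 0.0
        union = ant1 | ant2
        return len(ant1 & ant2) / len(union) if union else 1.0

    def ruleset_resemblance(seed_rules, learned_rules):
        """Symmetric average best-match similarity: missing and spurious
        rules both lower the score (assumes both sets are non-empty)."""
        def one_way(src, tgt):
            return sum(max(rule_similarity(r, s) for s in tgt)
                       for r in src) / len(src)
        return 0.5 * (one_way(seed_rules, learned_rules) +
                      one_way(learned_rules, seed_rules))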
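And here is a sketch of the indirect test I floated in my previous message
(quoted below): use the learned rules as new seed rules, re-generate a data
set, and compare the two samples. generate_dataset is only a placeholder
for the artificial data generator; I use total variation distance between
the empirical distributions here, but any two-sample statistic would do.

    from collections import Counter

    def regeneration_score(seed_rules, learned_rules, generate_dataset,
                           n=10000):
        """Indirect test: draw n examples from each rule set with the same
        generator and compare the two empirical distributions via total
        variation distance.  generate_dataset is assumed to return an
        iterable of hashable examples (e.g. tuples of labels)."""
        d1 = Counter(generate_dataset(seed_rules, n))
        d2 = Counter(generate_dataset(learned_rules, n))
        tv = 0.5 * sum(abs(d1[k] - d2[k]) for k in d1.keys() | d2.keys()) / n
        return 1.0 - tv  # 1.0 means the two samples are indistinguishable

Note that the first measure is sensitive to the syntactic form of the
rules, while the second depends only on their data-generating behaviour,
so two syntactically different but extensionally equivalent rule sets
would still score high on it.
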
Regards,

Enric Hernandez
Universitat Politecnica de Catalunya
Barcelona (Spain)

> I was just curious about your problem. If you are looking at quantifying
> data that originates from a set of "if-then" rules to results that have
> a similar structure, have you considered using a neural
> network/connectionist model that is designed to learn symbolic rules?
> Below are some examples, but there is most definitely a great deal more
> research in the area. Then again, I might be completely off about what
> exactly you want.
>
> M. W. Craven & J. W. Shavlik (1993). Learning Symbolic Rules Using
> Artificial Neural Networks. Proceedings of the Tenth International
> Conference on Machine Learning, pp. 73-80, Amherst, MA. Morgan Kaufmann.
> http://www.cs.wisc.edu/~shavlik/abstracts/craven.mlc93.abstract.html
>
> Symbolic knowledge extraction from trained neural networks: A sound
> approach
> http://citeseer.nj.nec.com/garcez01symbolic.html
>
> A Connectionist Inductive Learning System for Modal Logic Programming
> http://citeseer.nj.nec.com/517633.html
>
> ----- Original Message -----
> From: "Enric Hernandez" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Thursday, February 13, 2003 12:44 PM
> Subject: Re: [UAI] alternative rule learning testing methodology
>
> > Hello,
> >
> > First of all, thank you for answering my message.
> >
> > I have been reading some references related to the topic raised in
> > your reply, since I wasn't aware of it. Concretely, I have read
> > "Causal discovery from a mixture of experimental and observational
> > data" (1999) by Cooper & Yoo, and some references of yours (Cordon &
> > Herrera) that I found in CiteSeer. From these readings I came to the
> > conclusion that those methods cannot be applied to the problem at
> > hand, since their evaluation metrics assume a Bayesian-like structure
> > as the seed from which the experimental data are generated.
> >
> > This is not my case. I use an artificial dataset generator that
> > produces a data set from a set of "if-then" rules (seed production
> > rules) where both antecedent and consequent are defined in terms of
> > linguistic labels (as introduced by Zadeh) over the proper domains. I
> > have not yet tried to adapt the above-mentioned metrics to this
> > framework, but it does not seem straightforward.
> >
> > What I am trying to do is quantify the resemblance between the set of
> > seed rules and the rules obtained by the application of some
> > rule-induction algorithm (both sets of rules being defined in terms
> > of previously defined linguistic variables).
> >
> > One idea could be to use the set of obtained rules as a new set of
> > seed rules in order to generate a new data set (with the data set
> > generator already mentioned), and compare it with the original data
> > set (the one generated from the initial set of seed rules). The
> > underlying idea is that the more equivalent the initial seed rules
> > and the rules obtained by the algorithm are, the more alike the data
> > sets generated from them will be.
> >
> > I would like to know your opinion.
> >
> > Regards, and thanks again for your interest.
> >
> > Enric Hernandez
> > Universitat Politecnica de Catalunya
> > Barcelona (Spain)
> >
> > > hi!
> > > Your work sounds like mine: finding underlying causal structures
> > > from statistical data.
> > > It is like Bayesian network learning.
> > > Waiting for more discussion.
> > > --- Enric Hernandez <[EMAIL PROTECTED]> wrote:
> > > > Hello,
> > > >
> > > > I have developed (well, "adapted some old ones" would be more
> > > > precise) some algorithms for my Ph.D. intended to extract if-then
> > > > rules from a supervised data set in uncertain environments
> > > > (assuming the <attribute, value> paradigm for the description of
> > > > data).
> > > >
> > > > I am mainly interested in obtaining rules that express the
> > > > underlying structure of the data rather than in getting highly
> > > > accurate classification rules (of course, the two goals are not
> > > > mutually exclusive; instead, they are usually correlated: the
> > > > better the set of rules "reflects" the underlying structure, the
> > > > better it performs on classification tasks). To that end, I am
> > > > using artificial dataset generators, where you can specify a set
> > > > of "seed" rules from which the dataset is generated.
> > > >
> > > > My purpose is to test how well the extracted rules resemble the
> > > > original set of "seed" rules. As far as I know, this is not the
> > > > common approach, which usually performs statistical tests aimed
> > > > at measuring classification accuracy.
> > > >
> > > > Does anyone have any hint on that? Any idea or reference would be
> > > > of great help to me.
> > > >
> > > > Thank you very much and kind regards!
> > > >
> > > > Enric Hernandez
> > > > Universitat Politecnica de Catalunya
> > > > Barcelona (Spain)
> > > >
> > > > P.S.: apologies if my question falls outside the scope of this
> > > > group.
