Hello,

First of all, thank you for answering my message.

I've been reading some references related to the topic raised in your
reply, since I wasn't aware of it. Specifically, I have read "Causal
Discovery from a Mixture of Experimental and Observational Data" (1999)
by Cooper & Yoo, and some references of yours (Cordon & Herrera) that I
found on CiteSeer. From these readings I came to the conclusion that the
methods involved cannot be applied to the problem at hand, since their
evaluation metrics assume a Bayesian-network-like structure as the seed
from which the experimental data are generated.

This is not my case. I use an artificial dataset generator that produces
a data set from a set of "if-then" rules (seed production rules) in
which both antecedent and consequent are defined in terms of linguistic
labels (as introduced by Zadeh) over the appropriate domains. I have not
yet tried to adapt the above-mentioned metrics to this framework, but it
does not seem straightforward.
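
To make the setting concrete, here is a rough sketch of the kind of
generation I mean (Python; the triangular membership functions, the
[0,1] domains and the rule format are only illustrative placeholders,
not my actual generator):

    import random

    # Each linguistic label is modelled here as a triangular membership
    # function (a, b, c): support [a, c], core at b. Labels and domains
    # below are illustrative only.
    LABELS = {
        "low":    (0.0, 0.0, 0.5),
        "medium": (0.25, 0.5, 0.75),
        "high":   (0.5, 1.0, 1.0),
    }

    # A seed rule: antecedent = one label per attribute, consequent = a class.
    SEED_RULES = [
        ({"x1": "low",  "x2": "high"}, "class_A"),
        ({"x1": "high", "x2": "low"},  "class_B"),
    ]

    def sample_from_label(label):
        """Draw a value covered by the label (triangular distribution)."""
        a, b, c = LABELS[label]
        return random.triangular(a, c, b)

    def generate_dataset(rules, n):
        """Generate n examples, each produced by one randomly chosen rule."""
        data = []
        for _ in range(n):
            antecedent, consequent = random.choice(rules)
            example = {attr: sample_from_label(lab)
                       for attr, lab in antecedent.items()}
            example["class"] = consequent
            data.append(example)
        return data

Each example is produced by picking one seed rule at random and sampling
attribute values from the support of its antecedent labels, then
attaching the consequent as the class.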

What I am trying to do is to quantify the resemblance between the set
of seed rules and the rules obtained by applying some rule-induction
algorithm (both sets of rules being defined in terms of the same,
previously defined linguistic variables).
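
The most naive way I can think of to quantify this would be a purely
syntactic overlap between the two rule sets, along these lines (again an
illustrative Python sketch, reusing the rule format from the snippet
above):

    def rule_key(rule):
        """Canonical form of a rule for exact (syntactic) matching."""
        antecedent, consequent = rule
        return (tuple(sorted(antecedent.items())), consequent)

    def jaccard_similarity(rules_a, rules_b):
        """|intersection| / |union| over exactly matching rules, in [0, 1]."""
        a = {rule_key(r) for r in rules_a}
        b = {rule_key(r) for r in rules_b}
        return len(a & b) / len(a | b) if (a or b) else 1.0

Such a measure seems far too crude, though: two rules that differ only
in adjacent labels would count as completely different, ignoring the
semantic overlap between labels. That is partly what leads me to the
data-based idea below.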

One idea would be to use the set of obtained rules as a new set of seed
rules to generate a new data set (with the data set generator already
mentioned), and to compare it with the original data set (the one
generated from the initial set of seed rules). The underlying idea is
that the more equivalent the initial seed rules and the induced rules
are, the more alike the data sets generated from them will be.
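
For the final comparison between the two generated data sets, I have in
mind some distance between their empirical distributions. As a
placeholder, a simple binned total-variation distance per attribute
(the binning, the [0,1] domain and the name INDUCED_RULES are
illustrative assumptions):

    from collections import Counter

    def binned_counts(data, attr, n_bins=10, lo=0.0, hi=1.0):
        """Histogram of one attribute over [lo, hi]."""
        counts = Counter()
        for ex in data:
            i = min(int((ex[attr] - lo) / (hi - lo) * n_bins), n_bins - 1)
            counts[i] += 1
        return counts

    def total_variation(data_a, data_b, attr, n_bins=10):
        """Total variation distance between the binned marginals of attr
        (assumes both data sets are non-empty)."""
        ca = binned_counts(data_a, attr, n_bins)
        cb = binned_counts(data_b, attr, n_bins)
        na, nb = sum(ca.values()), sum(cb.values())
        return 0.5 * sum(abs(ca[i] / na - cb[i] / nb) for i in range(n_bins))

    # Usage, with the generator sketched earlier:
    #   original = generate_dataset(SEED_RULES, 1000)
    #   induced  = generate_dataset(INDUCED_RULES, 1000)  # rules from the algorithm
    #   score = total_variation(original, induced, "x1")

A distance of 0 would mean the two rule sets induce indistinguishable
(binned) marginals; comparing class-conditional distributions rather
than plain marginals would probably be more faithful to the intuition.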

I would like to know your opinion.

Regards, and thanks again for your interest.

Enric Hernandez
Universitat Politecnica de Catalunya.
Barcelona. (Spain)



> hi!
> Your work sounds like mine: finding underlying causal
> structures from statistical data.
> It is like Bayesian network learning.
> Looking forward to more discussion.
> 
> --- Enric Hernandez <[EMAIL PROTECTED]> wrote:
> > Hello,
> > 
> > I have developed (well, adapted some old ones would be more
> > precise) some algorithms for my Ph.D., intended to extract
> > if-then rules from a supervised set of data in uncertain
> > environments (assuming the <attribute,value> paradigm for the
> > description of data).
> > 
> > I am mainly interested in obtaining rules which express the
> > underlying structure of the data rather than in getting highly
> > accurate classification rules (of course, the two approaches are
> > not mutually exclusive; rather, they are usually correlated: the
> > better the set of rules "reflects" the underlying structure, the
> > better it performs on classification tasks). In order to do
> > that, I am using artificial dataset generators, where you can
> > specify a set of "seed" rules which will generate the dataset.
> > 
> > My purpose is to test how well the extracted rules resemble the
> > original set of "seed" rules. As far as I know, this is not the
> > common approach, which usually performs statistical tests aimed
> > at classification accuracy.
> > 
> > Does any of you have a hint on that? Any idea or reference
> > would be of great help to me.
> > 
> > Thank you very much and kind regards!
> >  
> > Enric Hernandez
> > Universitat Politecnica de Catalunya.
> > Barcelona. (Spain)
> > 
> > 
> > P.S.: apologies if my question falls outside the scope of this
> > group.