Hello,

I have developed (well, adapted some existing ones would be more
precise) some algorithms for my Ph.D., intended to extract if-then
rules from a supervised set of data in uncertain environments
(assuming the <attribute, value> paradigm for describing the data).

I am mainly interested in obtaining rules that express the underlying
structure of the data, rather than in getting highly accurate
classification rules (of course, the two goals are not mutually
exclusive; on the contrary, they are usually correlated: the better
the set of rules "reflects" the underlying structure, the better it
performs on classification tasks). In order to do that, I am using
artificial dataset generators, where you can specify a set of "seed"
rules from which the dataset is generated.
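
To make this concrete, here is a minimal sketch in Python of the kind
of generator I mean (the attribute names, domains, rule encoding, and
noise level are all made up just for illustration):

import random

# Attribute domains (hypothetical, for illustration only).
ATTRIBUTES = {"colour": ["red", "green", "blue"], "size": ["small", "large"]}

# Each seed rule: (<attribute, value> conditions, class label).
SEED_RULES = [
    ({"colour": "red", "size": "small"}, "class_A"),
    ({"colour": "blue"}, "class_B"),
]
DEFAULT_CLASS = "class_C"
CLASSES = ["class_A", "class_B", "class_C"]
NOISE = 0.05  # label noise, to model the uncertain environment

def generate(n):
    data = []
    for _ in range(n):
        # Sample an instance uniformly over the attribute domains.
        instance = {a: random.choice(vals) for a, vals in ATTRIBUTES.items()}
        # Label it with the first seed rule whose conditions all match.
        label = DEFAULT_CLASS
        for conditions, cls in SEED_RULES:
            if all(instance.get(a) == v for a, v in conditions.items()):
                label = cls
                break
        # Occasionally corrupt the label to simulate uncertainty.
        if random.random() < NOISE:
            label = random.choice(CLASSES)
        data.append((instance, label))
    return data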

My purpose is to test how closely the extracted rules resemble the
original set of "seed" rules. As far as I know, this is not the common
approach, which usually performs statistical tests aimed at measuring
classification accuracy.
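
The crudest thing I can think of is a purely syntactic measure: pair
each seed rule with its best-matching extracted rule and average, say,
a Jaccard overlap of their conditions (a rough sketch of my own, not
an established metric; rules encoded as in the generator above):

def rule_similarity(rule_a, rule_b):
    # Jaccard overlap of the <attribute, value> conditions;
    # zero if the consequents (class labels) differ.
    (cond_a, cls_a), (cond_b, cls_b) = rule_a, rule_b
    if cls_a != cls_b:
        return 0.0
    sa, sb = set(cond_a.items()), set(cond_b.items())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0

def ruleset_resemblance(extracted, seeds):
    # Average, over the seed rules, of the best match among the
    # extracted rules: a one-directional "recall" of the seeds.
    if not seeds:
        return 1.0
    return sum(
        max((rule_similarity(e, s) for e in extracted), default=0.0)
        for s in seeds
    ) / len(seeds)

This obviously ignores semantic equivalence (two syntactically
different rule sets can cover exactly the same instances), which is
precisely why I am asking for better ideas or references.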

Does any of you have a hint on that? Any idea or reference would be
of great help to me.

Thank you very much and kind regards!
 
Enric Hernandez
Universitat Politecnica de Catalunya.
Barcelona (Spain)


P.S.: apologies if my question falls outside the scope of this group
