Hi all,

A widely used measure of performance for algorithms that learn
Bayesian networks from data is cross-entropy (Kullback & Leibler,
1951).  It seems to me (maybe I'm wrong) that cross-entropy is used in
different ways by different researchers (for example, Heckerman, Geiger
& Chickering, 1995 vs. Lam & Bacchus, 1994).  Cross-entropy appears to
be a general-purpose measure for quantifying the distance between two
distributions (first: the gold-standard model; second: the induced
model), but the nature of these distributions can differ from
researcher to researcher.  What are the theoretical implications of
these differences?  Is there any paper comparing the properties of
various performance measures for Bayesian network inducers?  Can
anyone suggest readings on the cross-entropy topic?
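To make the question concrete, here is a minimal sketch (my own
illustration, not taken from any of the papers cited) of the quantity
usually meant: the Kullback-Leibler divergence KL(P || Q) between a
gold-standard distribution P and an induced distribution Q over the
same discrete outcomes.  The example numbers are hypothetical.

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(P || Q) between two discrete
    distributions given as probability lists over the same outcomes.
    It is the expected extra log-loss incurred by using Q in place
    of the true distribution P; zero iff the distributions agree."""
    # Terms with p_i == 0 contribute nothing (0 * log 0 := 0).
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Gold-standard distribution P vs. an induced approximation Q
# (hypothetical numbers, just for illustration):
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, p))  # 0.0 -- identical distributions
print(kl_divergence(p, q))  # > 0 -- penalty for the mismatch
```

The open issue, as I understand it, is not this formula itself but
which distributions P and Q one plugs in (e.g. the full joint over all
variables vs. something else), and that is where the papers seem to
differ.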

Thanks to all.

Fabio Del Missier



====================================
Fabio Del Missier
PhD student

Department of Psychology, Univ. of Trieste.
Via S. Anastasio, 12, 34123, Trieste, Italy

tel:               +39 040 676 2716
e-mail:        [EMAIL PROTECTED]
====================================
