Computing the most likely configuration of a set of variables, at least 
in the BN community, is known as computing the Maximum a Posteriori 
hypothesis, or MAP.  If all of the nodes in your model are either 
observed or ones you want to maximize over, then it is a special case of 
MAP often called Most Probable Explanation, or MPE.  Assuming that you 
have a model already, there are a variety of inference algorithms that 
will solve it for you.  A web search on MPE turns up a number of relevant 
papers.  If, instead, you are attacking it from a learning perspective, 
trying to learn a model that produces especially accurate MAP 
configurations relative to some training data, then I am not aware of any 
algorithms developed specifically for that purpose.
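
To make that concrete, here is a minimal sketch of MPE by brute-force 
enumeration over a toy two-disease network.  The structure, the CPT 
numbers, and the variable names D1, D2, and S are all invented for 
illustration; a real engine would use jointree or variable elimination 
rather than enumeration, but the argmax it returns is the same.

import itertools

# P(D1), P(D2): priors on each (non-exclusive) disease being present.
p_d1 = {0: 0.9, 1: 0.1}
p_d2 = {0: 0.8, 1: 0.2}

# P(S=1 | D1, D2): probability of the symptom for each joint disease
# configuration (numbers chosen arbitrarily for the example).
p_s1 = {(0, 0): 0.05, (0, 1): 0.7, (1, 0): 0.6, (1, 1): 0.9}

def joint(d1, d2, s):
    """P(D1=d1, D2=d2, S=s) under the toy network."""
    ps = p_s1[(d1, d2)] if s == 1 else 1.0 - p_s1[(d1, d2)]
    return p_d1[d1] * p_d2[d2] * ps

# Evidence: the symptom is observed.  The MPE is the joint configuration
# of all unobserved variables (here D1, D2) maximizing P(D1, D2 | S=1),
# which has the same argmax as the unnormalized joint P(D1, D2, S=1).
evidence_s = 1
best = max(itertools.product((0, 1), repeat=2),
           key=lambda cfg: joint(cfg[0], cfg[1], evidence_s))
print("MPE given S=1:", {"D1": best[0], "D2": best[1]})

Note that this maximizes over the joint configuration, not each disease 
marginal separately, which is exactly the distinction the question is 
getting at.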
Hope that helps,
J.D. Park

On Friday, August 9, 2002, at 12:39  PM, DENVER H DASH wrote:

> Hi all,
>
> I'm looking for references, or even some vocabulary, addressing
> or describing the problem of classification with multiple, non-disjoint
> and possibly related class variables.
>
> For example, in a medical diagnosis BN classifier with several possible
> disease nodes: the disease nodes are not mutually exclusive, so the
> classification problem is to decide which joint configuration of class
> variable states is most likely, rather than which state of a single
> class variable is most likely.
>
> I'd be grateful for any pointers to this problem.
>
> Cheers,
> Denver.
>
