Padhraic Smyth wrote:
> One of the most useful developments (from my own viewpoint
> at least) has been the realization that one can treat an HMM
> as a type of belief network - in fact, once one does this, one
> sees immediately that it is a relatively *simple* model, i.e.,
> 
> x_1 -- x_2 -- ... -- x_T
>  |      |             |
>  |      |             |
> y_1    y_2           y_T
> 
> where the x's are the hidden states, the y's are the observed
> variables, and the directionality of the edges is usually assumed
> to be from x_t to x_{t+1} and from x_t to y_t.

Yes, I've found this view of HMMs very useful in my work in speech recognition.
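
To make that concrete: the belief network encodes exactly one factorization
of the joint distribution,

  p(x_1..x_T, y_1..y_T) = p(x_1) prod_t p(x_t | x_{t-1}) prod_t p(y_t | x_t),

and nothing more. Here is a minimal sketch in Python/NumPy, assuming discrete
states and observations; the names init, trans, and emit are mine, purely
illustrative:

    import numpy as np

    def log_joint(init, trans, emit, states, obs):
        # init[i]     = p(x_1 = i)
        # trans[i, j] = p(x_{t+1} = j | x_t = i)
        # emit[i, m]  = p(y_t = m | x_t = i)
        lp = np.log(init[states[0]]) + np.log(emit[states[0], obs[0]])
        for t in range(1, len(obs)):
            lp += np.log(trans[states[t - 1], states[t]])  # edge x_{t-1} -> x_t
            lp += np.log(emit[states[t], obs[t]])          # edge x_t -> y_t
        return lp

Every log term corresponds to one edge (or the root x_1) in the picture above.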

Bayesian networks are *very* useful as a conceptual tool.  When I draw a
Bayesian network, it's not because I want to input it to Netica or Hugin or
whatever and evaluate it on some data; it's because I want to construct a
probabilistic model of some problem, or better understand an existing one, so
that I can derive recognition and/or training algorithms from the model.  With a
Bayesian network in hand, it's easy to keep track of what conditional
independencies I can use; without one, it's very easy to mistakenly assume that
two quantities are conditionally independent when they are not.
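
For example, the graph shows at a glance that y_t depends on everything
earlier only through x_t, and that x_t depends on x_1..x_{t-1} only through
x_{t-1}; those two independencies are exactly what license the forward
recursion. A sketch, reusing the illustrative arrays from the snippet above:

    def forward(init, trans, emit, obs):
        # alpha[j] = p(x_t = j, y_1..y_t), updated over t
        alpha = init * emit[:, obs[0]]
        for t in range(1, len(obs)):
            # alpha_t(j) = p(y_t | x_t = j) * sum_i alpha_{t-1}(i) p(x_t = j | x_{t-1} = i)
            alpha = emit[:, obs[t]] * (alpha @ trans)
        return alpha.sum()  # p(y_1..y_T)

Drop either independence and the sum over x_{t-1} no longer collapses; the
picture is what keeps you honest about steps like that.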

-- Kevin S. Van Horn
