On Wed, May 22, 2013 at 5:20 PM, yikes aroni <[email protected]> wrote:

> thanks for the reply ... I've discretized the continuous time series
> observations and assigned them to symbols.


Ahh... excellent.  But see below as well.


> The number of hidden states is
> 2: "out of control" and "not out of control" -- 0 and 1.


I don't think so.  Could be wrong.

Normally the way an HMM works is that the current hidden state is passed to
a next-state function.  The output of that function is a distribution over
the new hidden state AND the emitted output symbol, given the current
state.  To use the HMM to decode something, you need to search over all
possible state transitions to find the most likely sequence of hidden
states given the output symbols.  HMMs can be used in real-time decoding,
but you usually retain some chance of rewriting recent history.  Because
dependencies decay rapidly over time, you don't usually have much chance of
rewriting history very far back.
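That search over state transitions is the classic Viterbi dynamic program.
Here is a minimal numpy sketch (Python used for brevity; Mahout itself is
Java, and the toy 2-state parameters below are invented for illustration,
not taken from your model):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence, by dynamic programming.
    obs: observed symbol indices; pi: initial state distribution (S,);
    A: transition matrix (S, S); B: emission matrix (S, V)."""
    S, T = len(pi), len(obs)
    # work in log space to avoid underflow on long sequences
    logp = np.log(pi) + np.log(B[:, obs[0]])      # best log-prob per state
    back = np.zeros((T, S), dtype=int)            # backpointers
    for t in range(1, T):
        # scores[i, j]: best path ending in state i at t-1, then i -> j
        scores = logp[:, None] + np.log(A) + np.log(B[:, obs[t]])[None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    # trace the best path backwards from the best final state
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy 2-state example: 0 = "in control", 1 = "out of control"
pi = np.array([0.9, 0.1])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.8, 0.2], [0.1, 0.9]])
path = viterbi([0, 0, 1, 1, 1], pi, A, B)   # → [0, 0, 1, 1, 1]
```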


> With the scenario
> defined this way, i'm able to get good predictions from HMM. What i don't
> know how to do is get a measure of the model's "confidence" in the
> prediction. How do i get that out of the HMM API?
>

Well, the output of the HMM is a decoding lattice.  You can build a sampler
that draws alternative decodings from the lattice, rather than just finding
the maximum-likelihood decoding, to get some idea of how certain the model
is about a particular decoding.
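One standard way to do that sampling is forward filtering, backward
sampling: run the forward pass once, then draw whole state paths from the
posterior.  The fraction of sampled paths that agree with the maximum
likelihood path at each position is a usable confidence measure.  A rough
numpy sketch (parameters invented for illustration):

```python
import numpy as np

def sample_decodings(obs, pi, A, B, n_samples=100, rng=None):
    """Draw state paths from P(states | observations) by forward
    filtering, backward sampling."""
    rng = np.random.default_rng() if rng is None else rng
    S, T = len(pi), len(obs)
    # forward pass: alpha[t, i] proportional to P(state_t = i | obs[:t+1])
    alpha = np.zeros((T, S))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    paths = np.zeros((n_samples, T), dtype=int)
    for k in range(n_samples):
        # sample the final state from the filtered distribution
        paths[k, T - 1] = rng.choice(S, p=alpha[T - 1])
        # walk backwards, conditioning on the state just sampled
        for t in range(T - 2, -1, -1):
            w = alpha[t] * A[:, paths[k, t + 1]]
            paths[k, t] = rng.choice(S, p=w / w.sum())
    return paths

pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.95, 0.05], [0.05, 0.95]])
paths = sample_decodings([1, 1, 1], pi, A, B,
                         n_samples=200, rng=np.random.default_rng(0))
# per-position agreement with state 1; should be close to 1 here
confidence = (paths == 1).mean(axis=0)
```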

If you combine this with some notion of the underlying uncertainty of the
probabilities in the model itself, you get a Bayesian HMM.  This
uncertainty is pretty easily integrated into the typical Baum-Welch
training algorithm but the Mahout implementation doesn't do this.
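To make the point about integrating that uncertainty into training
concrete: the simplest form is a symmetric Dirichlet prior, which shows up
in the Baum-Welch M-step as pseudocounts added to the expected counts.  A
one-function sketch (function name and toy counts are mine, not Mahout's):

```python
import numpy as np

def m_step_with_prior(trans_counts, alpha=1.0):
    """Re-estimate the transition matrix from expected transition counts
    with a symmetric Dirichlet(alpha) prior.  The pseudocounts keep
    rarely-visited transitions from collapsing to exactly zero."""
    smoothed = trans_counts + alpha
    return smoothed / smoothed.sum(axis=1, keepdims=True)

counts = np.array([[8.0, 0.0],
                   [1.0, 1.0]])
P = m_step_with_prior(counts, alpha=1.0)   # → [[0.9, 0.1], [0.5, 0.5]]
```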


>
> As for your interesting reply, i'm not sure i understand it. So I would use
> the k-means clustering but what would i be clustering? The nearness of the
> points? Some aggregate of the points? The distance between the points of
> one sub-sequence from another (that's probably it). The purpose of such
> clustering would be to reduce the dimension of my sequence of n time series
> observations to a symbol (i.e., the cluster ID)?
>

If each time step gives you m values (i.e. one sample from R^m), that may
not be enough information about your sequence.  What you can do to help
with this is a technique called state-space embedding: take n sequential
points as your sample instead of just one, so you get a much better state
description.  You then have a sample from R^(n*m) which you can cluster.
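In code, the embedding is just a sliding window that stacks n consecutive
samples into one point.  A minimal sketch, using scikit-learn's k-means as
a stand-in for Mahout's (the toy series and window size are invented):

```python
import numpy as np
from sklearn.cluster import KMeans  # stand-in for Mahout's k-means

def embed(series, n):
    """Stack n consecutive m-dim samples into one (n*m)-dim point."""
    series = np.asarray(series)                # shape (T, m)
    T = series.shape[0]
    return np.stack([series[t:t + n].ravel() for t in range(T - n + 1)])

# toy 1-d series with an obvious "out of control" excursion
series = np.array([[0.1], [0.2], [0.1], [5.0], [5.1], [4.9], [0.2]])
X = embed(series, 3)                           # shape (5, 3)
symbols = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Each entry of `symbols` is then the discrete observation you feed the HMM.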

This may or may not help.  If you are already getting good results, then I
think that your problem is probably pretty simple (thank your lucky stars)
and doesn't need the fancy stuff.


>      > You can now quantize your data using this clustering
>
> so you are suggesting i use the membership in a particular cluster as the
> symbolic representation of each subsequence to then plug into the HMM? Not
> sure but i assume these would be the observed values, since the hidden
> state i'm after is "in control" / "out of control".
>

Yes.  The nearest cluster is the symbol.
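Mapping a window to its nearest cluster is a one-liner over the centroids
you get back from the clustering.  A sketch (the centroids here are made
up; in practice they come out of the k-means run):

```python
import numpy as np

def quantize(window, centroids):
    """Map an embedded window to its nearest cluster ID (the HMM symbol)."""
    distances = np.linalg.norm(centroids - window, axis=1)
    return int(distances.argmin())

# toy centroids, e.g. "in control" vs "out of control" regimes
centroids = np.array([[0.0, 0.0],
                      [5.0, 5.0]])
symbol = quantize(np.array([4.8, 5.2]), centroids)   # → 1
```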


>
> Sorry if i'm completely missing it.
>


You aren't missing much.
