I am working on implementing hidden Markov models in PyMC3, which uses 
Theano as its probabilistic programming backend.  I was able to implement 
a two-state HMM in PyMC3 by using Theano to vectorize the implementation.  In 
particular, I had to create a chain of states (let's say A and B) that have 
different transition probabilities.  To vectorize this, I used tensor.switch 
to select between the two transition probabilities depending on which 
state the Markov chain is in.
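
For illustration, here is a minimal sketch of what I mean (not my exact 
code, which is linked below; the names states, p_A and p_B are made up):

    import theano.tensor as tt

    states = tt.ivector("states")   # chain of 0/1 state labels (0 = A, 1 = B)
    p_A = tt.dscalar("p_A")         # transition probability when the current state is A
    p_B = tt.dscalar("p_B")         # transition probability when the current state is B

    # Pick, elementwise, the transition probability matching the current
    # state; tt.switch works elementwise and has a gradient.
    p_trans = tt.switch(tt.eq(states[:-1], 0), p_A, p_B)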
I then tried to implement a multi-state HMM with more than two states. 
 tensor.switch was perfect for two states, and for more states there is an 
apparently equivalent theano.tensor method called choose.  After implementing 
it, PyMC3 (through Theano) tells me that choose does not have a gradient 
implementation, so it cannot use the more advanced gradient-based 
Monte Carlo samplers (such as NUTS), since these depend on having the 
gradient of the probability density.  The two-state HMM ran perfectly 
with the gradient-based sampler.
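
In spirit, the multi-state version looks like this (again a simplified 
sketch with made-up names, here for three states with per-state 
probabilities p0, p1, p2):

    import theano.tensor as tt

    states = tt.ivector("states")   # chain of integer states 0, 1, 2
    p0 = tt.dscalar("p0")           # transition probability when in state 0
    p1 = tt.dscalar("p1")           # transition probability when in state 1
    p2 = tt.dscalar("p2")           # transition probability when in state 2

    # choose picks, per position, the probability matching the current state.
    # This is the op for which Theano reports a missing gradient.
    p_trans = tt.choose(states[:-1], tt.stack([p0, p1, p2]))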

My code can be found 
here: https://github.com/hstrey/Hidden-Markov-Models-pymc3

My question is:  why doesn't tensor.choose have a gradient implementation 
when tensor.switch has one?  I can imagine implementing choose using several 
nested switches (see the sketch below).  Is there a deeper reason, or is it 
simply not implemented?
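
To make the nested-switch idea concrete, something along these lines 
(again just a sketch for three states) should, as far as I can tell, keep 
the gradient with respect to the probabilities:

    import theano.tensor as tt

    def choose_via_switch(index, p0, p1, p2):
        # index: integer tensor of current states (0, 1 or 2)
        # returns p0 where index == 0, p1 where index == 1, and p2 otherwise
        return tt.switch(tt.eq(index, 0), p0,
                         tt.switch(tt.eq(index, 1), p1, p2))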

Thanks

