That can sometimes cause training to fail. Instead, optimization seems to perform better with random initialization.
This is a fix that came out of some debugging in the IRC channel. You can view, comment on, or merge this pull request online at:

  https://github.com/mlpack/mlpack/pull/828

-- Commit Summary --

* Don't use equal initial probabilities.

-- File Changes --

M src/mlpack/methods/hmm/hmm_impl.hpp (11)
M src/mlpack/tests/hmm_test.cpp (5)

-- Patch Links --

https://github.com/mlpack/mlpack/pull/828.patch
https://github.com/mlpack/mlpack/pull/828.diff
