Leon Palafox wrote:
> If you want to stick to Python, you could use PyMC to do that.
emcee is also very good if you need MCMC, not to mention easier to use than
PyMC. You just have to provide a callback that computes the loglikelihood.
http://dan.iel.fm/emcee/current/
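A minimal sketch of such a callback (the data and numbers here are made up for illustration; the commented-out lines follow emcee's documented EnsembleSampler interface):

```python
import numpy as np

# Illustrative data: noisy observations around a true mean of ~5.0.
y = np.array([4.9, 5.1, 5.0, 4.8, 5.2])

def log_likelihood(theta):
    """Gaussian log-likelihood with unknown mean and log-std-dev.

    Sampling log(sigma) instead of sigma keeps sigma positive
    without needing an explicit prior bound.
    """
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -0.5 * np.sum((y - mu) ** 2 / sigma ** 2
                         + np.log(2 * np.pi * sigma ** 2))

# Handing the callback to emcee is then just:
#   import emcee
#   ndim, nwalkers = 2, 16
#   p0 = np.random.randn(nwalkers, ndim) * 0.1 + [5.0, 0.0]
#   sampler = emcee.EnsembleSampler(nwalkers, ndim, log_likelihood)
#   sampler.run_mcmc(p0, 1000)

print(log_likelihood([5.0, np.log(0.15)]))
```

The likelihood should peak near the true parameters, which is easy to sanity-check before handing it to the sampler.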
Sturla
On 02/02/2015, Alexander Fabisch wrote:
> I implemented that
> in another library (https://github.com/AlexanderFabisch/gmr, example:
> https://github.com/AlexanderFabisch/gmr/blob/master/examples/plot_estimate_gmm.py).
I tried out your code, but it crashed when attempting to generate
samples from …
Wow, fast responses, thanks!
On 02/02/2015, Andy wrote:
> I don't see how this would fit into the standard sklearn interface...
Just what Alexander said. I don't know sklearn well enough to know
what the standard pattern for interfacing might be, or why this
presents a problem. I was expecting t…
I don't see how this would fit into the standard sklearn interface...
From a probabilistic modelling perspective that is indeed quite basic,
but sklearn is not a probabilistic modelling framework ;)
Hi Juan.
Up to floating-point precision, that is pretty hard, as Gaël
mentioned. 1e-5 on sigma seems pretty low, though.
Can you post data to reproduce?
I would expect most classifiers to go to around 1e-8.
Cheers,
Andreas
On 02/02/2015 10:46 AM, Juan Nunez-Iglesias wrote:
> Hi all,
> TL;DR version:
Hi Tom,
As far as I know, this is not implemented in sklearn. I implemented it
in another library (https://github.com/AlexanderFabisch/gmr, example:
https://github.com/AlexanderFabisch/gmr/blob/master/examples/plot_estimate_gmm.py).
The implementation is very simple and not as efficient and usable …
I'm using scikit-learn to fit a multivariate Gaussian Mixture Model to
some data (which works brilliantly). But I need to be able to get a
new GMM conditional on some of the variables, and the scikit toolkit
doesn't seem to be able to do that, which surprised me because it
seems like a pretty basic …
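The underlying math is indeed closed-form: conditioning a multivariate Gaussian on a subset of its variables yields another Gaussian, and a conditional GMM is a mixture of these with reweighted component weights. A minimal numpy sketch for a single 2-D Gaussian (all numbers made up for illustration):

```python
import numpy as np

# Condition a 2-D Gaussian N(mu, Sigma) on observing its second
# variable, x2 = 1.0, using the standard formulas:
#   mu_1|2    = mu1 + S12 * S22^-1 * (x2 - mu2)
#   Sigma_1|2 = S11 - S12 * S22^-1 * S21
mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])
x2 = 1.0

S11, S12, S21, S22 = Sigma[0, 0], Sigma[0, 1], Sigma[1, 0], Sigma[1, 1]
cond_mean = mu[0] + S12 / S22 * (x2 - mu[1])  # 0.8
cond_var = S11 - S12 / S22 * S21              # 2.0 - 0.64 = 1.36
print(cond_mean, cond_var)
```

For a full GMM you would apply this per component and additionally reweight each component by the marginal likelihood of the observed variables under that component.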
Thinking about it, that's going to be hard: even floating-point
operations such as a sum of many floating-point numbers are not
permutation invariant, due to rounding errors.
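A classic illustration of this in Python:

```python
# The same three floats summed in two different orders give
# two different results, because each intermediate sum is rounded.
a = (0.1 + 0.2) + 0.3
b = (0.3 + 0.2) + 0.1
print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False
```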
Gaël
Hi all,
TL;DR version:
I'm looking for a classifier that will get the *exact same model* for shuffled
versions of the training data. I thought GaussianNB would do the trick, but
either I don't understand it, or some kind of numerical instability prevents it
from achieving the same model on subs…
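One possible workaround, not raised in the replies above: accumulate the sufficient statistics with `math.fsum`, which computes a correctly rounded sum and is therefore order-independent. A small sketch:

```python
import math
import random

xs = [0.1 * i for i in range(1000)]
shuffled = xs[:]
random.shuffle(shuffled)

# Naive left-to-right summation can differ between orderings...
naive_a, naive_b = sum(xs), sum(shuffled)

# ...but math.fsum returns the correctly rounded exact sum,
# so it gives bit-identical results for any permutation.
exact_a, exact_b = math.fsum(xs), math.fsum(shuffled)
print(exact_a == exact_b)  # True, regardless of the shuffle
```

A classifier whose means and variances were accumulated this way would be permutation invariant by construction, at some cost in speed.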