For some reason I thought we had a "prefit" parameter.
I think we should.
On 10/01/2017 07:39 PM, Sebastian Raschka wrote:
Hi, Rares,
vc = VotingClassifier(...)
vc.estimators_ = [e1, e2, ...]
vc.le_ = ...
vc.predict(...)
But I am not sure it is recommended to modify the "private" estimators_ attribute like this.
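A minimal sketch of that workaround, for the record. Assumptions: a recent scikit-learn, and LogisticRegression/DecisionTreeClassifier standing in for e1, e2 on the iris data. Since this pokes at fitted ("private") attributes rather than a public API, it may break across versions:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Fit each base estimator independently (e.g. in separate processes).
e1 = LogisticRegression(max_iter=1000).fit(X, y)
e2 = DecisionTreeClassifier(random_state=0).fit(X, y)

vc = VotingClassifier(estimators=[("lr", e1), ("dt", e2)], voting="hard")

# Inject the fitted pieces instead of calling vc.fit(X, y):
vc.estimators_ = [e1, e2]
vc.le_ = LabelEncoder().fit(y)       # predict() maps votes back through le_
vc.classes_ = vc.le_.classes_

pred = vc.predict(X[:5])
```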
I agree. I had added something like that to the original version in mlxtend (not sure
if it was before or after we ported it to sklearn). In any case, I'd be
happy to open a PR about that later today :)
Best,
Sebastian
> On Oct 7, 2017, at 10:53 AM, Andreas Mueller wrote:
>
> For some reason
I don't think LOF is designed to apply to unseen data.
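To illustrate: scikit-learn's LocalOutlierFactor exposes fit_predict on the training set itself rather than a separate predict for new samples. (A sketch assuming a recent release; the toy data here is made up.)

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# Three clustered points and one obvious outlier.
X = np.array([[0.0], [0.1], [0.2], [10.0]])

lof = LocalOutlierFactor(n_neighbors=2)
labels = lof.fit_predict(X)  # -1 marks outliers, 1 marks inliers
```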
___
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
Just a note: if you're using this for topic modelling, perplexity might
not be a good choice of objective function. Others have been proposed; see
the diagnostic functions for MALLET topic modelling, for instance.
actually I'm probably wrong there, but you may not be able to use accuracy
I am attempting to validate the output of an L2 normalization function:
data_l2 = preprocessing.normalize(data, norm='l2')  # raw data is below at end of this email
output:
array([[ 0.57649683, 0.53806371, 0.61492995],
[-0.53806371, -0.57649683, -0.61492995],
[ 0.3359268
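One way to validate the output: after L2 normalization, every row should have Euclidean norm 1, and each output row should equal the input row divided by its own norm. A sketch with made-up data (the original raw data was cut off in the email):

```python
import numpy as np
from sklearn import preprocessing

# Hypothetical stand-in for the original raw data.
data = np.array([[3.0, 2.8, 3.2],
                 [-2.8, -3.0, -3.2],
                 [1.0, 2.0, 2.0]])

data_l2 = preprocessing.normalize(data, norm='l2')

# Check 1: each row of the output has unit Euclidean norm.
row_norms = np.linalg.norm(data_l2, axis=1)
assert np.allclose(row_norms, 1.0)

# Check 2: output equals input scaled row-wise by 1/||row||.
expected = data / np.linalg.norm(data, axis=1, keepdims=True)
assert np.allclose(data_l2, expected)
```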