2012/9/12 Christian Jauvin <[email protected]>:
> As I only have an intuitive notion of how "sample_weight" (i.e., the
> per-sample weights fed to certain types of classifiers) should work,
> I'd like to know if this is a sound way of computing them:
>
> import numpy as np
>
> def get_sample_weight(y):
>     # give each class equal total weight, normalized to sum to 1
>     p = 1. / len(np.unique(y))
>     bc = np.bincount(y)
>     w = np.repeat(p, len(y))
>     for i, v in enumerate(y):
>         w[i] /= bc[v]
>     assert np.sum(w) == 1
>     return w

The normalization is a bad idea for Naive Bayes estimators, unless you
also divide the smoothing parameter by n_samples. I'm not sure if this
is also true for other classifiers.
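For what it's worth, here is a sketch of the unnormalized alternative: weight each sample by the inverse of its class count, skipping the normalization step entirely. The helper name `balanced_sample_weight` is hypothetical, not a scikit-learn function:

```python
import numpy as np

def balanced_sample_weight(y):
    # Hypothetical helper: weight each sample inversely to its class
    # frequency, WITHOUT normalizing the weights to sum to 1.
    # Every class then receives the same total weight (1.0 per class).
    y = np.asarray(y)
    return 1.0 / np.bincount(y)[y]

w = balanced_sample_weight(np.array([0, 0, 0, 1]))
# class 0 samples get 1/3 each, the single class 1 sample gets 1.0,
# so both classes contribute equal total weight
```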

The assert is also unlikely to ever hold in floating-point arithmetic;
exact equality with 1 is not something you can rely on after division.

May I ask why you think you need this?

-- 
Lars Buitinck
Scientific programmer, ILPS
University of Amsterdam

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general