Hi,

I am attempting to build some classification models where false positives are
much worse than false negatives. Normally the two kinds of error are
weighted equally (equal loss) in the training procedure,
but I would like to be able to customize this.
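For context, one thing I have tried is up-weighting the negative class before
fitting, so that misclassifying a negative (a false positive) costs more. This
is just a sketch; the 5x cost ratio and the toy dataset are arbitrary choices
on my part:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Up-weight negative-class samples so that errors on them
# (false positives) are penalized more heavily by the boosting
# procedure.  The 5.0 cost ratio is arbitrary.
weights = np.where(y == 0, 5.0, 1.0)

clf = AdaBoostClassifier(random_state=0)
clf.fit(X, y, sample_weight=weights)
```

This does push the model toward fewer false positives, but it feels like a
blunt instrument compared to changing the loss itself.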

I've been using the AdaBoost classifier, which works well as a general-purpose
data miner, except for this issue. I tried hacking a bit on the
code by boosting only the false-positive samples,
but I'm not sure that makes any sense (the model tends to forget
about the false negatives).
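As an alternative to modifying the boosting loop itself, I've also tried simply
moving the decision threshold on the predicted probabilities, so the model only
calls something positive when it is quite confident. Again just a sketch; the
0.8 cutoff is an arbitrary value I picked:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = AdaBoostClassifier(random_state=0).fit(X, y)

# Require high confidence before predicting the positive class,
# trading false positives for false negatives.  The 0.8 threshold
# is arbitrary and would need tuning (e.g. on a validation set).
proba = clf.predict_proba(X)[:, 1]
y_pred = (proba >= 0.8).astype(int)
```

This changes the operating point after training, though, rather than the loss
during training, which is what I was really after.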

Googling around, I found a paper [1], but it's not clear to me
whether it is what I am looking for.

Thank you for any suggestions.

Simon.


[1] McCane, Brendan; Novins, Kevin; Albert, Michael (2005). "Optimizing cascade
classifiers."


------------------------------------------------------------------------------
_______________________________________________
Scikit-learn-general mailing list
Scikit-learn-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general