Hey everybody.
Recently I opened an issue
<https://github.com/scikit-learn/scikit-learn/issues/989> suggesting
that all classifiers should raise an error if there is only one class
present during training.
My reasoning was that there is no point in training a classifier in that
case: it usually won't learn anything meaningful and just wastes clock
cycles.
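
To make the proposal concrete, here is a minimal sketch of the kind of
guard I have in mind (the helper name and message are made up here, not
taken from the PR):

    import numpy as np

    def _check_at_least_two_classes(y):
        # Hypothetical guard for a classifier's fit(): refuse to train
        # when the target vector contains fewer than two distinct classes.
        classes = np.unique(y)
        if classes.shape[0] < 2:
            raise ValueError("Classifier needs samples from at least 2 "
                             "classes, but y contains only %r." %
                             (classes.tolist(),))
        return classes
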
Olivier and Gilles raised the concern that this might cause problems
when subsampling for cross-validation and in ensemble methods.
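
To illustrate their point with a toy example (mine, not taken from the
thread): with an imbalanced label vector, a plain unstratified split can
easily hand the classifier a training subset containing only one class,
and the proposed error would then abort the whole cross-validation or
ensemble run:

    import numpy as np

    # 10 samples, only one of which is positive.
    y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 1])

    # A naive 2-fold split by position: the first half never sees class 1.
    first_half = np.arange(5)
    print(np.unique(y[first_half]))   # -> [0], a single class
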
What do you think?
You can read the arguments in the issue thread
<https://github.com/scikit-learn/scikit-learn/issues/989> and some
comments here
<https://github.com/scikit-learn/scikit-learn/commit/f01907dcb208edf46dc6b5b652aa46b52c62ba59>.
Cheers,
Andy