That was also my thinking. Similarly, it's useful to try to choose a
threshold that achieves some target TPR or FPR, so that methods can be
approximately compared to published results.
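For the fixed-TPR case, a minimal sketch of what I mean (function and
variable names are mine, not anything in scikit-learn) using
`sklearn.metrics.roc_curve`:

```python
import numpy as np
from sklearn.metrics import roc_curve


def threshold_for_tpr(y_true, y_score, target_tpr):
    """Smallest-index ROC threshold whose TPR reaches target_tpr.

    Assumes binary labels and continuous scores, and that target_tpr
    is achievable (TPR reaches 1.0 at the lowest threshold, so any
    target <= 1.0 is).
    """
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    # thresholds are sorted decreasing and tpr is non-decreasing, so
    # argmax on the boolean mask finds the first (highest) threshold
    # meeting the target.
    i = np.argmax(tpr >= target_tpr)
    return thresholds[i]
```

This picks the most conservative threshold that meets the target, which
is one reasonable convention when comparing against a published
operating point.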

It's not obvious what to do, though, when a single increment of the
threshold changes the classification of several samples at once (i.e.,
when multiple samples share the same score).
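For Stuart's original question (max-F1 threshold), one way to sketch it
with existing scikit-learn pieces is to evaluate F1 at every threshold
returned by `precision_recall_curve`; the helper name here is mine:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve


def best_f1_threshold(y_true, y_score):
    """Return (threshold, f1) maximizing F1 over the PR-curve thresholds.

    Assumes binary labels and continuous scores. Note this only searches
    the finite set of thresholds induced by the scores; no interpolation
    over the PR curve is attempted.
    """
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # precision and recall have one more entry than thresholds (the final
    # point is the trivial (precision=1, recall=0) endpoint), so drop it.
    p, r = precision[:-1], recall[:-1]
    # Guard against 0/0 when both precision and recall are zero.
    f1 = np.where(p + r > 0, 2 * p * r / (p + r), 0.0)
    i = np.argmax(f1)
    return thresholds[i], f1[i]
```

Ties in the scores are handled implicitly here, since
`precision_recall_curve` only emits one threshold per distinct score
value.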

On Mon, Jul 17, 2017 at 5:00 PM Joel Nothman <joel.noth...@gmail.com> wrote:

> I suppose it would not be hard to build a wrapper that does this, if all
> we are doing is choosing a threshold. Although a global maximum is not
> guaranteed without some kind of interpolation over the precision-recall
> curve.
>
> On 18 July 2017 at 02:41, Stuart Reynolds <stu...@stuartreynolds.net>
> wrote:
>
>> Does scikit have a function to find the maximum f1 score (and decision
>> threshold) for a (soft) classifier?
>>
>> - Stuart
>>
>> _______________________________________________
>> scikit-learn mailing list
>> scikit-learn@python.org
>> https://mail.python.org/mailman/listinfo/scikit-learn
>>