That was also my thinking. Similarly, it's useful to try to choose a
threshold that achieves some target TPR or FPR, so that methods can be
approximately compared to published results.
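Picking a threshold for a target FPR can be sketched with
sklearn.metrics.roc_curve (threshold_for_fpr is an invented helper name for
illustration, not a scikit-learn API):

```python
import numpy as np
from sklearn.metrics import roc_curve

def threshold_for_fpr(y_true, y_score, target_fpr):
    """Return the strictest threshold whose FPR does not exceed target_fpr."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    # fpr is non-decreasing along the curve, so the last point at or
    # below the target is the best we can do without interpolating
    idx = np.searchsorted(fpr, target_fpr, side="right") - 1
    return thresholds[idx], fpr[idx], tpr[idx]
```

Classifying as positive when the score is >= the returned threshold then gives
a false-positive rate at or below the target; hitting it exactly is generally
impossible on a finite sample.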
It's not obvious what to do, though, when an increment in the threshold
results in several simultaneous changes in precision and recall (for
example, when several samples share the same score).
I suppose it would not be hard to build a wrapper that does this, if all we
are doing is choosing a threshold. A global maximum is not guaranteed,
though, without some kind of interpolation over the precision-recall
curve.
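A wrapper along those lines can be sketched over
sklearn.metrics.precision_recall_curve (best_f1_threshold is an invented
helper name, not a scikit-learn API):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, y_score):
    """Exhaustively scan the PR curve for the threshold maximizing F1."""
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # precision and recall have one more entry than thresholds; the final
    # (precision=1, recall=0) point has no threshold, so drop it
    p, r = precision[:-1], recall[:-1]
    f1 = 2 * p * r / np.maximum(p + r, 1e-12)
    best = np.argmax(f1)
    return thresholds[best], f1[best]
```

Since F1 is evaluated at every achievable operating point, the result is the
global maximum over hard thresholds; only interpolation between points could,
in principle, do better.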
On 18 July 2017 at 02:41, Stuart Reynolds wrote:
Great work indeed!
Thx,
Bertrand
On 17/07/2017 22:08, Alexandre Gramfort wrote:
great team work as usual !
congrats everyone
Alex
___
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn
And... with that in mind: are there methods that explicitly try to
optimize the F1 score?
On Mon, Jul 17, 2017 at 9:41 AM, Stuart Reynolds wrote:
> Does scikit have a function to find the maximum f1 score (and decision
> threshold) for a (soft) classifier?
>
> - Stuart
Great job! This will be a great release, with a lot of new features and
improvements.
G
On Mon, Jul 17, 2017 at 02:49:51PM +0200, Olivier Grisel wrote:
> The new release is coming and we are seeking feedback from beta testers!
> pip install scikit-learn==0.19b2
> conda-forge packages should follow in the coming hours / days.
Does scikit have a function to find the maximum f1 score (and decision
threshold) for a (soft) classifier?
- Stuart
The new release is coming and we are seeking feedback from beta testers!
pip install scikit-learn==0.19b2
conda-forge packages should follow in the coming hours / days.
Note that many models have changed behaviors and some things have been
deprecated, see the full changelog at: