Thanks Andy, I'll look into starting a scikit-learn-contrib project!
Best,
Josh
Hi Josh.
Yes, as I mentioned briefly in my second email, you could start a
scikit-learn-contrib project that implements these.
Or, if possible, show how to use Aequitas with sklearn.
This would be interesting since it probably requires some changes to the
API, as our scorers have no side-information.
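(To make that concrete, here is an untested sketch of the kind of metric that needs side information; the function name is made up for illustration, not an existing sklearn API:)

    import numpy as np

    def demographic_parity_difference(y_pred, sensitive):
        # Difference in positive-prediction rates between two groups (0 and 1).
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())

    # This is fine as a plain function call on held-out predictions, but it
    # cannot be wrapped with make_scorer and handed to cross_val_score or
    # GridSearchCV: a scorer is only ever called as scorer(estimator, X, y),
    # so there is no channel for the extra `sensitive` array.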
Hi Andy,
Yes, good point, and thank you for your thoughts. The Aequitas project stood
out to me more for their flowchart than for their auditing software because,
as you mention, you always fail the report if you include all the measures!
Just as with choosing a machine learning algorithm, there is no single
fairness metric that is right for every problem; the flowchart is about
picking the one that fits your situation.
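(A tiny made-up example of the kind of conflict I mean; the numbers are arbitrary, just to show two common criteria disagreeing on the same predictions:)

    import numpy as np

    # Two groups, made-up labels and predictions.
    y_true_a = np.array([1, 1, 0, 0])
    y_pred_a = np.array([1, 0, 1, 0])
    y_true_b = np.array([1, 0, 0, 0])
    y_pred_b = np.array([1, 1, 0, 0])

    # Demographic parity (equal positive-prediction rates): 0.5 vs 0.5 -> satisfied.
    print(y_pred_a.mean(), y_pred_b.mean())

    # Equal opportunity (equal true positive rates): 0.5 vs 1.0 -> violated.
    print(y_pred_a[y_true_a == 1].mean(), y_pred_b[y_true_b == 1].mean())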
Hi Josh.
I think this would be cool to add at some point, but I'm not sure now is the time.
I'm a bit surprised by their "fairness report". They have 4 different
metrics of fairness which are conflicting.
If they are all included in the fairness report, then you always fail it, right?
Would be great for sklearn-contrib, though!

On 10/29/18 1:36 AM, Feldman, Joshua wrote:
Hi,
I was wondering if there's any interest in adding fairness metrics to
sklearn. Specifically, I was thinking of implementing the metrics
described here:
https://dsapp.uchicago.edu/projects/aequitas/
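(For concreteness, the group/disparity calculation a report like that boils down to is roughly the following. This is an untested sketch of the general idea, not Aequitas's actual code, and the 0.8 tolerance and choice of false positive rate are arbitrary:)

    import numpy as np

    def fpr_disparity_report(y_true, y_pred, groups, reference_group, tolerance=0.8):
        # Per-group false positive rate, then the disparity of each group
        # relative to a reference group; a group "passes" if the ratio stays
        # within [tolerance, 1 / tolerance].
        y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
        fpr = {}
        for g in np.unique(groups):
            negatives = (groups == g) & (y_true == 0)
            fpr[g] = y_pred[negatives].mean() if negatives.any() else np.nan
        ref = fpr[reference_group]
        report = {}
        for g, rate in fpr.items():
            ratio = rate / ref if ref > 0 else np.nan
            report[g] = {"fpr": rate, "disparity": ratio,
                         "pass": bool(tolerance <= ratio <= 1 / tolerance)}
        return report

As I understand it, Aequitas computes several such per-group metrics (false
positive rate, false discovery rate, and so on) and compares each group
against a chosen reference group, which is why a report that includes every
metric is so hard to pass.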