Hi Josh.
Yes, as I mentioned briefly in my second email, you could start a
scikit-learn-contrib project that implements these.
Or, if possible, show how to use Aequitas with sklearn.
That would be interesting since it probably requires some changes to the
API: our scorers have no access to side-information,
such as the protected class.
This is actually an interesting instance of
https://github.com/scikit-learn/scikit-learn/issues/4497,
an API discussion that has been going on for at least three years now.
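To make that concrete, here is a rough sketch (made-up names, nothing that
exists in sklearn or Aequitas) of a fairness metric that needs the protected
attribute, next to the signature our scorers actually get:

    import numpy as np

    def demographic_parity_difference(y_pred, sensitive):
        """Gap in positive-prediction rates between groups (made-up helper)."""
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
        return max(rates) - min(rates)

    # A scikit-learn scorer only receives (estimator, X, y), so there is
    # currently no slot through which `sensitive` could be passed:
    #     def scorer(estimator, X, y): ...

Routing that extra per-sample array through the scorer and cross-validation
machinery is exactly the kind of thing that discussion is about.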
Cheers,
Andy
On 10/30/18 1:05 PM, Feldman, Joshua wrote:
Hi Andy,
Yes, good point, and thank you for your thoughts. The Aequitas project
stood out to me more for its flowchart than for its auditing
software because, as you mention, you always fail the report if you
include all the measures!
Just as with choosing a machine learning algorithm, there isn't a
one-size-fits-all solution to ML ethics, as evidenced by the contradictory
metrics. One reason I think implementing fairness metrics in sklearn
might be a good idea is that it would empower people to choose the
metric that's relevant to them and their users. If we were to
implement these metrics, it would be very important to make this clear in
the documentation.
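For example (just a rough sketch with made-up names, not a proposed API),
two metrics that can disagree on the same predictions:

    import numpy as np

    def tpr_gap(y_true, y_pred, sensitive):
        """Equal opportunity: difference in true-positive rate across groups."""
        groups = np.unique(sensitive)
        tprs = [y_pred[(sensitive == g) & (y_true == 1)].mean() for g in groups]
        return max(tprs) - min(tprs)

    def precision_gap(y_true, y_pred, sensitive):
        """Predictive parity: difference in precision across groups."""
        groups = np.unique(sensitive)
        precs = [y_true[(sensitive == g) & (y_pred == 1)].mean() for g in groups]
        return max(precs) - min(precs)

A model can look fine on one of these gaps and poor on the other, so the
documentation would have to spell out exactly which definition each
function implements.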
Tools that could change predictions to be fair according to one of
these metrics would also be very cool. In the same vein as my point
above, we would need to be careful not to give a false sense of
security with the "fair" algorithms such a tool would produce.
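Something along these lines (a toy sketch, made-up names, targeting only
one definition of fairness) is what I have in mind: pick per-group
thresholds on predicted scores so that positive-prediction rates roughly
match.

    import numpy as np

    def per_group_thresholds(scores, sensitive, target_rate):
        """Per-group score cutoffs giving roughly the same positive rate."""
        thresholds = {}
        for g in np.unique(sensitive):
            group_scores = scores[sensitive == g]
            # cut at the (1 - target_rate) quantile of this group's scores
            thresholds[g] = np.quantile(group_scores, 1 - target_rate)
        return thresholds

    def adjusted_predictions(scores, sensitive, thresholds):
        """Apply the group-specific cutoffs to get 0/1 predictions."""
        cutoffs = np.array([thresholds[g] for g in sensitive])
        return (scores >= cutoffs).astype(int)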
If you don't think now is the time to add these metrics, is there
anything I could do to move this along?
Best,
Josh
_______________________________________________
scikit-learn mailing list
scikit-learn@python.org
https://mail.python.org/mailman/listinfo/scikit-learn