Hi Andy,

Yes, good point, and thank you for your thoughts. The Aequitas project stood
out to me more for its flowchart than for its auditing software because, as
you mention, you always fail the report if you include all of the measures!

Just as with choosing a machine learning algorithm, there is no
one-size-fits-all solution to ML ethics, as the mutually conflicting metrics
show. One reason I think implementing fairness metrics in sklearn might be a
good idea is that it would empower people to choose the metric that is
relevant to them and their users. If we were to implement these metrics, it
would be very important to make this clear in the documentation.
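
To make this concrete, here is a rough sketch of what one such metric could
look like in the style of sklearn.metrics. The function name and the
sensitive_features parameter are placeholders I made up, not an existing or
proposed API:

    import numpy as np

    def demographic_parity_difference(y_pred, sensitive_features):
        """Difference between the highest and lowest per-group rate of
        positive predictions; 0 means every group receives positive
        predictions at the same rate under this metric.

        Hypothetical sketch, not an actual sklearn function.
        """
        y_pred = np.asarray(y_pred)
        groups = np.asarray(sensitive_features)
        rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

So, for example, demographic_parity_difference([1, 0, 1, 1],
["a", "a", "b", "b"]) would return 0.5, since group "a" gets positive
predictions half the time and group "b" all the time.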

Tools that adjust predictions to satisfy one of these metrics would also be
very cool. In the same vein as my thinking above, though, we would need to be
careful about giving a false sense of security with the "fair" models such a
tool would produce.

If you don't think now is the right time to add these metrics, is there
anything I can do to help move this along?

Best,
Josh