Please join us in the Data Science Studio (6th Floor of the Physics and
Astronomy Tower <http://www.washington.edu/maps/#!/pat>) on Wednesday,
April 11th from 2:30-3:30 pm for a discussion seminar about ethics in data
science.

Harm and beyond: What the data science community is doing to address
ethical concerns … and why it’s necessary but insufficient

In recent
years, it has become increasingly hard to ignore the propensity for
data-intensive computational technologies to do harm by violating privacy,
codifying bias, and facilitating malfeasance.

In this session, Bernease
Herman <http://www.berneaseherman.com/>, a data scientist who specializes
in interpretable machine learning, will help us understand recent
developments in data science tools, techniques, and norms that address some
of these concerns. From algorithmic audits to differential privacy to
statistical definitions of fairness, Bernease will explain what these
approaches are capable of doing, and what their limitations are.
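To give a concrete flavor ahead of the talk, here is a toy sketch (our own
illustration, not material from the session; the function name is made up
and it assumes only NumPy) of one common statistical definition of
fairness, demographic parity, in Python:

import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred -- array of 0/1 model predictions
    group  -- array of 0/1 group labels (e.g., a protected attribute)
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

# A gap near 0 satisfies demographic parity; a large gap flags disparity.
print(demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 1]))  # prints 0.5

Of course, no single metric like this captures every notion of fairness,
and several common statistical definitions are mutually incompatible.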
Then, Anna Lauren Hoffmann <http://annaeveryday.com/>, a scholar of
technology, culture, and ethics, will help us see why all those
developments are necessary but insufficient. Anna looks beyond the
materialization of specific harms and
invites us to think more broadly about how the underlying logics of
data-intensive computational systems perpetuate cultural violences against
marginalized communities.

At the bottom of this email, you’ll find more detailed previews of their
talks. As usual, we’ll reserve a decent chunk of the hour for a group
discussion following the presentations. Hope to see you there!

Anissa

Talk by Bernease Herman
Countering Harm: Computational Approaches to a More Ethical Data Science

Bernease Herman will give an
accessible primer for select computational methods popular in the Fairness,
Accountability, and Transparency in Machine Learning (FATML) community that
address data science ethics. She'll present advantages, disadvantages, and
current efficacy of each method as practiced today.
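As one more toy illustration in the same spirit (again ours, not from the
talk; assumes only NumPy), differential privacy's classic Laplace mechanism
adds noise calibrated to a query's sensitivity so that no single person's
record can move the answer much:

import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Differentially private estimate of a numeric query result.

    sensitivity -- max change in the query when one record is added/removed
    epsilon     -- privacy budget; smaller epsilon means more noise/privacy
    """
    if rng is None:
        rng = np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a count query changes by at most 1 per person, so sensitivity=1.
print(laplace_mechanism(true_value=1234, sensitivity=1, epsilon=0.5))

Smaller epsilon buys stronger privacy at the cost of noisier answers; that
trade-off is one of the practical limitations these methods carry.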
Talk by Anna Lauren Hoffmann
Amplifying Harm: When Data, Algorithms, and Cultural Violence Collide

Many conversations around data and discrimination focus on problems
of biased datasets or unfair algorithms that produce unjust material
outcomes. But we also need better ways of grappling with cultural violences
- that is, discursive and symbolic harms reproduced and amplified by
researchers. Hoffmann argues that these harms are not secondary to, or even
concurrent with, other forms of discrimination; rather, they are
foundational, as they create the social conditions against which other
harms can occur. Just like physical violence in the real world, this kind
of violence - dubbed ‘data violence’ - occurs as the result of choices that
underwrite other harmful or even fatal outcomes produced by data-driven,
algorithmically-mediated systems.