A Harm-Reduction Framework for Algorithmic Fairness
Author(s)
Altman, Micah; Wood, Alexandra; Vayena, Effy
Terms of use
Creative Commons Attribution-Noncommercial-Share Alike (Open Access Policy)
Abstract
In this article, we recognize the profound effects that algorithmic decision making can have on people's lives and propose a harm-reduction framework for algorithmic fairness. We argue that any evaluation of algorithmic fairness must take into account the foreseeable effects that algorithmic design, implementation, and use have on the well-being of individuals. We further demonstrate how counterfactual frameworks for causal inference developed in statistics and computer science can be used as the basis for defining and estimating the foreseeable effects of algorithmic decisions. Finally, we argue that certain patterns of foreseeable harms are unfair. An algorithmic decision is unfair if it imposes predictable harms on sets of individuals that are unconscionably disproportionate to the benefits the same decision produces elsewhere. Likewise, an algorithmic decision is unfair when it is regressive, that is, when members of disadvantaged groups pay a higher cost for the social benefits of that decision.
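The abstract's two fairness criteria admit a simple operational reading: estimate each group's counterfactual effect (outcome with the decision minus outcome without it), then check whether total harms outweigh total benefits and whether disadvantaged groups bear the larger cost. The sketch below is a rough illustration of that reading only, not the authors' formalism; every name in it (estimated_effect, is_disproportionate, is_regressive, the example groups and outcome values) is hypothetical, and the potential outcomes are assumed to come from some fitted causal model.

import numpy as np

def estimated_effect(y_with, y_without):
    """Counterfactual effect of the decision on one group: mean difference
    between estimated potential outcomes with and without the decision.
    Positive = net benefit, negative = net harm."""
    return float(np.mean(y_with) - np.mean(y_without))

def is_disproportionate(effects, ratio=1.0):
    """Flag the decision if the total predictable harm it imposes on some
    groups exceeds (ratio times) the benefit it produces elsewhere."""
    harm = -sum(e for e in effects.values() if e < 0)
    benefit = sum(e for e in effects.values() if e > 0)
    return harm > ratio * benefit

def is_regressive(effects, disadvantaged):
    """Flag the decision if disadvantaged groups pay a higher cost
    (lower average net effect) than the remaining groups."""
    dis = [e for g, e in effects.items() if g in disadvantaged]
    adv = [e for g, e in effects.items() if g not in disadvantaged]
    return np.mean(dis) < np.mean(adv)

# Hypothetical per-group potential outcomes (e.g., well-being scores).
effects = {
    "group_a": estimated_effect(np.array([0.9, 0.8]), np.array([0.5, 0.6])),
    "group_b": estimated_effect(np.array([0.3, 0.2]), np.array([0.6, 0.7])),
}
print(effects)                                   # {'group_a': 0.3, 'group_b': -0.4}
print(is_disproportionate(effects))              # True: harm 0.4 > benefit 0.3
print(is_regressive(effects, {"group_b"}))       # True: group_b bears the cost

In this toy example the decision benefits group_a by 0.3 but harms group_b by 0.4, so it fails both checks; how to aggregate harms across groups and where to set the "unconscionable" threshold are normative choices the article discusses, not something the code settles.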
Date issued
2018-05
Department
Massachusetts Institute of Technology. Libraries
Journal
IEEE Security and Privacy
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Citation
Altman, Micah et al. "A Harm-Reduction Framework for Algorithmic Fairness." IEEE Security and Privacy 16, 3 (May 2018). © 2018 IEEE
Version: Author's final manuscript
ISSN
1540-7993 (print)
1558-4046 (online)