Publications
* signifies equal contribution
Papers
Measuring Machine Learning Harms from Stereotypes: Requires Understanding Who is Being Harmed by Which Errors in What Ways
Angelina Wang, Xuechunzi Bai, Solon Barocas, Su Lin Blodgett
EAAMO 2023
Overcoming Bias in Pretrained Models by Manipulating the Finetuning Dataset
Angelina Wang and Olga Russakovsky
ICCV 2023 - Oral
Gender Artifacts in Visual Datasets
Nicole Meister*, Dora Zhao*, Angelina Wang, Vikram V. Ramaswamy, Ruth Fong, Olga Russakovsky
ICCV 2023
Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy [website]
Angelina Wang*, Sayash Kapoor*, Solon Barocas, Arvind Narayanan
FAccT 2023; Journal of Responsible Computing 2023
Taxonomizing and Measuring Representational Harms: A Look at Image Tagging
Jared Katzman*, Angelina Wang*, Morgan Scheuerman, Su Lin Blodgett, Kristen Laird, Hanna Wallach, Solon Barocas
AAAI 2023
Manipulative Tactics are the Norm in Political Emails: Evidence from 100K Emails from the 2020 U.S. Election Cycle [website]
Arunesh Mathur, Angelina Wang, Carsten Schwemmer, Maia Hamin, Brandon M. Stewart, Arvind Narayanan
Big Data & Society 2023
Measuring Representational Harms in Image Captioning [4 min video] [15 min video]
Angelina Wang, Solon Barocas, Kristen Laird, Hanna Wallach
FAccT 2022
Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation [4 min video] [15 min video]
Angelina Wang, Vikram V. Ramaswamy, Olga Russakovsky
FAccT 2022
REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets
Angelina Wang, Alexander Liu, Ryan Zhang, Anat Kleiman, Leslie Kim, Dora Zhao, Iroha Shirai, Arvind Narayanan, Olga Russakovsky
IJCV 2022 (extended version of ECCV 2020 publication)
Understanding and Evaluating Racial Biases in Image Captioning
Dora Zhao, Angelina Wang, Olga Russakovsky
ICCV 2021
Directional Bias Amplification [video] [poster]
Angelina Wang and Olga Russakovsky
ICML 2021
The Limits of Global Inclusion in AI Development
Alan Chan*, Chinasa T. Okolo*, Zachary Terner*, Angelina Wang*
AAAI 2021 Workshop on Reframing Diversity in AI - Spotlight
REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets [video]
alternate name: “BImAgeS: BIAS is in all images”
Angelina Wang, Arvind Narayanan, Olga Russakovsky
ECCV 2020 - Spotlight (top 5% of submissions)
Learning Robotic Manipulation through Visual Planning and Acting
Angelina Wang, Thanard Kurutach, Kara Liu, Pieter Abbeel, Aviv Tamar
Robotics: Science and Systems 2019
Safer Classification by Synthesis
William Wang, Angelina Wang, Aviv Tamar, Xi Chen, Pieter Abbeel
NeurIPS 2017 Aligned AI Workshop
Other
A Tale of Two Conferences: FAccT and ICML 2022
Medium 2022
Building a Bridge with Concrete… Examples
Freedom to Tinker Blog 2020
FTC Comment on Children’s Online Privacy Protection Act (COPPA) Rule
2019