If you believe your research has any real-world impact, then it inherently carries the potential for social impact. Even the most theoretical science generally claims some degree of relevance to the real world, and much of technology is inherently dual-use—it can lead to outcomes that are beneficial, harmful, or somewhere in between.
For example, if you are working with facial datasets like CelebA, it’s worth considering the potential surveillance applications of computer vision. If you are working on object recognition, consider whether those systems will perform well on objects from countries other than the USA or Western Europe, where everyday objects may look different. If you are working on image captioning, consider whether captioning systems trained on captions generated by sighted users (the COCO dataset) will generalize to the different kinds of captions desired by blind users (the VizWiz dataset). If you are working on motion capture systems, consider how incorrect assumptions about bodies can fail to generalize across diverse body types. And if you are working with any labeled dataset, consider where that dataset came from and whether participants provided consent and received proper compensation (e.g., participants have previously reviewed disturbing content for two to three dollars an hour).
While it may be tempting to frame your own work as simply research, and these issues as ones that arise only at deployment, research findings shape deployment practices. For instance, the best image captioning model according to research benchmarks may be adopted as the base model for a deployed application, yet that model may still perform poorly when used as an assistive technology for blind and low-vision users.
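To make this concrete, here is a minimal sketch of a disaggregated evaluation, comparing the same captioning model's per-image quality scores on a research benchmark (COCO) against images taken by blind users (VizWiz). The scores below are hypothetical placeholders, not real results; in practice they would come from running your model on both test sets.

```python
from statistics import mean

# Hypothetical per-image quality scores (e.g., CIDEr) for one captioning model,
# grouped by which test set each image came from. Values are illustrative only.
scores = {
    "coco":   [0.92, 0.88, 0.95, 0.90],
    "vizwiz": [0.61, 0.55, 0.70, 0.58],
}

# Report performance separately per source rather than as one aggregate number.
for source, vals in scores.items():
    print(f"{source:>7}: mean score = {mean(vals):.2f} over {len(vals)} images")
```

A large gap between the two means is a signal that aggregate benchmark performance overstates how useful the model would be as an assistive technology.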
That being said, of course not every ML project needs to be fairness work. However, it is important to be aware of possible fairness implications and, if they are significant enough, to describe them in a broader impact or limitations section. This helps ensure that technology is developed ethically from the start. One useful analogy here is to
security by design (i.e., security built in foundationally from the start) compared to bolt-on security, which is less effective. Another useful analogy is to
technical debt, where taking shortcuts and skipping sufficient testing now leads to time-consuming and expensive consequences later.
Ethical debt accrues in the same way: prioritizing expedience over ethical considerations now leads to serious ethical consequences later.