14 May 2023

FairWare'23

The International Workshop on Equitable Data & Technology brings together academic researchers, industry researchers, and practitioners interested in exploring ways to build fairer, more equitable, data-driven software.

Co-located with ICSE’23, the FairWare’23 meeting will include keynotes on software fairness from different perspectives. FairWare’23 will also host panel sessions inviting researchers and the audience to engage in discussion.

Since many issues associated with fairness are often sociological in nature, we welcome commentaries from outside of computer science that can shed light on the complex issue of fairness.

CFP in PDF


What is software fairness?

As a society, we decide which attributes should not influence certain decisions; for example, race should not affect access to financial loans. Examples of real-world software exhibiting bias include image search and translation engines that reproduce gender stereotypes, and facial detection and recognition tools whose accuracy depends on demographics.
 

Is there research on software fairness?

There are many software engineering challenges to building fair software that have not been addressed, from specifying fairness requirements to analysis, testing, and maintenance. FairWare 2023 will bring together academic researchers, industry researchers, and practitioners interested in creating software engineering technology to improve software fairness.
 

Why do we need more research on software fairness?

Recently, requirements for fairer AI have become more common. The European Union, Microsoft, and the IEEE have all released white papers discussing fair and ethical AI. While these documents differ in the details, they all agree that ethical AI must be "FAT": fair, accountable, and transparent. Such "FAT" AI systems support five principles:
  • Integration with human agency
  • Accountability where conclusions are challenged
  • Transparency of how conclusions are made
  • Oversight on what must change to fix bad conclusions
  • Inclusiveness such that no specific segment of society is especially and unnecessarily privileged or discriminated against by the actions of the AI.

Special Issue CFP

Following on from the workshop, there will be a journal special issue at the Journal of Systems and Software: “Over the horizon: Limits and breakthroughs in algorithmic fairness. What are our next steps?” (Dates TBD). As far as possible, reviewers from FairWare'23 will be reused for the journal special issue (so authors should know what revisions are required to turn their FairWare'23 paper into a journal paper).

 

FairWare Resources

The FairWare workshop is over, and what an experience it was! We gathered many resources, open questions, and ideas for future FairWare editions from our great discussions. These are available at the link below. You are welcome to add new ideas or resources as well!

Link to resources


Keynote: Fairness through Unfairness

Producing fair outcomes in a structural sense may actually require deliberately weighting software in favour of marginalised populations. The question of how to make algorithmic systems fair is a common one for researchers concerned with the social consequences of automation and machine learning. But what if it is the wrong question? What if the right one is to ask how we might make things unfair? In this talk, I will posit precisely that. Drawing on illustrative examples from housing to hiring, along with the history of fairness as a concept, I will argue that - taking into account the broader contexts of algorithmic systems - achieving fair outcomes may require developers to work towards unfair outcomes, first.


Speaker: Os Keyes https://ironholds.org/
Os Keyes is a PhD Candidate at the University of Washington’s Department of Human Centered Design & Engineering. Details TBD!


Keynote: Seldonian Toolkit

Software systems that use machine learning are routinely deployed in a wide range of settings, including medical applications, the criminal justice system, hiring, facial recognition, social media, and advertising. These systems can produce unsafe and unfair behavior, such as suggesting harmful medical treatments, making racist or sexist recommendations, and facilitating radicalization and polarization in society. To address this, we developed the Seldonian Toolkit for training machine learning models that adhere to fairness and safety requirements. The models the toolkit produces are probabilistically verified: they are guaranteed, with high probability, to satisfy the specified safety or fairness requirements even when applied to previously unseen data. The toolkit is a set of open-source Python packages that are available for download. A video demonstrating the Seldonian Toolkit is available at https://youtu.be/wHR-hDm9jX4/.
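
To give a flavor of what "probabilistically verified" means here, below is a simplified sketch in plain Python. This is not the Seldonian Toolkit's actual API; the bound, data, and thresholds are illustrative assumptions. The idea is that a fairness constraint is checked on held-out "safety" data, and the model is released only if a high-confidence bound on its violation stays below a tolerance.

    # Simplified sketch of a Seldonian-style safety test (not the toolkit's
    # real API): release a trained model only if a high-confidence bound on
    # its fairness-constraint violation stays below a tolerance epsilon.
    import numpy as np

    def hoeffding_upper_bound(samples, delta=0.05):
        """Upper bound on the mean of values in [0, 1] that holds with
        probability at least 1 - delta (Hoeffding's inequality)."""
        samples = np.asarray(samples, dtype=float)
        return samples.mean() + np.sqrt(np.log(1.0 / delta) / (2.0 * len(samples)))

    def safety_test(violations, epsilon=0.1, delta=0.05):
        """True only if, with probability >= 1 - delta, the expected
        constraint violation on unseen data is at most epsilon."""
        return hoeffding_upper_bound(violations, delta) <= epsilon

    # Hypothetical per-example violation scores from a held-out safety set.
    rng = np.random.default_rng(0)
    violations = rng.uniform(0.0, 0.1, size=2000)
    print("release model" if safety_test(violations) else "No Solution Found")

The real toolkit provides more sophisticated bounds and candidate-selection machinery; the point of the sketch is only the shape of the guarantee.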


Speaker: Austin Hoag
Dr. Austin Hoag is a machine learning engineer at the Berkeley Existential Risk Initiative (BERI), a nonprofit that collaborates with university research groups working to reduce existential risk. He is the co-creator and lead software engineer for the Seldonian Toolkit, a collaboration with machine learning researchers at the University of Massachusetts. Before BERI, he worked as a software developer at the Princeton Neuroscience Institute at Princeton University. He received his PhD in Physics from the University of California, Davis in 2018 and conducted postdoctoral research in astrophysics at UCLA.


Paper Submission

Papers will be submitted through HotCRP and will be subjected to double-blind review. Submissions must use the official ACM Primary Article Template (the ACM proceedings format). LaTeX users should use the sigconf option, as well as the review option (to produce line numbers for easy reference by reviewers) and the anonymous option (to omit author names). In addition, submitted papers must not exceed 8 pages, must be written in English, must present an original contribution, and must not be published or under review elsewhere.
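
For LaTeX users, a minimal preamble consistent with these instructions might look like the following (a sketch only; the title, names, and section text are placeholders, and the acmart class should come from the current ACM template distribution):

    \documentclass[sigconf,review,anonymous]{acmart}
    % sigconf:   ACM proceedings (conference) format
    % review:    adds line numbers for easy reference by reviewers
    % anonymous: hides author names for double-blind review
    \begin{document}
    \title{Your FairWare'23 Submission Title}
    \author{Your Name}  % masked in the compiled PDF by the anonymous option
    \affiliation{\institution{Your Institution}\country{Your Country}}
    \begin{abstract}
    One-paragraph abstract.
    \end{abstract}
    \maketitle
    \section{Introduction}
    Body text (at most 8 pages).
    \end{document}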

Two members of the program committee will review each paper, and the committee will select papers for presentation at the workshop based on quality, relevance, and the potential for starting meaningful and productive conversations.

Workshop Participation

At least one author of each accepted paper must register for the workshop. Each paper will be presented in a 15-20 minute presentation with follow-up questions and discussion.

 

Important Dates

  • Submission: 25 Jan
  • Notification of acceptance: 24 Feb
  • Camera-ready submission: 17 Mar
  • Workshop date: 2X May

Schedule


8:45 - 9:00 Welcome from the organizers

9:00 - 10:00 Keynote

      “Fairness through Unfairness” by Os Keyes, University of Washington

10:00 - 10:30 Paper Session 1

      “Fair-Siamese Approach for Accurate Fairness in Image Classification” by Kwanhyong Lee, Van-Thuan Pham, and Jiayuan He

10:30 - 11:00 Morning COFFEE

11:00 - 12:00 Paper Session 2

      “On Retrofitting Provenance for Transparent and Fair Software – Drivers and Challenges” by Jens Dietrich, Matthias Galster, and Markus Luczak-Roesch

      “Heavy-tailed Uncertainty in AI Policy” by Lelia Marie Hampton

12:00 - 12:30 Tutorial

      Quantitative and Qualitative Methods for Equitable Research and Development

12:30 - 13:45 LUNCH break (and networking!)

13:45 - 14:45 Keynote

      “Applying Safe and Fair Machine Learning Algorithms with the Seldonian Toolkit” by Austin Hoag, Berkeley Existential Risk Initiative (BERI)

14:45 - 15:15 Paper Session 3

      “Reflexive Practices in Software Engineering” by Alicia Boyd

15:15 - 15:45 Afternoon COFFEE

15:45 - 17:05 Workshop in a workshop: Teaching Ethics

17:05 - 17:15 Workshop Closing + Dinner plans

Topics of Interest

To support fairer "FAT" software, we aim to empower software developers, individuals, and organizations with methods and tools that measure, manage, and mitigate unfairness. Therefore, we ask for papers that explore the following questions:

  • How to identify bias in AI models?
  • How to explain the source or reason for this bias?
  • How to measure the level of bias in these systems? (a sketch of one such measure appears after this list)
  • How to mitigate bias by changing model training?
  • How to support explanations of automated decisions, and redress for stakeholders, to ensure accountability and transparency of deployed systems?
  • How to determine the trade-off between making fair(er) systems and other objectives of a system?
  • Are there inherently unfair social pressures that doom us to forever delivering unfair software?
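
As a concrete example of the "measure the level of bias" question above, the following is a minimal sketch in plain Python of one widely used group-fairness measure, the statistical parity difference. The data and names are illustrative assumptions; related metrics such as the equal opportunity difference are computed analogously.

    # Hypothetical illustration: measure bias as the difference in positive-
    # prediction rates between two groups (statistical parity difference).
    import numpy as np

    def statistical_parity_difference(y_pred, group):
        """Positive-prediction rate of group 1 minus that of group 0.
        Values near 0 suggest similar treatment on this one metric;
        this says nothing about other fairness notions."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        return y_pred[group == 1].mean() - y_pred[group == 0].mean()

    # Eight hypothetical applicants with a binary protected attribute.
    y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
    group  = [1, 1, 1, 1, 0, 0, 0, 0]
    print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5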

We are accepting contributions as full papers (4-8 pages), presenting either novel research results or a statement of vision or position, on one or more of the following perspectives:

  • Improving fairness -- Present a novel approach or evaluate an existing approach for software fairness. This can include, but is not limited to, the identification, explanation, measurement, and mitigation of bias.
  • Applying fairness -- Apply fairness methods in artificial intelligence, machine learning, requirements and design, testing, the software engineering lifecycle, and policy-making, among many other areas of interest.
  • Posing challenges -- Show the weak points in fairness methods and lead the way to novel research. Request new models, processes, metrics, and artifacts.
  • Collaboration studies -- Between researchers and industry, across industry, across domains and disciplines, or between research groups.


Programme Committee

  • Joymallya Chakraborty, Amazon
  • Alex Groce, Northern Arizona University
  • Christine Julien, University of Texas at Austin
  • Os Keyes, University of Washington
  • Rahul Pandita, GitHub
  • Siobahn Day Grady, North Carolina Central University
  • Gema Rodriguez-Perez, University of British Columbia
  • Muhammad Ali Gulzar, Virginia Tech
  • Mei Nagappan, University of Waterloo
  • Kevin Moran, George Mason University
  • Lelia Marie Hampton, Massachusetts Institute of Technology
  • Robert DeLine, Microsoft Research
  • Marc Canellas, Office of the Public Defender for Arlington County and the City of Falls Church
  • Mats Heimdahl, University of Minnesota

Organizing Committee

  • Brittany Johnson, George Mason University, USA
  • Tim Menzies, NC State University, USA
  • Federica Sarro, University College London, UK
  • Zhe Yu, Rochester Institute of Technology, USA
  • Yuriy Brun, University of Massachusetts Amherst, USA
  • Jeanna Matthews, Clarkson University, USA
  • Alicia Boyd, DePaul University, USA
  • Justin Smith, Lafayette College, USA