On Friday the 23rd and Saturday the 24th of February, I attended the Conference on Fairness, Accountability, and Transparency (FAT) at New York University. There were over 500 attendees with impressive coverage of disciplines: papers were presented by lawyers, machine learning researchers, philosophers, and data scientists. The main themes of the conference were:
- Methods and considerations for the ‘Interpretability’ and ‘Explainability’ of AI;
- Defining, detecting, and measuring ‘Discrimination’ in socio-technical systems; and
- Issues and challenges of ensuring ‘Fairness’ in machine learning and automated systems.
Click here for the online program and research links.
There were plenty of interesting talks, but here’s a quick summary of the research that I found most interesting:
Potential for Discrimination in Online Targeted Advertising
Till Speicher, Muhammad Ali (MPI-SWS), Giridhari Venkatadri (Northeastern University), Filipe Nunes Ribeiro (UFOP and UFMG), George Arvanitakis (MPI-SWS), Fabrício Benevenuto (UFMG), Krishna P. Gummadi (MPI-SWS), Patrick Loiseau (Univ. Grenoble Alpes), Alan Mislove (Northeastern University)
This research argues that Facebook isn't doing enough to prevent discrimination in its targeted advertising. Targeted ads are shown only to the subset of the population associated with certain attributes (features) selected by the advertiser. Facebook gathers and infers hundreds of attributes about the individuals who use its platform, covering demographics, behaviours, and interests.
Targeting ads on certain sensitive attributes, such as race or gender, is however illegal. The authors argue that these regulations aren't sufficient and demonstrate that a malicious advertiser can still create discriminatory ads without using sensitive attributes, for example by targeting seemingly neutral attributes that act as proxies for them. Timely research given the recent Cambridge Analytica scandal!
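One way to picture the proxy problem they describe: even an advertiser who never touches a sensitive attribute can end up reaching a demographically skewed audience if the 'neutral' attributes they target correlate with sensitive ones. Here's a minimal sketch of that effect (my own toy Python example, not the authors' method; the attribute names and user records are made up):

```python
import pandas as pd

# Toy user base with one sensitive attribute and one "neutral" interest attribute.
# All values are invented purely for illustration.
users = pd.DataFrame({
    "ethnicity":        ["A", "A", "A", "B", "B", "B", "B", "B"],
    "likes_interest_x": [True, True, True, True, False, False, False, False],
})

# An advertiser targets only on the non-sensitive interest...
audience = users[users["likes_interest_x"]]

# ...but the reached audience can still be heavily skewed on the sensitive attribute.
base_rate     = (users["ethnicity"] == "A").mean()      # share of group A overall
audience_rate = (audience["ethnicity"] == "A").mean()   # share of group A in the ad audience

print(f"Group A share: {base_rate:.0%} of all users vs {audience_rate:.0%} of the targeted audience")
```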
“Meaningful Information” and the Right to Explanation
Andrew Selbst (Data & Society Research Institute), Julia Powles (Cornell Tech, NYU)
This presentation provided an overview of the European Union's General Data Protection Regulation (GDPR), due to come into effect in May 2018, which includes provisions targeting the routine use of algorithmic decision-making. Chief among these is a 'right to explanation': individuals who are 'significantly affected' by automated decision-making will have the right to ask for an explanation of the algorithmic decision made about them.
The authors discussed the complications and the benefits of bringing the laws into effect, and the precedent they could set for other regions. The paper linked above is only an extended abstract, but I found their presentation to be informative and concise.
Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment
Chelsea Barabas, Madars Virza, Karthik Dinakar, Joichi Ito (MIT), Jonathan Zittrain (Harvard)
This research questions the purpose of using regression-based prediction in risk assessments. Rather than using machine learning techniques to predict which individuals will commit future crimes, the authors argue that such techniques should be applied to better understand the social, structural, and psychological drivers of crime. I thought it was a compelling perspective on addressing some of the fairness issues with machine learning systems used for criminal justice purposes.
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Joy Buolamwini (MIT), Timnit Gebru (Microsoft Research)
In this presentation, the authors described an approach for evaluating the biases present in automated facial analysis algorithms and datasets. Auditing three major commercial facial analysis systems, they demonstrated substantial disparities in classification accuracy across skin colour and gender, with the worst performance on darker-skinned females. They attributed this to biases in the training data and algorithmic specification, which disproportionately favoured white males.
This was the standout presentation for me: clear and engaging, but most impressive was the impact of the research. As a result of their findings, IBM swiftly updated its facial analysis software to address some of the bias concerns, materially improving its accuracy. Great to see research resulting in change!
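To make the evaluation approach concrete, here's a small sketch (my own, not the authors' code) of the kind of intersectional breakdown described above: a classifier can look accurate overall while performing much worse on one skin-type and gender subgroup. The predictions and labels below are made up for illustration:

```python
import pandas as pd

# Invented gender-classification results, tagged with subgroup attributes.
results = pd.DataFrame({
    "skin_type": ["darker", "darker", "darker", "lighter", "lighter", "lighter"],
    "gender":    ["female", "female", "male",   "female",  "male",    "male"],
    "label":     ["female", "female", "male",   "female",  "male",    "male"],
    "predicted": ["male",   "female", "male",   "female",  "male",    "male"],
})

# Overall accuracy hides the disparity...
print("Overall accuracy:", (results["label"] == results["predicted"]).mean())

# ...while a per-subgroup (intersectional) breakdown exposes it.
by_group = (results.assign(correct=results["label"] == results["predicted"])
                   .groupby(["skin_type", "gender"])["correct"].mean())
print(by_group)
```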
Fairness in Machine Learning: Lessons from Political Philosophy
Reuben Binns (University of Oxford)
What does it mean for a machine learning model to be 'fair'? How do our conceptions of fairness reconcile with the probabilistic nature of machine learning? This presentation drew on moral and political philosophy to frame the challenges of defining fairness in machine learning. I thought it was an interesting application of an age-old debate.
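The paper itself is conceptual rather than computational, but a toy calculation of my own (not from the paper) helps show why the question is hard: different formalisations of 'fair' can give conflicting verdicts about the same classifier. In the made-up example below, the classifier satisfies demographic parity (equal positive prediction rates across groups) yet violates equal opportunity (unequal true positive rates):

```python
import pandas as pd

# Invented predictions for two groups of four people each.
df = pd.DataFrame({
    "group":     ["a"] * 4 + ["b"] * 4,
    "label":     [1, 1, 0, 0,  1, 1, 0, 0],
    "predicted": [1, 1, 0, 0,  1, 0, 1, 0],
})

# Demographic parity: do both groups receive positive predictions at the same rate?
parity = df.groupby("group")["predicted"].mean()
print("Positive prediction rate per group:")
print(parity)   # equal for groups a and b

# Equal opportunity: among truly positive cases, are true positive rates equal?
tpr = df[df["label"] == 1].groupby("group")["predicted"].mean()
print("True positive rate per group:")
print(tpr)      # unequal, so this notion of fairness is violated
```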