Artificial intelligence (AI) algorithms discriminate.
This timely statement defines a topic that has drawn increasing attention from business law scholars, law and computation researchers, and data scientists.
Examples of algorithmic discrimination include Airbnb pricing discrimination against users with African American-sounding names, image-recognition errors by Google and Nikon that discriminated against Black and Asian people respectively, and a UnitedHealth Group algorithm that prioritized white patients over sicker Black patients.
This past April, the National Science Foundation (NSF) announced the first 10 recipients of grants issued through a $20 million collaboration with Amazon to support research on fairness in AI.
In July, at the seventh Workshop on Automated Machine Learning at the International Conference on Machine Learning, Amazon researchers won a best-paper award for a paper that addresses the problem of ensuring the fairness of AI systems.
California Western Associate Professor Tabrez Ebrahim was recently awarded a 2020 Innovation, Business & Law Center Prize from the University of Iowa Innovation, Business & Law (IBL) Center for his scholarly paper Normative Considerations of Algorithmic Discrimination & Prescriptions for Data Fairness. The paper will be published in 2021 in the West Virginia Law Review.
Ebrahim will present his winning paper online on Nov. 12 as part of the fall 2020 IBL Center speaker series. The series addresses racism and ongoing injustices embedded within legal structures and the contexts of business and innovation, outside of the criminal justice system.
“This is very much a topic whose time has come,” says Ebrahim. “The prevention of algorithmic discrimination is a new concern to businesses and business law scholars. As algorithms are increasingly part of more business models and marketing plans, discrimination appears in the shadows of value creation and value capture. It may be baked into data-driven decisions, inadvertently or intentionally, that require closer inspection.”
Decision making by algorithms rather than humans should, in theory, lead to greater fairness, since everyone is judged according to the same rules. According to Ebrahim, however, algorithms can actually reinforce discrimination.
“The real problem with algorithmic discrimination is not that it fails to overcome the same problems as human-based discrimination, but that it introduces additional sources of discrimination like biased training data or substitution of a desired performance trait with an easily observable one, such as race,” says Ebrahim.
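The proxy problem Ebrahim describes can be sketched with a toy example (a hypothetical illustration for this article, not drawn from the paper): a model trained on biased historical data can reproduce discrimination through an easily observable stand-in variable, here a zip code, even when the protected attribute itself is never used.

```python
# Hypothetical illustration: biased training data plus a proxy variable.
# Toy historical records: (zip_code, qualified, hired). Zip code acts as a
# proxy for a protected attribute; past hiring favored zip "A" regardless
# of qualification.
history = [
    ("A", True, True), ("A", False, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

def hire_rate(records, zip_code):
    """Fraction of applicants from zip_code who were hired historically."""
    outcomes = [hired for z, _, hired in records if z == zip_code]
    return sum(outcomes) / len(outcomes)

# A naive "model" that scores applicants by the historical hire rate of
# their zip code simply reproduces the bias baked into the training data.
def score(zip_code):
    return hire_rate(history, zip_code)

print(score("A"))  # 0.75 -- favored group
print(score("B"))  # 0.25 -- disfavored group, despite more qualified applicants
```

The model never sees the protected attribute, yet its outputs track the historical disparity exactly, which is why "blindness" to a protected trait is not, by itself, a safeguard.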
While the effects of AI algorithms have gained media attention, and even though research grants such as the NSF collaboration with Amazon have shone more light on the subject, Ebrahim believes that how the law should prevent their discriminatory harms in data-driven business models and marketing plans is under-addressed in scholarship.
Ebrahim’s paper articulates an alternative methodology for the law in the context of algorithmic discrimination in business: a social welfare perspective in which a proposed social planner (a central AI agency) that cares about data fairness regulates private actors using potentially discriminatory algorithms, while preserving some private-sector flexibility.
In terms of prescriptions, Ebrahim’s paper argues that policymakers must be cognizant of data fairness when enforcing or regulating anti-discrimination norms in private law and business.
“As AI becomes more and more pervasive in business and business law, it presents opportunistic behaviors for businesses, entrepreneurs, and marketers, and necessitates prescriptions for competition law and policy,” says Ebrahim.
To join the IBL Center Fall Speaker Series session with Tabrez Ebrahim on Nov. 12 at 10:30 a.m. PST, click here.