The leaders of Amazon, Apple, Facebook, and Google have some serious explaining to do about bias and discrimination when they appear Wednesday at an antitrust hearing before the House Judiciary Committee.
The abuse of trust by the platform-based companies we rely on most has largely flown under the radar as a global pandemic heightens and highlights fissures in our society. But our data and our choices continue to be manipulated in problematic ways — often by algorithms that subtly introduce bias into the prices we pay and the information and options made available to us. It is essential that we hold our digital gatekeepers accountable.
The algorithms at issue have a veritable fire hose of our data at their disposal, and they aren’t the neutral equations we might assume them to be. They are the product of humans, and because of that they have a tendency to perpetuate human biases.
Biased algorithms across Big Tech
To cite just three examples:
►In 2017, Consumer Reports and ProPublica discovered that drivers living in predominantly minority urban neighborhoods were charged higher auto insurance premiums on average than drivers with similar safety records in nonminority neighborhoods with comparable levels of risk.
►In 2018, software created by Amazon to help companies identify the most promising job candidates was discovered to be biased against women, according to Reuters. The algorithm had learned to spot “good” résumés on a diet of examples heavily skewed toward males.
►Apple’s new credit card came under investigation in November, after a customer complained that its lending algorithm offered him a line of credit 20 times higher than it offered his wife — even though her credit score was better than his.
We would never tolerate that sort of blatant discrimination if it happened at a neighborhood grocery store or a car lot, but we have quietly allowed it to run rampant in the digital marketplace without oversight or accountability.
A follow-up joint investigation recently conducted by Consumer Reports and The Markup revealed how better data can alter the power relationship between company and consumer. The latest example of how algorithms, however unintentionally, negatively affect our lives and our pocketbooks: Allstate, the fourth largest auto insurer in the country, proposed big premium hikes exclusively for customers who its formulas concluded were less likely than others to shop around.
In targeting what the investigation concluded was a “suckers list” of drivers deemed by an algorithm to be less likely to switch providers, Allstate used factors that have nothing to do with consumers’ driving records or their risk of filing a claim. In this case, middle-aged consumers ended up being discriminated against for no reason other than their shopping tendencies. The result was that they were charged substantially more for the same coverage.
Facial recognition algorithms used in police departments have been found to misidentify African American and Asian faces up to 100 times as often as Caucasian faces, leading to false arrests and baseless confrontations.
Boston is among the municipalities that have recently taken steps to prevent facial recognition technology from being used by city agencies, including the police. Amazon has imposed a one-year suspension on the sale of its Rekognition software to law enforcement.
Progress has been made on this front in part because of efforts by Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, and others to call attention to the very real potential harms of this technology.
Require fairness from tech giants
In the years ahead, algorithms are poised to influence an ever larger share of what we pay, receive, see, learn and decide between — from the cost of goods and services to the headlines and search results that do and do not make it into our personal feeds. As their influence rises, the question becomes more critical: How can we guard against algorithmic biases and hold our tech giants accountable for maintaining fairness in the digital marketplace?
So far, we haven’t pursued policies to ensure that fairness, or even transparency for that matter. We haven’t created avenues of recourse for consumers who get the short end of the stick. We also know that the industry can’t be counted on to self-regulate; in many cases, companies aren’t even aware that potential discrimination is occurring until journalists or customers happen to unravel it. Too often, the watchdogs aren’t watching closely enough.
The good news is that consumers hold tremendous power to set us on a better path. By wielding our collective influence, we can press for policymakers to enact new laws and standards to bring fairness and transparency to the hidden world of algorithms. Companies should not be permitted to use “proxy” data, like users’ ZIP codes or credit scores, in algorithms where it isn’t relevant — these are data points that frequently lead to discriminatory outputs. And we need vigorous oversight and enforcement of laws that prohibit bias.
As the CEOs of the most powerful tech companies take questions, we must get answers on platform accountability and plans to limit discrimination. Many biases may still be hardwired in our society, but that doesn’t mean we have to sit idly by as they replicate themselves in the digital economy. It is within our power — and, indeed, it is our responsibility — to ensure that the digital world evolves in the direction of greater fairness and greater trust.
Marta L. Tellado is the president and chief executive officer of Consumer Reports. Follow her on Twitter: @MLTellado
This article originally appeared on USA TODAY: Big Tech’s biased algorithms abuse consumers and limit their choices