In her remarkable book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, former Wall Street quant Cathy O’Neil paints an incredibly sobering picture of organizations using advanced analytics to make critical decisions without fully considering the consequences.

Unquestionably, AI will become an indispensable tool for automating key processes, finding hidden patterns in large datasets and driving greater efficiencies across the modern enterprise in thousands of ways.

From my perspective as a technology leader, I am confident that AI will emerge as a force for good, delivering amazing opportunities and benefits to billions of people in markets all over the world.

That said, the road to AI adoption will not be smooth. There are issues that cannot be glossed over or swept under the carpet in our zeal to move forward and gain competitive advantages.

One of the major challenges is algorithmic bias. Although we tend to assume that data science is inherently unbiased and objective, the truth is that algorithms often reflect the biases of their creators, even when those biases are invisible to the creators themselves. Even worse is the unpleasant reality that machine learning algorithms can actually amplify existing biases, making bad situations worse.

A recent article in MIT Technology Review reveals the magnitude of the problem. The article cites the story of a web developer who discovered that his wife – who had a better credit score than he did – was given a far lower credit limit when they applied for an Apple Card. According to the article, the machine learning algorithm that determined the credit limits based its decision on data that reinforced longstanding biases against women. The algorithm could have been tweaked to take applicants' gender into account and correct for the skew, but that would have been illegal in the U.S., where gender blindness is legally required when making decisions about creditworthiness.

“But in machine learning, gender blindness can be the problem. Even when gender is not specified, it can easily be deduced from other variables that correlate highly with it. As a result, models trained on historical data stripped of gender still amplify past inequities,” writes Karen Hao in the Technology Review. “The same applies to race and other characteristics. This is likely what happened in the Apple Card case: because women were historically granted less credit, the algorithm learned to perpetuate that pattern.”
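To make that mechanism concrete, here is a minimal sketch of proxy leakage on synthetic data. The feature names and coefficients are hypothetical assumptions chosen purely for illustration – this is not a model of how any real card issuer works. Even though gender is stripped from the training inputs, the model recovers it through a correlated feature and carries the historical gap forward.

```python
# A minimal, self-contained sketch of proxy leakage on synthetic data.
# Feature names and coefficients are hypothetical, chosen only to
# illustrate the mechanism; not how any real issuer's model works.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: 0 = man, 1 = woman. Never shown to the model.
gender = rng.integers(0, 2, size=n)

# An innocuous-looking input that happens to correlate with gender,
# e.g. a spending-category score (purely hypothetical).
proxy = gender + rng.normal(0, 0.5, size=n)

# A legitimate, gender-independent signal of creditworthiness.
credit_score = rng.normal(700, 50, size=n)

# Historical limits encode past bias: equal scores, lower limits for women.
historical_limit = (
    50 * (credit_score - 600) - 2_000 * gender + rng.normal(0, 500, size=n)
)

# Train "gender-blind": the protected attribute is stripped from the inputs.
X = np.column_stack([proxy, credit_score])
model = LinearRegression().fit(X, historical_limit)

pred = model.predict(X)
print(f"mean predicted limit, men:   {pred[gender == 0].mean():9,.0f}")
print(f"mean predicted limit, women: {pred[gender == 1].mean():9,.0f}")
# A large gap persists: the model recovers gender through the proxy
# feature and reproduces much of the historical disparity.
```

The lesson of the sketch is that removing a protected attribute from a model's inputs does not remove it from the model's behavior as long as correlated features remain.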

As technology executives, we need to be aware of these kinds of issues and challenges. It would be easy to dismiss the Apple Card story as an outlier, but my intuition tells me the problem is more widespread. At a minimum, it would be prudent to raise the question with AI providers early on.

I predict that more stories of algorithmic bias will emerge before the industry figures out how to manage the problem. Reputable AI vendors are already aware of algorithmic bias and are working diligently to minimize its negative impacts.
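When you do raise the question with a provider, it helps to ask for something measurable. The sketch below shows one simple check a team might run as a starting point: comparing a model's favorable-outcome rates across groups. The group labels, rates and data are synthetic assumptions for illustration, not regulatory guidance.

```python
# A hedged sketch of a basic disparity check: compare a model's
# favorable-outcome rates across two groups. The group labels, rates
# and data below are synthetic assumptions for illustration only.
import numpy as np

def outcome_rate_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Difference in favorable-outcome rates between group 0 and group 1."""
    return decisions[group == 0].mean() - decisions[group == 1].mean()

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1_000)

# Simulated binary approval decisions with a built-in 15-point gap.
decisions = (rng.random(1_000) < np.where(group == 0, 0.70, 0.55)).astype(int)

gap = outcome_rate_gap(decisions, group)
print(f"approval-rate gap between groups: {gap:.1%}")
# A persistent gap does not prove discrimination on its own, but it is
# exactly the kind of measurable signal worth raising with a vendor.
```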

At HMG Strategy, we're actively supporting the next great generation of technology leaders. We've created a truly global peer-to-peer platform to deliver world-class insights and ideas to technology executives everywhere. Please join us at one of our upcoming summits and learn how to lead, re-imagine and reinvent the modern enterprise to create a culture of genius and drive growth in unprecedented times.