Elon Musk was in the headlines again last week, after tweeting that Tesla might become a privately owned company. Wall Street pundits have called Musk "the world's greatest showman," and now the SEC might be investigating his tweets to see if they violated U.S. securities law.

Musk is unquestionably an inspiring and energizing force in the global economy. But in today's newsletter, I'm focusing on another aspect of Musk, namely his stated concerns about artificial intelligence.

Musk isn't alone in expressing anxiety about AI. The late Stephen Hawking also warned that AI could wreak havoc on a world that's not prepared to manage its potentially destructive power.

AI isn't just another "feature" that will be bolted onto existing systems. AI will quickly replace those systems, rendering them obsolete. Once AI spreads through the IT universe, all of our roles as technology leaders will fundamentally change. In short, we will experience the same disruption we so often watch unfold in other industries.

Imagine a world with no systems administrators, software developers or business analysts. That scenario will become a reality sooner than we imagine. In the very near future, AI will be baked deeply into every conceivable system and platform.

How will IT leaders add value when most of IT becomes fully automated? That's a hard question, and we must begin considering it seriously. The most pressing issue is acquiring talent. You'll need a process for identifying, recruiting, hiring and retaining people who understand AI and know how to use it. You'll need to create appealing work environments to attract the best minds and keep them focused.

It's not too soon to begin setting up talent pipelines. Are you reaching out to local colleges and universities? Are you actively recruiting people with data science skills? If you aren't, you should be.  

Mark van Rijmenam of the Netherlands wrote a good post last week on the difference between "good AI" and "bad AI." He argues that when AI is applied thoughtfully and carefully, its benefits outweigh its potential for harm. But when AI is applied haphazardly or indiscriminately, it can morph into something genuinely dangerous.

"Good AI," he writes, must be "explainable." In other words, it can't be a black box. AI decision-making processes must be visible and understandable to the human mind. In other words, we need to know how it works and how it's making decisions. When we don't require AI processes to be transparent and understandable, we're abdicating our responsibilities. 

AI feeds on data, so we also need to make sure that our data sources are clean and unbiased. We've already seen instances in which biased data has led AI to make biased decisions, so this isn't science fiction. It's already happening.
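As a rough illustration of what checking for bias can look like at its simplest, here's a sketch in plain Python; the records and group names are hypothetical, invented purely for this example. It compares positive-outcome rates across groups, a crude form of demographic-parity check:

```python
# A minimal sketch of one basic bias check on training data:
# compare the rate of positive outcomes across groups.
# All data here is hypothetical, for illustration only.
from collections import defaultdict

# (group, outcome) pairs, e.g. historical loan approvals.
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals = defaultdict(int)
positives = defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    positives[group] += outcome

# A large gap in positive rates is a red flag: a model trained on
# this data may simply learn and replicate the historical bias.
rates = {g: positives[g] / totals[g] for g in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: positive rate {rate:.0%}")
print(f"gap: {max(rates.values()) - min(rates.values()):.0%}")
```

Real bias audits go much deeper than this, but even a check this simple will surface the kind of skew an uninspected model would happily learn and repeat.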

In aviation, pilots are taught to stay ahead of their airplane's power curve. Allowing a plane to get "behind the power curve" is an invitation to disaster, since the aircraft won't have enough power to recover if a problem arises.

In a sense, we're allowing ourselves to fall behind the AI power curve. After a certain point, it will be impossible to recover if something goes wrong. That's not where we want to be.