Traditionally, CIOs have been responsible for maintaining and operating systems of record. Someone else - usually in finance or accounting - was responsible for ensuring the accuracy and veracity of the data in those systems.

We assumed the data was mostly accurate and that if we did a good job of managing the data, we could provide users with a "single version of the truth."

But that assumption - that the data is mostly accurate - has been undermined in recent years. The term "fake news" doesn't just apply to the news we see on cable television. Any dataset can be altered or "faked," and that's a problem we all need to confront.

Today, most of us are keenly aware that much of what we see or read has been edited, altered, massaged or spun to reflect a particular viewpoint. We no longer blindly trust the information we receive.  

Since the dawn of science, data has been the gold standard of truth. Suddenly and unexpectedly, people are questioning the primacy of data.

How does that shift in belief impact our role as stewards of data and information? It's too early to tell, but it seems that our responsibilities will have to evolve to keep pace with the changing perceptions about information.

I recommend reading a fascinating article in Fast Company by Hootsuite CEO Ryan Holmes. In his article, Holmes argues forcefully in favor of taking stronger steps to ensure the accuracy and integrity of content. From my perspective, content includes data and information. 

"The way forward isn't just an algorithm tweak or a new set of regulations. This challenge is far too complex for that. We're talking, at root, about faith in what we see and hear online, about trusting the raw data that informs the decisions of individuals, companies, and whole countries. The time for a Band-Aid fix has long passed," writes Holmes.

He foresees the growth of a new industry based on content validation. If his prediction is accurate, CIOs and other senior technology executives will be responsible for evaluating, selecting and deploying the most appropriate content validation solutions for their companies.

It's likely that many of the solutions for validating content will be based on artificial intelligence, which has a knack for spotting suspicious patterns in data. But reliable AI solutions require machine learning processes, which feed hungrily on mountains of data. How do we verify and validate the data used in the machine learning processes? 
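One practical answer is to gate the training pipeline with basic sanity checks before any record reaches a machine learning process. The sketch below is a minimal, hypothetical illustration of that idea; the field names, valid ranges, and the `validate_records` helper are all illustrative assumptions, not a specific product or the approach Holmes describes.

```python
# Minimal sketch of pre-training data validation: before records feed a
# machine learning pipeline, reject values that look like "garbage" or
# tampering. Field names and ranges below are hypothetical examples.

def validate_records(records, required_fields, valid_ranges):
    """Return (clean, rejected) lists; reject records that fail checks."""
    clean, rejected = [], []
    seen = set()
    for rec in records:
        # Check 1: every required field must be present.
        if not all(f in rec for f in required_fields):
            rejected.append((rec, "missing field"))
            continue
        # Check 2: numeric values must fall inside plausible ranges.
        bad = [f for f, (lo, hi) in valid_ranges.items()
               if not (lo <= rec[f] <= hi)]
        if bad:
            rejected.append((rec, f"out of range: {bad}"))
            continue
        # Check 3: drop exact duplicates, a common sign of injected data.
        key = tuple(sorted(rec.items()))
        if key in seen:
            rejected.append((rec, "duplicate"))
            continue
        seen.add(key)
        clean.append(rec)
    return clean, rejected

records = [
    {"id": 1, "revenue": 120.0},
    {"id": 2, "revenue": -5.0},   # implausibly negative revenue
    {"id": 1, "revenue": 120.0},  # exact duplicate of the first record
]
clean, rejected = validate_records(
    records,
    required_fields={"id", "revenue"},
    valid_ranges={"revenue": (0.0, 1_000_000.0)},
)
```

Checks this simple won't catch a sophisticated forgery, but they establish a verifiable baseline: every record that reaches the model has passed an explicit, auditable set of rules.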

We all remember GIGO, which stood for "garbage in, garbage out." Somehow, we let ourselves believe that we had solved the GIGO problem. Now it seems to have returned with a vengeance.