Big data doesn’t always mean better business

Ravi Rao argues that bad data, not just big data, should be top of mind for business and IT professionals. 

Today everyone is creating data: individuals, government agencies and businesses alike. In fact, the vast majority of all the data ever created was generated in just the last two years.

While this is probably not news to those of us who plan for, manage or process this ‘data deluge’, questions remain about best practices for the infrastructure changes needed to address big data. Without a framework, big data is just noise: enormous amounts of information drawn from a large and growing pool of internal and third-party sources. The question has shifted from “how do we get the data?” to “how do we manage, analyse and operationalise insights from it?”

Data and information technology resources are routinely mishandled and misunderstood. Expectations of IT departments are changing, and it is critical that they get data management right. Those that aren’t getting the data right already need to get their heads around big data from start to finish, because it is a massive business opportunity that isn’t going away anytime soon.

The sheer quantity and growing sources of data pose a huge challenge to information technology leaders. Those ready to meet this challenge must equip themselves with a plan that spans the entire enterprise and breaks down the barriers between departments. IT must think bigger, obtain executive buy-in and consider system integration as the foundation for big data success. It seems simple, right? So why isn’t it?

Big data means that small bad-data problems, which once went undiscovered, are now magnified into big ones. A duplicate customer record that once skewed a single report, for example, can now propagate into every downstream analysis that consumes it. Those big problems can undermine the very analytical results touted as one of big data’s virtues.

Outside of security, big data, spanning data management and analytics, usually represents the biggest IT cost for most companies. The pace at which many businesses want to consume, track and trend data is outpacing their ability to do it right.

It’s also true that data integration and data quality are the top challenges companies need to overcome before launching analytics initiatives. For this reason, bad data should be IT’s biggest concern. When it comes to big data, even small issues in data quality can create huge mistakes in decision making, but data-quality concerns don’t have to keep IT up at night.

With automated, continuous controls and deductive analysis, IT can be sure that mistakes will be flagged and reconciled, and that the right people will be alerted to the issue, maintaining good data across the board.

Automation should be a given: it’s better to automate these checks and let machines do what machines do best, as the sketch below illustrates. Similarly, problems with data quality can take your business decisions off course. End-to-end deductive analysis of data reduces that risk by automating the process, in turn reducing the cost of ensuring quality data from the get-go.
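To make that concrete, here is a minimal Python sketch of what such an automated, continuous control could look like. The rule names, owners and notify() stub are hypothetical illustrations, not a description of any particular product: each record is checked against a set of validation rules, failures are flagged and routed to an owner, and only clean records flow on to analytics.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                      # identifier for the control
    check: Callable[[dict], bool]  # returns True when the record passes
    owner: str                     # who is alerted on failure

# Hypothetical controls; a real deployment would load these from configuration.
RULES = [
    Rule("amount_non_negative", lambda r: r.get("amount", 0) >= 0, "finance-team"),
    Rule("customer_id_present", lambda r: bool(r.get("customer_id")), "crm-team"),
]

def notify(owner: str, rule: str, record: dict) -> None:
    # Placeholder: in practice this would raise a ticket or page an alert queue.
    print(f"[ALERT] {owner}: rule '{rule}' failed for {record}")

def run_controls(records: list[dict]) -> list[dict]:
    # Flag every failing record, alert its owner, and pass on only clean data.
    clean = []
    for record in records:
        failures = [rule for rule in RULES if not rule.check(record)]
        for rule in failures:
            notify(rule.owner, rule.name, record)
        if not failures:
            clean.append(record)
    return clean

batch = [
    {"customer_id": "C001", "amount": 120.50},
    {"customer_id": "", "amount": -5.00},  # fails both controls
]
print(f"{len(run_controls(batch))} of {len(batch)} records passed")

Run continuously over every feed, a control like this turns data quality from a periodic audit into a standing guarantee: the flag, reconcile and notify loop described above.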

Bad data compounds in severity when organisations assume it is trustworthy and feed it into the analytics behind their decisions. When bad data is ignored, at any stage, it can result in poor insights, misleading interpretations and off-the-mark outcomes. Making sure that data is reliable needs to be the first step in putting big data into action, and automated, integrated data-quality solutions can help IT keep the organisation on a true heading, arriving at the right destination every time.

Ravi Rao is the senior vice president of analytics at Infogix.