Manish Sinha is a Managing Director at Dun & Bradstreet.
The success of technology will hinge to a great extent on the ability to exploit data, as wave after wave of technological innovation redefines solutions. Data volumes have grown to unimaginable levels, and that growth is driving the transformative nature of the latest technologies. A study conducted by IDC on behalf of Seagate projects that seven years from now, data moving across various systems will reach 163 zettabytes, a reminder of the sheer volume of data that will need to be stored and processed.
However, data and its use can experience a hard landing the moment the data crosses the threshold of its 'half-life'. Data loses its 'edge' as it becomes outdated and irrelevant, and it is then only a question of time before the 'asset' turns into a 'liability'. Businesses that rely on technology consequently have to contend with a new demand: data not only needs to be used right, the right data needs to be used.
Data analytics can only be as good as the data that goes into the analysis. Used correctly, data has the power to lend an edge to decisions, backed by analytical insights. However, data that has lost its value will blight decisions, blunting that edge or, worse still, damaging the prospects of the organisation.
Data adds value to decisions and processes when it is 'accurate', 'actionable' and available in a 'timely' manner. Failure to meet any of these criteria renders the data irrelevant and throws the process into disarray.
In the US alone, bad data costs the economy a staggering $3.1 trillion annually. Duplicate, incomplete and inconsistent data can adversely impact a campaign or routine processes. The disparate nature of data lends it an informative edge, funnelling different sources of information into one view. The challenge, however, lies in identifying the right data and using it correctly.
Clearly, the need of the hour is a solution that offers reliable information to stakeholders in a manner that is both consistent and accurate. This turns the spotlight on Master Data Management (MDM), which helps organisations maintain data that is not only reconciled but also free from redundancy and duplication throughout the enterprise.
When entities grow larger, when transitions occur from legacy systems, or when transactions or the subscriber base acquire a wider footprint, data reconciliation processes become complex and unmanageable, resulting in poorly integrated master databases whose effects ripple through and cripple an organisation's efficiency. The single point of reference that MDM promises reduces an organisation's exposure to the risks of relying on multiple, conflicting sources of information. More importantly, organisations can purge databases of irrelevant and outdated data, creating repositories of accurate and reliable information.
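To make the idea of a single point of reference concrete, here is a minimal sketch of MDM-style reconciliation. The records, field names and merge rule (normalise the email, keep the most recently updated version as the 'golden record') are all hypothetical illustrations, not any vendor's actual method:

```python
# Hypothetical customer records pulled from two source systems;
# records 1 and 2 are duplicates that differ only in email casing and recency.
records = [
    {"id": 1, "email": "a@example.com", "name": "Asha Rao", "updated": "2024-03-01"},
    {"id": 2, "email": "A@Example.com", "name": "Asha Rao", "updated": "2024-06-15"},
    {"id": 3, "email": "b@example.com", "name": "Ben Ortiz", "updated": "2024-01-10"},
]

def build_master(records):
    """Keep one 'golden record' per normalised email, preferring the most
    recently updated version -- a toy model of MDM reconciliation."""
    master = {}
    for rec in records:
        key = rec["email"].strip().lower()  # normalise before matching duplicates
        # ISO date strings compare correctly as plain strings
        if key not in master or rec["updated"] > master[key]["updated"]:
            master[key] = rec
    return list(master.values())
```

Running `build_master(records)` collapses the three source records into two golden records, dropping the stale duplicate. A production system would add fuzzy name matching and survivorship rules per attribute, but the principle is the same: one reconciled record per real-world entity.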
Data half-life changes depending on the kind of decision that has to be made. For tactical decisions, data has a half-life of roughly 30 minutes; for operational decisions, about eight hours; and for strategic decisions, about 56 hours.
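One illustrative way to read these figures is as an exponential decay of a data point's decision value. The decay model itself is an assumption for illustration, not something the half-life figures above prescribe:

```python
# Half-lives from the text, in hours (tactical = 30 minutes).
HALF_LIVES_HOURS = {
    "tactical": 0.5,
    "operational": 8.0,
    "strategic": 56.0,
}

def remaining_value(decision_type: str, age_hours: float) -> float:
    """Fraction of a data point's original decision value after age_hours,
    assuming exponential decay at the half-life for that decision type."""
    half_life = HALF_LIVES_HOURS[decision_type]
    return 0.5 ** (age_hours / half_life)
```

Under this model, tactical data is worth half its original value after just 30 minutes, while strategic data retains half its value for more than two days, which is why refresh cycles should differ by decision type.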
Improvements in data quality improve business outcomes, while data that has crossed its half-life threshold is likely to have the opposite effect on a business decision or action. For instance, a social media interaction statistic becomes irrelevant after a particular point in time. The data can still be compared with newer statistics to understand growth and demographics, but holding onto old data forever serves no useful business purpose beyond being a matter of record.
Similarly, an organisation that holds records of customer needs must continuously update its databases to maintain accurate information about customers. By retaining irrelevant or incorrect information, such as duplicate data, organisations risk wasting resources on its maintenance, in addition to the possibility of running a campaign with wrong information about targeted customers. Two aspects need to be considered: the tangible and the intangible effects of not having MDM in place.
Data will forever remain in a state of flux, and businesses need to get this right. Demographic data, for instance, will always be changing, and a business that needs to stay informed about it must have updated information. Data about a city will record how many people fall into a particular age group or income group; this will never be static, and a business that relies on data that is no longer relevant will get the wrong picture, left fighting for a toehold in a world where the advantage of time is priceless. Similarly, a business that runs a campaign on the basis of outdated contact data will find its emails bouncing back. An organisation that telecalls existing or prospective customers runs the risk of being blocked, if the calls are made to the wrong individuals or to clients who have moved to different services. Beyond the compliance aspect, the resulting loss of reputation can seriously impact a client's business, both directly and indirectly.
With one-fifth of all data turning out to be bad, it is all the more important for organisations to rely on Master Data Management to survive the cut and thrust of competition. Corporate data grows by 40 per cent every year, which effectively means the volume of bad data keeps growing too. Poor or no-longer-relevant data is all it takes to take the shine off a campaign. Organisations need to keep their data refreshed and updated to avoid the smoke and mirrors of irrelevant data caused by data half-life.