Stories about big data appear in The Wall Street Journal with some regularity. But it’s rare to see one like “Big Data Chips Away at Costs,” which ran recently in WSJ’s CFO Journal and does a great job of describing how the proper use of big data can help CFOs improve financial performance by gaining a better understanding of business drivers and hidden costs.

The article is available only to subscribers, so here’s a recap: CFOs are analyzing big data to determine what parts of the business aren’t working (GM crunched parts costs, labor trends and market predictions to figure out it had to pull out of the European auto market); trim capital spending (Planet Fitness uses data on guest traffic patterns to lengthen the life of treadmills and other capital equipment); meet peak demand without increasing labor costs (Lowe’s uses security cameras to track customers in stores so it can better understand when stores need more employees on the floor and when they need fewer); and free up cash (AT&T boosted free cash flow by a third, in part by using big data to pinpoint use cases where videoconferencing could replace costly in-person travel).

But as the article points out, some CFOs have concerns. There’s such a thing as too much information, they say. And because big data now comes from so many different sources, those who have been on the fence about it may stay there a while longer. In fact, 40 percent of CFOs surveyed late last year by American Express Global Corporate Payments said they had no plans to invest in big data initiatives over the next year.

To these CFOs, big data probably seems to be little more than a distraction – one they don’t have time for. And I’d argue they’re right, if their view of using big data is to gulp down great masses of information in the hopes of accidentally discovering a tasty morsel of insight.

Bring It Into Context

But if you bring big data into the context of financial and operational processes – that is, begin with an understanding of what you want from the data, and why it’s important – then you don’t have to swallow an ocean.

What the companies profiled in CFO Journal already know – and what reluctant CFOs might suspect – is that big data alone has very little value without context. Context is the meaning that surrounds the data. It’s the “secret sauce” that helps separate the useless information, of which there is plenty, from the really useful stuff. It helps you separate the tiny needle from the big data haystack. Take the Lowe’s example. If you just looked at the raw number of visitors a store gets in a day, well, that’s helpful. But if you take the Lowe’s approach and track how many shoppers physically visit which departments and when, then you have the context needed to understand why the paint department should have an extra staffer on hand between 10 am and 2 pm – when DIY moms like to shop for paint while their kids are in daycare.
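As a toy illustration (the numbers are invented, and this isn’t Lowe’s actual system), a few lines of Python show how adding department-and-hour context turns a flat visitor count into a staffing answer:

```python
from collections import Counter

# Invented visit events: (department, hour of day).
visits = [
    ("paint", 10), ("paint", 11), ("lumber", 11),
    ("paint", 13), ("garden", 15), ("paint", 10),
]

# The raw number alone: helpful, but thin.
print("total visits today:", len(visits))

# With context -- department and time -- a staffing answer emerges.
by_dept_hour = Counter(visits)
for (dept, hour), n in by_dept_hour.most_common(3):
    print(f"{dept} at {hour}:00 -> {n} shoppers")
```

Same events, two very different levels of usefulness – the grouping is the context.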

Like big data, contextual information comes from a lot of places, including from people (such as staffing data and performance evaluations) and from collaboration (like social network activity and emails).

As reader comments on the CFO Journal article point out, making use of this data has historically been costly and time-consuming. Well, no kidding. As more and more data is brought into the business by applications, sensors, people and machines, understanding the financial and operational context of that data becomes even more crucial. This is especially true because working with larger volumes makes us more susceptible to being fooled by randomness – by false correlations that merely affirm our preconceived notions of what the data will tell us.
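To see how little it takes to be fooled, consider a toy experiment (hypothetical data, plain Python): generate one random series, compare it against a thousand other random series, and the best match will correlate impressively despite meaning nothing.

```python
import random
import statistics

random.seed(1)

def corr(a, b):
    # Pearson correlation, computed by hand to stay dependency-free.
    ma, mb = statistics.mean(a), statistics.mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / (len(a) * statistics.pstdev(a) * statistics.pstdev(b))

target = [random.gauss(0, 1) for _ in range(20)]  # e.g., 20 weeks of noise
best = max(
    corr(target, [random.gauss(0, 1) for _ in range(20)])
    for _ in range(1000)
)
print(f"best correlation among 1,000 random series: {best:.2f}")
```

Scan enough sources and something will always “correlate” – which is exactly why the business question has to come first.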

From Business Needs Come Meaningful Data

So how do you find context in a big data world? Start by looking at what the business actually needs. What do business users want to learn from the source data? What are the business use cases that could benefit from the torrent of data in today’s digital economy? What correlating activities and content will help answer these questions?

This has a natural “funneling” effect that speeds the analysis process and fits the way companies operate today. And when you consider that the typical enterprise application draws data from as many as 70 to 90 sources, it’s immediately clear that traditional ways of integrating data won’t work. Mapping all that data, from all those sources, would require a small army or a large fortune. In most organizations, you’ll get neither. So what do you do?

A smarter approach is to reorder the process for making data usable by your application. Today, most organizations extract huge volumes of data, transform it into a format their application will recognize, and finally load it into their data warehouse. The modern, cloud-aware alternative defers transformation until the moment business users actually need the data. It’s the approach taken by customers of Tidemark’s cloud-first business planning and enterprise analytics solutions. They aren’t pursuing the old, familiar ETL (extract, transform, load) approach because it’s inherently limited, takes months to complete, and is effectively frozen once you’re done.
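To make the contrast concrete, here’s a minimal Python sketch of that traditional ETL pattern; the source file, schema and warehouse names are invented for illustration:

```python
import csv
import sqlite3

def extract(path):
    # Pull raw rows out of a source file.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Force every row into the warehouse schema up front --
    # the step that takes months to build and freezes the pipeline.
    return [(r["region"], float(r["amount"])) for r in rows]

def load(rows, db="warehouse.db"):
    con = sqlite3.connect(db)
    con.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    con.commit()
    con.close()

# The rigid order: extract, then transform, then load.
load(transform(extract("sales.csv")))
```

Notice that the schema is decided before anyone asks a single question; change the question, and you rebuild the pipeline.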

A Run-Time, Real-Time World

Instead, our customers use big data in a way that’s more flexible and agile: extract your files and data, and then load them into a secure, cloud-based storage and computation platform. Then, when the application has to respond to a particular business user need and act on the data, you transform it on demand. This approach – call it ELT – lets you load as much data as you want and transform it only as business users ask the questions all that unstructured data can help answer.
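Here’s the same toy pipeline reordered as ELT – again a rough sketch under invented names, not Tidemark’s actual implementation: raw records land untouched, and shaping happens only at question time.

```python
import json
import sqlite3

con = sqlite3.connect("lake.db")
# Load first: land every record verbatim, schema-on-read style.
con.execute("CREATE TABLE IF NOT EXISTS raw_events (payload TEXT)")

def load_raw(records):
    con.executemany(
        "INSERT INTO raw_events VALUES (?)",
        [(json.dumps(r),) for r in records],
    )
    con.commit()

def transform_on_demand(field):
    # Shape the raw payloads only when a user asks a question.
    values = []
    for (payload,) in con.execute("SELECT payload FROM raw_events"):
        record = json.loads(payload)
        if field in record:
            values.append(record[field])
    return values

load_raw([{"store": 12, "visits": 340}, {"store": 7, "visits": 510}])
print(sum(transform_on_demand("visits")))  # computed at ask-time: 850
```

Because nothing is thrown away at load time, tomorrow’s question can reach back into the same raw store without rebuilding the pipeline.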

Our customers – innovators like Netflix and Brown University – have to succeed in a run-time, real-time world. They can’t predict where the next bit of contextual information will come from. And they certainly can’t anticipate every possible way their application will use source data. But by turning the transform step into an on-demand task, the traditional enterprise data integration stack becomes invisible to end users. An ugly and expensive manual process becomes a seamless, automated one.

In an increasingly cloud-to-cloud world, this is how enterprises will make use of big data. ELT will help business users not only cope with their growing big data haystack, but also probe it for the right needles – the answers that will propel their business forward.