Big data has been hyped up in a big way lately, setting the world’s imagination on fire, forcing analysts from all industries to wonder where—if anywhere—it can be applied.
Big data applies to any domain where a lot of information is collected—but few domains seem as poised to benefit from it as the supply chain, where an unthinkably large quantity of valuable data, yielded from countless transactions buried in many decades’ worth of records, reports and metrics, is just waiting to be mined.
The challenge, of course, is the mining itself—the unlocking of value.
Some firms are already giving the unlocking process a go, betting that “local value” metrics might provide quantifiable value: specific, supplier-related Key Performance Indicators (KPIs) that track performance over a given timeframe and are gleaned with classic supply-chain mechanisms like vendor or supplier dashboards. Call it “medium data,” if you will.
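To make the “local value” idea concrete, here is a toy sketch of the sort of KPI a supplier dashboard might surface: an on-time-delivery rate computed over a batch of transaction records. All field names and figures here are invented for illustration, not drawn from any particular system.

```python
from datetime import date

# Hypothetical transaction records; field names are illustrative only.
transactions = [
    {"supplier": "Acme", "promised": date(2013, 3, 1),  "delivered": date(2013, 3, 1)},
    {"supplier": "Acme", "promised": date(2013, 3, 8),  "delivered": date(2013, 3, 11)},
    {"supplier": "Acme", "promised": date(2013, 3, 15), "delivered": date(2013, 3, 14)},
]

def on_time_delivery_rate(records):
    """Share of orders delivered on or before the promised date."""
    on_time = sum(1 for r in records if r["delivered"] <= r["promised"])
    return on_time / len(records)

print(on_time_delivery_rate(transactions))  # 2 of 3 orders on time
```

A metric like this is “local” in exactly the sense described above: it scores one supplier over one window, without saying anything about the chain as a whole.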
Real, industrial-sized value, however, demands looking at the supply chain holistically: analyzing transaction flow across the supply chain and the demand chain to determine how to adjust KPIs, how to evolve the business, and what we can learn about the business along the way.
That’s a task that will call for a lot more than simply crunching the numbers on a day’s, week’s, month’s or quarter’s worth of transactions. It will require digesting vast quantities of highly detailed data—which itself will require the analytical techniques unique to the big data movement—in order to effectively extract the latent value.
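The holistic analysis described above ultimately boils down to aggregations over the full transaction stream rather than one supplier’s recent window. As a toy sketch (all field names and figures invented), here is the kind of rollup of spend by supplier and quarter that, at industrial scale, would run as a distributed MapReduce-style job rather than an in-memory loop:

```python
from collections import defaultdict

# Invented transaction stream; in a real big-data setting this would be
# billions of records processed with distributed tooling, not a Python list.
transactions = [
    ("Acme",  "2013Q1", 120.0),
    ("Acme",  "2013Q2",  95.0),
    ("Birch", "2013Q1",  40.0),
    ("Birch", "2013Q2",  60.0),
]

def spend_by_supplier_quarter(records):
    """Roll raw transactions up to (supplier, quarter) spend totals."""
    totals = defaultdict(float)
    for supplier, quarter, spend in records:
        totals[(supplier, quarter)] += spend
    return dict(totals)

print(spend_by_supplier_quarter(transactions))
```

The point of the sketch is the shape of the computation, not the code itself: cross-chain questions are answered by grouping and reducing the entire history of transactions, which is precisely the workload big-data techniques were built for.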
And when it comes to processing really large data sets economically, there is only one natural go-to strategy: the cloud.
Unless I’m a very large enterprise, I’m not going to break ground for a new data center simply because I have a new initiative that requires lots of data storage. Instead, I’m going to outsource that storage requirement and use the cloud to:
- Lower the upfront costs of the initiative
- Compress the start-up timeframe
- Avoid most of the inherent risks that accompany any new initiative
- Extract value unique to my business
- Harvest private information from the resulting set of big data
That move will force users to decide which data to put in the cloud, which vendors to trust and how to guarantee data security.
So how do you get existing data from wherever your systems are, analyze the data, warehouse it and secure it—all in the cloud? Doesn’t a move to the cloud seem to be pushing in the other direction, away from an optimally-secured environment? After all, what we’re proposing here is executing—completely in the cloud—an initiative to root out those few precious jewels of differentiated data buried beneath a massive slag heap of undifferentiated data. What if somebody outside of the organization, with access to the same cloud servers we’re using, finds those few precious jewels before we do?
It’s a question Apple Chief Information Officer Niall O’Connor surely asks himself. That outfit vigorously defends its supply-chain information, so much so that the media, eager to relay a tantalizing rumor about an upcoming iPhone or iPad, sniff around the supply chain for medium-data or big-data intelligence of their own, in the hopes of predicting what the company is up to.
Don’t be fooled. Apple knows the value of this data, the privacy concerns that go with it and the unique challenges in its supply chain. Not to mention that the supply chain industry in general faces challenges around its valuable data that industries like financial services, government, high tech, automotive and healthcare simply don’t.
Factors at play
To begin with, supply chain organizations don’t operate on huge margins. In fact, some of their suppliers may be running on even thinner margins than the supply chain organizations they’re dealing with. That means the introduction of new security mechanisms must be accompanied by an acute sensitivity to cost.