What’s the “Big” Idea?

Unlock your big data’s potential and reap the supply chain benefits

John Thielens, Chief Security Officer, Axway.

Big data has been hyped up in a big way lately, setting the world’s imagination on fire, forcing analysts from all industries to wonder where—if anywhere—it can be applied.

Big data applies to any domain where a lot of information is collected—but few domains seem as poised to benefit from it as the supply chain, where an unthinkably large quantity of valuable data, yielded from countless transactions buried in many decades’ worth of records, reports and metrics, is just waiting to be mined.

The challenge, of course, is the mining itself—the unlocking of value.

Some firms are already giving the unlocking process a go, figuring that “local value” metrics, i.e., specific supplier-related Key Performance Indicators (KPIs) covering performance over a given timeframe and gleaned through classic supply-chain mechanisms like vendor or supplier dashboards, might provide quantifiable value: a sort of “medium data,” if you will.

Real, industrial-sized value, however, demands looking at the supply chain holistically: determining how we can adjust KPIs, how we can evolve the business, and what we can learn about our business by analyzing transaction flow across both the supply chain and the demand chain.

That’s a task that will call for a lot more than simply crunching the numbers on a day’s, week’s, month’s or quarter’s worth of transactions. It will require digesting vast quantities of highly detailed data—which itself will require the analytical techniques unique to the big data movement—in order to effectively extract the latent value.

And when it comes to processing really large data sets economically, there is only one natural go-to strategy: The cloud.

Platform utilization

Unless I’m a very large enterprise, I’m not going to break ground for a new data center simply because I have a new initiative that requires lots of data storage. Instead, I’m going to outsource that storage requirement and use the cloud to:

  • Lower the upfront costs of the initiative
  • Compress the start-up timeframe
  • Avoid most of the inherent risks that accompany any new initiative
  • Extract value unique to my business
  • Harvest private information from the resulting set of big data
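
In practice, that outsourcing can be as simple as pushing transaction archives into cloud object storage. Below is a minimal sketch, assuming AWS S3 via the boto3 library; the bucket name, partner ID and file paths are hypothetical and purely illustrative.

```python
# Minimal sketch: archive a day's supply-chain transactions to cloud object
# storage. Assumes AWS S3 via boto3; bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")

def archive_transactions(local_path: str, partner_id: str, day: str) -> None:
    """Push one day's transaction export for one trading partner to S3."""
    key = f"raw/{partner_id}/{day}/transactions.csv"
    s3.upload_file(local_path, "example-supplychain-archive", key)

# Example: archive an export for a hypothetical vendor.
archive_transactions("exports/2013-05-01.csv", "vendor-042", "2013-05-01")
```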

It’s an act that will force users to determine which data to put in the cloud, which vendors to trust and how to guarantee data security.

So how do you get existing data from wherever your systems are, analyze the data, warehouse it and secure it—all in the cloud? Doesn’t a move to the cloud seem to be pushing in the other direction, away from an optimally-secured environment? After all, what we’re proposing here is executing—completely in the cloud—an initiative to root out those few precious jewels of differentiated data buried beneath a massive slag heap of undifferentiated data. What if somebody outside of the organization, with access to the same cloud servers we’re using, finds those few precious jewels before we do?

It’s a question Apple Chief Information Officer Niall O’Connor surely asks himself. That outfit vigorously defends its supply-chain information, so much so that the media, eager to relay a tantalizing rumor about an upcoming iPhone or iPad, sniff around the supply chain for their own medium-data or big-data intelligence in the hopes of predicting what the company is up to.

Don’t be fooled. Apple knows the value of this data, the privacy concerns that go with it and the unique challenges in its supply chain. Not to mention that the supply chain industry in general has unique challenges concerning its valuable data that industries like financial services, government, high tech, automotive, healthcare and others simply don’t.

Factors at play

To begin with, supply chain organizations don’t play with huge margins. In fact, some of the suppliers may be running on even thinner margins than the supply chain organizations they’re dealing with. That means that the introduction of new security mechanisms must be accompanied by an acute sensitivity to cost.

Further, large supply chains have particular cost sensitivities—for both the buyer and the seller—when making any changes at all to something as critical as a security mechanism; when enacting something as dramatic as moving all connections to the cloud; and when considering something as tenuous as putting mutual faith in a cloud-service provider that’s able to live up to a service level agreement (SLA).

Consider another scenario: retail giant Amazon suffered an infrastructure networking problem in 2011 that had ripple effects across the industry, serving as a stark reminder of the importance of being very careful about the SLAs you get from your cloud provider and their recommendations for an architectural approach. The companies that weathered Amazon’s hiccup painlessly were the ones that had followed the recommendations for high SLA requirements, which included distributing the workload across multiple availability zones even if a single availability zone would have been adequate 99.9 percent of the time. Those companies knew that even though the cloud infrastructure spared them high upfront costs, it also gave them no control over its availability. By taking the right architectural approach, they emerged relatively unscathed when Amazon’s failure occurred.
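
To make that architectural point concrete, here is a hedged sketch of spreading a workload across multiple availability zones, assuming AWS EC2 and boto3; the region, zone names, machine image and instance type are placeholders, not recommendations for any particular provider.

```python
# Sketch: launch one instance per availability zone so a single-zone outage
# does not take the whole workload down. All identifiers are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")
zones = ["us-east-1a", "us-east-1b", "us-east-1c"]

for zone in zones:
    ec2.create_instances(
        ImageId="ami-00000000",        # placeholder machine image
        InstanceType="t2.micro",       # placeholder instance type
        MinCount=1,
        MaxCount=1,
        Placement={"AvailabilityZone": zone},
    )
```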

Your success in analyzing cloud-borne big data to glean high-value nuggets requires keeping this approach firmly in mind. Also keep in mind that availability risk is only one high-profile risk that requires a new mindset when moving to the cloud. The other elephant in the room is data privacy risk, and best practices for managing privacy in the cloud center on encryption technologies, a data-centric approach rather than the network-access-centered approach classically used on-premise. These practices introduce new operational disciplines around key management (key storage, rotation, escrow, etc.). While some careful design is needed to avoid introducing operational risks, a focus on data protection effectively decouples you from network-access risks at the infrastructure provider.
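
Here is a minimal sketch of that data-centric approach, assuming the Python cryptography package: records are encrypted before they ever leave your premises, and MultiFernet stands in for the rotation discipline described above. A real deployment would back this with a proper key-management service and escrow procedures.

```python
# Sketch: client-side encryption with a simple key-rotation scheme. Key
# handling here is deliberately naive and only illustrates the operational idea.
from cryptography.fernet import Fernet, MultiFernet

current_key = Fernet(Fernet.generate_key())    # key in active use
previous_key = Fernet(Fernet.generate_key())   # kept to read older ciphertexts
codec = MultiFernet([current_key, previous_key])

record = b"PO-10021,vendor-042,2013-05-01,14 pallets"
ciphertext = codec.encrypt(record)     # this, not the plaintext, goes to the cloud
plaintext = codec.decrypt(ciphertext)  # decryption works under either key

# rotate() re-encrypts an old ciphertext under the current (first) key,
# the routine you would run when retiring keys.
refreshed = codec.rotate(ciphertext)
```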

Once security concerns are handled, your big data is finally cloud-borne, and all your SLAs are in place, with all the attendant cost savings of going cloud. So what kind of transactional dynamics can your supply chain organization look forward to? What kind of fruit will this big data tree bear?

Benefits within reach

Imagine looking at order flow, transaction delays and performance over time—actually digesting every single bit of data that’s been recorded, not just a sample—and correlating it by time, day or season. Then, imagine making predictions based on mined intelligence about throttling lead times at will, anticipating load spikes, picking new product winners and losers, remediating vendor compliance atrophy and optimizing the supply chain by blending different data streams against each other in a way only big data analytics can. Picture being able to make predictions—including risk management predictions—about the data you collect from your thousands of connections. Visualize being able to manage your supply chain’s community using the vast quantities of data that were always there—but that no one was ever able to make heads or tails of because the technology didn’t exist.
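
As a small illustration of what that correlation might look like, here is a hedged sketch using pandas; the file and column names (order_date, ship_date, vendor) are hypothetical, standing in for a transaction history already pulled from the cloud archive.

```python
# Sketch: lead-time and seasonality analysis over the full transaction
# history. File and column names are hypothetical placeholders.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date", "ship_date"])

orders["lead_time_days"] = (orders["ship_date"] - orders["order_date"]).dt.days
orders["month"] = orders["order_date"].dt.month

# Seasonal lead-time profile per vendor: which suppliers slow down, and when?
seasonal = (orders
            .groupby(["vendor", "month"])["lead_time_days"]
            .agg(["mean", "count"])
            .reset_index())

# Drift of each order's lead time from its vendor's long-run average: an
# early, quantifiable signal of the vendor compliance atrophy mentioned above.
baseline = orders.groupby("vendor")["lead_time_days"].transform("mean")
orders["lead_time_drift"] = orders["lead_time_days"] - baseline

print(seasonal.sort_values("mean", ascending=False).head())
```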

It exists now. And cloud services governed by the right software make it possible. With the right SLAs and security assurances, they make it not only possible but prudent: an upset to the on-premise, data-forfeiting paradigm that has existed for so long.

You’ve got big data. You’ve always had it. Your predecessors had it. But now you know how to unlock it and use it effectively. What amazing things will you do with it first?
