Last week, we discussed how we could learn from this pandemic crisis and how companies are reacting swiftly to assess critical shortages, bottlenecks, and potential at-risk suppliers. Specifically, we warned about the dangers of letting this be a one-and-done exercise: companies need to institutionalize the lessons of the crisis by investing the additional time and effort to operationalize these processes, so they support their supply chains in good times and bad.
In a public session earlier this May, our CTO Stephen Brobst discussed with Hani Mahmassani, of Northwestern University’s Transportation Center, how organizations are leveraging technology for supply chain resilience under COVID-19. In light of the number of actions Teradata customers and other organizations are taking to plan around the supply chain disruption, we have summarized the main lessons learned below.
One of the most prevalent needs companies have is to start using their supply chain data as a weapon against the disruptive effects of the COVID-19 pandemic, making their operations more resilient, more agile, and more adaptive. But what kind of data, and how do you use it? What processes and techniques are companies employing?
Real Time and Granularity = Lots of Data
To be useful, the data has to be detailed, because summary data hides exactly the local variation you need to see. You need to be able to drill down from state to county to specific origin and destination addresses, because responses and conditions vary from location to location. And because what’s happening today looks very different from the time-worn patterns of years past, your data must be up to date. So, if all this data is at a high degree of granularity, not summarized, and updated constantly, that means (you guessed it!) a LOT of it.
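To make the drill-down concrete, here is a minimal sketch in Python with pandas. The shipment table, column names, and values are invented for illustration, not drawn from any customer system; the point is simply that a state-level summary can look acceptable while a specific county is in trouble.

```python
import pandas as pd

# Hypothetical shipment-level records; in practice this would be
# millions of rows, refreshed in near real time.
shipments = pd.DataFrame({
    "state":       ["NY", "NY", "NY", "NY"],
    "county":      ["Kings", "Kings", "Erie", "Erie"],
    "on_time":     [1, 1, 1, 0],   # 1 = delivered on time
    "delay_hours": [0, 2, 1, 96],
})

# The state-level summary looks acceptable...
print(shipments.groupby("state")["on_time"].mean())   # NY: 0.75

# ...but drilling down to the county level shows where the problem is.
print(shipments.groupby(["state", "county"])["on_time"].mean())
# Erie: 0.5, Kings: 1.0 -- the disruption is local, not statewide
```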
Brobst: “You cannot make decisions across large geographies. If you look in the U.S., the way the pandemic has impacted New York City is very different than how it’s impacted San Diego or … Chicago. And so, you have to have decision-making that is really precision-decision making, which only happens if you have access to detailed data.
You cannot use last week's data in the kind of environment we're in today. So, you need the ability to handle very large volumes of data.
You need to be able to get that data in real time or near real time and you need to give visibility to decision makers, all the way down to the detail.”
In the past, companies have worked hard to optimize cost out of their networks, something called “hyper-optimization.” The companies that know how to do this have benefited through lower costs and higher margins. But in a rapidly evolving situation, it’s how you continue to optimize that matters.
Brobst: “They have an advantage, but they can only sustain the advantage if they can evolve the optimization techniques, and the only way to do that is by having access to detailed data.
When the underlying conditions change, what used to be the right KPIs to look at in the old world might not be the right KPIs to look at in the new world.”
And so, you have to be able to adjust and evolve. Those optimization skill sets are highly valuable, and the organizations that have them are going to win -- but only on a sustainable basis if they can also adapt. So, there’s optimization, but there’s also adaptability.
Virtual War Rooms
This type of access to data, in high volume and in near real time, gives rise to a new phenomenon: many of our customers are constructing data war rooms. These are virtual war rooms, because people are distributed all over the country, and frequently all over the world.
The single biggest activity in these data war rooms? Scenario planning -- or, more precisely, scenario crunching and “what-if” analysis. Because no one knows exactly what’s going to happen in the future, companies have to plan for multiple scenarios.
An example of such scenario planning is the adjustment of supply chains to deal with cargo in the current environment, where capacity is constrained and demand for air cargo is exploding. Carriers are having to re-plan their networks -- again, in real time -- to meet this demand with different routes and aircraft choices, and many planning factors have to change based on the data. They have found that without access to detailed data on where cargo is coming from, where it’s going, what its refrigeration requirements are -- and a host of other requirements -- they cannot fully take advantage of the opportunity.
Brobst: “You have to look at worst case and best case; hope for the best, and plan for the worst and be able to adjust very, very quickly. So, these data war rooms, they need to understand what is happening as it's happening. You don't get yesterday's data and look at it today. You look at today's data today and you look at it as the events are unfolding.”
In these war rooms, companies construct best- and worst-case scenarios and develop action plans for each. They then monitor real-time events to answer the question, “Which scenario is occurring?” so they can act according to the plans they constructed.
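As a minimal sketch of that pattern in Python -- where the scenarios, indicators, thresholds, and action plans are all invented for illustration, not taken from any customer war room -- scenario monitoring can be as simple as scoring a live indicator reading against pre-built scenario definitions and surfacing the matching action plan:

```python
# Hypothetical scenario definitions: each maps a key indicator to the
# (low, high) range we would expect to see if that scenario is unfolding.
SCENARIOS = {
    "best_case":  {"air_cargo_demand_idx": (1.0, 1.5), "capacity_util": (0.60, 0.80)},
    "worst_case": {"air_cargo_demand_idx": (2.0, 5.0), "capacity_util": (0.95, 1.00)},
}

# Pre-built action plans, one per scenario.
ACTION_PLANS = {
    "best_case":  "hold current network plan; monitor hourly",
    "worst_case": "activate charter capacity; re-route refrigerated cargo",
}

def match_scenario(reading):
    """Return the scenario whose indicator ranges best fit the live reading."""
    best_name, best_hits = None, 0
    for name, ranges in SCENARIOS.items():
        hits = sum(
            lo <= reading.get(indicator, float("nan")) <= hi
            for indicator, (lo, hi) in ranges.items()
        )
        if hits > best_hits:
            best_name, best_hits = name, hits
    return best_name

# Simulated near-real-time reading from the war room's data feed.
live = {"air_cargo_demand_idx": 2.7, "capacity_util": 0.97}
scenario = match_scenario(live)
print(scenario, "->", ACTION_PLANS.get(scenario, "no plan matched"))
# worst_case -> activate charter capacity; re-route refrigerated cargo
```

In practice the matching would be probabilistic and driven by far more indicators, but the structure is the same: scenarios and plans are built ahead of time, and the real-time feed decides which plan fires.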
Single, Consolidated, Near-Real-Time Version of the Truth
Extending the idea of the virtual war room: these involve multi-location meetings among people who are physically distanced. Some of the great new collaboration tools we have allow people to behave somewhat as though they were working together face to face. But more important than the human interaction is the data, which needs to be accessible all at once; how we provide access to data determines the effectiveness of these new digital, virtual war rooms.

Because supply chains are so interdependent -- in other words, the ripple effects of one shortage or bottleneck spread across the network -- it’s critical that we see the full extent of those effects in a connected and consolidated manner. To be effective, then, data needs to be accessible in near real time and in a form that can answer detailed, scenario-based “what-if” questions. And it cannot be allowed to proliferate in multiple, disconnected silos or pools. That is what is meant by a “single version of the truth.”
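A toy illustration in Python/pandas -- the tables and fields are hypothetical -- shows what consolidation buys you: a ripple-effect question that spans procurement and fulfillment becomes a single join instead of a manual stitching-together of exports from disconnected silos.

```python
import pandas as pd

# Two hypothetical feeds that often live in disconnected silos:
# supplier risk status from procurement, and open orders from fulfillment.
suppliers = pd.DataFrame({
    "supplier_id": ["S1", "S2"],
    "status":      ["at_risk", "ok"],
})
orders = pd.DataFrame({
    "order_id":    ["O1", "O2", "O3"],
    "supplier_id": ["S1", "S1", "S2"],
    "qty":         [100, 250, 80],
})

# Consolidated view: one join answers the ripple-effect question
# "how much open order volume depends on at-risk suppliers?"
exposed = orders.merge(suppliers, on="supplier_id")
print(exposed.loc[exposed["status"] == "at_risk", "qty"].sum())  # 350
```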
Investing in a Culture of Data Literacy
In God we trust; all others must bring data. – W. Edwards Deming
Many company executives are finding that having had the foresight to invest in data over the past years is paying off in the ability to do the scenario analysis and “what-if” questioning needed to support a virtual data war room. And while investments in data and data-related technology are all important to make, the most important investments are not just in technology. They are in a culture of data: a culture of data-driven decision making is a culture of data literacy, and building one is as important as the investment in the technology itself.
Now, during the COVID-19 crisis, we’re not using data from last year; we are using data from the last hour, and that makes a big difference to the kinds of decisions you can make in this situation. One airline we worked with was prescient in this regard, making big investments to accelerate the rate at which it brought data into its ecosystem. As a result, it was able to make much better decisions than the other airline in its market at the time, ultimately contributing to that competitor’s demise.
Brobst: “Erik Brynjolfsson at the MIT Sloan School of Management conducted a study of organizations that use data for decision making versus those that rely on instinct or, at least, less sophisticated methods such as using data only for KPI reporting. The study concluded that the organizations that use data to drive decision making enjoy four to six percent higher profitability than other organizations. Those are enormous numbers considering they go straight to the bottom line of the business.”
The investment in data, in both the technology and the people, has paid off during previous, more localized disruptions. Based on the fundamental belief that data and good management will minimize disruption and ill effects during this crisis as well, we continue to trust in the data.
A warm thank-you goes to Northwestern University’s Dr. Hani S. Mahmassani for hosting this episode, one in an informative series, and for encouraging this lively discussion with Dr. Stephen Brobst.
Cheryl Wiebe is an Ecosystem Architect on Teradata’s Data and Analytics Strategy team in the Americas region and works from her virtual office in Southern California. Her focus is on the business, data, and applications areas of analytic ecosystems. She has spent years helping customers create digital strategies that bring together sensor data and other machine-interaction data and connect it with enterprise and operational domain data to improve the reliability and efficiency of large equipment, machinery, and other large (and expensive) assets, as well as the supply chain and extended value-chain processes around those assets.
Cheryl focuses on industry-spanning programs, such as Industry 4.0, that support enterprises in their goals to “go digital” on a journey to the cloud. She helps companies leverage traditional and new IoT settings to organize and develop their business, data, and analytic architectures. This prepares them to build analytics that can inform the digital enterprise, whether in Connected Vehicle services, Smart Mobility, Connected Factories, and Connected Supply Chain, or in specialized solutions such as Industrial Inspection / Vision AI that replace tedious work with AI.
Cheryl’s background in Supply Chain, Manufacturing, and Industrial Methods stems from her 12+ years in management consulting, industrial/high-tech, and analytics companies, and from Teradata, where she has spent the last 18 years.