In my previous post, I wrote about how companies should cultivate a LinkedIn-style culture of collaboration around analytics so business users throughout the organization stay engaged, energized and accountable when it comes to data.
I included the caveat that this do-it-yourself urge among employees to innovate and experiment should be harnessed, but only if you can ensure your underlying data remains accessible. I think it’s worth expanding on this point, since the relationship between these two worthwhile goals – user engagement and data accessibility – is one that can easily veer into conflict.
Think about it: We hire people to use their intelligence to take action for the benefit of the company. We pay recruiters top dollar to find people who think creatively and innovate by solving challenges. These are the qualities that drive success; they’re also the qualities that drive the go-getters, in the face of slow-moving IT and analytics solutions, to set up their own isolated data marts to solve particular problems for their business units. Unfortunately, doing so opens a Pandora’s box of bigger problems, chiefly the spread of “data anarchy,” or the “Wild Wild West” of big data silos and data marts. This inevitably leads to ballooning IT costs to handle all the redundancies as multiple departments copy and alter data, and ultimately to data drift, where the copies gradually diverge and accuracy is lost.
Nails in the Data Mart Coffin
While the road to data anarchy may be paved with good intentions, it still leads to a point where more than 75 percent of people’s time can be spent sifting through data rather than making data-driven decisions. Consider the daily or weekly fire drills that take place in the CEO’s office when numbers from two departments – say, marketing and finance – don’t match up and the circular arguments begin over whose data are right and whose are wrong. Users and technology executives alike can spin their wheels if the organization doesn’t have inclusive, effective systems and policies around data.
It’s no wonder that users get frustrated. In “Drive,” his bestselling book on workplace motivation, author Daniel Pink explains how scientists have developed a “new operating system” for business success that revolves around three elements: autonomy, or the urge to direct our own lives; mastery, the desire to get better and better at something; and purpose, the yearning to do something that matters.
How many of these qualities can you expect to find in the earnest employee who just went the extra mile for the company, building a data mart silo only to later realize that the effort set the company back? The extra mile was traveled in reverse. When it comes to the culture around data analytics and business intelligence, it’s not hard to think of these latest insights on workforce motivation as additional nails in the data mart coffin.
The key to solving this problem is building a data platform that can serve as the source for all future analytics and applications, and that can change and evolve over time. Here’s where the Virtual Data Mart comes in. A Virtual Data Mart is a staging area with real-time, self-serve characteristics that looks and feels like a traditional data mart to the end user, but that is designed to let such experimentation happen while protecting the underlying data. It also allows new data or new intermediaries to be added in an agile manner without copying core data at all.
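To make the idea concrete, here is a minimal sketch in Python with SQLite. It is purely illustrative: the table and view names are hypothetical and don’t come from any particular product. The point is simply that department-specific “marts” can be defined as views over a single governed copy of the core data, so experimentation happens without copying anything.

```python
import sqlite3

# Illustrative only: one governed "core" table plus department-specific
# virtual data marts implemented as SQL views (no copies of the data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- The single governed copy of the core data
    CREATE TABLE core_orders (
        order_id   INTEGER PRIMARY KEY,
        region     TEXT,
        amount     REAL,
        order_date TEXT
    );

    -- Marketing's "virtual data mart": a view, not a copy
    CREATE VIEW vdm_marketing AS
        SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM core_orders
        GROUP BY region;

    -- Finance's "virtual data mart" over the very same rows
    CREATE VIEW vdm_finance AS
        SELECT strftime('%Y-%m', order_date) AS month,
               SUM(amount) AS booked_revenue
        FROM core_orders
        GROUP BY strftime('%Y-%m', order_date);
""")

conn.execute("INSERT INTO core_orders VALUES (1, 'EMEA', 120.0, '2024-01-15')")
conn.execute("INSERT INTO core_orders VALUES (2, 'AMER', 90.0, '2024-01-20')")

# Each department queries its own view; the numbers always reconcile
# because there is only one underlying copy of the data.
print(list(conn.execute("SELECT * FROM vdm_marketing")))
print(list(conn.execute("SELECT * FROM vdm_finance")))
```

Because marketing and finance are reading the same core rows, the fire drill over whose numbers are right simply doesn’t arise.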
Because multiple users across the organization can create Virtual Data Marts simultaneously and in real time, you get centralized access for decentralized use cases. As a result, the data remains accurate, clean and flexible, and anyone in the enterprise can request and analyze it, anytime and anywhere. This is the kind of framework that enables teams across an organization to ask complex questions that drive insight and innovation at scale. Without veering into a technical deep dive, I think it’s important to stress that businesses need to back up the vision with production-grade architectures that can execute the approach at the enterprise level. My own approach – part of the Sentient Enterprise methodology I advocate – is what I call a Layered Data Architecture. In a nutshell, it’s a framework that makes the organization’s data assets safely available through multiple layers of access and complexity, accommodating everyone from the die-hard data scientist to the casual business user.
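As a rough illustration only – the layer names and role mappings below are my own shorthand for this post, not a formal definition from the Sentient Enterprise methodology – a layered approach pairs progressively refined views of the same governed data with role-based access, so a data scientist can reach the raw staging layer while a casual business user sees only the curated, self-service layer.

```python
# Hypothetical layer names and role mappings, used only to illustrate
# exposing the same governed data at different levels of refinement.
LAYERS = [
    "raw_staging",               # untransformed source data
    "integrated_core",           # governed, integrated core model
    "curated_semantic",          # business-friendly, conformed metrics
    "self_service_presentation", # dashboards and ad hoc exploration
]

ROLE_ACCESS = {
    "data_scientist": set(LAYERS),
    "analyst":        {"integrated_core", "curated_semantic", "self_service_presentation"},
    "business_user":  {"self_service_presentation"},
}

def can_query(role: str, layer: str) -> bool:
    """Return True if the given role is allowed to query the given layer."""
    return layer in ROLE_ACCESS.get(role, set())

if __name__ == "__main__":
    print(can_query("business_user", "self_service_presentation"))  # True
    print(can_query("business_user", "raw_staging"))                # False
    print(can_query("data_scientist", "raw_staging"))               # True
```

The design choice this sketch is meant to capture is that everyone works against one set of data assets, but each audience meets the data at the level of complexity and governance that fits their role.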
A Layered Data Architecture is one way to provide frictionless, self-service analytics for everyone while still controlling access to the data and the rules that govern it. Whatever your specific approach, the benchmark for success is whether you can put a stop to the data anarchy and the related pitfalls that can cripple your organization.
About the Author
Oliver Ratzesberger
Mr. Ratzesberger has a proven track record in executive management, as well as 20+ years of experience in analytics, large-scale data processing and software engineering.
Oliver’s journey with Teradata started as a customer, driving innovation on its scalable technology base. His vision of how the technology could be applied to solve complex business problems led him to join the company. At Teradata, he has been the architect of the strategy and roadmap aimed at the company’s transformation. Under Oliver’s leadership, the company has challenged itself to become a cloud-enabled, subscription-based business with a new flagship product. Teradata’s integrated analytical platform is the fastest-growing product in its history, achieving record adoption.
During Oliver’s tenure at Teradata he has held the roles of Chief Operating Officer and Chief Product Officer, overseeing various business units, including go-to-market, product, services and marketing. Prior to Teradata, Oliver worked for both Fortune 500 and early-stage companies, holding positions of increasing responsibility in technology and software development, including leading the expansion of analytics during the early days of eBay.
A pragmatic visionary, Oliver frequently speaks and writes about leveraging data and analytics to improve business outcomes. His book with co-author Professor Mohanbir Sawhney, “The Sentient Enterprise: The Evolution of Decision Making,” was published in 2017 and was named to the Wall Street Journal Best Seller List. Oliver’s vision of the Sentient Enterprise is recognized by customers, analysts and partners as a leading model for bringing agility and analytic power to enterprises operating in a digital world.
Oliver is a graduate of Harvard Business School’s Advanced Management Program and earned his engineering degree in Electronics and Telecommunications from HTL Steyr in Austria.
He lives in San Diego with his wife and two daughters.