By now, a lot has been said and demonstrated about Teradata's prized vision, and emerging reality, of Teradata Vantage as a leading analytics platform that delivers Pervasive Data Intelligence. In this column, I want to deconstruct that idea and explain what actually makes Vantage an industry standard for delivering institutional outcomes (for businesses and public organizations alike) across virtually any vertical.
Multi-genre Analytics as a Vehicle for Predictive Modeling
The key to delivering on the promise of pervasive data intelligence and business outcomes is multi-genre analytics. Multi-genre analytics includes, among other things, the development of predictive models and machine learning classifiers that let us quickly target, for example, the right customers, the sickest patients, or the most expensive machines, among many other examples.
These applications of predictive modeling are typically the product of a diverse set of analytic techniques implemented within one analytic platform. Take one important use case: customer churn. In this scenario, algorithms (or functions) such as Apache Log Parser, Text Parser, and Sessionization (data preparation), nPath and Sentiment Extractor (data discovery and exploration), an AppCenter App (visualization), Generalized Linear Modeling (GLM) and GLM Scoring (machine learning model building and scoring), and Confusion Matrix (model evaluation) are executed as a sequential workflow, working as an ensemble to deliver and operationalize insights.
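To make that sequence concrete, here is a minimal, generic sketch of the same kind of workflow in Python using pandas and scikit-learn. It is purely illustrative and is not the Vantage API, where these steps are pre-built functions; the file names, column names, and the 30-minute session gap below are assumptions made for the example.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Hypothetical parsed web-log events: one row per customer click (assumed file/columns).
events = pd.read_csv("web_events.csv", parse_dates=["ts"])
events = events.sort_values(["customer_id", "ts"])

# "Sessionization" step: start a new session after 30 minutes of inactivity.
new_session = events.groupby("customer_id")["ts"].diff() > pd.Timedelta(minutes=30)
events["session_id"] = new_session.astype(int).groupby(events["customer_id"]).cumsum()

# "nPath"-style behavioral features: counts of page visits per customer.
features = events.pivot_table(index="customer_id", columns="page",
                              values="ts", aggfunc="count", fill_value=0)

# Join an assumed churn label and fit a GLM (logistic regression).
labels = pd.read_csv("churn_labels.csv").set_index("customer_id")["churned"]
X, y = features.align(labels, join="inner", axis=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
glm = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Confusion Matrix" step: evaluate the scored hold-out set.
print(confusion_matrix(y_test, glm.predict(X_test)))
```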
Predictive modeling is difficult and complex when these analytic capabilities are not available within a single framework such as Vantage. The alternative is to piece together an awkward sequence of algorithms painstakingly sourced from different analytic solutions, each with its own user experience, documentation, and workflows.
Performing analytics in Vantage involves calling these functions, which are pre-coded in the solution, from the same user interface, without the user having to write any additional code. Many genres of analytics come standard with Vantage, including data preparation, path and pattern analysis, machine learning, text and sentiment analysis, statistics, behavioral analytics, and more. Over 180 algorithms are available out of the box across these genres.
Critical Ingredients for Achieving Pervasive Data Intelligence
While achieving these institutional outcomes clearly benefits a wide group of stakeholders, it is not obvious at the outset how to achieve them in a way that ensures scalable implementation and ease of use.
Let's return to the customer churn example. Addressing it involves at least the following nine steps (a minimal end-to-end sketch follows the list):
- Collect multi-channel data (this is usually data that comes in multiple structures—structured, semi-structured, and some unstructured)
- Cleanse and prepare the data to remove outliers, substitute missing data, normalize variable data, and more
- Pre-process the data
- Create initial data exploration metrics that enable hypothesis building
- Create visualizations
- Model data based on initial hypothesis
- Evaluate the model
- Score the data based on the model
- Operationalize based on segmentation
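As a rough illustration of how these steps chain together, here is a compact sketch in Python (pandas and scikit-learn). It is not the Vantage workflow; the table, column names, outlier threshold, and risk cutoff are all assumptions made for the example.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

customers = pd.read_csv("customers.csv")                          # step 1: collected data (assumed file)
numeric = ["monthly_spend", "tenure_months", "support_calls"]     # assumed columns

# Steps 2-3: cleanse and pre-process -- trim gross outliers, impute missing
# values with the median, and normalize the numeric predictors.
customers = customers[customers["monthly_spend"] <
                      customers["monthly_spend"].quantile(0.99)].copy()
customers[numeric] = customers[numeric].fillna(customers[numeric].median())
X = StandardScaler().fit_transform(customers[numeric])
y = customers["churned"]

# Steps 4-5: quick exploration metrics that feed hypotheses and visualizations.
print(customers.groupby("churned")[numeric].mean())

# Steps 6-7: model the initial hypothesis and evaluate it on a hold-out set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# Steps 8-9: score every customer and operationalize by segment, e.g., hand
# the highest-risk decile to a retention campaign.
customers["churn_risk"] = model.predict_proba(X)[:, 1]
top_decile = customers["churn_risk"].quantile(0.9)
customers.loc[customers["churn_risk"] >= top_decile,
              ["customer_id", "churn_risk"]].to_csv("retention_targets.csv", index=False)
```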
The predictive models that come out of these steps deliver important business outcomes for the enterprise and its stakeholders. That is the first and, in my view, the most important ingredient. But there are two other areas of work that Teradata has pioneered over its 40-year history and that underpin a data-driven organization: performance and scalability.
Scalability can mean at least two things: scalability in the sense of analyzing massive volumes of data, and scalability in the sense of repeatedly implementing this and other multi-genre analytic workflows as frequently as the business imperatives at hand warrant.
Massive data volumes are a given in today's data-rich environment. Given the variegated nature of customer personas, large data volumes capture the nuances of customer behavior, and larger datasets enrich the models, which leads to better predictions and customer classifications. Model scalability is equally important: no one wants to spend each day repeating the painstaking process of manually creating and operationalizing models when it can be automated.
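What that automation can look like in practice is sketched below: a model is trained and persisted once, and a scheduled job (cron, Airflow, or similar) reloads it to score each day's fresh data. This is a generic Python illustration under assumed file and column names, not a description of a Vantage feature.

```python
import joblib
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["monthly_spend", "tenure_months", "support_calls"]   # assumed columns

def train_once(training_csv: str, model_path: str = "churn_model.joblib") -> None:
    """Fit the churn model once and persist it as a reusable artifact."""
    df = pd.read_csv(training_csv)
    model = LogisticRegression(max_iter=1000).fit(df[FEATURES], df["churned"])
    joblib.dump(model, model_path)

def score_daily(todays_csv: str, model_path: str = "churn_model.joblib") -> pd.DataFrame:
    """The piece a scheduler would run each day: reload the model, score new data."""
    df = pd.read_csv(todays_csv)
    df["churn_risk"] = joblib.load(model_path).predict_proba(df[FEATURES])[:, 1]
    return df[["customer_id", "churn_risk"]]
```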
One benefit of Vantage is that large volumes of data of any type can be easily accessed and analyzed, and the repeated execution of multiple models of varying complexity is a standard capability. Furthermore, when these analytic functions are exposed through an easy-to-use interface that minimizes the need for advanced programming, the world of analytics suddenly opens to the widest possible group of data-driven decision-makers: business analysts, citizen data scientists, line-of-business managers, and more. This is the awesome reality of Teradata Vantage.
Performance refers to the speed and efficiency with which analytic results are delivered for decision-making and operationalization. Gone are the days from my years in graduate school when executing statistical programs usually meant a half day's wait for the mainframe to crank out the results of relatively simple algorithms.
In today's high-impact decision-making environments, the lag between data ingestion and operationalization is incredibly short and often signals the difference between competitive survival and outright extinction. Vantage's ability to deliver high-octane business outcomes at blazing-fast speeds is a key market differentiator that customers weigh seriously.
While advanced analytics, scalability, and performance are three of my top contenders for delivering pervasive data intelligence, no doubt there are others that we will explore in later blog posts.
High-Impact Business Outcomes with Teradata Vantage
Because the raison d'être for Vantage is to deliver business outcomes, let's look at a few examples:
(A) Retail: One big issue facing retailers is managing customer churn. Predicting with high accuracy which customers are likely to churn enables retailers to create targeted programs that minimize or even eliminate it. Customer churn management has far-reaching implications for revenue management, cost reduction, and brand perception, among other things.
(B) Healthcare: Healthcare providers, from large hospitals to small clinics, have one goal: to provide lifesaving care that ultimately increases quality of life and life spans. However, caregiving is complicated by a variety of factors, including pre-existing conditions, socioeconomic and demographic variables, and possible chemical interactions among medications. Patient outcome analysis is therefore a focus area for healthcare providers, with a view toward identifying key disease markers and taking preventive care measures that improve survival rates.
(C) Manufacturing: Expensive equipment failure is the bane of many enterprises. When machines fail, production stops, and revenue suffers. Companies want to understand underlying performance patterns so that early changes in machine behavior can be extrapolated into the likelihood of a complete breakdown. Typically, this involves analyzing large volumes of sensor data that codify the behavior of parts and their interactions inside these machines (a brief sketch of this kind of analysis follows these examples). The ultimate goal is to predict the likelihood of a breakdown long before it actually happens, so that alternate arrangements can be made without significant operational or revenue impact.
(D) Insurance and Banking: Financial organizations thrive on bundling products, and products are often bundled based on prior purchases. Product bundling has the twin benefits of letting consumers buy several products at once and letting sellers meet their revenue targets without waiting for purchases to be staggered across time periods.
(E) Public Health: The opioid crisis is a burgeoning problem in many states and has reached epidemic proportions in some localities. One part of understanding the data on opioid use is to look at individuals' patterns of usage over time to see who is likely to end up abusing opioids. A combination of time-based prescription data and other variables (e.g., income levels) is used to delineate patterns that are highly likely to end in abuse. Once these patterns are detected, it becomes easier to implement well-considered interventions that reduce abuse and improve public health.
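The manufacturing example referenced above lends itself to a short, generic sketch: compute rolling-window features from sensor readings and estimate breakdown risk with a supervised classifier. As before, this is an illustration in Python with pandas and scikit-learn, not Teradata's implementation; the file, columns, window size, and the pre-computed failure label are assumptions.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical hourly sensor readings, one row per machine per timestamp.
readings = pd.read_csv("sensor_readings.csv", parse_dates=["ts"])
readings = readings.sort_values(["machine_id", "ts"])

# Rolling features per machine over the last 24 readings (hourly data assumed).
grouped = readings.groupby("machine_id")
readings["vibration_24h_mean"] = grouped["vibration"].transform(
    lambda s: s.rolling(24, min_periods=1).mean())
readings["temp_24h_max"] = grouped["temperature"].transform(
    lambda s: s.rolling(24, min_periods=1).max())

# Label assumed to be pre-computed: did this machine fail within the next 7 days?
X = readings[["vibration_24h_mean", "temp_24h_max"]]
y = readings["fails_within_7d"]

# Estimate breakdown risk and surface the machines most at risk right now.
model = GradientBoostingClassifier().fit(X, y)
readings["breakdown_risk"] = model.predict_proba(X)[:, 1]
print(readings.groupby("machine_id")["breakdown_risk"].last()
              .sort_values(ascending=False).head())
```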
Vantage as a Platform for Pervasive Data Intelligence
By now, it should be clear that an ethos of pervasive data intelligence is an indispensable marker of institutional success. Teradata Vantage offers over 180 pre-built analytic functions and analytic interfaces that enable virtually anyone in the organization to implement analytics at scale. This results in:
- More value extracted from any and all enterprise data
- Business outcomes delivered with clear measurability and continuous improvement
- A broader data-driven culture across the enterprise
Teradata Vantage, with its multi-genre analytics delivered with performance and at scale, is a one-of-a-kind solution that fulfills the promise of analytics for the masses while removing complexity and demystifying implementation.
For more information on how Teradata Vantage can help, contact us.