Let me put it out there rather shamelessly that this is a blog series. Far superior minds have attempted to illustrate the promise (and perils) of Artificial Intelligence (AI) as a construct, a concept, a solution, an all-time panacea, a behemoth of a Frankenstein destined to keep all of humanity in indentured servitude, and many other things. My goal, starting with this blog, is rather simple(r): to provide an overview of AI – an AI 101, if you will – that attempts to do the following:
- Provide a clear and concise meta-analysis of the literature that distills a definition of AI – this blog.
- Give a view into the various applications of AI – this blog and other blogs.
- Explain how AI is viewed in the context of Machine Learning (ML), Deep Learning (DL), Expert Systems, and Natural Language Processing (NLP) – next blog.
- Examine the risks and myths of AI – the blog after that.
Before I get started, a small word about the motivation that set me up to write this piece. Besides the urge to express my passion for all things analytics in written form, there is a deeper urgency that I have felt over time. I have found myself in the middle of many a conversation wherein terms like AI, ML, DL, and NLP (and other concepts clustered around these acronyms, such as neural networks) are casually blended into a soup of words and used interchangeably. Heck…I have done so myself and ought to be classified as a Class A offender by the cognoscenti. If anything, this post is one that I am writing for myself, to impart a much-needed self-clarity as to what these concepts actually mean, particularly in the larger context of delivering analytic solutions to businesses that could use clever implementations of AI.
What exactly is AI?
One of the more concise definitions of AI harks back to its academic provenance and describes it as the theory and development of computer systems able to perform tasks that normally require human intelligence (e.g., visual perception, decision making). In a colloquial sense, AI refers to how machines can mimic the vast cognitive capabilities of the human mind, learning from their experiences while interacting with their local environments to solve a problem. The problems could be diverse in nature: identifying fraudulent activity, recognizing a missing child from images, or diagnosing a medical condition. The overall goal of any AI project in today's commercial implementations is to create a capability that allows machines to function in an intelligent manner. AI is considered both a scientific discipline and a technology development initiative, an amalgam of diverse fields such as high-performance computing, psychology, linguistics, mathematics, and engineering.
A distinguishing ability of an AI system is that it incorporates functions associated with human intelligence such as logical reasoning, self-learning, perception, language understanding, and problem resolution. In this regard, there is a fundamental difference between AI and conventional computing. Conventional computing, which most of the commercial world has benefited from for the longest time, takes advantage of powerful computer systems that are programmed for specific tasks. These tasks could span the simplest to the most complex – or from the ridiculous to the sublime, pick your metaphorical gulf – (e.g., adding numbers, CRM, ERP, SCM), and they operate within tightly defined computational parameters. AI systems, on the other hand, involve the marriage of machines and algorithms that together teach themselves based on experience, adapt their responses to their environmental conditions, and organically build a knowledge base that informs future intelligent responses.
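To make the contrast concrete, here is a minimal sketch in Python. All names and the toy learning rule are my own invention for illustration, not from any particular AI framework: the first function's decision rule is fixed by the programmer, while the second adjusts its rule based on labeled experience.

```python
# Conventional computing: the decision rule is hard-coded by the programmer.
def conventional_fraud_check(amount):
    # The threshold never changes, no matter what transactions
    # the system actually observes.
    return amount > 1000


# AI-style computing: the rule is learned from labeled experience.
class LearnedFraudCheck:
    def __init__(self):
        self.threshold = 0.0  # starts with no knowledge

    def learn(self, examples):
        """examples: list of (amount, is_fraud) pairs."""
        # A deliberately crude learning rule: place the threshold midway
        # between the largest legitimate amount and the smallest fraud.
        legit = [a for a, fraud in examples if not fraud]
        frauds = [a for a, fraud in examples if fraud]
        if legit and frauds:
            self.threshold = (max(legit) + min(frauds)) / 2

    def check(self, amount):
        return amount > self.threshold


model = LearnedFraudCheck()
model.learn([(200, False), (800, False), (5000, True), (9000, True)])
```

A real system would of course use a statistical model rather than a midpoint rule, but the shape of the difference is the same: the behavior of `model` comes from the data it has seen, not from a rule baked in at programming time.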
Why AI and why now?
The cavalier answer, of course, is "why not AI?" After all, what could be better than a technology, or a system of technologies, that learns as it goes along, increases in accuracy, and provides contextual responses by considering unique and idiosyncratic environmental impulses? Fair enough…but this requires an enumeration of some of the things AI can be used for that conventional computing does not quite cover.
Before we go any further, let's acknowledge that AI already exists in some form or other in our daily lives (this blog is long enough without current AI application examples – more on those later in the series), but the extent to which it is going to be entrenched in our lives is becoming more apparent by the day. As a marker of this deeper entrenchment, there are at least three broad areas in which AI is poised to play a central role:
- First, by creating a new and ubiquitous virtual workforce with intelligent decision-making capabilities.
- Second, by materially contributing to the better (more efficient) deployment of existing physical and human capital assets across a wide range of workplaces.
- Third, by driving an entirely new structure of economic and social activity: companies will not only do things in a markedly changed manner owing to the promise of AI, but also do entirely new things that hitherto were not part of their mission scopes.
The first two outcomes of AI application (full automation and automation assistance) are relatively axiomatic. For example, in a recent conversation with a large mortgage services provider, I was told that AI systems were being deployed to scour a vast number of prior mortgage decisions and all associated business documents to develop a set of heuristics for determining likely recipients of home mortgages. Gone is the need for a manual and painstaking review of information to make decisions that could be infused with more than an acceptable risk of human fallibility. Similarly, in a conversation with a company that manufactures semiconductors for mobile telephones, I learned of an internal pilot project to deploy AI systems alongside humans, parceling out routine and repetitive tasks to the machines and leaving the high-value one-offs to the humans. However, the third and admittedly diffuse construct – the structural impacts of AI – is not well understood and is just beginning to be examined beyond academic exercises. Take, for example, the impact of self-driving cars. In theory, as self-driving cars become more pervasive, drivers like me, who may initially grit their teeth in nervous anticipation of an accident, may eventually relax and turn their attention to other things like, say, reading, watching a movie, or other entertainment. Mobile companies and content development conglomerates are already preparing for these possibilities by making huge investments in entertainment systems that can take advantage of more driver idle time. This is an example of a structural change in the economy engendered by AI. Furthermore, these structural changes beget the strangest of bedfellows. For example, Ford and China's search giant Baidu have teamed up to develop driverless cars and deliver a unique navigational experience to their users.
Partnerships once inconceivable are now de rigueur thanks to the innovative ways in which companies can bring their various technological core competencies together to deliver organizational synergies and unique customer experiences.
What exactly constitutes AI?
As I briefly alluded to in the introduction to this blog, I have thrown myself into hysterical fits of exasperation listening to myself and others blithely toss around terms such as AI, Machine Learning, and Deep Learning as if they were all part of a fascinating basket of synonyms that naturally cluster together. Nothing could be farther from the truth. What is required is a clear taxonomy of AI. To be clear, AI is an umbrella term that encompasses many capabilities. One such capability borrows heavily from human biology and shows how human neural networks can be artificially emulated in AI systems. Artificial neural networks are composed of highly interconnected processing elements, which process information through their dynamic state response to external impulses. Each layer of the network processes information cumulatively, and the final response is a function of all the sub-layer processing that occurs in the network. But there is more to it. Neural networks are an important way station toward understanding Deep Learning. Deep Learning is a subset of Machine Learning which, in turn, is nested inside the larger AI construct. And then, not to pile on but also to pile on, there are other related areas such as Natural Language Processing (NLP), Expert Systems, and Intelligent Agents (bots) that are also used in AI implementations.
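The layered processing described above can be sketched in a few lines of Python. This is a toy illustration with made-up weights, not production code from any library: each layer transforms the previous layer's outputs, so the final response really is a function of all the sub-layer processing.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron sums its weighted inputs,
    adds a bias, and squashes the result through a sigmoid activation."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1 / (1 + math.exp(-total)))  # sigmoid maps to (0, 1)
    return outputs

# Two stacked layers: the second operates on the first layer's outputs,
# so the network's response accumulates the processing of every layer.
hidden = layer([0.5, 0.9], weights=[[0.8, -0.4], [0.3, 0.7]], biases=[0.1, -0.2])
output = layer(hidden, weights=[[1.2, -0.6]], biases=[0.05])
```

In a real Deep Learning system the weights would be learned from data rather than hand-picked, and there would be many such layers; "deep" refers precisely to that stacking of layers.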
The point is, these are fairly esoteric concepts that need to be better fleshed out for a wide audience if we are ever going to make a difference in how AI can be effectively implemented. Stay tuned for Part 2 of this series on Artificial Intelligence, wherein I lay out this taxonomy as well as map each taxonomic member to examples of the kinds of implementations and use cases in which they are typically seen.
Sri Raghavan is a Senior Global Product Marketing Manager at Teradata, working in the big data area with responsibility for the Aster Analytics solution and all ecosystem partner integrations with Aster. Sri has more than 20 years of experience in advanced analytics and has held various senior data science and analytics roles in Investment Banking, Finance, Healthcare and Pharmaceutical, Government, and Application Performance Management (APM) practices. He has two Master's degrees, in Quantitative Economics and International Relations, from Temple University, PA, and completed his doctoral coursework in Business at the University of Wisconsin-Madison. Sri is passionate about reading fiction and playing music and often likes to infuse his professional work with references to classic rock lyrics, AC/DC excluded.