
No, You Can’t Machine Learn Everything


Machine Learning is fast becoming a source of both confusion and anxious hope for many organizations. So much so that last year several customers told us: “please don’t talk about analytics to our senior stakeholders, because we’ve told them that we are going to machine learn everything!”

Now, Machine Learning already provides enormous value in just about every industry you can imagine – with use-cases that span from preventative maintenance through smart recommender systems to fraud detection. But you can’t “machine learn everything” – and even if you could, there would still be quicker routes to solving some problems. The most successful data-driven organizations tend to think first in terms of the business problem that they are trying to solve; second about the data that are – or that could be – available to solve it; and only then about the methods, techniques, algorithms and technology that they should employ.

Part of the problem, we think, is that terms like “Analytics”, “Data Science”, “Machine Learning” and “Artificial Intelligence” are used by commentators interchangeably – and to mean different things. By understanding the history of the field and the origin of these labels, our hope is that business and technology managers will be able to truly understand the possibilities – and the limitations – of Machine Learning.

The recent history of Machine Learning arguably begins with the brilliant British mathematician and early computer scientist, Alan Turing. Turing and his contemporary, Alonzo Church, had already produced what subsequently became known as the Church-Turing thesis – the hypothesis that a digital computer is capable of computing anything that is computable – when in 1950, Turing turned his attention to another, related question. Could a machine exhibit intelligent behaviour, equivalent to – or even indistinguishable from – that of a human? And if it could, how would we know?

Turing proposed what came to be known as “the Turing Test”: a human evaluator, conducting text conversations with both a human and an “Intelligent Agent”, should be unable to reliably tell which is which. Turing’s own prediction was that, by the year 2000, an average interrogator would have no more than a 70% chance of making the correct identification after five minutes of questioning.

The Turing Test – or “Imitation Game” – is now often held to be flawed, for all sorts of very good reasons that we don’t have time to explore here. But in the 1950s it was a revolutionary idea that helped to give birth to the idea of Artificial Intelligence (AI) – and led to the first academic study of the subject at Dartmouth College in 1956. As the author of the workshop proposal, John McCarthy, put it: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

In 1956, researchers believed that they were only a decade away from computers that could achieve true Artificial Intelligence. That turned out to be wildly optimistic, with the field going through at least two “winters” – epochs when research money dried up in the face of AI’s apparently intractable problems, and when other approaches, like rule-based systems, looked more promising. But Artificial Intelligence had now entered the academic mainstream as a sub-field of Computer Science.

Research into Artificial Intelligence can be divided into disciplines that focus on specific problems. Among the most important of these is enabling an Intelligent Agent to harvest data from its environment – and then to use those data to improve its performance of a task. And so the quest for Artificial Intelligence led naturally to the study of “Machine Learning”.
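To make that idea concrete, here is a minimal sketch – ours, purely illustrative and not tied to any particular library – of a program that improves at a task by consuming data: a one-parameter model learns the relationship y = 2x from example observations, and its measured error shrinks as it gains “experience”.

```python
# Illustrative sketch: a one-weight model "learns from its environment".
# The environment supplies example (x, y) pairs where y = 2x; the agent's
# task is to predict y from x, and it improves by adjusting its weight
# to reduce its observed error (stochastic gradient descent).

data = [(x, 2.0 * x) for x in range(1, 11)]  # observations from the "environment"

w = 0.0      # the model's single parameter, initially ignorant
lr = 0.001   # learning rate: how strongly each observation nudges w

def error(w, data):
    """Mean squared prediction error over the observed data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

before = error(w, data)
for _ in range(200):                      # repeated exposure to the data
    for x, y in data:
        w -= lr * 2 * (w * x - y) * x     # nudge w to reduce squared error

after = error(w, data)
assert after < before                     # performance improved with experience
```

The point is not the arithmetic but the pattern: nowhere did we program the rule “multiply by two” – the program inferred it from data, which is exactly the behaviour that distinguishes Machine Learning from conventional, hand-coded logic.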

Since Artificial Intelligence is also concerned with many other issues – reasoning and problem-solving, knowledge representation, agency and cognition, Hollywood movies about a dystopian future ruled by killer robots, etc. – Machine Learning is only a sub-field of Artificial Intelligence, which is itself a sub-field of Computer Science.

It was the quest for Artificial Intelligence that gave us Machine Learning. And in the next installment of this blog, we’ll explore how machine learning gave us Data Mining – and how vendor marketing departments have now taken Machine Learning back to the future.



Author: Dr. Frank Säuberlich

Dr. Frank Säuberlich leads the Data Science & Data Innovation unit of Teradata Germany. It is part of his responsibilities to make the latest market and technology developments available to Teradata customers. Currently, his main focus is on topics such as predictive analytics, machine learning and artificial intelligence.
Following his studies in business mathematics, Frank Säuberlich worked as a research assistant at the Institute for Decision Theory and Corporate Research at the University of Karlsruhe (TH), where he was already working on data mining questions.

His professional career included the positions of a senior technical consultant at SAS Germany and of a regional manager customer analytics at Urban Science International. Frank has been with Teradata since 2012. He began as an expert in advanced analytics and data science in the International Data Science team. Later on, he became Director Data Science (International).


View all posts by Dr. Frank Säuberlich

Author: Martin Willcox

Martin leads Teradata’s EMEA technology pre-sales function and organisation and is jointly responsible for driving sales and consumption of Teradata solutions and services throughout Europe, the Middle East and Africa. Prior to taking up his current appointment, Martin ran Teradata’s Global Data Foundation practice and led efforts to modernise Teradata’s delivery methodology and associated tool-sets. In this position, Martin also led Teradata’s International Practices organisation and was charged with supporting the delivery of the full suite of consulting engagements delivered by Teradata Consulting – from Data Integration and Management to Data Science, via Business Intelligence, Cognitive Design and Software Development.

Martin was formerly responsible for leading Teradata’s Big Data Centre of Excellence – a team of data scientists, technologists and architecture consultants charged with supporting Field teams in enabling Teradata customers to realise value from their Analytic data assets. In this role Martin was also responsible for articulating Teradata’s Big Data strategy to prospective customers, analysts and media organisations outside of the Americas. During his tenure in this position, Martin was listed in dataIQ’s “Big Data 100” as one of the most influential people in UK data-driven business in 2016. His Strata (UK) 2016 keynote can be found at: www.oreilly.com/ideas/the-internet-of-things-its-the-sensor-data-stupid; a selection of his Teradata Voice Forbes blogs can be found online here; and more recently, Martin co-authored a series of blogs on Data Science and Machine Learning – see, for example, Discovery, Truth and Utility: Defining ‘Data Science’.

Martin holds a BSc (Hons) in Physics & Astronomy from the University of Sheffield and a Postgraduate Certificate in Computing for Commerce and Industry from the Open University. He is married with three children and is a solo glider pilot, supporter of Sheffield Wednesday Football Club, very amateur photographer – and an even more amateur guitarist.

View all posts by Martin Willcox
