
A Brief History of Machine Learning (ML) Algorithm

In our recent article, we covered A Brief History of Artificial Intelligence. Today we are going to look at one of the most popular and widely discussed technologies: Machine Learning (ML).

In a nutshell, machine learning uses mathematical algorithms to learn from and analyse data in order to make predictions and decisions.

Today, ML algorithms enable computers to communicate with humans, autonomously drive cars, write and publish match reports, predict natural disasters or find terrorist suspects.

Machine learning has been one of the most commonly heard buzzwords of recent years.

So let’s jump in and examine the origins of machine learning and some of its recent milestones.

The concept of machine learning came into the picture in 1950, when Alan Turing, a pioneering computer scientist, published a paper posing the question, “Can machines think?”

He proposed that a machine which succeeded in convincing a human that it was not, in fact, a machine could be said to exhibit intelligence. This became known as the Turing Test.

In 1957, Frank Rosenblatt designed the first neural network for computers, now commonly called the Perceptron model.

The Perceptron algorithm was designed to classify visual inputs, categorizing each input into one of two groups.
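To give a flavour of the idea, here is a minimal sketch of perceptron learning in Python. This is an illustrative modern reconstruction, not Rosenblatt's original hardware implementation, and the toy data, learning rate and epoch count are made up for demonstration.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn a linear boundary separating two classes labelled -1 and +1."""
    w = np.zeros(X.shape[1])   # weight vector
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update the weights only when the current prediction is wrong
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy example: two clusters of points, one per class
X = np.array([[0.0, 0.0], [0.2, 0.3], [1.0, 1.0], [0.9, 0.8]])
y = np.array([-1, -1, 1, 1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # expected: [-1. -1.  1.  1.]
```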

In 1959, Bernard Widrow and Marcian Hoff created two neural network models: Adaline, which could recognize binary patterns, and Madaline, which could eliminate echoes on phone lines. The latter found a real-world application.

In 1967, the “Nearest Neighbor” algorithm was written, which allowed computers to begin using very basic pattern recognition.
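The underlying idea is simple: label a new data point by copying the label of the closest stored example. The short Python sketch below illustrates this with Euclidean distance and invented toy data; it is not the 1967 implementation.

```python
import numpy as np

def nearest_neighbor(X_train, y_train, x_query):
    """Return the label of the training point closest to the query point."""
    distances = np.linalg.norm(X_train - x_query, axis=1)
    return y_train[np.argmin(distances)]

# Toy data: two clusters with labels "A" and "B"
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.1, 4.9]])
y_train = np.array(["A", "A", "B", "B"])

print(nearest_neighbor(X_train, y_train, np.array([0.9, 1.1])))  # -> "A"
print(nearest_neighbor(X_train, y_train, np.array([4.8, 5.2])))  # -> "B"
```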

In 1981, Gerald DeJong introduced the concept of explanation-based learning, in which a computer analyzes training data and creates a general rule by discarding unimportant information.

During the 1990s, work on machine learning shifted from a knowledge-driven approach to a more data-driven approach.

Scientists began creating programs for computers to analyze large amounts of data and draw conclusions or “learn” from the results.

Now let’s talk about some of the recent achievements in this field. 

In 2011, IBM’s Watson beat two human champions on the quiz show Jeopardy!, using a combination of machine learning, natural language processing and information retrieval techniques.

In 2016, Google’s AlphaGo program became the first computer program to defeat a professional human Go player, using a combination of machine learning and tree search techniques.

Since the start of the 21st century, many businesses have ventured into ambitious machine learning projects. Google Brain, AlexNet, DeepFace, DeepMind, OpenAI, the Amazon Machine Learning Platform and ResNet are some of the large projects taken up by top-tier companies.

Amazon, Netflix, Google, Salesforce and IBM are dominating the IT industry with ML. 

ML has scaled exponentially in recent decades. As the quantity of data we produce continues to grow, so will our computers’ ability to process and analyze it.


If you liked this article, take a look at our Subscribe page.