When Did Machine Learning Start? | 30 Big Events You Should Know in The History of Machine Learning

In recent years, everyone around us has started talking about machine learning, its applications, and its consequences, and a large number of students and professionals have been drawn to the field. Consequently, many online learning platforms have come into the limelight. So the question is:

When did Machine Learning start?


Ever since humans began thinking about computing, the field has seen many ups and downs, but the major developments in machine learning began around the middle of the 20th century. Below is a brief summary of the progress made in the history of machine learning to date.


In 1943, Warren S. McCulloch and Walter Pitts published the article "A Logical Calculus of the Ideas Immanent in Nervous Activity". In it, they gave an early mathematical description of how neural networks could function, an approach they framed as mimicking the brain.


In 1950, a major milestone in AI arrived when Alan Turing proposed the "Turing Test" for machines. According to this test, if a machine can fool a human into believing it is human, the machine is said to possess intelligence. The test is still referenced today in discussions of machine intelligence.


In 1951, the Stochastic Neural Analog Reinforcement Calculator (SNARC), the first artificial neural network machine, was built by Marvin Minsky and Dean Edmonds. They used 3,000 vacuum tubes to simulate a network of 40 neurons, and the machine was able to learn.


In 1952, IBM's Poughkeepsie Laboratory was working on some of the very first machine learning programs. Arthur Samuel joined the lab and, in the same year, developed a program that could play checkers and, by interacting with its environment, learn to improve on its own.


In 1956, the first artificial intelligence program, known as the Logic Theorist, was developed by Allen Newell and Herbert Simon (with programmer Cliff Shaw). It successfully proved 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica.


In 1957, Frank Rosenblatt invented the perceptron, a supervised learning algorithm for binary classification.
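To make the idea concrete, here is a toy perceptron in Python that learns the logical AND function. This is an illustrative sketch of the classic update rule, not Rosenblatt's original implementation; all names and data are made up.

```python
# A minimal perceptron learning logical AND (illustrative example).

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights w and bias b with the classic perceptron update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict 1 if the weighted sum crosses the threshold, else 0.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            # Nudge the weights toward the correct answer.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # logical AND
w, b = train_perceptron(samples, labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in samples]
print(preds)  # → [0, 0, 0, 1]
```

The perceptron is guaranteed to converge like this only when the classes are linearly separable, which is exactly the limitation that was later pointed out by Minsky and Papert.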


In 1959, Arthur Samuel coined the term "machine learning", observing that a machine could learn to play checkers better than the person who wrote the program.


In the early 1960s, Donald Michie built MENACE, a machine that learned to play tic-tac-toe using reinforcement learning, improving from the outcomes of the games it played. This result encouraged other researchers to explore learning algorithms for similar problems.


In 1967, the nearest neighbor algorithm was devised: an early method that classifies a new data point by finding the most similar example seen before. It was soon applied to basic pattern recognition and to tasks such as mapping routes, and nearest neighbor methods are still widely used to solve a variety of problems today.
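The core idea fits in a few lines. Below is a toy 1-nearest-neighbor classifier in Python; the two clusters and their labels are invented purely for illustration.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Toy training set: two made-up clusters labeled "A" and "B".
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.2), "B")]

def predict(query):
    # Label the query point with the label of its closest training point.
    point, label = min(train, key=lambda item: dist(item[0], query))
    return label

print(predict((1.1, 0.9)))  # → A (closest neighbors are the "A" cluster)
print(predict((5.1, 4.9)))  # → B (closest neighbors are the "B" cluster)
```

Despite its simplicity, this is still a respectable baseline for many classification tasks, which is part of why the method has endured.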


In 1970, Seppo Linnainmaa published a general method for the automatic differentiation of composite functions. His method corresponds to the modern reverse-mode form of backpropagation, which is now used to train most neural networks.


In 1979, the Stanford Cart was developed; it could navigate a room on its own and avoid the obstacles it found in its path.


In 1980, Kunihiko Fukushima published a paper on the Neocognitron, a multilayered artificial neural network. The Neocognitron has been widely used for pattern recognition, particularly handwritten character recognition, and it inspired later convolutional neural networks.


In 1981, Gerald Dejong introduced the concept of explanation-based learning. In this approach, the algorithm first analyzes the training data and then derives a general rule it can follow, allowing it to discard unimportant data.


In 1982, John Hopfield invented Hopfield networks, a kind of recurrent neural network that can serve as a content-addressable memory system.
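"Content-addressable" means the network can recover a full stored pattern from a corrupted fragment of it. The toy Python demo below stores one made-up binary pattern with the Hebbian rule and then recalls it from a noisy copy; it is a minimal sketch, not a faithful reproduction of Hopfield's paper.

```python
# Tiny Hopfield-network demo of content-addressable memory.
pattern = [1, -1, 1, -1, 1, -1, 1, -1]  # illustrative 8-unit pattern
n = len(pattern)

# Hebbian weights: w[i][j] = x_i * x_j, with no self-connections.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

def recall(state, steps=5):
    state = list(state)
    for _ in range(steps):  # synchronous updates until stable
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

noisy = [-1, -1, 1, -1, 1, -1, 1, -1]  # first unit flipped
print(recall(noisy))  # recovers the stored pattern
```

With one stored pattern a single update step already fixes the flipped unit; with more patterns, capacity limits and spurious attractors appear, which is what made these networks theoretically interesting.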


In 1985, Terry Sejnowski developed NetTalk, a program that learned to pronounce written English words in much the same way a baby learns to speak.


In 1989, Christopher Watkins developed Q-learning, a model-free reinforcement learning algorithm. "Model-free" does not mean the agent needs no training; it means the agent learns directly from the rewards it receives after taking actions, without building an explicit model of its environment.
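The whole algorithm is one update rule: Q(s, a) is nudged toward the observed reward plus the discounted value of the best next action. Here is a toy Python sketch on an invented 5-state corridor, where the agent must learn to walk right to reach a reward.

```python
import random

# Q-learning on a toy corridor: states 0..4, reward 1.0 at state 4.
# Actions: 0 = step left, 1 = step right. All values are illustrative.
random.seed(0)
n_states, actions = 5, [0, 1]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(n_states)]

for _ in range(500):  # episodes
    s = 0
    while s != 4:
        # Epsilon-greedy choice: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[s][act])
        s_next = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s_next == 4 else 0.0
        # The model-free update: learn from the observed reward and the
        # best estimated value of the next state -- no environment model.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

greedy_path = [max(actions, key=lambda act: Q[s][act]) for s in range(4)]
print(greedy_path)  # → [1, 1, 1, 1]: the learned policy steps right everywhere
```

Note that the agent is never told how the corridor works; the transition rule exists only so the simulation can respond, and the learner sees nothing but states and rewards.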


In 1995, Tin Kam Ho published a paper describing random decision forests, the forerunner of the random forest algorithm, which can be used for classification, regression, and other tasks.


Also in 1995, Corinna Cortes and Vladimir Vapnik published their work on the support vector machine (SVM) algorithm.


In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov.


Also in 1997, Sepp Hochreiter and Jürgen Schmidhuber developed long short-term memory (LSTM), which greatly increased the efficiency and practical usefulness of recurrent neural networks.


In 2006, Netflix launched a competition in which participants had to develop an algorithm that could beat its own recommendation system by at least 10%. In 2009, team "The Ensemble" achieved an improvement of 10.09% on the leaderboard, but the prize went to BellKor's Pragmatic Chaos, which tied the final score and had submitted earlier; Netflix never put the winning algorithm into production.


In 2006, Geoffrey Hinton popularized the term "deep learning" to describe the use of many-layered artificial neural networks to identify text and objects in images and videos.


In 2009, ImageNet, a large visual database for training computer-vision systems, was created by Fei-Fei Li and her collaborators.


In 2010, Kaggle, a website that hosts machine learning competitions, was launched; it is now used by data scientists all over the world.


In 2011, IBM's Watson, using natural language processing, machine learning algorithms, and a vast store of data, beat two human champions at Jeopardy!.


In 2012, the Google Brain team, led by Andrew Ng and Jeff Dean, created a neural network that learned to recognize cats in YouTube videos. Notably, the network learned this from unlabeled data.


In 2014, Facebook published DeepFace, a system able to identify faces with 97.35% accuracy.


In 2015, Microsoft and Amazon launched their own machine learning platforms.


In 2016, Google's AlphaGo became the first program to beat a professional human player at the Chinese board game Go.


In 2017, OpenAI's Dota 2 bot defeated top professional players in one-on-one matches; within three days of its public debut it reportedly won 7,215 games against human players, a win rate of around 99.4%. Results like this suggest that machines may one day be able to pass the Turing Test.

We hope you now have a good idea of when machine learning started.

That’s it!

No, it's just the beginning of an era in which we may be able to fully mimic the functionality of the brain. The amount of data generated today far exceeds everything generated in the rest of mankind's history, which makes it ever easier for computer programs to learn. Let's see what new and exciting things happen next in the field of machine learning.
