20th Century
The Further History of Machine Learning
In the 20th century, machine learning began to develop rapidly thanks to advances in mathematics, statistics, and computer science. The first learning algorithms, such as the perceptron, emerged, and the foundations of neural networks and other key methods were laid. With the advancement of computers and the growth in available data in the 1990s, machine learning became practical and widely used in fields such as image recognition, natural language processing, and recommendation systems.
The Early History (Pre-1940)
1913: Andrey Markov introduced Markov chains, which later became essential in various machine learning applications, including natural language processing and speech recognition.
1925: Ronald Fisher published "Statistical Methods for Research Workers," introducing many of the statistical concepts that underpin machine learning, such as maximum likelihood estimation.
1936: Alan Turing introduced the concept of the Turing machine in his paper "On Computable Numbers," which explained how machines could follow a set of instructions (algorithms) to solve problems. This was foundational for the development of machine learning algorithms.
1937: Claude Shannon's master's thesis at MIT demonstrated that Boolean algebra could be used to simplify the design of electrical switching circuits. This work was crucial for the development of digital computers, which would later enable complex machine learning computations.
1938: Claude Shannon published "A Symbolic Analysis of Relay and Switching Circuits," further developing the application of Boolean algebra to circuit design, which would be essential for building computers capable of running machine learning algorithms.
The Era of Stored Program Computers
1943: Warren McCulloch and Walter Pitts proposed a model of the artificial neuron and showed how networks of such neurons could be built from simple electrical circuits. This was one of the earliest studies of how machines could mimic brain activity, a concept later applied to tasks such as pattern recognition.
1945: ENIAC, the first general-purpose electronic computer, was completed. It had to be programmed manually by setting switches and plugging in cables, but it was nevertheless a breakthrough in electronic computing.
1949-1951: The development of stored-program computers such as the EDSAC and EDVAC allowed computers to hold both data and programs in memory, giving them the ability to perform more complex operations automatically.
Computing Machinery and Intelligence
1950: Alan Turing published his famous paper, "Computing Machinery and Intelligence," which explored the question, "Can machines think?" This is one of the earliest papers in the field of artificial intelligence, and it had a huge impact on the entire field.
1952: Arthur Samuel developed a checkers-playing program for IBM computers that improved its performance the more it played. This was one of the first self-learning programs, an early example of machine learning.
1959: Arthur Samuel coined the term “Machine Learning” to describe the process by which computers could learn and improve from experience without being explicitly programmed for every task.
1959: Bernard Widrow and Marcian Hoff developed ADALINE and MADALINE at Stanford. MADALINE became the first neural network applied to a real-world problem, using an adaptive filter to remove echoes on phone lines.
1964: Joseph Weizenbaum began work at MIT on ELIZA, an early natural language processing program that simulated conversation; development continued through 1967.
1974: Kaissa, a Soviet chess program developed in the early 1970s at the Institute of Control Sciences in Moscow, won the first World Computer Chess Championship in Stockholm. It was one of the first chess programs to achieve notable success in computer chess competitions, and its victory marked a significant milestone in the history of artificial intelligence and game-playing algorithms.
The First “AI Winter” (1974-1980)
1974-1980: This was a tough period for AI and machine learning researchers. There were high expectations, but many AI projects failed to deliver results, leading to a reduction in government funding and a general lack of interest. This time of stalled progress was called the “AI winter.”
The Second “AI Winter” (1987–1993)
1987–1993: The second “AI winter” saw interest and funding in artificial intelligence drop sharply once again. Inflated expectations, a lack of notable progress, and the high cost of AI projects led many ambitious efforts to fall short of their promised results, prompting cuts in both government and private-sector funding. The hype surrounding AI in the 1980s gave way to disappointment and a second period of stagnation in AI development.
The Expert Systems of the 80s: A Resurrection
In the 1980s, AI experienced a revival with the development of expert systems, which mimicked the decision-making process of specialists in specific fields. While these systems proved useful in certain areas, their application remained limited.
The Rise of the Internet and the Era of Machine Learning
With the advent of the Internet and the exponential growth of available data, machine learning algorithms and neural networks gained new life. The ability to analyze and learn from large data sets opened up a world of possibilities for AI applications in everyday life.
1985: Researchers Terry Sejnowski and Charles Rosenberg created NETtalk, a neural network that taught itself how to pronounce 20,000 words in just one week, showcasing the potential of machine learning for language processing tasks.
1997: IBM’s Deep Blue computer made headlines by beating world chess champion Garry Kasparov, becoming the first computer to defeat a human champion in chess. This showed the world that machine learning could be applied to strategic, decision-based games.