A short history of AI - a story of human ingenuity
The honest truth is that we wouldn’t be where we are today with AI if it weren’t for some amazing scientists and philosophers who dared to dream the impossible. While many of us assume AI was invented in the last 20 years or so, ideas about artificial intelligence can be traced back to ancient history, where the concept of creating intelligent beings appears in myths and stories. Some of the oldest and greatest philosophers laid the motivation and foundation for intelligent beings. For example, the Greek philosopher Aristotle devised syllogistic logic, which laid the groundwork for formal reasoning.
Though a lot of ideation and imagination went into building an image of an “intelligent non-human being”, it was only in the 20th century that serious efforts to create intelligent machines began. The advent of the industrial age, and progress in mechanical and electrical engineering, helped shift the idea from an “intelligent being” to an intelligent machine. In fact, the term "artificial intelligence" was coined in 1956 by John McCarthy at the Dartmouth Conference, which is often considered the birth of AI as a field of study.
One of the early pioneers, Alan Turing, made significant contributions with his concept of the Turing Test, which aimed to determine a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. Turing’s visionary thinking laid the foundation for modern computer science and AI. His story is particularly poignant; during World War II, he played a crucial role in breaking the Enigma code, significantly aiding the Allied war effort. Unfortunately, Turing faced persecution for his homosexuality, which was criminalized in Britain at the time, leading to a tragic end to his life. His legacy, however, is immortalized in the Turing Award, often regarded as the "Nobel Prize of Computing."
The 1950s and 1960s were the formative decades for AI, when the vision of intelligent machines slowly started to come to life and research flourished. Pioneers like Marvin Minsky and Herbert Simon explored problem-solving and reasoning. Minsky co-founded the Massachusetts Institute of Technology’s AI laboratory and is known for his work on neural networks and robotics. His innovative spirit often led him to propose ambitious projects, such as creating a machine that could learn from experience. One fascinating anecdote about Minsky involves his efforts to build a robotic hand that could mimic human dexterity using simple materials. This early exploration of robotics has had lasting implications, influencing today's advances in AI and robotics.
Meanwhile, Allen Newell and Herbert Simon developed the Logic Theorist, regarded as the first AI program, capable of solving mathematical problems by simulating human reasoning. Their collaboration was not only productive but also emblematic of the era's optimism about AI. Newell's story is particularly inspiring; he famously worked long hours, often skipping meals, as he believed deeply in the potential of machines to think.
The late 1970s and 1980s saw the emergence of machine learning, a subset of AI focused on the idea that systems can learn from data. This period witnessed the rise of decision trees and neural networks, which would later be pivotal in the evolution of AI. A notable figure during this time was Geoffrey Hinton, often called the "Godfather of Deep Learning." Hinton's research on backpropagation—a method for training neural networks—revolutionized how machines learn from data. His journey was marked by skepticism from the academic community, as many doubted the viability of neural networks. However, Hinton's perseverance led to groundbreaking advancements that laid the foundation for modern deep learning technologies.
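Backpropagation is, at its core, the chain rule applied layer by layer to push error gradients backwards through a network. As a hedged illustration only (not Hinton's original formulation), here is the degenerate one-layer case: a single sigmoid neuron learning the OR function by gradient descent. The learning rate of 0.5, the squared-error loss, and the epoch count are arbitrary choices for this sketch.

```python
import math
import random

def sigmoid(z):
    # Logistic activation, squashing any real number into (0, 1).
    return 1 / (1 + math.exp(-z))

# Training data for the OR function: inputs and target outputs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # two input weights
b = 0.0                                        # bias term

for _ in range(5000):
    for x, t in data:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        # Chain rule for squared error through the sigmoid:
        # dLoss/dz = (y - t) * y * (1 - y)
        grad = (y - t) * y * (1 - y)
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        b -= 0.5 * grad
```

In a multi-layer network the same gradient is propagated further back through each preceding layer, which is what gives backpropagation its name.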
While the 80s and 90s were times of reduced funding for AI, scientists continued to persevere in research, trying to get technology to meet the computational demands of AI. As a result, a significant step change was made in data processing technology. The transistors of the 50s gave way to the microprocessors of the 80s and 90s. The personal computer and falling hardware costs rapidly improved the adoption of computers in industry, and a slew of data processing technologies began to be developed. The data processors of the 70s transformed into the data warehouses of the 80s, and the advent of scientific methods for modeling data in relational databases created an explosion of analytical solutions.
As the 90s rolled on, the pivotal moments that drove step changes in the development of intelligent machines and AI began to happen more frequently. It started with IBM's Deep Blue beating Garry Kasparov at chess in 1997. That event rocked the world, and a mad dash to develop AI was ignited in Silicon Valley. By the early 2000s, Google had published its MapReduce programming model, which finally made it possible to process data at the volumes needed to power large machine learning algorithms. Coupled with the digital transformation happening in commerce, Hadoop clusters for processing large volumes of clickstream data ushered in an age of personalization in digital commerce. Companies like Amazon embraced this technology to redefine shopping.
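The MapReduce idea is simple: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group. Here is a minimal single-machine sketch of the classic word-count example; the real system distributes these same phases across thousands of machines.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each group; here, sum the counts per word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cat sat", "the dog sat"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

Because the map and reduce steps are independent per document and per key, each phase parallelizes naturally, which is what let clickstream-scale data finally be processed.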
In 2012, a pivotal moment occurred when a deep learning algorithm developed by Hinton and his team won the ImageNet competition, demonstrating unprecedented accuracy in image classification. This changed the definition of data. Data, once thought of as lines and columns in Excel sheets and databases, now expanded to include audio, video, and text. Unstructured data was here to stay. The shrinking of computers ushered in the age of connected devices, which further fueled the excitement around unstructured data. This was also a time when social commerce slowly began to emerge. If Amazon redefined shopping, Facebook redefined friendships and conversations. Through network databases, social algorithms, and machine learning, Facebook helped connect millions of people worldwide.
Until 2017, few people thought about talking to AI; it was mostly an intelligent machine that completed assigned predictive tasks. However, the paper "Attention Is All You Need," published by Vaswani et al. in 2017, revolutionized the field of machine learning, particularly natural language processing (NLP). The Transformer architecture introduced in this paper led to significant improvements in various NLP tasks, such as machine translation, text summarization, and sentiment analysis. This led to the development of models such as BERT, GPT, and T5 that ushered in the age of generative AI.
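At the heart of the Transformer is scaled dot-product attention: each query is compared against all keys, the scores are normalized with a softmax, and the result weights a sum of the values. A pure-Python sketch of the paper's formula, softmax(QK^T / sqrt(d_k)) V, at toy size and without the batching or multiple heads a real implementation would have:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    # Scaled dot-product attention with Q, K, V as lists of row vectors.
    d_k = len(K[0])
    output = []
    for q in Q:
        # Score each key against the query, scaled by sqrt(d_k).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        row = [sum(w * v[j] for w, v in zip(weights, V))
               for j in range(len(V[0]))]
        output.append(row)
    return output

# A query aligned with the first key attends almost entirely to the first value.
out = attention([[10.0, 0.0]], [[10.0, 0.0], [0.0, 10.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Because every query attends to every key in one step, the model captures long-range dependencies without the sequential bottleneck of recurrent networks.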
While Generative Adversarial Networks (GANs) were introduced by Ian Goodfellow and his collaborators in 2014, it was the Transformers of 2017, together with the vast availability of computational resources on public cloud environments like AWS, Azure, and Google Cloud, that resulted in the rapid development of generative AI. Machines now had a voice, could reason on their own, and could even answer questions they were never explicitly trained to answer. The world was engulfed in a mad frenzy for chips from Nvidia and GPT models from OpenAI. Today, this space continues to evolve at a dizzying pace, with multiple companies pushing the boundaries of what AI can do.
While the rapid advancement of AI continues, many scientists are pointing out the risks of letting AI loose unchecked. Notably, figures like Fei-Fei Li have emerged as advocates for ethical AI, emphasizing the need for responsible development and deployment of AI technologies. Li's work on ImageNet not only advanced AI research but also sparked conversations about the ethical implications of AI, ensuring that the technology is developed with societal considerations in mind.
The history of artificial intelligence is a testament to human ingenuity, marked by the vision, creativity, and determination of its inventors. From the early theoretical foundations laid by philosophers and mathematicians to the modern advancements driven by machine learning and deep learning, AI has grown into a transformative force in society. As we look to the future, the stories of those who dared to dream of intelligent machines remind us that the quest for knowledge and understanding is a journey worth pursuing. With continued innovation and responsible stewardship, the potential of AI is boundless, promising to shape the world in ways we have yet to imagine.