How did AI develop?
By Dr Kathy Nicholson, Operations Manager, Australian Institute for Machine Learning, the University of Adelaide.
This article is an extract from , a report published in partnership with the .
The allure and potential of AI have long captured the human imagination. Artificial beings appear in the robotic Talos of Greek mythology and the automata of the Middle Ages, as well as in more recent storytelling such as the monster of Mary Shelley’s Frankenstein, Isaac Asimov’s I, Robot, and The Terminator.
While English computer scientist and cryptanalyst Alan Turing is commonly known as the father of modern AI for his work in the 1940s and 1950s, the earliest research into thinking machines dates to the late 1830s, when Charles Babbage designed the analytical engine, a concept for the first mechanical general-purpose computer. In 1843, Ada Lovelace wrote the first published computer program for it, recognising that the engine could manipulate symbols and tackle general problems rather than simply crunch numbers.
A century later, in the 1940s, the first modern programmable digital computer was built and used for code breaking during World War II.
From those early beginnings, the potential for computers to solve the world’s biggest problems felt tangible. But converting those ideas into reality required a series of iterative technological advances in multiple domains, including materials, computer hardware, data science and logic.
It was widely believed that thinking machines would soon be deployed at scale and that general AI was just over the horizon, so humanity needed mechanisms to control and regulate them. Two ideas that have stood the test of time emerged:
- The Turing Test (1950) assesses whether a machine can exhibit intelligent behaviour indistinguishable from that of a human. To pass this simple test, a computer must hold a conversation with a human without being detected as a machine. To date, no computer has passed the test, although some have come close.
- Isaac Asimov’s Three Laws of Robotics (1942) hold that: 1) a robot shall not harm a human or, through inaction, allow a human to come to harm; 2) a robot shall obey instructions given to it by a human, unless doing so would conflict with the first law; and 3) a robot shall protect its own existence, unless doing so would conflict with the first or second law. Variations and additions to these laws have been proposed by various researchers in recent decades.
The term ‘artificial intelligence’, and the birth of modern AI research, were the result of a Dartmouth College workshop in the summer of 1956. Over several weeks, around 20 mathematicians and scientists came together and built consensus from what had previously been an array of divergent concepts and ideas.
AI is based on the concept that a machine can mimic the process of human thought. Two competing approaches to modern AI emerged. The first is logic-based and uses explicit rules to manipulate symbols. The second uses artificial neural networks that loosely mimic how the human brain works, allowing systems to be trained on examples to solve problems.
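To make the contrast concrete, here is a minimal sketch, not drawn from the report, comparing the two approaches on a hypothetical spam-filtering task: a hand-written rule stands in for the symbolic approach, and a tiny perceptron that learns its rule from labelled examples stands in for the neural approach. The function names, features and data are illustrative assumptions only.

```python
# Illustrative sketch only (not from the report): contrasts a symbolic,
# rule-based approach with a tiny neural unit (a perceptron) that learns
# its rule from labelled examples. Task, features and data are hypothetical.

# Symbolic approach: a human writes the rule explicitly.
def is_spam_rule_based(message: str) -> bool:
    banned = {"winner", "free", "prize"}
    return any(word in message.lower() for word in banned)

# Neural approach: the rule is learned from labelled examples.
def train_perceptron(examples, features, epochs=20, lr=0.1):
    weights = [0.0] * len(features)
    bias = 0.0
    for _ in range(epochs):              # repetition: many passes over the data
        for text, label in examples:     # label: 1 = spam, 0 = not spam
            x = [1.0 if f in text.lower() else 0.0 for f in features]
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = label - pred
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

features = ["winner", "free", "prize", "meeting"]
examples = [
    ("you are a winner of a free prize", 1),
    ("free prize inside", 1),
    ("meeting moved to 3pm", 0),
    ("agenda for tomorrow's meeting", 0),
]
weights, bias = train_perceptron(examples, features)

print(is_spam_rule_based("claim your free prize"))  # True: rule written by hand
print(weights, bias)                                # rule learned from data
```

The contrast is the point: in the first function a person encodes the knowledge directly, while in the second the weights that encode the knowledge emerge only after repeated passes over the data, which is also why modern AI needs so many examples.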
From 1957 to 1974, AI flourished alongside rapid advances in computer hardware and technology. Computers’ capacity to store information increased and they became significantly faster.
Research funding was plentiful, enabling increasingly sophisticated algorithms. Early AI researchers predicted that, within 10–20 years, AI would beat world-leading chess players, replace humans in the workforce and demonstrate general intelligence. Early milestones included:
- 1955: the first self-learning game program
- 1961: General Motors’ launch of the first robot on a production assembly line
- 1965: ELIZA, the first chatbot, created at the MIT Artificial Intelligence Laboratory.
But when AI failed to deliver on those early promises, primarily because of limits on funding, computer memory and processing power, investors became disillusioned. The AI field was plunged into a series of ‘AI winters’, in which support for AI development all but disappeared.
Academic research continued slowly through the first AI winter (1974–1980) but was reinvigorated when a new boom in research funding in the 1980s coincided with an effort to use AI to create commercial products. Advances included:
- expert systems: rule-based programs that emulate the decision-making of human experts in narrow domains
- major developments in deep-learning methods
- ALVINN, Carnegie Mellon University’s Autonomous Land Vehicle In a Neural Network (1989).
The second AI winter (1987–1993) began when hardware was unable to keep up with the growing complexity of expert systems.
As the field emerged from that period, advances in algorithms, computer chips and data storage reduced the cost of deep learning for researchers and entrepreneurs. With huge datasets, modern neural networks can often exceed human performance on specific tasks and can even learn from experience.
Some predictions from the early AI researchers have proved true:
- 1997: IBM’s Deep Blue beat chess grandmaster Garry Kasparov
- 2011: IBM’s Watson beat human Jeopardy! champions on television
- Computers now reliably take over tasks such as powering online shopping recommendations; acting as digital personal assistants; translating languages; navigating; detecting, counting and labelling objects; recognising faces; and 3D imaging.
But other predictions, such as human-like general intelligence, have yet to be realised. Unlike children, who can often learn from a single experience, AI needs repetition, and lots of it.
Recent advances in AI have also opened up new challenges that require input from philosophers, ethicists and legal experts: questions of data privacy, equity and regulation, as well as barriers to trusting AI systems, such as data bias and the ‘black box’ nature of commercial systems.
The 21st century has been the era in which AI began to mature and deliver value to humans, at least commercially. Companies such as Amazon (founded 1994), Google (1998) and Facebook (2004) have all relied partly on AI to underpin their phenomenal growth and success. Globally, the AI market is set to exceed US$500 billion in revenue in the next two years.
The future of AI is promising. History makes clear that we must keep investing in R&D and work closely with communities to take the next exciting technological step.