
A History of Artificial Intelligence

Teaching Intelligence



Alan Turing continued his research, believing computers could do more than calculate numbers. In 1950 he published a paper, “Computing Machinery and Intelligence,” that effectively founded the branch of computer science known today as AI. With David Champernowne, Turing devised one of the first programs enabling a computer to play chess against a human; chess, with its well-defined rules, became a standard against which machine intelligence was measured. Turing also developed another challenge, now known as the Turing test. It calls for a human to use a computer terminal to hold conversations with several different people as well as with a machine. If the human cannot determine which of the conversations are with a person and which are with the machine, the test is passed and the machine is considered “intelligent.” The annual Loebner Prize competition was later established, offering a $100,000 reward to the creators of the first machine that could pass the test.



MARVIN MINSKY

Marvin Lee Minsky (1927– ) has been one of the leading developers of artificial intelligence since the 1950s, when the term was coined. He cofounded MIT's AI Laboratory, but is best known for his written works on AI and its philosophical implications.

Minsky was said to be a prodigy in both math and music. Fascinated by the question of intelligence since high school, he went on to study mathematics, earning a bachelor's degree from Harvard (1950) and a doctorate from Princeton (1954). Restless, he sought inspiration from other disciplines as he tried to understand how the mind worked.

In 1951, with colleague Dean Edmonds, Minsky built a machine named SNARC that could learn to maneuver through a maze. Considered the first artificial neural network machine, it prompted his doctoral thesis on learning.

Since then, Minsky has worked almost exclusively with computers, attending the 1956 Dartmouth conference where the term “artificial intelligence” was introduced. With John McCarthy, he founded MIT's AI Lab in 1959, where he remains today. Minsky has made a career of studying the brain, intelligence, and learning, applying much of that knowledge to designing computer systems that can learn. He considers himself a leading living authority on the human mind.

Minsky's achievements have earned him numerous patents, including one for the first head-mounted graphical display (1963). He also consulted with director Stanley Kubrick on how AI should be portrayed in the film 2001: A Space Odyssey. Minsky has been honored around the world for his efforts.

In the 1960s, Daniel G. Bobrow wrote a program called Student that could solve algebra word problems stated in ordinary English. Thomas G. Evans followed with his own program, Analogy, which solved the geometric-analogy problems found on intelligence tests. Edward A. Feigenbaum co-developed the program DENDRAL, which inferred the molecular structure of organic compounds from mass-spectrometry data.

People began predicting a limitless horizon for computer programming, expecting that machines would soon compose music, translate languages, and play chess at the grandmaster level. Those predictions eventually came to pass, but achieving them took much longer than expected.

The field of AI earned its place in the public consciousness with the rise of video games, first in arcades in the 1970s and then on handheld devices and home consoles such as the Atari 2600 and Intellivision.

It wasn't until the late 1990s, however, that some of the early predictions about AI came to pass. IBM's Deep Blue supercomputer defeated world chess champion Garry Kasparov in a six-game match in 1997. Since then, chess has continued to serve as a benchmark for AI development around the world, including at a summer 2006 competition in China.
