The History of Artificial Intelligence
Before 1950, hardly anyone spoke of artificial intelligence. John McCarthy, one of the founders of the field, foresaw that machines capable of reasoning would matter in the future, and he first used the term "artificial intelligence" in a 1955 proposal for a research workshop.
The field of artificial intelligence was formally launched in 1956, but AI has become far more prominent today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage. Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this kind of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street-mapping projects in the 1970s, and DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa, or Cortana were household names.
This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities. While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary, or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.
The Beginnings of Artificial Intelligence
Artificial intelligence (AI) as we know it began as a vacation project. Dartmouth professor John McCarthy coined the term in the summer of 1956, when he invited a small group to spend a few weeks musing on how to make machines do things like use language. He had high hopes of a breakthrough toward human-level machines. “We think that a significant advance can be made,” he wrote with his co-organizers, “if a carefully selected group of scientists work on it together for a summer.”
SOME MOMENTS SHAPED ARTIFICIAL INTELLIGENCE
- In 1956, the Dartmouth Summer Research Project on Artificial Intelligence coins the name of a new field concerned with making software smart like humans.
- In 1966, Joseph Weizenbaum at MIT creates Eliza, the first chatbot, which poses as a psychotherapist.
- In 1975 Meta-Dendral, a program developed at Stanford to interpret chemical analyses, makes the first discoveries by a computer to be published in a refereed journal.
- In 1987, a Mercedes van fitted with two cameras and a bank of computers drives itself some 20 kilometers along a German highway at more than 55 mph, in an academic project led by engineer Ernst Dickmanns.
- In 1997 IBM’s computer Deep Blue defeats chess world champion Garry Kasparov.
- In 2004 The Pentagon stages the Darpa Grand Challenge, a race for robot cars in the Mojave Desert that catalyzes the autonomous-car industry.
- In 2012 Researchers in a niche field called deep learning spur new corporate interest in AI by showing their ideas can make speech and image recognition much more accurate.
- In 2016, AlphaGo, created by Google unit DeepMind, defeats a world champion player of the board game Go.
Those hopes were not met, and McCarthy later conceded that he had been overly optimistic. But the workshop helped researchers dreaming of intelligent machines coalesce into a proper academic field.
Early work often focused on solving fairly abstract problems in math and logic. But it wasn’t long before AI started to show promising results on more human tasks. In the late 1950s Arthur Samuel created programs that learned to play checkers. In 1962 one scored a win over a master at the game. In 1967 a program called Dendral showed it could replicate the way chemists interpreted mass-spectrometry data on the makeup of chemical samples.
As the field of AI developed, so did different strategies for making smarter machines. Some researchers tried to distill human knowledge into code or come up with rules for tasks like understanding language. Others were inspired by the central role that learning plays in human and animal intelligence. They built systems that could get better at a task over time, perhaps by simulating evolution or by learning from example data. The field hit milestone after milestone, as computers mastered more tasks that could previously be done only by people.
Deep learning, the rocket fuel of the current AI boom, is a revival of one of the oldest ideas in AI. The technique involves passing data through webs of math loosely inspired by how brain cells work, known as artificial neural networks (ANNs). As a network processes training data, connections between the parts of the network adjust, building up an ability to interpret future data.
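The adjustment process described above can be sketched with a single artificial neuron. The toy example below (plain Python, no ML library, and purely illustrative rather than any historical system) nudges its connection weights toward the right answer each time it sees a training example, here learning the logical AND function:

```python
def step(x):
    """Threshold activation: the neuron fires (1) if its input exceeds 0."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights toward correct outputs, one example at a time."""
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out        # how wrong was the neuron?
            w[0] += lr * err * x1     # strengthen or weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical AND as training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # matches the targets [0, 0, 0, 1] after training
```

Modern deep networks stack many thousands of such units in layers and use a subtler update rule (gradient descent via backpropagation), but the core idea is the same: connections change a little with each training example.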
Artificial neural networks became an established idea in AI not long after the Dartmouth workshop. The room-filling Perceptron Mark 1 from 1958, for example, learned to distinguish different geometric shapes, and got written up in The New York Times as the “Embryo of Computer Designed to Read and Grow Wiser.” But neural networks tumbled from favor after an influential 1969 book co-authored by MIT’s Marvin Minsky suggested that they couldn’t be very powerful.
Not everyone was convinced, and some researchers kept the technique alive over the decades. They were vindicated in 2012, when a series of experiments showed that neural networks fueled with large piles of data and powerful computer chips could give machines new powers of perception.
In one notable result, researchers at the University of Toronto trounced rivals in an annual competition where software is tasked with categorizing images. In another, researchers from IBM, Microsoft, and Google teamed up to publish results showing deep learning could also deliver a significant jump in the accuracy of speech recognition. Tech companies began frantically hiring all the deep-learning experts they could find.