==Big picture==

=== Summary by year ===

{| class="wikitable"
| 1987–1993 || Second AI winter || "Again Investors and government stopped in funding for AI research as due to high cost but not efficient result. The expert system such as XCON was very cost effective."<ref name="javatpoint.coma"/> "The field experienced another major winter from 1987 to 1993, coinciding with the collapse of the market for some of the early general-purpose computers, and reduced government funding."<ref name="livescience.coms"/> "The AI field experienced another major winter from 1987 to 1993. This second slowdown in AI research coincided with XCON, and other early Expert System computers, being seen as slow and clumsy. Desktop computers were becoming very popular and displacing the older, bulkier, much less user-friendly computer banks. Eventually, Expert Systems simply became too expensive to maintain, when compared to desktops. They were difficult to update, and could not “learn.” These were problems desktop computers did not have. At about the same time, DARPA (Defense Advanced Research Projects Agency) concluded AI would not be “the next wave” and redirected its funds to projects deemed more likely to provide quick results. As a consequence, in the late 1980s, funding for AI research was cut deeply, creating the Second AI Winter."<ref name="dataversity.netw"/> However, "By the end of 1980s, over half of the Fortune 500 companies were involved in either developing or maintaining of expert systems"<ref name="washington.edu"/>
|-
| 1993–2011 || "The emergence of intelligent agents" || "In the early 1990s, Artificial Intelligence research shifted its focus to something called an intelligent agent. These intelligent agents can be used for news retrieval services, online shopping, and browsing the web. Intelligent agents are also sometimes called agents or bots."<ref name="dataversity.netw"/> "Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence had been achieved."<ref name="harvard.edu d"/> "However, neural networks would not become financially successful until the 1990s, when they started being used to operate optical character recognition programs and speech pattern recognition programs."<ref name="dataversity.netw"/> "Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics."<ref name="ocw.uc3m.es"/>
|-
| 2011–onward || Massive data and new computing power. "Deep learning, big data and artificial general intelligence" || "In the year 2011, IBM's Watson won jeopardy, a quiz show, where it had to solve the complex questions as well as riddles. Watson had proved that it could understand natural language and can solve tricky questions quickly."<ref name="javatpoint.coma"/> "Two factors explain the new boom in the discipline around 2010. 1) First of all, access to massive volumes of data. To be able to use algorithms for image classification and cat recognition, for example, it was previously necessary to carry out sampling yourself. Today, a simple search on Google can find millions. 2) Then the discovery of the very high efficiency of computer graphics card processors to accelerate the calculation of learning algorithms. The process being very iterative, it could take weeks before 2010 to process the entire sample. The computing power of these cards (capable of more than a thousand billion transactions per second) has enabled considerable progress at a limited financial cost (less than 1000 euros per card)."<ref name="coe.intf"/>
|-
|}

=== Summary by country ===

==Full timeline==

{| class="wikitable"
|-
| 1854 || || "George Boole argues that logical reasoning could be performed systematically in the same manner as solving a system of equations."<ref name="forbes.coms"/> "George Boole developed a binary algebra representing (some) "laws of thought," published in The Laws of Thought (1854)."<ref name="aitopics.org"/> ||
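|-
| colspan="4" | ''Editor's illustration (a minimal sketch, not from the cited sources): Boole's binary algebra treats truth values as the numbers 0 and 1, so a logical law can be checked by evaluating both sides as ordinary equations over all assignments, here De Morgan's law.''
<syntaxhighlight lang="python">
# A minimal sketch: logic as algebra over {0, 1}, in Boole's spirit.
from itertools import product

for x, y in product([0, 1], repeat=2):
    lhs = 1 - (x * y)            # NOT (x AND y): AND as multiplication
    rhs = max(1 - x, 1 - y)      # (NOT x) OR (NOT y): OR as maximum
    assert lhs == rhs            # De Morgan's law holds for every assignment
print("De Morgan's law verified over all 0/1 assignments")
</syntaxhighlight>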
|-
| 1863 || || "1863 - Samuel Butler suggested that Darwinian evolution also applies to machines, and speculates that they will one day become conscious and eventually supplant humanity."<ref name="sutori.comd">{{cite web |title=The History Of Artificial Intelligence |url=https://www.sutori.com/story/the-history-of-artificial-intelligence--4qEzQz1PPuA9Wo4mBkv2a9BX |website=sutori.com |accessdate=20 March 2020}}</ref> ||
|-
| 1879 || || "Modern propositional logic developed by Gottlob Frege in his 1879 work Begriffsschrift and later clarified and expanded by Russell, Tarski, Godel, Church and others."<ref name="aitopics.org"/> ||
|-
| 1929 || || "Makoto Nishimura designs Gakutensoku, Japanese for "learning from the laws of nature," the first robot built in Japan. It could change its facial expression and move its head and hands via an air pressure mechanism."<ref name="forbes.coms"/> ||
|-
| 1931 || || "1931: Kurt Gödel introduced the theory of deficiency, which is called by his own name."<ref name="Mijwil"/> "In 1931, Goedel layed the foundations of Theoretical Computer Science and AI"<ref name="people.idsia.ch">{{cite web |title=Artificial Intelligence |url=http://people.idsia.ch/~juergen/ai.html |website=people.idsia.ch |accessdate=21 March 2020}}</ref> ||
|-
| 1936 || || "1936: Konrad Zuse developed a programmable computer named Z1 named 64K memory."<ref name="Mijwil"/> ||
|-
| 1936–1937 || || "Alan Turing proposed the universal Turing machine (1936-37)"<ref name="aitopics.org"/> ||
|-
| 1943 || || "McCulloch and Pitts [1943] showed how a simple thresholding “formal neuron” could be the basis for a Turing-complete machine."<ref name="artint.info"/> "Warren S. McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the Bulletin of Mathematical Biophysics. This influential paper, in which they discussed networks of idealized and simplified artificial “neurons” and how they might perform simple logical functions, will become the inspiration for computer-based “neural networks” (and later “deep learning”) and their popular description as “mimicking the brain.”"<ref name="forbes.coms"/> "In 1943 the neurophysiologist Warren McCulloch of the University of Illinois and the mathematician Walter Pitts of the University of Chicago published an influential treatise on neural nets and automatons, according to which each neuron in the brain is a simple digital processor and the brain as a whole is a form of computing machine."<ref name="britannica.coms"/> "a first mathematical and computer model of the biological neuron (formal neuron) had been developed by Warren McCulloch and Walter Pitts as early as 1943."<ref name="coe.intf">{{cite web |title=History of Artificial Intelligence |url=https://www.coe.int/en/web/artificial-intelligence/history-of-ai |website=coe.int |accessdate=7 February 2020}}</ref> "The first work which is now recognized as AI was done by Warren McCulloch and Walter pits in 1943. They proposed a model of artificial neurons."<ref name="javatpoint.coma">{{cite web |title=History of Artificial Intelligence |url=https://www.javatpoint.com/history-of-artificial-intelligence |website=javatpoint.com |accessdate=7 February 2020}}</ref> ||
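|-
| colspan="4" | ''Editor's illustration (a minimal sketch, not from the cited sources): a McCulloch–Pitts "formal neuron" fires (outputs 1) exactly when the weighted sum of its binary inputs reaches a threshold; with suitable weights and thresholds it realizes the elementary logic gates the sources describe.''
<syntaxhighlight lang="python">
# A minimal sketch of a McCulloch-Pitts threshold neuron.
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

AND = lambda x, y: mp_neuron([x, y], [1, 1], threshold=2)
OR  = lambda x, y: mp_neuron([x, y], [1, 1], threshold=1)
NOT = lambda x:    mp_neuron([x],    [-1],   threshold=0)

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}  AND={AND(x, y)}  OR={OR(x, y)}  NOT x={NOT(x)}")
</syntaxhighlight>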
|-
| 1943 || || "Arturo Rosenblueth, Norbert Wiener & Julian Bigelow coin the term "cybernetics" in a 1943 paper. Wiener's popular book by that name published in 1948."<ref name="aitopics.org"/> ||
|-
| 1943 || || "Emil Post proves that production systems are a general computational mechanism (1943). See Ch.2 of Rule Based Expert Systems for the uses of production systems in AI. Post also did important work on completeness, inconsistency, and proof theory."<ref name="aitopics.org"/> ||
|-
| 1945 || || "George Polya published his best-selling book on thinking heuristically, How to Solve It in 1945. This book introduced the term 'heuristic' into modern thinking and has influenced many AI scientists."<ref name="aitopics.org"/> ||
|-
| 1950 || || "In a 1950 Scientific American article, Claude Shannon argued that only an artificial intelligence program could play computer chess"<ref name="atariarchives.org">{{cite web |title=A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE |url=https://www.atariarchives.org/deli/artificial_intelligence.php |website=atariarchives.org |accessdate=21 March 2020}}</ref> ||
|-
| 1951 || || "Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3000 vacuum tubes to simulate a network of 40 neurons."<ref name="forbes.coms"/> ||
|-
| 1951 || || "1951: The first artificial intelligence programs for the Mark 1 device were written"<ref name="Mijwil"/> ||
|-
| 1952 || || "Hodgkin-Huxley model of the brain as neurons forming an electrical network, with individual neurons firing in all-or-nothing (on/off) pulses."<ref name="dataversity.netw">{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.dataversity.net/brief-history-artificial-intelligence/ |website=dataversity.net |accessdate=7 February 2020}}</ref> ||
|-
| 1953 || || "Other recent work includes the development of languages for reasoning about time-dependent data such as “the account was paid yesterday.” These languages are based on tense logic, which permits statements to be located in the flow of time. (Tense logic was invented in 1953 by the philosopher Arthur Prior at the University of Canterbury, Christchurch, New Zealand.)"<ref name="britannica.coms"/> ||
|-
| 1954 || || "In the US, one of the main motivations for the funding of AI research was the promise of machine translation (MT). Because of Cold War concerns, the US government was particularly interested in the automatic and instant translation of Russian. In 1954, the first demonstration of MT, the Georgetown-IBM experiment, showed a great promise. The system was by no means complete, consisting only six rules, a 250-item vocabulary and specialized only in Organic Chemistry."<ref name="washington.edu"/> ||
|-
| 1954 || || "It was not until 1954, however, that Belmont Farley and Wesley Clark of MIT succeeded in running the first artificial neural network—albeit limited by computer memory to no more than 128 neurons. They were able to train their networks to recognize simple patterns."<ref name="britannica.coms"/> ||
|-
| 1955 || || "August 31, 1955 The term “artificial intelligence” is coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place a year later, in July and August 1956, is generally considered as the official birthdate of the new field."<ref name="forbes.coms"/> ||
|-
| 1955 || || "December 1955 Herbert Simon and Allen Newell develop the Logic Theorist, the first artificial intelligence program, which eventually would prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica."<ref name="forbes.coms"/> "An Allen Newell and Herbert A. Simon created the "first artificial intelligence program" which was named as "Logic Theorist". This program had proved 38 of 52 Mathematics theorems, and find new and more elegant proofs for some theorems."<ref name="javatpoint.coma"/> ||
|-
| 1955–1956 || || "An important landmark in this area was a theorem-proving program written in 1955–56 by Allen Newell and J. Clifford Shaw of the RAND Corporation and Herbert Simon of the Carnegie Mellon University. The Logic Theorist, as the program became known, was designed to prove theorems from Principia Mathematica (1910–13), a three-volume work by the British philosopher-mathematicians Alfred North Whitehead and Bertrand Russell. In one instance, a proof devised by the program was more elegant than the proof given in the books."<ref name="britannica.coms"/> ||
|-
| 1956 || || "In 1956, a conference "Artificial Intelligence" was held for the first time in Hanover, New Hampshire, at Dartmouth College."<ref>{{cite web |title=History of Artificial Intelligence |url=https://www.researchgate.net/publication/322234922_History_of_Artificial_Intelligence |website=researchgate.net |accessdate=9 March 2020}}</ref> "The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founder of the discipline. Anecdotally, it is worth noting the great success of what was not a conference but rather a workshop. Only six people, including McCarthy and Minsky, had remained consistently present throughout this work (which relied essentially on developments based on formal logic)."<ref name="coe.intf"/><ref name="washington.edu">{{cite web |title=The History of Artificial Intelligence |url=https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf |website=washington.edu |accessdate=7 February 2020}}</ref><ref name="javatpoint.coma"/> ||
|-
| 1956 || || "Newell and Simon [1956] built a program, Logic Theorist, that discovers proofs in propositional logic."<ref name="artint.info"/> "Five years later, the proof of concept was initialized through Allen Newell, Cliff Shaw, and Herbert Simon’s, Logic Theorist. The Logic Theorist was a program designed to mimic the problem solving skills of a human and was funded by Research and Development (RAND) Corporation. It’s considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by John McCarthy and Marvin Minsky in 1956."<ref name="harvard.edu d"/> "But the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial intelligence" was coined."<ref name="livescience.coms">{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.livescience.com/49007-history-of-artificial-intelligence.html |website=livescience.com |accessdate=7 February 2020}}</ref> "1956: The logic theorist (Logic Theory-LT) program for solving mathematical problems is introduced by Neweell, Shaw and Simon. The system is regarded as the first artificial intelligence system."<ref name="Mijwil"/> ||
|-
| 1957 || || "Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network. The New York Times reported the Perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The New Yorker called it a “remarkable machine… capable of what amounts to thought.”"<ref name="forbes.coms"/> ||
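|-
| colspan="4" | ''Editor's illustration (a minimal sketch, not from the cited sources; the data are invented): the perceptron learning rule nudges the weights toward each misclassified example until a linearly separable training set is classified correctly.''
<syntaxhighlight lang="python">
# A minimal sketch of Rosenblatt's perceptron learning rule.
# Toy linearly separable data: label is 1 when x1 + x2 > 1, else 0.
data = [((x1, x2), 1 if x1 + x2 > 1 else 0)
        for x1 in (0.0, 0.5, 1.0)
        for x2 in (0.0, 0.5, 1.0)
        if x1 + x2 != 1.0]           # drop points on the boundary

w1 = w2 = b = 0.0
lr = 0.1                             # learning rate
for _ in range(100):                 # passes over the training set
    for (x1, x2), label in data:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = label - pred           # -1, 0, or +1
        w1 += lr * err * x1          # move the boundary toward the example
        w2 += lr * err * x2
        b += lr * err

print("learned weights:", w1, w2, "bias:", b)
</syntaxhighlight>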
|-
| 1957 || || "Herbert Simon, economist and sociologist, prophesied in 1957 that the AI would succeed in beating a human at chess in the next 10 years, but the AI then entered a first winter. Simon's vision proved to be right... 30 years later."<ref name="coe.intf"/> ||
|-
| 1957 || || "The General Problem Solver (GPS) demonstrated by Newell, Shaw & Simon."<ref name="aitopics.org"/> "Newell, Simon, and Shaw went on to write a more powerful program, the General Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the project for about a decade. GPS could solve an impressive variety of puzzles using a trial and error approach."<ref name="britannica.coms"/> ||
|-
| 1958 || || "John McCarthy develops programming language Lisp which becomes the most popular programming language used in artificial intelligence research."<ref name="forbes.coms"/> "1958: John McCarty of MIT created the LISP (list Processing language) language."<ref name="Mijwil"/> ||
|-
| 1958 || || "1958: Herbert Gelernter’s “geometry machine” becomes the first advanced AI programme to prove geometric theorems and the third ever in creation."<ref name="omnius.com"/> ||
|-
| 1959 || || "Arthur Samuel coins the term “machine learning,” reporting on programming a computer “so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”"<ref name="forbes.coms"/> ||
|-
| 1959 || || "John McCarthy publishes “Programs with Common Sense” in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes the Advice Taker, a program for solving problems by manipulating sentences in formal languages with the ultimate objective of making programs “that learn from their experience as effectively as humans do.”"<ref name="forbes.coms"/> ||
|-
| 1959 || || "Once real computers were built, some of the first applications of computers were AI programs. For example, Samuel [1959] built a checkers program in 1952 and implemented a program that learns to play checkers in the late 1950s."<ref name="artint.info">{{cite web |title=1.2 A Brief History of Artificial Intelligence |url=https://artint.info/2e/html/ArtInt2e.Ch1.S2.html |website=artint.info |accessdate=21 March 2020}}</ref> ||
|-
| 1960 || || "JCR Licklider described the human-machine relationship in his work"<ref name="Mijwil"/> ||
|-
| 1964 || || "Bert Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems"<ref name="aitopics.org"/> ||
|-
| 1965 || || "Herbert Simon said in The Shape of Automation for Men and Management (1965) that “machines will be capable, within 20 years, of doing any work a man can do”"<ref name="historyextra">{{cite web |title=7 phases of the history of Artificial intelligence |url=https://www.historyextra.com/period/second-world-war/7-phases-of-the-history-of-artificial-intelligence/ |website=historyextra.com |accessdate=21 March 2020}}</ref> "Herbert Simon predicts that "machines will be capable, within twenty years, of doing any work a man can do.""<ref name="forbes.coms"/> ||
|-
| 1965 || || "Hubert Dreyfus publishes "Alchemy and AI," arguing that the mind is not like a computer and that there were limits beyond which AI would not progress."<ref name="forbes.coms"/> ||
|-
| 1967 || || "Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play."<ref name="aitopics.org"/> ||
|-
| 1967 || || "For example, the STUDENT program of Daniel Bobrow [1967] could solve high school algebra tasks expressed in natural language."<ref name="artint.info"/> ||
|-
| 1968 || || "The film 2001: Space Odyssey is released, featuring Hal, a sentient computer."<ref name="forbes.coms"/> "In 1968 Stanley Kubrick directed the film "2001 Space Odyssey" where a computer - HAL 9000 (only one letter away from those of IBM) summarizes in itself the whole sum of ethical questions posed by AI: will it represent a high level of sophistication, a good for humanity or a danger? The impact of the film will naturally not be scientific but it will contribute to popularize the theme, just as the science fiction author Philip K. Dick, who will never cease to wonder if, one day, the machines will experience emotions."<ref name="coe.intf"/> ||
|-
| 1972 || || "Prolog developed by Alain Colmerauer."<ref name="aitopics.org"/> ||
|-
| 1972 || || "Work on MYCIN, an expert system for treating blood infections, began at Stanford University in 1972. MYCIN would attempt to diagnose patients based on reported symptoms and medical test results."<ref name="britannica.coms"/> ||
|-
| 1973 || || "James Lighthill reports to the British Science Research Council on the state artificial intelligence research, concluding that "in no part of the field have discoveries made so far produced the major impact that was then promised," leading to drastically reduced government support for AI research."<ref name="harvard.edu d"/> "The “Lighthill report” commonly refers to “Artificial Intelligence: A General Survey” by Professor Sir James Lighthill of Cambridge University in 1973. His review of AI was at the request of Brian Flowers, the head of the British Science Research Council, the main funding body of British university scientific research. The review was to help the council evaluate requests for support in AI research. In the paper, Lighthill offered a pessimistic prognosis for AI, stating that “in no part of the field have discoveries made so far produced the major impact that was then promised”"<ref name="washington.edu"/> ||
|-
| 1973 || || "The logic programming language PROLOG (Programmation en Logique) was conceived by Alain Colmerauer at the University of Aix-Marseille, France, where the language was first implemented in 1973. PROLOG was further developed by the logician Robert Kowalski, a member of the AI group at the University of Edinburgh. This language makes use of a powerful theorem-proving technique known as resolution, invented in 1963 at the U.S. Atomic Energy Commission’s Argonne National Laboratory in Illinois by the British logician Alan Robinson. PROLOG can determine whether or not a given statement follows logically from other given statements. For example, given the statements “All logicians are rational” and “Robinson is a logician,” a PROLOG program responds in the affirmative to the query “Robinson is rational?” PROLOG is widely used for AI work, especially in Europe and Japan."<ref name="britannica.coms"/> ||
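|-
| colspan="4" | ''Editor's illustration (a minimal sketch, not from the cited sources, and written in Python rather than PROLOG): forward chaining over a tiny rule base, reproducing the "Robinson is rational?" example quoted above.''
<syntaxhighlight lang="python">
# A minimal sketch of rule-based logical inference, mirroring the quoted example.
facts = {("logician", "Robinson")}        # "Robinson is a logician"
rules = [("logician", "rational")]        # "All logicians are rational"

changed = True
while changed:                            # derive new facts until a fixed point
    changed = False
    for premise, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate == premise and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("rational", "Robinson") in facts)  # -> True: "Robinson is rational"
</syntaxhighlight>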
|-
| 1973 || || "The onset of the AI winter could be traced to the government’s decision to pull back on AI research. The decisions were often attributed to a couple of infamous reports, specifically the Automatic Language Processing Advisory Committee (ALPAC) report by U.S. Government in 1966, and the Lighthill report for the British government in 1973."<ref name="washington.edu"/> ||
|-
| 1980 || || "First National Conference of the American Association of Artificial Intelligence (AAAI) held at Stanford."<ref name="aitopics.org"/> ||
|-
| 1980 || || "(The term strong AI was introduced for this category of research in 1980 by the philosopher John Searle of the University of California at Berkeley.)"<ref name="britannica.coms">{{cite web |title=Artificial intelligence |url=https://www.britannica.com/technology/artificial-intelligence/Methods-and-goals-in-AI |website=britannica.com |accessdate=21 March 2020}}</ref> ||
|-
| 1981 || || "In 1981 an expert system named SID (Synthesis of Integral Design) designed 93% of the VAX 9000 CPU logic gates. The SID system was existing out of 1,000 hand-written-rules. The final design of the CPU took 3 hours to calculate and outperformed in many ways the human experts. As an example, the SID produced a faster 64-bit adder than the manually designed one. Also the bug per gate rate, which where around 1 bug per 200 gates from human experts, was much lower at around 1 bug per 20,000 gates at the final result of the SID system."<ref name="dev.to">{{cite web |title=A Short History of Artificial Intelligence |url=https://dev.to/lschultebraucks/a-short-history-of-artificial-intelligence-7hm |website=dev.to |accessdate=9 March 2020}}</ref> ||
|-
| 1984 || || "At the annual meeting of AAAI, Roger Schank and Marvin Minsky warn of the coming “AI Winter,” predicting an immanent bursting of the AI bubble (which did happen three years later), similar to the reduction in AI investment and research funding in the mid-1970s."<ref name="forbes.coms"/> ||
|-
| 1984 || || "CYC is a large experiment in symbolic AI. The project began in 1984 under the auspices of the Microelectronics and Computer Technology Corporation, a consortium of computer, semiconductor, and electronics manufacturers."<ref name="britannica.coms"/> ||
|-
| 1985 || || "The autonomous drawing program, Aaron, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments)."<ref name="aitopics.org"/> ||
|-
| 1986 || Literature || {{w|Hubert Dreyfus}} publishes ''Mind over Machine''. ||
|-
| 1986 || || "In one famous connectionist experiment conducted at the University of California at San Diego (published in 1986), David Rumelhart and James McClelland trained a network of 920 artificial neurons, arranged in two layers of 460 neurons, to form the past tenses of English verbs. Root forms of verbs—such as come, look, and sleep—were presented to one layer of neurons, the input layer. A supervisory computer program observed the difference between the actual response at the layer of output neurons and the desired response—came, say—and then mechanically adjusted the connections throughout the network in accordance with the procedure described above to give the network a slight push in the direction of the correct response. About 400 different verbs were presented one by one to the network, and the connections were adjusted after each presentation. This whole procedure was repeated about 200 times using the same verbs, after which the network could correctly form the past tense of many unfamiliar verbs as well as of the original verbs."<ref name="britannica.coms"/> ||
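|-
| colspan="4" | ''Editor's illustration (a greatly simplified sketch, not from the cited sources): the supervised procedure described above, comparing the network's actual response with the desired one and nudging every connection slightly toward the correct answer, shown here with a single layer of logistic units on an invented toy task rather than the 920-neuron verb network.''
<syntaxhighlight lang="python">
# A minimal sketch of error-driven weight adjustment (the delta rule).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
T = np.array([[0], [1], [1], [1]], dtype=float)              # desired responses

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 1))   # connection weights
b = np.zeros(1)
lr = 0.5

for _ in range(200):                     # repeated presentations of the patterns
    y = 1 / (1 + np.exp(-(X @ W + b)))   # network's actual response
    delta = T - y                        # desired minus actual response
    W += lr * X.T @ delta                # push weights toward the correct answer
    b += lr * delta.sum(axis=0)

# Responses approach the desired [0, 1, 1, 1] after training.
print(np.round(1 / (1 + np.exp(-(X @ W + b))), 2).ravel())
</syntaxhighlight>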
|-
| 1986 (October) || || The {{w|Centre for Artificial Intelligence and Robotics}} (CAIR) is established.<ref>{{cite web |title=Centre for Artificial Intelligence and Robotics (CAIR) |url=https://www.epicos.com/company/13386/centre-artificial-intelligence-and-robotics-cair |website=epicos.com |accessdate=6 March 2020}}</ref> || {{w|India}}
|-
| 1991 || || The {{w|European Neural Network Society}} is founded.<ref>{{cite book |last1=Taylor |first1=J.G. |title=The Promise of Neural Networks |url=https://books.google.com.ar/books?id=GbnkBwAAQBAJ&pg=PA63&lpg=PA63&dq=1991+European+Neural+Network+Society&source=bl&ots=o-ZMzEz2eC&sig=ACfU3U0g5hGyXuqYPyp4I5XUQr2ZwW3YlQ&hl=en&sa=X&ved=2ahUKEwig7_iokYboAhWgIbkGHcZrC-kQ6AEwA3oECAYQAQ#v=onepage&q=1991%20European%20Neural%20Network%20Society&f=false}}</ref><ref>{{cite book |title=Artificial Neural Networks and Machine Learning – ICANN 2017: 26th International Conference on Artificial Neural Networks, Alghero, Italy, September 11-14, 2017, Proceedings, Part 1 |edition=Alessandra Lintas, Stefano Rovetta, Paul F.M.J. Verschure, Alessandro E.P. Villa |url=https://books.google.com.ar/books?id=ozU7DwAAQBAJ&pg=PR5&lpg=PR5&dq=1991+European+Neural+Network+Society&source=bl&ots=9T2UfbE_J0&sig=ACfU3U3fExCFGSypH9eCD2Sjj9I_k_4vrQ&hl=en&sa=X&ved=2ahUKEwig7_iokYboAhWgIbkGHcZrC-kQ6AEwBHoECAoQAQ#v=onepage&q=1991%20European%20Neural%20Network%20Society&f=false}}</ref> ||
|-
| 1991 || || In 1991 the American philanthropist Hugh Loebner started the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program has come close to passing an undiluted Turing test.<ref name="britannica.coms"/> ||
|-
| 1992 || Literature || The ''{{w|International Journal on Artificial Intelligence Tools}}'' begins publication.<ref>{{cite web |title=International Journal on Artificial Intelligence Tools |url=https://www.letpub.com/index.php?journalid=3920&page=journalapp&view=detail |website=letpub.com |accessdate=6 March 2020}}</ref> ||
|-
| 1995 || || "Richard Wallace develops the chatbot A.L.I.C.E (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program, but with the addition of natural language sample data collection on an unprecedented scale, enabled by the advent of the Web."<ref name="forbes.coms"/> ||
|-
| 1995 || || "1995: AltaVista becomes the first search engine to use natural language processing."<ref name="omnius.com">{{cite web |title=A SHORT HISTORY OF ARTIFICIAL INTELLIGENCE: MAKING MYTHOLOGY A REALITY |url=https://omnius.com/blog/a-short-history-of-artificial-intelligence-making-mythology-a-reality/ |website=omnius.com |accessdate=20 March 2020}}</ref> ||
|-
| 1996 || || "EQP theorem prover at Argonne National Labs proves the Robbins Conjecture in mathematics (October-November, 1996)."<ref name="aitopics.org"/> ||
|-
| 1998 || || "Yann LeCun, Yoshua Bengio and others publish papers on the application of neural networks to handwriting recognition and on optimizing backpropagation."<ref name="harvard.edu d"/> ||
|-
| 1998 || || "And it was in 1998 that Amazon began using “collaborative filtering” enabling recommendations for millions of customers."<ref name="econsultancy.com">{{cite web |title=A brief history of artificial intelligence in advertising |url=https://econsultancy.com/a-brief-history-of-artificial-intelligence-in-advertising/ |website=econsultancy.com |accessdate=20 March 2020}}</ref> ||
|-
| 1998 || || "1998 Tiger Electronics' Furby is released, and becomes the first successful attempt at producing a type of A.I to reach a domestic environment."<ref name="sutori.comd"/> ||
|-
| Late 1990s || || "Late 1990s Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web."<ref name="sutori.comd"/> ||
|-
| 1990s || || "Demonstration of an Intelligent Room and Emotional Agents at MIT's AI Lab. Initiation of work on the Oxygen Architecture, which connects mobile and stationary computers in an adaptive network."<ref name="ocw.uc3m.es"/> ||
|-
| 2000 || || "MIT’s Cynthia Breazeal develops Kismet, a robot that could recognize and simulate emotions."<ref name="harvard.edu d"/> "Cynthia Breazeal at MIT publishes her dissertation on Sociable Machines, describing KISMET, a robot with a face that expresses emotions."<ref name="ocw.uc3m.es">{{cite web |title=Tema 1 Brief History of Artificial Intelligence |url=http://ocw.uc3m.es/ingenieria-telematica/inteligencia-en-redes-de-comunicaciones/material-de-clase-1/01a-brief-history-of-ai |website=ocw.uc3m.es |accessdate=21 March 2020}}</ref> ||
|-
| 2000 || Conference || {{w|Mexican International Conference on Artificial Intelligence}}<ref>{{cite web |title=MICAI 2000: Advances in Artificial Intelligence |url=https://www.springer.com/gp/book/9783540673545 |website=springer.com |accessdate=6 March 2020}}</ref> || {{w|Mexico}}
|-
| 2001 || || "A.I. Artificial Intelligence is released, a Steven Spielberg film about David, a childlike android uniquely programmed with the ability to love."<ref name="harvard.edu d"/> ||
|-
| 2006 || || "Year 2006: AI came in the Business world till the year 2006. Companies like Facebook, Twitter, and Netflix also started using AI."<ref name="javatpoint.coma"/> ||
|-
| 2006 || || The first unassisted robotic surgery, conducted by an AI "doctor", is performed on a 34-year-old male to correct {{w|heart arrhythmia}}. The results are rated as better than those of an above-average human surgeon. The machine has a {{w|database}} of 10,000 similar operations, and so, in the words of its designers, is "more than qualified to operate on any patient".<ref>{{cite news|url=https://www.engadget.com/2006/05/19/robot-surgeon-performs-worlds-first-unassisted-operation|title=Autonomous Robotic Surgeon performs surgery on first live human|date=19 May 2006|publisher=[[Engadget]]}}</ref><ref>{{cite web |url=http://www.physorg.com/news67222790.html |title=Robot surgeon carries out 9-hour operation by itself|publisher=[[Phys.Org]]}}</ref> ||
|-
| 2006 || Conference || {{w|AI@50}}<ref>{{cite web |title=Dartmouth Artificial Intelligence Conference |url=https://www.dartmouth.edu/~ai50/homepage.html |website=dartmouth.edu |accessdate=6 March 2020}}</ref> ||
|-
| 2012 || || "October 2012 A convolutional neural network designed by researchers at the University of Toronto achieve an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before."<ref name="forbes.coms"/> ||
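|-
| colspan="4" | ''Editor's illustration (a minimal sketch, not from the cited sources): the core operation behind the "convolutional" in convolutional neural networks, sliding a small filter across an image and recording its response at each position; an ImageNet-class network stacks many such learned filters.''
<syntaxhighlight lang="python">
# A minimal sketch of the 2-D convolution (cross-correlation) used in CNNs.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Response of the filter at position (i, j).
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1.0, 1.0]])   # responds to left-to-right intensity jumps
print(conv2d(image, edge_filter))       # strong response along the 0 -> 1 edge
</syntaxhighlight>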
|-
| 2012 || || The security market in China is flooded with computer vision start-ups.<ref name="daxueconsulting.com">{{cite web |title=The history of Artificial Intelligence (AI) in China |url=https://daxueconsulting.com/history-china-artificial-intelligence/ |website=daxueconsulting.com |accessdate=21 March 2020}}</ref> || {{w|China}}
|-
| 2013 || || "Boston Dynamics unveils Atlas, an advanced humanoid robot designed for various search-and-rescue tasks."<ref name="futureoftech.org"/><ref>{{cite web |title=Atlas |url=https://www.bostondynamics.com/atlas |website=bostondynamics.com |accessdate=9 March 2020}}</ref> ||
|-
| 2013 || || "Automated Insights published 300 million pieces of content in 2013"<ref name="econsultancy.com"/> ||
 +
|-
 +
| 2014 (January) || || "The 3-years-old DeepMind being acquired by Google in Jan. 2014;"<ref>{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.kdnuggets.com/2017/04/brief-history-artificial-intelligence.html |website=kdnuggets.com |accessdate=9 March 2020}}</ref> ||
 
|-
 
|-
 
| 2014 || || "Google starts developing, in secret, a driverless car. In 2014, it became the first to pass, in Nevada, a U.S. state self-driving test."<ref name="harvard.edu d"/> ||
 
| 2014 || || "Google starts developing, in secret, a driverless car. In 2014, it became the first to pass, in Nevada, a U.S. state self-driving test."<ref name="harvard.edu d"/> ||
Line 492: Line 544:
 
| 2014 || || "Ian Goodfellow comes up with Generative Adversarial Networks (GAN)."<ref name="qbi.uq.edu.au">{{cite web |title=History of Artificial Intelligence |url=https://qbi.uq.edu.au/brain/intelligent-machines/history-artificial-intelligence |website=qbi.uq.edu.au |accessdate=9 March 2020}}</ref> ||
 
| 2014 || || "Ian Goodfellow comes up with Generative Adversarial Networks (GAN)."<ref name="qbi.uq.edu.au">{{cite web |title=History of Artificial Intelligence |url=https://qbi.uq.edu.au/brain/intelligent-machines/history-artificial-intelligence |website=qbi.uq.edu.au |accessdate=9 March 2020}}</ref> ||
 
|-
 
|-
| 2014 (January) || || "The 3-years-old DeepMind being acquired by Google in Jan. 2014;"<ref>{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.kdnuggets.com/2017/04/brief-history-artificial-intelligence.html |website=kdnuggets.com |accessdate=9 March 2020}}</ref> ||
+
| 2014 || || "When programmatic ad buying was popularized in 2014, it introduced us to artificial intelligence-based ad buying, effectively removing the broken, laborious manual tasks of researching target markets, budgets, insertion orders, and layers of additional analytics tracking – not to mention high prices."<ref name="econsultancy.com"/> ||
 
|-
 
|-
 
| 2015 || || "Amazon introduces service ‘Alexa’ in 2015."<ref name="bosch.coms"/> ||
 
| 2015 || || "Amazon introduces service ‘Alexa’ in 2015."<ref name="bosch.coms"/> ||
 +
|-
 +
| 2015 || || The Chinese Congress on Artificial Intelligence 2015 takes place in Beijing, giving the direction of AI-related industries in China.<ref name="daxueconsulting.com"/> || {{w|China}}
 
|-
 
|-
 
| 2015 || || {{w|Open Letter on Artificial Intelligence}}<ref>{{cite web |title=Elon Musk, Stephen Hawking warn of artificial intelligence dangers |url=https://mashable.com/2015/01/13/elon-musk-stephen-hawking-artificial-intelligence/ |website=mashable.com |accessdate=6 March 2020}}</ref> ||
|-
| 2015 (September 22) || || {{w|The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World}} ||
|-
| 2015 || || "In 2015, Google introduced its latest artificial intelligence algorithm, RankBrain, which makes significant advances in interpreting search queries in new ways. Through RankBrain, Google has been successful in interpreting the intent behind a user’s search terms, making for a more relevant result."<ref name="econsultancy.com"/> ||
|-
 
| 2016 || || "March 2016 Google DeepMind's AlphaGo defeats Go champion Lee Sedol."<ref name="harvard.edu d"/> ||
 
| 2016 || || "March 2016 Google DeepMind's AlphaGo defeats Go champion Lee Sedol."<ref name="harvard.edu d"/> ||
 
|-
 
|-
Line 509: Line 565:
 
|-
 
|-
 
| 2016 || || "Swarm AI, a real-time online tool, predicts the winning horse of the Kentucky Derby"<ref name="futureoftech.org"/> ||
 
| 2016 || || "Swarm AI, a real-time online tool, predicts the winning horse of the Kentucky Derby"<ref name="futureoftech.org"/> ||
 +
|-
 +
| 2016 || || " McKinsey estimates that in 2016 Google and Baidu spent around $20 to $30 billion on funding their internal R&D and acquiring startups in the field."<ref name="business2community.com">{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.business2community.com/tech-gadgets/brief-history-artificial-intelligence-02004150 |website=business2community.com |accessdate=20 March 2020}}</ref> ||
 
|-
 
|-
 
| 2017 || || {{w|OpenAI Five}}<ref>{{cite web |title=OpenAI Five |url=https://openai.com/projects/five/ |website=openai.com |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
 
| 2017 || || "Google’s DeepMind AI teaches itself to walk."<ref name="futureoftech.org"/> ||
 
| 2017 || || "Google’s DeepMind AI teaches itself to walk."<ref name="futureoftech.org"/> ||
 +
|-
 +
| 2017 || || AI is included in the Chinese government report as a national strategy in China.<ref name="daxueconsulting.com"/> ||
 
|-
 
|-
 
| 2018 || || "2018: AI debates space travel and makes a hairdressing appointment. These two examples demonstrate the capabilities of artificial intelligence: In June, ‘Project Debater’ from IBM debated complex topics with two master debaters — and performed remarkably well. A few weeks before, Google demonstrated at a conference how the AI program ‘Duplex’ phones a hairdresser and conversationally makes an appointment — without the lady on the other end of the line noticing that she is talking to a machine."<ref name="bosch.coms"/> ||
 
| 2018 || || "2018: AI debates space travel and makes a hairdressing appointment. These two examples demonstrate the capabilities of artificial intelligence: In June, ‘Project Debater’ from IBM debated complex topics with two master debaters — and performed remarkably well. A few weeks before, Google demonstrated at a conference how the AI program ‘Duplex’ phones a hairdresser and conversationally makes an appointment — without the lady on the other end of the line noticing that she is talking to a machine."<ref name="bosch.coms"/> ||
Line 531: Line 591:
 
|-
 
|-
 
| 2018 (April 26) || || {{w|Innovation Center for Artificial Intelligence}}<ref>{{cite web |title=Innovation Center for Artificial Intelligence officially launched |url=https://www.uva.nl/en/content/news/press-releases/2018/04/innovation-center-for-artificial-intelligence-officially-launched.html |website=uva.nl |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Ahold Delhaize Helps Launch AI Innovation Center |url=https://consumergoods.com/ahold-delhaize-helps-launch-ai-innovation-center |website=consumergoods.com |accessdate=6 March 2020}}</ref> || {{w|Netherlands}}
|-
| 2018 || || "In 2018, the size of China’s artificial intelligence market reached 33.9 billion RMB"<ref name="daxueconsulting.com"/> || {{w|China}}
|-
 
| 2018 || || "Astronomers use AI to spot 6,000 new craters on the moon’s surface."<ref name="futureoftech.org"/><ref>{{cite web |title=New technique uses AI to locate and count craters on the moon |url=https://phys.org/news/2018-03-technique-ai-craters-moon.html |website=phys.org |accessdate=9 March 2020}}</ref> ||
 
| 2018 || || "Astronomers use AI to spot 6,000 new craters on the moon’s surface."<ref name="futureoftech.org"/><ref>{{cite web |title=New technique uses AI to locate and count craters on the moon |url=https://phys.org/news/2018-03-technique-ai-craters-moon.html |website=phys.org |accessdate=9 March 2020}}</ref> ||
Line 537: Line 599:
 
|-
 
|-
 
| 2018 || || "Google demonstrates its Duplex AI, a digital assistant that can make appointments via telephone calls with live humans. Duplex uses natural language understanding, deep learning and text-to-speech capabilities to understand conversational context and nuance in ways no other digital assistant has yet matched."<ref name="futureoftech.org"/> ||
 
| 2018 || || "Google demonstrates its Duplex AI, a digital assistant that can make appointments via telephone calls with live humans. Duplex uses natural language understanding, deep learning and text-to-speech capabilities to understand conversational context and nuance in ways no other digital assistant has yet matched."<ref name="futureoftech.org"/> ||
 +
|-
 +
| 2018 || || AI ushers in the first year of commercial applications in China. There are more than 1,000 AI-related companies in the country by the time.<ref name="daxueconsulting.com"/> || {{w|China}}
 +
|-
 +
| 2018 || || The AI Now Report finds harmful inaccuracies in AI-driven technology, plus an alarming lack of accountability and, in some cases, systems built on racial discrimination or used for human rights violations.<ref name="looklisten.com">{{cite web |title=Rise of the Machines: The History of Artificial Intelligence |url=https://www.looklisten.com/blog/rise-of-the-machines-the-history-of-artificial-intelligence/ |website=looklisten.com |accessdate=21 March 2020}}</ref> ||
 
|-
 
|-
 
| 2019 || || {{w|Center for Security and Emerging Technology}}<ref>{{cite web |title=Center for Security and Emerging Technology |url=https://cset.georgetown.edu/about-us/ |website=cset.georgetown.edu |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Center for Security and Emerging Technology |url=https://www.linkedin.com/company/georgetown-cset/ |website=linkedin.com |accessdate=6 March 2020}}</ref> || {{w|United States}}
|}

===What the timeline is still missing===
 
* [https://www.phocuswire.com/A-brief-history-of-artificial-intelligence]
 
* [https://aitopics.org/misc/brief-history]
 
* [https://www.bbc.co.uk/teach/ai-15-key-moments-in-the-story-of-artificial-intelligence/zh77cqt]
 
* [https://www.codementor.io/@paulwarren/a-brief-history-of-artificial-intelligence-1956-to-now-mgoracvnx]
 
* [https://omnius.com/blog/a-short-history-of-artificial-intelligence-making-mythology-a-reality/]
 
* [https://www.wsj.com/articles/test-your-knowledge-about-the-history-of-ai-11571018521]
 
* [https://www.business2community.com/tech-gadgets/brief-history-artificial-intelligence-02004150]
 
* [https://econsultancy.com/a-brief-history-of-artificial-intelligence-in-advertising/]
 
* [https://amt-lab.org/blog/2017/3/a-brief-history-of-artificial-intelligence-wxn6d]
 
* [https://www.sutori.com/story/the-history-of-artificial-intelligence--4qEzQz1PPuA9Wo4mBkv2a9BX]
 
* [https://www.britannica.com/technology/artificial-intelligence]
 
* [http://people.idsia.ch/~juergen/ai.html]
 
* [https://www.atariarchives.org/deli/artificial_intelligence.php]
 
* [https://daxueconsulting.com/history-china-artificial-intelligence/]
 
* [https://www.dummies.com/software/other-software/history-artificial-intelligence/]
 
* [http://blog.bccresearch.com/a-short-history-of-artificial-intelligence]
 
* [https://www.marktechpost.com/2018/07/18/15-moments-that-defined-the-history-of-artificial-intelligence/]
 
* [https://artint.info/2e/html/ArtInt2e.Ch1.S2.html]
 
* [https://becominghuman.ai/the-curious-history-of-artificial-intelligence-an-african-perspective-46002515934e]
 
* [https://www.historyextra.com/period/second-world-war/7-phases-of-the-history-of-artificial-intelligence/]
 
* [http://ocw.uc3m.es/ingenieria-telematica/inteligencia-en-redes-de-comunicaciones/material-de-clase-1/01a-brief-history-of-ai]
 
* [https://medium.com/datadriveninvestor/evolution-of-ai-past-present-future-6f995d5f964a]
 
* [https://matthewljones.github.io/historyai2019/]
 
* [http://www.inf.ed.ac.uk/about/AIhistory.html]
 
* [https://www.looklisten.com/blog/rise-of-the-machines-the-history-of-artificial-intelligence/]
 
* [http://www.historyofinformation.com/detail.php?id=4289]
 
 
 
* {{w|Category:Artificial intelligence applications}}
* {{w|Category:Artificial intelligence publications}}


This is a timeline of artificial intelligence.

Sample questions

The following are some interesting questions that can be answered by reading this timeline:


Summary by country

Full timeline

Year Event type Details Country/location
4th century B.C. Greek philosopher Aristotle invents syllogistic logic, the first formal deductive reasoning system.[8]
1 AC "May 1. year: Alexander Heron in antiquity made automatons with mechanical mechanisms working with water and steam power."[9]
1206 "1206: Ebru İz Bin Rezzaz Al Jezeri, one of the pioneers of cybernetic science, has made water-operated automatic controlled machines."[9]
1308 " Catalan poet and theologian Ramon Llull publishes Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts."[10]
1623 "1623: Wilhelm Schickard invented a mechanic and a calculator capable of four operations."[9]
1642 "Pascal created the first mechanical digital calculating machine (1642)."[8]
1666 "Mathematician and philosopher Gottfried Leibniz publishes Dissertatio de arte combinatoria (On the Combinatorial Art), following Ramon Llull in proposing an alphabet of human thought and arguing that all ideas are nothing but combinations of a relatively small number of simple concepts."[10]
1672 "1672: Gottfried Leibniz has developed a binary counting system that forms the abstract basis of today's computers."[9]
1726 "onathan Swift publishes Gulliver's Travels, which includes a description of the Engine, a machine on the island of Laputa (and a parody of Llull's ideas): "a Project for improving speculative Knowledge by practical and mechanical Operations." By using this "Contrivance," "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study.""[10]
1763 "Thomas Bayes develops a framework for reasoning about the probability of events. Bayesian inference will become a leading approach in machine learning."[10]
1801 "Joseph-Marie Jacquard invented the Jacquard loom, the first programmable machine, with instructions on punched cards (1801)."[8]
1854 " George Boole argues that logical reasoning could be performed systematically in the same manner as solving a system of equations."[10] "George Boole developed a binary algebra representing (some) "laws of thought," published in The Laws of Thought (1854)."[8]
1863 "1863 - Samuel Butler suggested that Darwinian evolution also applies to machines, and speculates that they will one day become conscious and eventually supplant humanity.[24]"[11]
1879 "Modern propositional logic developed by Gottlob Frege in his 1879 work Begriffsschrift and later clarified and expanded by Russell,Tarski, Godel, Church and others."[8]
1898 "At an electrical exhibition in the recently completed Madison Square Garden, Nikola Tesla makes a demonstration of the world’s first radio-controlled vessel. The boat was equipped with, as Tesla described, “a borrowed mind.”"[10]
1910 "Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionaized formal logic. Russell, Ludwig Wittgenstein, and Rudolf Carnap lead philosophy into logical analysis of knowledge."[8]
1912 "Torres y Quevedo built his chess machine 'Ajedrecista', using electromagnets under the board to play the endgame rook and king against the lone king, possibly the first computer game (1912)"[8]
1914 "The Spanish engineer Leonardo Torres y Quevedo demonstrates the first chess-playing machine, capable of king and rook against king endgames without any human intervention."[10]
1921 "Czech writer Karel Čapek introduces the word "robot" in his play R.U.R. (Rossum's Universal Robots). The word "robot" comes from the word "robota" (work)."[10]
1925 " Houdina Radio Control releases a radio-controlled driverless car, travelling the streets of New York City."[10]
1927 "he science-fiction film Metropolis is released. It features a robot double of a peasant girl, Maria, which unleashes chaos in Berlin of 2026—it was the first robot depicted on film, inspiring the Art Deco look of C-3PO in Star Wars."[10]
1929 "Makoto Nishimura designs Gakutensoku, Japanese for "learning from the laws of nature," the first robot built in Japan. It could change its facial expression and move its head and hands via an air pressure mechanism."[10]
1931 "1931: Kurt Gödel introduced the theory of deficiency, which is called by his own name."[9] "In 1931, Goedel layed the foundations of Theoretical Computer Science and AI"[12]
1936 "1936: Konrad Zuse developed a programmable computer named Z1 named 64K memory."[9]
1936–1937 "Alan Turing proposed the universal Turing machine (1936-37)"[8]
1943 ". McCulloch and Pitts [1943] showed how a simple thresholding “formal neuron” could be the basis for a Turing-complete machine. "[13] "Warren S. McCulloch and Walter Pitts publish “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the Bulletin of Mathematical Biophysics. This influential paper, in which they discussed networks of idealized and simplified artificial “neurons” and how they might perform simple logical functions, will become the inspiration for computer-based “neural networks” (and later “deep learning”) and their popular description as “mimicking the brain.”"[10] "a first mathematical and computer model of the biological neuron (formal neuron) had been developed by Warren McCulloch and Walter Pitts as early as 1943."[3] "The first work which is now recognized as AI was done by Warren McCulloch and Walter pits in 1943. They proposed a model of artificial neurons."[1]
1943 "Arturo Rosenblueth, Norbert Wiener & Julian Bigelow coin the term "cybernetics" in a 1943 paper. Wiener's popular book by that name published in 1948."[8]
1943 "Emil Post proves that production systems are a general computational mechanism (1943). See Ch.2 of Rule Based Expert Systems for the uses of production systems in AI. Post also did important work on completeness, inconsistency, and proof theory."[8]
1943 ". In 1943 the neurophysiologist Warren McCulloch of the University of Illinois and the mathematician Walter Pitts of the University of Chicago published an influential treatise on neural nets and automatons, according to which each neuron in the brain is a simple digital processor and the brain as a whole is a form of computing machine. "[14]
1945 "George Polya published his best-selling book on thinking heuristically, How to Solve It in 1945. This book introduced the term 'heuristic' into modern thinking and has influenced many AI scientists."[8]
1945 "Vannevar Bush published As We May Think (Atlantic Monthly, July 1945) a prescient vision of the future in which computers assist humans in many activities."[8]
1946 ": ENIAC (Electronic Numerical Integrator and Computer), the first computer in a room size of 30 tons, started to work."[9]
1948 "John von Neumann introduced the idea of self-replicating program"[9]
1949 " Edmund Berkeley publishes Giant Brains: Or Machines That Think in which he writes: “Recently there have been a good deal of news about strange giant machines that can handle information with vast speed and skill….These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”"[10]
1949 "Donald Hebb publishes Organization of Behavior: A Neuropsychological Theory in which he proposes a theory about learning based on conjectures regarding neural networks and the ability of synapses to strengthen or weaken over time."[10] "Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning."[1]
1950 " Claude Shannon’s “Programming a Computer for Playing Chess” is the first published article on developing a chess-playing computer program."[10]
1950 " Alan Turing publishes “Computing Machinery and Intelligence” in which he proposes “the imitation game” which will later become known as the “Turing Test.”"[10] ". Turing, on the other hand, raised the question of the possible intelligence of a machine for the first time in his famous 1950 article "Computing Machinery and Intelligence" and described a "game of imitation", where a human should be able to distinguish in a teletype dialogue whether he is talking to a man or a machine."[3]
1950 "Claude Shannon published detailed analysis of chess playing as search in "Programming a computer to play chess" (1950)."[8]
1951 " Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3000 vacuum tubes to simulate a network of 40 neurons."[10]
1950 "In a 1950 Scientific American article, Claude Shannon argued that only an artificial intelligence program could play computer chess"[15]
1951 "1951: The first artificial intelligence programs for the Mark 1 device were written"[9]
1952 " Arthur Samuel develops the first computer checkers-playing program and the first computer program to learn on its own."[10]
1952 "Hodgkin-Huxley model of the brain as neurons forming an electrical network, with individual neurons firing in all-or-nothing (on/off) pulses."[2]
1953 "Other recent work includes the development of languages for reasoning about time-dependent data such as “the account was paid yesterday.” These languages are based on tense logic, which permits statements to be located in the flow of time. (Tense logic was invented in 1953 by the philosopher Arthur Prior at the University of Canterbury, Christchurch, New Zealand.)"[14]
1954 "In the US, one of the main motivations for the funding of AI research was the promise of machine translation (MT). Because of Cold War concerns, the US government was particularly interested in the automatic and instant translation of Russian. In 1954, the first demonstration of MT, the Georgetown-IBM experiment, showed a great promise. The system was by no means complete, consisting only six rules, a 250-item vocabulary and specialized only in Organic Chemistry."[5]
1954 "It was not until 1954, however, that Belmont Farley and Wesley Clark of MIT succeeded in running the first artificial neural network—albeit limited by computer memory to no more than 128 neurons. They were able to train their networks to recognize simple patterns."[14]
1955 "August 31, 1955 The term “artificial intelligence” is coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place a year later, in July and August 1956, is generally considered as the official birthdate of the new field."[10]
1955 "December 1955 Herbert Simon and Allen Newell develop the Logic Theorist, the first artificial intelligence program, which eventually would prove 38 of the first 52 theorems in Whitehead and Russell's Principia Mathematica."[10] " An Allen Newell and Herbert A. Simon created the "first artificial intelligence program"Which was named as "Logic Theorist". This program had proved 38 of 52 Mathematics theorems, and find new and more elegant proofs for some theorems."[1]
1955–1956 " An important landmark in this area was a theorem-proving program written in 1955–56 by Allen Newell and J. Clifford Shaw of the RAND Corporation and Herbert Simon of the Carnegie Mellon University. The Logic Theorist, as the program became known, was designed to prove theorems from Principia Mathematica (1910–13), a three-volume work by the British philosopher-mathematicians Alfred North Whitehead and Bertrand Russell. In one instance, a proof devised by the program was more elegant than the proof given in the books."[14]
1956 "In 1956, a conference "Artificial Intelligence" was held for the first time in Hanover, New Hampshire, at Dartmouth College. "[16] "August 31, 1955 The term “artificial intelligence” is coined in a proposal for a “2 month, 10 man study of artificial intelligence” submitted by John McCarthy (Dartmouth College), Marvin Minsky (Harvard University), Nathaniel Rochester (IBM), and Claude Shannon (Bell Telephone Laboratories). The workshop, which took place a year later, in July and August 1956, is generally considered as the official birthdate of the new field.""[10] "The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founder of the discipline. Anecdotally, it is worth noting the great success of what was not a conference but rather a workshop. Only six people, including McCarthy and Minsky, had remained consistently present throughout this work (which relied essentially on developments based on formal logic)."[3][5][1]
1956 "Newell and Simon [1956] built a program, Logic Theorist, that discovers proofs in propositional logic."[13] "Five years later, the proof of concept was initialized through Allen Newell, Cliff Shaw, and Herbert Simon’s, Logic Theorist. The Logic Theorist was a program designed to mimic the problem solving skills of a human and was funded by Research and Development (RAND) Corporation. It’s considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by John McCarthy and Marvin Minsky in 1956."[4] "But the field of AI wasn't formally founded until 1956, at a conference at Dartmouth College, in Hanover, New Hampshire, where the term "artificial intelligence" was coined."[6] "1956: The logic theorist (Logic Theory-LT) program for solving mathematical problems is introduced by Neweell, Shaw and Simon. The system is regarded as the first artificial intelligence system."[9]
1957 " Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network. The New York Times reported the Perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The New Yorker called it a “remarkable machine… capable of what amounts to thought.”"[10]
1957 "Herbert Simon, economist and sociologist, prophesied in 1957 that the AI would succeed in beating a human at chess in the next 10 years, but the AI then entered a first winter. Simon's vision proved to be right... 30 years later."[3]
1957 "The General Problem Solver (GPS) demonstrated by Newell, Shaw & Simon."[8] "Newell, Simon, and Shaw went on to write a more powerful program, the General Problem Solver, or GPS. The first version of GPS ran in 1957, and work continued on the project for about a decade. GPS could solve an impressive variety of puzzles using a trial and error approach."[14]
1958 "John McCarthy develops programming language Lisp which becomes the most popular programming language used in artificial intelligence research."[10] "1958: John McCarty of MIT created the LISP (list Processing language) language."[9]
1958 "1958: Herbert Gelernter’s “geometry machine” becomes the first advanced AI programme to prove geometric theorems and the third ever in creation."[17]
1959 "Arthur Samuel coins the term “machine learning,” reporting on programming a computer “so that it will learn to play a better game of checkers than can be played by the person who wrote the program.”"[10]
1959 "Oliver Selfridge publishes “Pandemonium: A paradigm for learning” in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes a model for a process by which computers could recognize patterns that have not been specified in advance."[10]
1959 "John McCarthy publishes “Programs with Common Sense” in the Proceedings of the Symposium on Mechanization of Thought Processes, in which he describes the Advice Taker, a program for solving problems by manipulating sentences in formal languages with the ultimate objective of making programs “that learn from their experience as effectively as humans do.”"[10]
1959 "Once real computers were built, some of the first applications of computers were AI programs. For example, Samuel [1959] built a checkers program in 1952 and implemented a program that learns to play checkers in the late 1950s."[13]
1960 "JCR Licklider described the human-machine relationship in his work"[9]
1961 "The first industrial robot, Unimate, starts working on an assembly line in a General Motors plant in New Jersey."[10]
1961 "James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level."[8]
1961 "James Slagle develops SAINT (Symbolic Automatic INTegrator), a heuristic program that solved symbolic integration problems in freshman calculus."[10]
1963 " 1963 article by Reed C. Lawlor, a member of the California Bar, entitled "What Computers Can Do: Analysis and Prediction of Judicial Decisions""[3]
1963 "Thomas Evans' program, ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests."[8]
1963 "Ivan Sutherland's MIT dissertation on Sketchpad introduced the idea of interactive graphics into computing."[8]
1963 "Edward A. Feigenbaum & Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence."[8]
1964 "Daniel Bobrow completes his MIT PhD dissertation titled “Natural Language Input for a Computer Problem Solving System” and develops STUDENT, a natural language understanding computer program."[10]
1964 Society for the Study of Artificial Intelligence and the Simulation of Behaviour United Kingdom
1964 "Danny Bobrow's dissertation at MIT (tech.report #1 from MIT's AI group, Project MAC), shows that computers can understand natural language well enough to solve algebra word problems correctly."[8]
1964 "Bert Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems"[8]
1965 "Herbert Simon said in The Shape of Automation for Men and Management (1965) that “machines will be capable, within 20 years, of doing any work a man can do”"[18] "Herbert Simon predicts that "machines will be capable, within twenty years, of doing any work a man can do.""[10]
1965 "Hubert Dreyfus publishes "Alchemy and AI," arguing that the mind is not like a computer and that there were limits beyond which AI would not progress."[10]
1965 "I.J. Good writes in "Speculations Concerning the First Ultraintelligent Machine" that “the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”"[10]
1965 " Joseph Weizenbaum develops ELIZA, an interactive program that carries on a dialogue in English language on any topic. Weizenbaum, who wanted to demonstrate the superficiality of communication between man and machine, was surprised by the number of people who attributed human-like feelings to the computer program."[10]
1965 "Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi start working on DENDRAL at Stanford University. The first expert system, it automated the decision-making process and problem-solving behavior of organic chemists, with the general aim of studying hypothesis formation and constructing models of empirical induction in science."[10]
1965 Expert system "The path was actually opened at MIT in 1965 with DENDRAL (expert system specialized in molecular chemistry) "[3]
1965 "J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language. (See Carl Hewitt's downloadable PDF file Middle History of Logic Programming)."[8]
1965 "Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPA-net when a version that "simulated" the dialogue of a psychotherapist was programmed."[8]
1966 "Shakey the robot is the first general-purpose mobile robot to be able to reason about its own actions. In a Life magazine 1970 article about this “first electronic person,” Marvin Minsky is quoted saying with “certitude”: “In from three to eight years we will have a machine with the general intelligence of an average human being.”"[10]
1966 "1966: Birth of the first chatbot The German-American computer scientist Joseph Weizenbaum of the Massachusetts Institute of Technology invents a computer program that communicates with humans. ‘ELIZA’ uses scripts to simulate various conversation partners such as a psychotherapist. Weizenbaum is surprised at the simplicity of the means required for ELIZA to create the illusion of a human conversation partner."[19] " The researchers emphasized developing algorithms which can solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, which was named as ELIZA."[1]
1966 "The onset of the AI winter could be traced to the government’s decision to pull back on AI research. The decisions

were often attributed to a couple of infamous reports, specifically the Automatic Language Processing Advisory Committee (ALPAC) report by U.S. Government in 1966, and the Lighthill report for the British government in 1973."[5] ||

1966 Artificial Intelligence Center[20]
1966 "Ross Quillian (PhD dissertation, Carnegie Inst. of Technology; now CMU) demonstrated semantic nets."[8]
1966 "First Machine Intelligence workshop at Edinburgh - the first of an influential annual series organized by Donald Michie and others."[8]
1966 "Negative report on machine translation kills much work in Natural Language Processing (NLP) for many years."[8]
1967 "Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford) demonstrated to interpret mass spectra on organic chemical compounds. First successful knowledge-based program for scientific reasoning."[8]
1967 "Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma (PDF file) program. First successful knowledge-based program in mathematics."[8]
1967 "Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play."[8]
1967 "For example, the STUDENT program of Daniel Bobrow [1967] could solve high school algebra tasks expressed in natural language."[13]
1968 "The film 2001: Space Odyssey is released, featuring Hal, a sentient computer."[10] "In 1968 Stanley Kubrick directed the film "2001 Space Odyssey" where a computer - HAL 9000 (only one letter away from those of IBM) summarizes in itself the whole sum of ethical questions posed by AI: will it represent a high level of sophistication, a good for humanity or a danger? The impact of the film will naturally not be scientific but it will contribute to popularize the theme, just as the science fiction author Philip K. Dick, who will never cease to wonder if, one day, the machines will experience emotions."[3]
1968 "Terry Winograd develops SHRDLU, an early natural language understanding computer program."[10]
1969 "Arthur Bryson and Yu-Chi Ho describe backpropagation as a multi-stage dynamic system optimization method. A learning algorithm for multi-layer artificial neural networks, it has contributed significantly to the success of deep learning in the 2000s and 2010s, once computing power has sufficiently advanced to accommodate the training of large networks."[10]
1969 "Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry, highlighting the limitations of simple neural networks. In an expanded edition published in 1988, they responded to claims that their 1969 conclusions significantly reduced funding for neural network research: “Our version is that progress had already come to a virtual halt because of the lack of adequate basic theories… by the mid-1960s there had been a great many experiments with perceptrons, but no one had been able to explain why they were able to recognize certain kinds of patterns and not others.”"[10]
1969 Conference International Joint Conference on Artificial Intelligence "First International Joint Conference on Artificial Intelligence (IJCAI) held in Washington, D.C."[8]
1969 "SRI robot, Shakey, demonstrated combining locomotion, perception and problem solving."[8]
1969 "Roger Schank (Stanford) defined conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner."[8]
1970 "It was with the advent of the first microprocessors at the end of 1970 that AI took off again and entered the golden age of expert systems."[3]
1970 Literature Artificial Intelligence (journal)[21]
1970 "The first anthropomorphic robot, the WABOT-1, is built at Waseda University in Japan. It consisted of a limb-control system, a vision system and a conversation system."[10]
1970 " In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved."[4]
1970 "Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-aided instruction based on semantic nets as the representation of knowledge."[8]
1970 "Bill Woods described Augmented Transition Networks (ATN's) as a representation for natural language understanding."[8]
1970 "Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks."[8]
1971 "Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English."[8]
1972 Expert system "MYCIN, an early expert system for identifying bacteria causing severe infections and recommending antibiotics, is developed at Stanford University."[4] "Stanford University in 1972 with MYCIN (system specialized in the diagnosis of blood diseases and prescription drugs). "[3] "1972: AI enters the medical field. With ‘MYCIN’, artificial intelligence finds its way into medical practices: The expert system developed by Ted Shortliffe at Stanford University is used for the treatment of illnesses. Expert systems are computer programs that bundle the knowledge for a specialist field using formulas, rules, and a knowledge database. They are used for diagnosis and treatment support in medicine."[19]
1972 "The first intelligent humanoid robot was built in Japan which was named as WABOT-1."[1]
1972 Literature Hubert Dreyfus publishes What Computers Can't Do.
1972 "Prolog developed by Alain Colmerauer."[8]
1972 "Work on MYCIN, an expert system for treating blood infections, began at Stanford University in 1972. MYCIN would attempt to diagnose patients based on reported symptoms and medical test results."[14]
1973 "James Lighthill reports to the British Science Research Council on the state artificial intelligence research, concluding that "in no part of the field have discoveries made so far produced the major impact that was then promised," leading to drastically reduced government support for AI research."[4] "The “Lighthill report” commonly refers to “Artificial Intelligence: A General Survey” by Professor Sir James Lighthill of Cambridge University in 1973. His review of AI was at the request of Brian Flowers, the head of the British Science Research Council, the main funding body of British university scientific research. The review was to help the council evaluate requests for support in AI research. In the paper, Lighthill offered a pessimistic prognosis for AI, stating that “in no part of the field have discoveries made so far produced the major impact that was then promised”"[5]
1973 "The logic programming language PROLOG (Programmation en Logique) was conceived by Alain Colmerauer at the University of Aix-Marseille, France, where the language was first implemented in 1973. PROLOG was further developed by the logician Robert Kowalski, a member of the AI group at the University of Edinburgh. This language makes use of a powerful theorem-proving technique known as resolution, invented in 1963 at the U.S. Atomic Energy Commission’s Argonne National Laboratory in Illinois by the British logician Alan Robinson. PROLOG can determine whether or not a given statement follows logically from other given statements. For example, given the statements “All logicians are rational” and “Robinson is a logician,” a PROLOG program responds in the affirmative to the query “Robinson is rational?” PROLOG is widely used for AI work, especially in Europe and Japan."[14]
1973 "The onset of the AI winter could be traced to the government’s decision to pull back on AI research. The decisions were often attributed to a couple of infamous reports, specifically the Automatic Language Processing Advisory Committee (ALPAC) report by U.S. Government in 1966, and the Lighthill report for the British government in 1973."[5]
1973 "1973: DARPA begins development for protocols called TCP / IP"[9]
1974 Conference European Conference on Artificial Intelligence[22]
1974 "Ted Shortliffe's PhD dissertation on MYCIN (Stanford) demonstrated the power of rule-based systems for knowledge representation and inference in the domain of medical diagnosis and therapy. Sometimes called the first expert system."[8]
1974 "Earl Sacerdoti developed one of the first planning programs, ABSTRIPS, and developed techniques of hierarchical planning."[8]
1975 "Marvin Minsky published his widely-read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together."[8]
1975 "The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry) the first scientific discoveries by a computer to be published in a refereed journal."[8]
1976 "Computer scientist Raj Reddy publishes “Speech Recognition by Machine: A Review” in the Proceedings of the IEEE, summarizing the early work on Natural Language Processing (NLP)."[10]
1976 "AI research languishes as processing power proves unable to keep up with the promising theoretical groundwork being laid by computer scientists. Roboticist Hans Moravec says computers are “still millions of times too weak to exhibit intelligence.”"[23]
1976 "Doug Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely-guided search for interesting conjectures)."[8]
1976 "Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford."[8]
Mid1970s "Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in NLP."[8]
Mid1970s "Alan Kay and Adele Goldberg (Xerox PARC) developed the Smalltalk language, establishing the power of object-oriented programming and of icon-oriented interfaces."[8]
Mid1970s "David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception."[8]
1977 iLabs[24] Italy
1978 Expert system "The XCON (eXpert CONfigurer) program, a rule-based expert system assisting in the ordering of DEC's VAX computers by automatically selecting the components based on the customer's requirements, is developed at Carnegie Mellon University."[10]
1978 ". In 1978 Japan’s Ministry of International Trade and Industry (MITI) commissioned a study of what the future would hold

for computers, and three years later attempted to construct fifth generation computers – creating what project heads described as an ‘epochal’ leap in computer technology, in order to give Japan the technological lead for years to come. This new generation of machines would not be built on standard microprocessors, but multiprocessor machines specializing in logic programming. The bet was that these high-power logic machines would catalyze the world of information processing and realize artificial intelligence."[5] ||

1978 " Herbert Simon earned a Nobel Prize for his limited Rationality Theory, which is an important work on Artificial Intelligence."[9]
1978 "Tom Mitchell, at Stanford, invented the concept of Version Spaces for describing the search space of a concept formation program."[8]
1978 "Herb Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing""[8]
1978 "The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge can be used to plan gene-cloning experiments."[8]
1979 "The Stanford Cart successfully crosses a chair-filled room without human intervention in about five hours, becoming one of the earliest examples of an autonomous vehicle."[10]
1979 Association for the Advancement of Artificial Intelligence United States
1979 "Mycin program, initially written as Ted Shortliffe's Ph.D. dissertation at Stanford, was demonstrated to perform at the level of experts. Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells"."[8]
1979 "Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge."[8]
1979 "Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming."[8]
1979 "Drew McDermott & Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance."[8]
Late 1970s "Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration."[8]
1980 "Computer scientist Edward Feigenbaum helps reignite AI research by leading the charge to develop “expert systems”—programs that learn by ask experts in a given field how to respond in certain situations. Once the system compiles expert responses for all known situations likely to occur in that field, the system can provide field-specific expert guidance to nonexperts."[23]
1980 Expert system " After AI winter duration, AI came back with "Expert System". Expert systems were programmed that emulate the decision-making ability of a human expert."[1]
1980 "Wabot-2 is built at Waseda University in Japan, a musician humanoid robot able to communicate with a person, read a musical score and play tunes of average difficulty on an electronic organ."[10]
1980 Expert system "Digital Equipment Corporation began requiring their sales team use an Expert System named XCON when placing customer orders. DEC sold a broad range of computer components, but the sales force was not especially knowledgeable about what they were selling."[2]
1980 "In the Year 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University."[1]
1980 "The year of AI. In 1980, AI research fired back up with an expansion of funds and algorithmic tools. With deep learning techniques, the computer learned with the user experience."[25]
1980 "Lee Erman, Rick Hayes-Roth, Victor Lesser and Raj Reddy published the first description of the blackboard model, as the framework for the HEARSAY-II speech understanding system."[8]
1980 "First National Conference of the American Association of Artificial Intelligence (AAAI) held at Stanford."[8]
1980 " (The term strong AI was introduced for this category of research in 1980 by the philosopher John Searle of the University of California at Berkeley.) "[14]
1981 "In 1981 an expert system named SID (Synthesis of Integral Design) designed 93% of the VAX 9000 CPU logic gates. The SID system was existing out of 1,000 hand-written-rules. The final design of the CPU took 3 hours to calculate and outperformed in many ways the human experts. As an example, the SID produced a faster 64-bit adder than the manually designed one. Also the bug per gate rate, which where around 1 bug per 200 gates from human experts, was much lower at around 1 bug per 20,000 gates at the final result of the SID system."[26]
1981 "Danny Hillis designs the connection machine, a massively parallel architecture that brings new power to AI, and to computation in general. (Later founds Thinking Machines, Inc.)"[8]
1981 "he Japanese Ministry of International Trade and Industry budgets $850 million for the Fifth Generation Computer project. The project aimed to develop computers that could carry on conversations, translate languages, interpret pictures, and reason like human beings"[10]
1981 "Japan’s Ministry of International Trade and Industry (MITI) commissioned a study of what the future would hold for computers, and three years later attempted to construct fifth generation computers – creating what project heads described as an ‘epochal’ leap in computer technology, in order to give Japan the technological lead for years to come. This new generation of machines would not be built on standard microprocessors, but multiprocessor machines specializing in logic programming. The bet was that these high-power logic machines would catalyze the world of information processing and realize artificial intelligence."[5] Japan
1982 European Association for Artificial Intelligence
1983 Turing Institute United Kingdom
1983 "John Laird & Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on SOAR."[8]
1983 "James Allen invents the Interval Calculus, the first widely used formalization of temporal events."[8]
1984 "Electric Dreams is released, a film about a love triangle between a man, a woman and a personal computer."[10]
1984 "At the annual meeting of AAAI, Roger Schank and Marvin Minsky warn of the coming “AI Winter,” predicting an immanent bursting of the AI bubble (which did happen three years later), similar to the reduction in AI investment and research funding in the mid-1970s."[10]
1984 "CYC is a large experiment in symbolic AI. The project began in 1984 under the auspices of the Microelectronics and Computer Technology Corporation, a consortium of computer, semiconductor, and electronics manufacturers. "[14]
1985 "The autonomous drawing program, Aaron, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments)."[8]
1986 "First driverless car, a Mercedes-Benz van equipped with cameras and sensors, built at Bundeswehr University in Munich under the direction of Ernst Dickmanns, drives up to 55 mph on empty streets."[10]
1986 Literature Hubert Dreyfus publishes Mind over Machine.
1986 "In one famous connectionist experiment conducted at the University of California at San Diego (published in 1986), David Rumelhart and James McClelland trained a network of 920 artificial neurons, arranged in two layers of 460 neurons, to form the past tenses of English verbs. Root forms of verbs—such as come, look, and sleep—were presented to one layer of neurons, the input layer. A supervisory computer program observed the difference between the actual response at the layer of output neurons and the desired response—came, say—and then mechanically adjusted the connections throughout the network in accordance with the procedure described above to give the network a slight push in the direction of the correct response. About 400 different verbs were presented one by one to the network, and the connections were adjusted after each presentation. This whole procedure was repeated about 200 times using the same verbs, after which the network could correctly form the past tense of many unfamiliar verbs as well as of the original verbs."[14]
1986 (October) Centre for Artificial Intelligence and Robotics[27] India
1986 (October) "October 1986 David Rumelhart, Geoffrey Hinton, and Ronald Williams publish ”Learning representations by back-propagating errors,” in which they describe “a new learning procedure, back-propagation, for networks of neurone-like units.”"[10]
1986 "1986: ‘NETtalk’ speaks. The computer is given a voice for the first time. Terrence J. Sejnowski and Charles Rosenberg teach their ‘NETtalk’ program to speak by inputting sample sentences and phoneme chains. NETtalk is able to read words and pronounce them correctly, and can apply what it has learned to words it does not know. It is one of the early artificial neural networks — programs that are supplied with large datasets and are able to draw their own conclusions on this basis. Their structure and function are thereby similar to those of the human brain."[19]
1986 Conference International Conference on User Modeling, Adaptation, and Personalization
1987 "The video Knowledge Navigator, accompanying Apple CEO John Sculley’s keynote speech at Educom, envisions a future in which “knowledge applications would be accessed by smart agents working over networks connected to massive amounts of digitized information.”"[10]
1987 Literature AI & Society
1987 Literature Applied Artificial Intelligence
1987 Literature International Journal of Pattern Recognition and Artificial Intelligence
1987 "Marvin Minsky publishes The Society of Mind, a theoretical description of the mind as a collection of cooperating agents."[8]
1988 "Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems. His 2011 Turing Award citation reads: “Judea Pearl created the representational and computational foundation for the processing of information under uncertainty. He is credited with the invention of Bayesian networks, a mathematical formalism for defining complex probability models, as well as the principal algorithms used for inference in these models. This work not only revolutionized the field of artificial intelligence but also became an important tool for many other branches of engineering and the natural sciences.”"[10]
1988 Dalle Molle Institute for Artificial Intelligence Research Switzerland
1988 "Rollo Carpenter develops the chat-bot Jabberwacky to "simulate natural human chat in an interesting, entertaining and humorous manner." It is an early attempt at creating artificial intelligence through human interaction."[10]
1988 "Members of the IBM T.J. Watson Research Center publish “A statistical approach to language translation,” heralding the shift from rule-based to probabilistic methods of machine translation, and reflecting a broader shift to “machine learning” based on statistical analysis of known examples, not comprehension and “understanding” of the task at hand (IBM’s project Candide, successfully translating between English and French, was based on 2.2 million pairs of sentences, mostly from the bilingual proceedings of the Canadian parliament)."[10]
1988 German Research Centre for Artificial Intelligence Germany
1989 "Marvin Minsky and Seymour Papert publish an expanded edition of their 1969 book Perceptrons. In “Prologue: A View from 1988” they wrote: “One reason why progress has been so slow in this field is that researchers unfamiliar with its history have continued to make many of the same mistakes that others have made before them.”"[10]
1989 "Yann LeCun and other researchers at AT&T Bell Labs successfully apply a backpropagation algorithm to a multi-layer neural network, recognizing handwritten ZIP codes. Given the hardware limitations at the time, it took about 3 days (still a significant improvement over earlier efforts) to train the network."[10]
1989 Literature Journal of Experimental and Theoretical Artificial Intelligence
1989 (November 9) Literature The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics
1989 "Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network), which grew into the system that drove a car coast-to-coast under computer control for all but about 50 of the 2850 miles."[8]
1990 "Rodney Brooks publishes “Elephants Don’t Play Chess,” proposing a new approach to AI—building intelligent systems, specifically robots, from the ground up and on the basis of ongoing physical interaction with the environment: “The world is its own best model… The trick is to sense it appropriately and often enough.”"[10]
1991 European Neural Network Society[28][29]
1991 In 1991 the American philanthropist Hugh Loebner started the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program has come close to passing an undiluted Turing test.[14]
1992 Literature International Journal on Artificial Intelligence Tools[30]
1993 Journal of Artificial Intelligence Research[31]
1993 "Vernor Vinge publishes “The Coming Technological Singularity,” in which he predicts that “within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”"[10]
1994 Conference Artificial Evolution Conference[32] France
1995 "Richard Wallace develops the chatbot A.L.I.C.E (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program, but with the addition of natural language sample data collection on an unprecedented scale, enabled by the advent of the Web."[10]
1995 "1995: AltaVista becomes the first search engine to use natural language processing."[17]
1996 "EQP theorem prover at Argonne National Labs proves the Robbins Conjecture in mathematics (October-November, 1996)."[8]
1997 "Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of a recurrent neural network used today in handwriting recognition and speech recognition."[10]
1997 ". In 1997, reigning world chess champion and grand master Gary Kasparov was defeated by IBM’s Deep Blue, a chess playing computer program. This highly publicized match was the first time a reigning world chess champion loss to a computer and served as a huge step towards an artificially intelligent decision making program."[4] "Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion."[10]
1997 " speech recognition software, developed by Dragon Systems, was implemented on Windows. This was another great step forward but in the direction of the spoken language interpretation endeavor."[4]
1998 "Dave Hampton and Caleb Chung create Furby, the first domestic or pet robot."[4]
1998 Literature Autonomous Agents and Multi-Agent Systems[33]
1998 "Yann LeCun, Yoshua Bengio and others publish papers on the application of neural networks to handwriting recognition and on optimizing backpropagation."[4]
1998 " And it was in 1998 that Amazon began using “collaborative filtering” enabling recommendations for millions of customers."[34]
1998 "1998 Tiger Electronics' Furby is released, and becomes the first successful attempt at producing a type of A.I to reach a domestic environment."[11]
Late 1990s "Late 1990s Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web."[11]
1990s "Demonstration of an Intelligent Room and Emotional Agents at MIT's AI Lab. Initiation of work on the Oxygen Architecture, which connects mobile and stationary computers in an adaptive network."[7]
2000 "MIT’s Cynthia Breazeal develops Kismet, a robot that could recognize and simulate emotions."[4]
2000 "Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in a restaurant setting."[4]
2000 Conference Mexican International Conference on Artificial Intelligence[35] Mexico
2000 "Cynthia Breazeal at MIT publishes her dissertation on Sociable Machines, describing KISMET, a robot with a face that expresses emotions."[7]
2001 "A.I. Artificial Intelligence is released, a Steven Spielberg film about David, a childlike android uniquely programmed with the ability to love."[4]
2001 Artificial General Intelligence Research Institute[36] United States
2002 "Year 2002: for the first time, AI entered the home in the form of Roomba, a vacuum cleaner."[1]
2002 Conference RuleML Symposium[37]
2003 "In 2003, Geoffrey Hinton (University of Toronto), Yoshua Bengio (University of Montreal) and Yann LeCun (University of New York) decided to start a research program to bring neural networks up to date. Experiments conducted simultaneously at Microsoft, Google and IBM with the help of the Toronto laboratory in Hinton showed that this type of learning succeeded in halving the error rates for speech recognition. Similar results were achieved by Hinton's image recognition team."[3]
2003 MIT Computer Science and Artificial Intelligence Laboratory[38] United States
2004 "The first DARPA Grand Challenge, a prize competition for autonomous vehicles, is held in the Mojave Desert. None of the autonomous vehicles finished the 150-mile route."[4]
2004 Conference International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics[39] Italy
2006 "Oren Etzioni, Michele Banko, and Michael Cafarella coin the term “machine reading,” defining it as an inherently unsupervised “autonomous understanding of text.”"[4]
2006 "Geoffrey Hinton publishes “Learning Multiple Layers of Representation,” summarizing the ideas that have led to “multilayer neural networks that contain top-down connections and training them to generate sensory data rather than to classify it,” i.e., the new approaches to deep learning."[4]
2006 "Year 2006: AI came in the Business world till the year 2006. Companies like Facebook, Twitter, and Netflix also started using AI."[1]
2006 The first unassisted robotic surgery, conducted by an AI doctor, is performed on a 34-year-old male to correct heart arrhythmia. The results are rated as better than those of an above-average human surgeon. The machine has a database of 10,000 similar operations, and so, in the words of its designers, is "more than qualified to operate on any patient".[40][41]
2006 Conference AI@50[42]
2007 "Fei Fei Li and colleagues at Princeton University start to assemble ImageNet, a large database of annotated images designed to aid in visual object recognition software research."[4]
2008 Eliezer Yudkowsky calls for the creation of “friendly AI” to mitigate existential risk from advanced artificial intelligence. Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[43] United States
2008 Conference Conference on Artificial General Intelligence[44]
2009 "Rajat Raina, Anand Madhavan and Andrew Ng publish “Large-scale Deep Unsupervised Learning using Graphics Processors,” arguing that “modern graphics processors far surpass the computational capabilities of multicore CPUs, and have the potential to revolutionize the applicability of deep unsupervised learning methods.”"[4]
2009 "Google starts developing, in secret, a driverless car. In 2014, it became the first to pass, in Nevada, a U.S. state self-driving test."[4]
2009 "Computer scientists at the Intelligent Information Laboratory at Northwestern University develop Stats Monkey, a program that writes sport news stories without human intervention."[4]
2010 "Launch of the ImageNet Large Scale Visual Recognition Challenge (ILSVCR), an annual AI object recognition competition."[4]
2010 DeepMind[45]
2011 "A convolutional neural network wins the German Traffic Sign Recognition competition with 99.46% accuracy (vs. humans at 99.22%)."[4]
2011 "And in 2011, the computer giant's question-answering system Watson won the quiz show "Jeopardy!" by beating reigning champions Brad Rutter and Ken Jennings."[6]
2011 "This year, the talking computer "chatbot" Eugene Goostman captured headlines for tricking judges into thinking he was real skin-and-blood human during a Turing test,"[6]
2011 "Watson, a natural language question answering computer, competes on Jeopardy! and defeats two former champions."[4]
2011 "Researchers at the IDSIA in Switzerland report a 0.27% error rate in handwriting recognition using convolutional neural networks, a significant improvement over the 0.35%-0.40% error rate in previous years."[4]
2011 "2011: AI enters everyday life. Technology leaps in the hardware and software fields pave the way for artificial intelligence to enter everyday life. Powerful processors and graphics cards in computers, smartphones, and tablets give regular consumers access to AI programs. Digital assistants in particular enjoy great popularity: Apple’s ‘Siri’ comes to the market in 2011, Microsoft introduces the ‘Cortana’ software in 2014, and Amazon presents Amazon Echo with the voice service ‘Alexa’ in 2015."[19]
2012 "June 2012 Jeff Dean and Andrew Ng report on an experiment in which they showed a very large neural network 10 million unlabeled images randomly taken from YouTube videos, and “to our amusement, one of our artificial neurons learned to respond strongly to pictures of... cats.”"[4]
2012 (July 13) Literature The Machine Question: Critical Perspectives on AI, Robots, and Ethics
2012 "October 2012 A convolutional neural network designed by researchers at the University of Toronto achieve an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the 25% error rate achieved by the best entry the year before."[10]
2012 The security market is flooded by computer vision start-ups.[46]
2013 "Boston Dynamics unveils Atlas , an advanced humanoid robot designed for various search-and-rescue tasks."[23][47]
2013 ". Automated Insights published 300 million pieces of content in 2013"[34]
2014 (January) "The 3-years-old DeepMind being acquired by Google in Jan. 2014;"[48]
2014 "Google starts developing, in secret, a driverless car. In 2014, it became the first to pass, in Nevada, a U.S. state self-driving test."[4]
2014 Allen Institute for AI[49][50] United States
2014 "Microsoft introduces the ‘Cortana’ software"[19]
2014 Future of Life Institute[51] United States
2014 Squirrel AI[52][53] China
2014 Kiev Laboratory for Artificial Intelligence[54] Ukraine
2014 "Ian Goodfellow comes up with Generative Adversarial Networks (GAN)."[55]
2014 "When programmatic ad buying was popularized in 2014, it introduced us to artificial intelligence-based ad buying, effectively removing the broken, laborious manual tasks of researching target markets, budgets, insertion orders, and layers of additional analytics tracking – not to mention high prices."[34]
2015 "Amazon introduces service ‘Alexa’ in 2015."[19]
2015 The Chinese Congress on Artificial Intelligence 2015 takes place in Beijing, setting the direction for AI-related industries in China.[46] China
2015 Open Letter on Artificial Intelligence[56]
2015 (September 22) Literature The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
2015 "t. In 2015, Google introduced its latest artificial intelligence algorithm, RankBrain, which makes significant advances in interpreting search queries in new ways. Through RankBrain, Google has been successful in interpreting the intent behind a user’s search terms, making for a more relevant result."[34]
2016 "March 2016 Google DeepMind's AlphaGo defeats Go champion Lee Sedol."[4]
2016 Center for Human-Compatible Artificial Intelligence[57] United States
2016 (February 16) Active Intelligence Pte Ltd[58] Singapore
2016 (September 28) Partnership on AI[59]
2016 "Swarm AI, a real-time online tool, predicts the winning horse of the Kentucky Derby"[23]
2016 " McKinsey estimates that in 2016 Google and Baidu spent around $20 to $30 billion on funding their internal R&D and acquiring startups in the field."[60]
2017 OpenAI Five[61] United States
2017 DeepMind releases AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirms that existing algorithms perform poorly, which is "unsurprising" because the algorithms "are not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".[62][63][64]
2017 Asilomar Conference on Beneficial AI[65]
2017 "The first AI for Good Global summit took place from 7 to 9 June 2017"
2017 AI Now Institute[66] United States
2017 "The AI market (hardware and software) has reached $8 billion"[26]
2017 "Physicists use AI to search data for evidence of previously undetected particles and other phenomena."[23]
2017 "Google’s DeepMind AI teaches itself to walk."[23]
2017 AI is included in the Chinese government's work report as a national strategy.[46] China
2018 "2018: AI debates space travel and makes a hairdressing appointment. These two examples demonstrate the capabilities of artificial intelligence: In June, ‘Project Debater’ from IBM debated complex topics with two master debaters — and performed remarkably well. A few weeks before, Google demonstrated at a conference how the AI program ‘Duplex’ phones a hairdresser and conversationally makes an appointment — without the lady on the other end of the line noticing that she is talking to a machine."[19]
2018 European Laboratory for Learning and Intelligent Systems[67]
2018 (April 26) Innovation Center for Artificial Intelligence[68][69] Netherlands
2018 " In 2018, the size of China’s artificial intelligence market reached 33.9 billion RMB"[46] China
2018 "Astronomers use AI to spot 6,000 new craters on the moon’s surface."[23][70]
2018 "Paul Rad, assistant director of the University of Texas-San Antonio Open Cloud Institute, and Nicole Beebe, director of the university’s Cyber Center for Security and Analytics, describe a new cloud-based learning platform for AI that teaches machines to learn like humans."[23][71]
2018 "Google demonstrates its Duplex AI, a digital assistant that can make appointments via telephone calls with live humans. Duplex uses natural language understanding, deep learning and text-to-speech capabilities to understand conversational context and nuance in ways no other digital assistant has yet matched."[23]
2018 AI ushers in its first year of commercial application in China. There are more than 1,000 AI-related companies in the country by that time.[46] China
2018 The AI Now Report finds harmful inaccuracies in AI-driven technology, plus an alarming lack of accountability and, in some cases, systems built on racial discrimination or used for human rights violations.[72]
2019 Center for Security and Emerging Technology[73][74] United States
2019 Google AI Centre in Ghana[75][76] Ghana
2019 AI Artathon[77][78] Saudi Arabia

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by User:Sebastian.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

  • FIXME

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. "History of Artificial Intelligence". javatpoint.com. Retrieved 7 February 2020.
  2. "A Brief History of Artificial Intelligence". dataversity.net. Retrieved 7 February 2020.
  3. "History of Artificial Intelligence". coe.int. Retrieved 7 February 2020.
  4. "The History of Artificial Intelligence". harvard.edu. Retrieved 7 February 2020.
  5. "The History of Artificial Intelligence" (PDF). washington.edu. Retrieved 7 February 2020.
  6. "A Brief History of Artificial Intelligence". livescience.com. Retrieved 7 February 2020.
  7. "Tema 1 Brief History of Artificial Intelligence". ocw.uc3m.es. Retrieved 21 March 2020.
  8. "A Brief History of AI". aitopics.org. Retrieved 20 March 2020.
  9. Mijwil, Maad M. "History of Artificial Intelligence". Retrieved 9 March 2020.
  10. "A Very Short History Of Artificial Intelligence (AI)". forbes.com. Retrieved 7 February 2020.
  11. "The History Of Artificial Intelligence". sutori.com. Retrieved 20 March 2020.
  12. "Artificial Intelligence". people.idsia.ch. Retrieved 21 March 2020.
  13. "1.2 A Brief History of Artificial Intelligence". artint.info. Retrieved 21 March 2020.
  14. "Artificial intelligence". britannica.com. Retrieved 21 March 2020.
  15. "A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE". atariarchives.org. Retrieved 21 March 2020.
  16. "History of Artificial Intelligence". researchgate.net. Retrieved 9 March 2020.
  17. "A SHORT HISTORY OF ARTIFICIAL INTELLIGENCE: MAKING MYTHOLOGY A REALITY". omnius.com. Retrieved 20 March 2020.
  18. "7 phases of the history of Artificial intelligence". historyextra.com. Retrieved 21 March 2020.
  19. "The history of artificial intelligence". bosch.com. Retrieved 7 February 2020.
  20. "AIC Timeline". ai.sri.com. Retrieved 6 March 2020.
  21. "Artificial Intelligence Journal Division of IJCAI". ijcai.org. Retrieved 6 March 2020.
  22. "ECAI 2010". iospress.nl. Retrieved 6 March 2020.
  23. "The History of Artificial Intelligence". futureoftech.org. Retrieved 9 March 2020.
  24. "ILabs". semanticscholar.org. Retrieved 6 March 2020.
  25. "History of Artificial Intelligence – AI of the past, present and the future!". data-flair.training. Retrieved 4 March 2020.
  26. "A Short History of Artificial Intelligence". dev.to. Retrieved 9 March 2020.
  27. "Centre for Artificial Intelligence and Robotics (CAIR)". epicos.com. Retrieved 6 March 2020.
  28. Taylor, J.G. The Promise of Neural Networks.
  29. Artificial Neural Networks and Machine Learning – ICANN 2017: 26th International Conference on Artificial Neural Networks, Alghero, Italy, September 11-14, 2017, Proceedings, Part 1 (Alessandra Lintas, Stefano Rovetta, Paul F.M.J. Verschure, Alessandro E.P. Villa ed.).
  30. "International Journal on Artificial Intelligence Tools". letpub.com. Retrieved 6 March 2020.
  31. "Journal of Artificial Intelligence Research". jair.org. Retrieved 6 March 2020.
  32. "Artificial Evolution 2019 (EA-2019)". iscpif.fr. Retrieved 6 March 2020.
  33. "Autonomous Agents and Multi-Agent Systems". springer.com. Retrieved 6 March 2020.
  34. "A brief history of artificial intelligence in advertising". econsultancy.com. Retrieved 20 March 2020.
  35. "MICAI 2000: Advances in Artificial Intelligence". springer.com. Retrieved 6 March 2020.
  36. "Artificial General Intelligence Research Institute". morebooks.de. Retrieved 6 March 2020.
  37. Bikakis, Antonis; Fodor, Paul; Roman, Dumitru. Rules on the Web: From Theory to Applications: 8th International Symposium, RuleML 2014, Co-located with the 21st European Conference on Artificial Intelligence, ECAI 2014, Prague, Czech Republic, August 18-20, 2014, Proceedings.
  38. "Mission & History". csail.mit.edu. Retrieved 6 March 2020.
  39. "INTERNATIONAL MEETING ON COMPUTATIONAL INTELLIGENCE METHODS FOR BIOINFORMATICS AND BIOSTATISTICS". person.dibris.unige.it. Retrieved 6 March 2020.
  40. "Autonomous Robotic Surgeon performs surgery on first live human". Engadget. 19 May 2006.
  41. "Robot surgeon carries out 9-hour operation by itself". Phys.Org.
  42. "Dartmouth Artificial Intelligence Conference". dartmouth.edu. Retrieved 6 March 2020.
  43. Eliezer Yudkowsky (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk.
  44. "Artificial General Intelligence 2008". iospress.nl. Retrieved 6 March 2020.
  45. "Expanding our knowledge, finding new answers". deepmind.com. Retrieved 6 March 2020.
  46. "The history of Artificial Intelligence (AI) in China". daxueconsulting.com. Retrieved 21 March 2020.
  47. "Atlas". bostondynamics.com. Retrieved 9 March 2020.
  48. "A Brief History of Artificial Intelligence". kdnuggets.com. Retrieved 9 March 2020.
  49. "Allen Institute for AI". glassdoor.com.ar. Retrieved 6 March 2020.
  50. "Allen Institute for AI (AI2)". linkedin.com. Retrieved 6 March 2020.
  51. "Future of Life Institute". linkedin.com. Retrieved 6 March 2020.
  52. "Adaptive Learning Startup Squirrel AI Raises CN¥1B". medium.com. Retrieved 6 March 2020.
  53. "Squirrel AI Learning". crunchbase.com. Retrieved 6 March 2020.
  54. "Kiev Laboratory for Artificial Intelligence". semanticscholar.org. Retrieved 6 March 2020.
  55. "History of Artificial Intelligence". qbi.uq.edu.au. Retrieved 9 March 2020.
  56. "Elon Musk, Stephen Hawking warn of artificial intelligence dangers". mashable.com. Retrieved 6 March 2020.
  57. "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". news.berkeley.edu. Retrieved 6 March 2020.
  58. "ACTIVE INTELLIGENCE PTE. LTD.". sgpbusiness.com. Retrieved 6 March 2020.
  59. "Exploring The Partnership on AI". medium.com. Retrieved 6 March 2020.
  60. "A Brief History of Artificial Intelligence". business2community.com. Retrieved 20 March 2020.
  61. "OpenAI Five". openai.com. Retrieved 6 March 2020.
  62. "DeepMind Has Simple Tests That Might Prevent Elon Musk's AI Apocalypse". Bloomberg.com. 11 December 2017. Retrieved 5 March 2020.
  63. "Alphabet's DeepMind Is Using Games to Discover If Artificial Intelligence Can Break Free and Kill Us All". Fortune. Retrieved 5 March 2020.
  64. "Specifying AI safety problems in simple environments | DeepMind". DeepMind. Retrieved 5 March 2020.
  65. "Video: Superintelligence Panel at Beneficial AI 2017 (FLI)". medium.com. Retrieved 6 March 2020.
  66. "NYU Law and NYU's AI Now Institute analyze the ways emerging technology imposes upon civil liberties". law.nyu.edu. Retrieved 6 March 2020.
  67. "European Laboratory for Learning and Intelligent Systems (ELLIS) launched with Informatics researchers on board". ed.ac.uk. Retrieved 9 March 2020.
  68. "Innovation Center for Artificial Intelligence officially launched". uva.nl. Retrieved 6 March 2020.
  69. "Ahold Delhaize Helps Launch AI Innovation Center". consumergoods.com. Retrieved 6 March 2020.
  70. "New technique uses AI to locate and count craters on the moon". phys.org. Retrieved 9 March 2020.
  71. "UTSA researchers want to teach computers to learn like humans". utsa.edu. Retrieved 9 March 2020.
  72. "Rise of the Machines: The History of Artificial Intelligence". looklisten.com. Retrieved 21 March 2020.
  73. "Center for Security and Emerging Technology". cset.georgetown.edu. Retrieved 6 March 2020.
  74. "Center for Security and Emerging Technology". linkedin.com. Retrieved 6 March 2020.
  75. "Google takes on 'Africa's challenges' with first AI centre in Ghana". thestar.com.my. Retrieved 6 March 2020.
  76. "How Google is driving artificial intelligence for Africa by Africans". techpoint.africa. Retrieved 6 March 2020.
  77. "About the Global AI Summit". theglobalaisummit.com. Retrieved 6 March 2020.
  78. "Riyadh to host AI art competition". arabnews.jp. Retrieved 6 March 2020.