Timeline of artificial intelligence

This is a '''timeline of {{w|artificial intelligence}}''', which refers to the development and implementation of computer systems or machines that can perform tasks that typically require human intelligence.
  
 
== Sample questions ==

==Big picture==

=== Summary by year ===
  
 
{| class="wikitable"
! Time period !! Development summary !! More details
|-
| 1940s–1950s || Early work || This period sees the first explorations of AI, including the development of artificial neurons, learning rules for adjusting the connections between neurons, and the concept of connectionism (a minimal illustrative sketch of such a learning rule appears just after this table).<ref name="javatpoint.coma"/><ref name="dataversity.netw"/> Expert systems, a type of AI, are first introduced in the early 1950s. Allen Newell and Herbert A. Simon create the first artificial intelligence program, and in 1956 the term "artificial intelligence" is first adopted.<ref name="javatpoint.coma"/> Many consider {{w|John von Neumann}} and {{w|Alan Turing}} to be the founding fathers of the technology behind AI. They pioneer the transition from 19th-century decimal logic to binary logic in computer architecture, a transition that leads to modern computers and their ability to execute programs based on Boolean algebra, and they demonstrate that computers are universal machines capable of performing a wide range of tasks based on programming.<ref name="coe.intf">{{cite web |title=History of Artificial Intelligence |url=https://www.coe.int/en/web/artificial-intelligence/history-of-ai |website=coe.int |accessdate=7 February 2020}}</ref> By the 1950s, a generation of scientists, mathematicians, and philosophers has become familiar with the concept of artificial intelligence (AI).<ref name="harvard.edu d">{{cite web |title=The History of Artificial Intelligence |url=http://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/ |website=harvard.edu |accessdate=7 February 2020}}</ref>
|-
| 1960s–1970s || Knowledge-based AI || During this time, AI researchers focus on developing rule-based systems that can reason and make decisions based on knowledge representations. Around this period, AI experiences significant growth: computers become more available and affordable, allowing for more data storage and faster processing, machine learning algorithms improve, and people get better at knowing which algorithm to apply to their problem.<ref name="harvard.edu d"/>
|-
| 1974–1980 || {{w|AI winter}} || After several reports criticize the lack of progress in artificial intelligence, government funding and interest in the field decrease during this period. Research efforts focus on neural networks, but progress is limited, and functional programs can only handle simple problems. AI researchers had been overly optimistic in setting their goals and had made naive assumptions about the challenges they would face; when the promised results fail to materialize, funding is cut.<ref name="livescience.coms"/><ref name="dataversity.netw"/>
|-
| 1980–1987 || A boom of AI || Following the AI winter, the field of artificial intelligence makes a comeback with the introduction of expert systems, programs that emulate the decision-making ability of a human expert.<ref name="javatpoint.coma"/> AI is reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularize "deep learning" techniques, which allow computers to learn from experience.<ref name="harvard.edu d"/> Funding from the United States and Britain resumes in order to compete with Japan's "fifth generation" computer project and its goal of becoming the global leader in computer technology.<ref name="livescience.coms"/><ref name="dataversity.netw"/><ref name="washington.edu"/>
|-
| 1987–1993 || Second AI winter || Investors and governments stop funding AI research due to high costs and disappointing results, leading to another major AI winter. This period coincides with the collapse of the market for some early general-purpose computers and with reduced government funding. Expert systems such as XCON, although initially cost-effective, become too expensive to maintain compared with desktop computers; they are difficult to update and cannot learn. At the same time, DARPA concludes that AI would not be "the next wave" and redirects its funds to projects deemed more likely to provide quick results. Nevertheless, by the end of the 1980s, over half of the Fortune 500 companies are involved in developing or maintaining expert systems.<ref name="javatpoint.coma"/><ref name="livescience.coms"/><ref name="dataversity.netw"/><ref name="washington.edu"/>
|-
| 1993–2011 || Emergence of intelligent agents || AI research shifts its focus to intelligent agents, which are used for news retrieval, online shopping, and web browsing. Despite a lack of government funding and public hype, AI thrives during the 1990s and 2000s, achieving many of its landmark goals. Neural networks become financially successful in the 1990s, when they start being used for optical character recognition and speech pattern recognition.<ref name="dataversity.netw"/> Major advances are made in all areas of AI, with significant demonstrations in machine learning, natural language understanding, vision, and other fields.<ref name="ocw.uc3m.es"/>
|-
| 2011-onward || Massive data and new computing power; deep learning, big data and artificial general intelligence || In 2011, IBM's Watson wins the quiz show ''Jeopardy!'', showcasing its ability to understand natural language and quickly solve complex questions. The AI field experiences a new boom in the early 2010s, driven by access to massive volumes of data and by the discovery that graphics card processors are highly efficient at accelerating the computation of learning algorithms. These advances enable considerable progress at limited financial cost.<ref name="javatpoint.coma"/><ref name="coe.intf"/>
|}
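
The learning rules for adjusting neuron connections mentioned in the first row above can be illustrated with a short sketch. The following Python snippet is a minimal, purely illustrative implementation of a Hebbian-style weight update; the function name, learning rate, and toy activations are invented for this example and are not drawn from the cited sources.

<syntaxhighlight lang="python">
# Minimal sketch of Hebbian learning ("neurons that fire together, wire together"):
# the connection strength between two neurons grows in proportion to the
# product of their activations. All names and values here are illustrative.

def hebbian_update(weight: float, pre: float, post: float, eta: float = 0.1) -> float:
    """Return the weight after one Hebbian step: w <- w + eta * pre * post."""
    return weight + eta * pre * post

w = 0.0
for _ in range(10):  # two neurons repeatedly active at the same time
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # ~1.0 (up to floating-point rounding) -- the connection has strengthened
</syntaxhighlight>
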
=== Summary by country ===

==Full timeline==

{| class="wikitable"
! Year !! Event type !! Details !! Country/location
|-
| 4th century B.C. || || Greek philosopher {{w|Aristotle}} invents syllogistic logic, the first formal deductive reasoning system.<ref name="aitopics.org">{{cite web |title=A Brief History of AI |url=https://aitopics.org/misc/brief-history |website=aitopics.org |accessdate=20 March 2020}}</ref> ||
|-
| 1st century || || Greek mathematician and engineer {{w|Hero of Alexandria}} creates automatons that operate with mechanical mechanisms powered by water and steam.<ref name="Mijwil">{{cite web |last1=Mijwil |first1=Maad M. |title=History of Artificial Intelligence |url=https://www.researchgate.net/publication/322234922_History_of_Artificial_Intelligence |accessdate=9 March 2020}}</ref> ||
|-
| 1206 || || Ebru İz Bin Rezzaz Al Jezeri ({{w|Ismail al-Jazari}}), whom some consider a pioneer of cybernetic science, creates water-operated, automatically controlled machines.<ref name="Mijwil"/> ||
|-
| 1308 || || Catalan poet and theologian Ramon Llull publishes ''Ars generalis ultima'' (''The Ultimate General Art''), further refining his method of using paper-based mechanical tools to generate new ideas by combining different concepts.<ref name="forbes.coms">{{cite web |title=A Very Short History Of Artificial Intelligence (AI) |url=https://www.forbes.com/sites/gilpress/2016/12/30/a-very-short-history-of-artificial-intelligence-ai/#35827d6b6fba |website=forbes.com |accessdate=7 February 2020}}</ref> ||
|-
| 1623 || || German professor {{w|Wilhelm Schickard}} invents a calculating machine capable of four operations.<ref>{{cite book |last1=Mehta |first1=Dhaval |last2=Ranadive |first2=Dr Amol |title=What Gamers Want: A Framework to Predict Gaming Habits |date=31 January 2021 |publisher=OrangeBooks Publication |url=https://books.google.com.ar/books?id=xuYXEAAAQBAJ&pg=PA60&dq=Wilhelm+Schickard++calculating+machine+capable+of+four+operations&hl=en&sa=X&ved=2ahUKEwiHr4Xq1LT2AhXSpJUCHWrFBvQQ6AF6BAgEEAI#v=onepage&q=Wilhelm%20Schickard%20%20calculating%20machine%20capable%20of%20four%20operations&f=false |language=en}}</ref><ref name="Mijwil"/> || {{w|Germany}}
|-
| 1642 || || {{w|Blaise Pascal}} creates the first mechanical digital calculating machine.<ref name="aitopics.org"/> ||
|-
| 1666 || || German polymath {{w|Gottfried Leibniz}} publishes ''Dissertatio de arte combinatoria'' (''On the Combinatorial Art''), in which, following Ramon Llull, he proposes an alphabet of human thought and argues that all ideas are merely combinations of a relatively small number of simple concepts.<ref name="forbes.coms"/> ||
|-
| 1672 || || {{w|Gottfried Leibniz}} in {{w|Paris}} develops a binary counting system that forms the abstract basis of modern computers.<ref name="Laurent">{{cite web |last1=Bloch |first1=Laurent |title=Informatics in the light of some Leibniz’s works |url=https://www.laurentbloch.net/MySpip3/IMG/pdf/leibniz-article.pdf |website=laurentbloch.net |access-date=9 March 2022}}</ref><ref name="Mijwil"/> || {{w|France}}
|-
| 1703 || || {{w|Gottfried Leibniz}} foresees how binary arithmetic could be suited to automatic calculation.<ref name="Laurent"/> ||
|-
| 1726 || || Jonathan Swift publishes ''Gulliver's Travels'', which includes a description of the Engine, a machine on the island of Laputa that parodies Llull's ideas: "a Project for improving speculative Knowledge by practical and mechanical Operations." Using this contrivance, even the most ignorant person could, at reasonable cost and with a little bodily labour, write books on philosophy, poetry, politics, law, mathematics, and theology with the least assistance from genius or study.<ref name="forbes.coms"/> ||
|-
| 1763 || || English statistician {{w|Thomas Bayes}} develops a framework for reasoning about the probability of events. {{w|Bayesian inference}} would become a leading approach in machine learning.<ref name="forbes.coms"/><ref>{{cite web |last1=Kumar |first1=Ajitesh |title=12 Bayesian Machine Learning Applications Examples |url=https://vitalflux.com/bayesian-machine-learning-applications-examples/ |website=Data Analytics |access-date=7 March 2022 |date=17 September 2021}}</ref> || {{w|United Kingdom}} ({{w|Kingdom of Great Britain}})
|-
| 1801 || || {{w|Joseph-Marie Jacquard}} invents the Jacquard loom, the first programmable machine, with instructions on punched cards.<ref name="aitopics.org"/> ||
|-
| 1854 || || Self-taught English mathematician, philosopher, and logician {{w|George Boole}} argues that logical reasoning can be carried out systematically, in the same manner as solving a system of equations. He develops a binary algebra representing some "laws of thought," published in his work ''{{w|The Laws of Thought}}'' (1854).<ref name="aitopics.org"/> ||
|-
| 1863 || || English novelist [[w:Samuel Butler (novelist)|Samuel Butler]] suggests that {{w|Darwinian evolution}} also applies to machines, and speculates that they will one day become conscious and eventually supplant humanity.<ref name="sutori.comd">{{cite web |title=The History Of Artificial Intelligence |url=https://www.sutori.com/story/the-history-of-artificial-intelligence--4qEzQz1PPuA9Wo4mBkv2a9BX |website=sutori.com |accessdate=20 March 2020}}</ref> || {{w|United Kingdom}}
|-
| 1879 || || German philosopher, logician, and mathematician {{w|Gottlob Frege}} develops modern propositional logic in his work ''{{w|Begriffsschrift}}'', which would later be clarified and expanded by Russell, Tarski, Gödel, Church, and others.<ref name="aitopics.org"/> || {{w|Germany}}
|-
| 1898 || || Nikola Tesla showcases the world's first radio-controlled boat at an electrical exhibition in the newly built Madison Square Garden. Tesla describes the vessel as having "a borrowed mind."<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1910 || || ''{{w|Principia Mathematica}}'' is published by {{w|Bertrand Russell}} and {{w|Alfred North Whitehead}}. This book would have a significant impact on formal logic. Russell, along with Ludwig Wittgenstein and Rudolf Carnap, would pave the way for a logical analysis of knowledge in philosophy.<ref name="aitopics.org"/> || {{w|United Kingdom}}
|-
| 1912 || || Torres y Quevedo builds a chess machine called El Ajedrecista, which uses electromagnets beneath the board to play out the endgame of rook and king against a lone king. It is believed to be the earliest example of a computer game.<ref name="aitopics.org"/> || {{w|Spain}}
|-
| 1914 || || Spanish engineer Leonardo Torres y Quevedo demonstrates his chess-playing machine, capable of playing king-and-rook versus king endgames without any human intervention.<ref name="forbes.coms"/> || {{w|Spain}}
|-
| 1921 || || The term "robot" is first introduced by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots). The word is derived from "robota," which means "work" in Czech. The play explores the idea of artificial workers who ultimately turn against their human creators.<ref name="forbes.coms"/> ||
|-
| 1925 || || U.S. electrical engineer Francis P. Houdina demonstrates a radio-controlled car called the "American Wonder" on the streets of New York City. The car travels at speeds of up to 20 mph, turns corners, stops on command, and avoids obstacles such as pedestrians and other cars. The demonstration generates considerable interest in the concept of driverless cars, but the technology is not yet advanced enough to be practical, and the American Wonder would never be put into production.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1927 || || The science-fiction film ''Metropolis'' is released. It features a robot double of a peasant girl, Maria, which unleashes chaos in the Berlin of 2026. It is the first robot depicted on film, and would inspire the Art Deco look of C-3PO in ''Star Wars''.<ref name="forbes.coms"/> ||
|-
| 1929 || || The first robot built in Japan is designed by Makoto Nishimura and named Gakutensoku, meaning "learning from the laws of nature." The robot can change its facial expression and move its head and hands via an air-pressure mechanism.<ref name="forbes.coms"/> || {{w|Japan}}
|-
| 1931 || || {{w|Kurt Gödel}} publishes his incompleteness theorems, which come to bear his name. With this work, Gödel lays the foundations of theoretical computer science and AI.<ref name="Mijwil"/><ref name="people.idsia.ch">{{cite web |title=Artificial Intelligence |url=http://people.idsia.ch/~juergen/ai.html |website=people.idsia.ch |accessdate=21 March 2020}}</ref> ||
|-
| 1936 || || German engineer {{w|Konrad Zuse}} builds the Z1, a programmable computer with 64K of memory.<ref name="Mijwil"/> || {{w|Germany}}
|-
| 1936–1937 || || English mathematician {{w|Alan Turing}} proposes the universal Turing machine.<ref name="aitopics.org"/> || {{w|United Kingdom}}
|-
| 1943 || || Warren McCulloch, a neurophysiologist at the University of Illinois, and Walter Pitts, a mathematician at the University of Chicago, publish an influential paper on neural networks and automatons. They suggest that each neuron in the brain functions as a basic digital processor and that the brain as a whole is a form of computing machine. This idea would have a significant impact on the field of artificial intelligence and would provide a theoretical foundation for the use of neural networks in modern technology.<ref name="javatpoint.coma">{{cite web |title=History of Artificial Intelligence |url=https://www.javatpoint.com/history-of-artificial-intelligence |website=javatpoint.com |accessdate=7 February 2020}}</ref><ref name="britannica.coms"/> ||
|-
| 1943 || Concept development || Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "{{w|cybernetics}}" in a paper. Wiener would publish a popular book by that name in 1948.<ref name="aitopics.org"/> ||
|-
| 1943 || || Emil Post demonstrates that production systems are a universal computational mechanism; his work on completeness, inconsistency, and proof theory would also prove significant. Production systems would later find wide application in artificial intelligence, notably in rule-based expert systems.<ref name="aitopics.org"/> ||
|-
| 1945 || Literature || Hungarian American mathematician {{w|George Polya}} publishes his best-selling book on thinking heuristically, ''{{w|How to Solve It}}''. This book introduces the term 'heuristic' into modern thinking and would influence many AI scientists.<ref name="aitopics.org"/> || {{w|United States}}
|-
| 1945 || Literature || American engineer {{w|Vannevar Bush}} publishes ''{{w|As We May Think}}'', a prescient vision of the future in which computers assist humans in many activities.<ref name="aitopics.org"/> || {{w|United States}}
|-
| 1946 || || ENIAC (Electronic Numerical Integrator and Computer), often considered the first general-purpose electronic computer, becomes operational. It occupies an entire room and weighs 30 tons.<ref name="Mijwil"/> || {{w|United States}}
|-
| 1948 || || {{w|John von Neumann}} introduces the idea of the self-replicating program.<ref name="Mijwil"/> ||
|-
| 1949 || || American computer scientist {{w|Edmund Berkeley}} publishes ''Giant Brains: Or Machines That Think'', in which he discusses the recent news of "strange giant machines" that can handle information with vast speed and skill. He compares them to a brain made of hardware and wire instead of flesh and nerves, and concludes that since a machine can handle information, calculate, conclude, and choose, and can perform reasonable operations with information, a machine can therefore think.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1949 || || Donald Hebb publishes ''The Organization of Behavior: A Neuropsychological Theory'', proposing a theory of learning based on the ability of synapses in neural networks to strengthen or weaken over time. Hebb demonstrates an updating rule for modifying the connection strength between neurons, which would later be known as Hebbian learning.<ref name="forbes.coms"/><ref name="javatpoint.coma"/> ||
|-
| 1950 || || Claude Shannon publishes "Programming a Computer for Playing Chess", the first published article on developing a chess-playing computer program. He argues that the number of possible moves in a chess game is so vast that no machine could consider all of them, so a chess-playing program must instead use search algorithms to select promising moves. The article becomes a landmark in the history of computer chess, laying the foundation for the first chess-playing programs developed in the 1950s and 1960s. Today, AI programs play chess at a level far superior to any human player.<ref name="forbes.coms"/><ref name="aitopics.org"/><ref name="atariarchives.org">{{cite web |title=A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE |url=https://www.atariarchives.org/deli/artificial_intelligence.php |website=atariarchives.org |accessdate=21 March 2020}}</ref> ||
|-
| 1950 || Concept development || {{w|Alan Turing}} publishes his article "Computing Machinery and Intelligence", which introduces the concept of the {{w|Turing Test}}, also known as the imitation game. This game involves a human judge trying to distinguish between a human and a machine in a teletype conversation. Turing's article is the first to raise the question of whether a machine could exhibit intelligence.<ref name="coe.intf"/> ||
|-
| 1951 || || {{w|Marvin Minsky}} and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3000 vacuum tubes to simulate a network of 40 neurons.<ref name="forbes.coms"/> ||
|-
| 1951 || || The first artificial intelligence programs are written for the {{w|Ferranti Mark 1}} machine at the {{w|University of Manchester}}.<ref name="Mijwil"/> || {{w|United Kingdom}}
|-
| 1952 || || American computer scientist [[w:Arthur Samuel (computer scientist)|Arthur Samuel]] develops the first computer checkers-playing program and the first computer program to learn on its own.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1952 || || Alan Hodgkin and Andrew Huxley publish a mathematical model of the electrical activity of neurons, later known as the Hodgkin–Huxley model: a set of nonlinear differential equations describing how the membrane potential of a neuron changes over time, with individual neurons firing in all-or-nothing pulses. The model becomes a major breakthrough in neuroscience, helping to lay the foundation for the understanding of how neurons work, and would be used to study phenomena such as the generation and propagation of action potentials and the integration of synaptic inputs. Although simplified, it remains a powerful tool for understanding how neurons work.<ref name="dataversity.netw">{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.dataversity.net/brief-history-artificial-intelligence/ |website=dataversity.net |accessdate=7 February 2020}}</ref> ||
|-
| 1953 || || Arthur Prior, a philosopher at the University of Canterbury, introduces tense logic, which helps locate statements in the flow of time and would later be used in languages for expressing time-dependent data.<ref name="britannica.coms"/> ||
|-
| 1954 || || The Georgetown–IBM experiment provides the first demonstration of machine translation (MT). A team of researchers from Georgetown University and IBM uses an IBM 701 computer to translate 60 Russian sentences into English. The system is by no means complete, consisting of only six rules and a 250-item vocabulary specialized in organic chemistry, but the experiment becomes a major milestone in the history of MT, showing that machine translation is a real possibility and paving the way for more advanced MT systems.<ref name="washington.edu"/> || {{w|United States}}
|-
| 1954 || || Belmont Farley and Wesley Clark of {{w|MIT}} run the first artificial neural network. Although limited by computer memory to 128 neurons, they are able to train the network to recognize simple patterns. They also discover that damaging up to 10 percent of the neurons does not affect the network's performance, similar to the brain's ability to tolerate limited damage. Their network exemplifies the fundamental concepts of connectionism.<ref name="britannica.coms"/> || {{w|United States}}
|-
| 1955 || || On August 31, 1955, a proposal for a "2 month, 10 man study of artificial intelligence" is submitted by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The proposal introduces the term "artificial intelligence." The resulting workshop, held in July and August 1956, would be widely regarded as the official birthdate of the field.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1955 (December) || || Herbert Simon and Allen Newell introduce the Logic Theorist, recognized as the first artificial intelligence program. This program achieves a remarkable feat by proving 38 out of the initial 52 theorems found in Whitehead and Russell's Principia Mathematica. Additionally, it discovers new and more elegant proofs for some of these theorems.<ref name="forbes.coms"/><ref name="javatpoint.coma"/> ||
|-
| 1955–1956 || || Allen Newell, J. Clifford Shaw, and Herbert Simon create the Logic Theorist, a groundbreaking program aimed at proving theorems from Principia Mathematica by Whitehead and Russell. The Logic Theorist, as it comes to be known, is capable of producing more elegant proofs than those found in the original books, marking a significant achievement in this field.<ref name="britannica.coms"/> ||
|-
| 1956 || || The inaugural "Artificial Intelligence" conference takes place at Dartmouth College in Hanover, New Hampshire. The term "artificial intelligence" had been coined in a proposal submitted by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in August 1955, and the workshop held in July and August 1956 marks the official birth of the field. This summer gathering, funded by the Rockefeller Institute, is considered the foundation of the discipline. Remarkably, it is a workshop rather than a conventional conference, and only six participants, including McCarthy and Minsky, remain consistently present throughout the work, which relies essentially on developments based on formal logic.<ref name="Mijwil"/><ref name="forbes.coms"/><ref name="coe.intf"/><ref name="washington.edu">{{cite web |title=The History of Artificial Intelligence |url=https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf |website=washington.edu |accessdate=7 February 2020}}</ref><ref name="javatpoint.coma"/> || {{w|United States}}
|-
| 1956 || || Newell and Simon present the Logic Theorist, an early AI program designed to discover proofs in propositional logic and to simulate human problem-solving skills, funded by the RAND Corporation. It is presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), the conference hosted by John McCarthy and Marvin Minsky at which the term "artificial intelligence" is coined. The Logic Theorist would often later be considered the first AI program, marking the inception of artificial intelligence as a field.<ref name="artint.info"/><ref name="harvard.edu d"/><ref name="livescience.coms">{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.livescience.com/49007-history-of-artificial-intelligence.html |website=livescience.com |accessdate=7 February 2020}}</ref><ref name="Mijwil"/> ||
|-
| 1957 || || Frank Rosenblatt develops the Perceptron, one of the first artificial neural networks, which enables pattern recognition based on a two-layer computer learning network. The New York Times describes the Perceptron as "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." The New Yorker calls it a "remarkable machine... capable of what amounts to thought."<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1957 || || Herbert Simon, an economist and sociologist, predicts that artificial intelligence will be able to defeat a human at chess within ten years. AI research then enters its first winter, and Simon's prediction ultimately comes true, but only some 30 years later.<ref name="coe.intf"/> ||
|-
| 1957 || || Allen Newell, Cliff Shaw, and Herbert Simon demonstrate the General Problem Solver (GPS). The program, developed over about a decade, is capable of solving a wide range of puzzles through a trial-and-error approach, showcasing significant problem-solving abilities.<ref name="britannica.coms"/><ref name="aitopics.org"/> ||
|-
| 1958 || || American computer scientist John McCarthy develops the {{w|Lisp}} programming language, a functional language well suited to artificial intelligence. Lisp supports recursion and recursive data structures such as lists, making it a powerful tool for representing the knowledge used in AI applications. Lisp would be used in a wide variety of AI applications, including natural language processing, machine learning, and robotics, and remains one of the most popular programming languages in artificial intelligence research.<ref name="forbes.coms"/><ref name="Mijwil"/> || {{w|United States}}
|-
| 1958 || || Herbert Gelernter's "geometry machine" becomes the first advanced AI program to prove geometric theorems, marking a significant milestone in artificial intelligence development.<ref name="omnius.com"/> ||
|-
| 1959 || || Arthur Samuel coins the term "machine learning" while reporting his work on programming a computer to improve its checkers game-playing skills beyond the capabilities of its human programmer.<ref name="forbes.coms"/> ||
|-
| 1959 || || Oliver Selfridge publishes ''Pandemonium: A paradigm for learning'', which describes a model in which computers can recognize patterns that have not been specified in advance. This work lays the foundation for pattern recognition and learning in AI.<ref name="forbes.coms"/> ||
|-
| 1959 || || John McCarthy publishes ''Programs with Common Sense'', in which he introduces the concept of the "Advice Taker," a program designed for problem-solving and common-sense reasoning.<ref name="forbes.coms"/> ||
|-
| 1959 || || Arthur Samuel builds a checkers program and, by the late 1950s, a program that learns how to play checkers.<ref name="artint.info">{{cite web |title=1.2 A Brief History of Artificial Intelligence |url=https://artint.info/2e/html/ArtInt2e.Ch1.S2.html |website=artint.info |accessdate=21 March 2020}}</ref> ||
|-
| 1960 || || American psychologist and computer scientist {{w|J. C. R. Licklider}} describes the human-machine relationship in his work ''Man-Computer Symbiosis''.<ref name="Mijwil"/> || {{w|United States}}
|-
| 1961 || || {{w|Unimate}}, the first industrial robot, starts working on an assembly line in a General Motors plant in New Jersey.<ref name="forbes.coms"/><ref>{{cite book |title=Engineers: From the Great Pyramids to the Pioneers of Space Travel |date=16 April 2012 |publisher=Penguin |isbn=978-1-4654-0682-8 |url=https://books.google.com.ar/books?id=4M01NTdvu3kC&pg=PA238&dq=unimate+1961&hl=en&sa=X&ved=2ahUKEwi-jZLTjcH2AhWcqZUCHTQwAcEQ6AF6BAgKEAI#v=onepage&q=unimate%201961&f=false |language=en}}</ref> || {{w|United States}}
|-
| 1961 || || James Slagle in his PhD dissertation writes in Lisp the first symbolic integration program, SAINT, which solves calculus problems at the college freshman level.<ref name="aitopics.org"/> ||
|-
| 1961 || || American computer scientist {{w|James Robert Slagle}} develops SAINT (Symbolic Automatic INTegrator), a heuristic program designed to solve symbolic integration problems typically found in freshman calculus.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1963 || || Reed C. Lawlor, a member of the California Bar, authors an article titled ''What Computers Can Do: Analysis and Prediction of Judicial Decisions''. The article explores the potential of computers in analyzing and predicting judicial decisions.<ref name="coe.intf"/> ||
|-
| 1963 || || Thomas Evans develops a program called ANALOGY as part of his MIT PhD work. This program demonstrates that computers are capable of solving analogy problems similar to those found on IQ tests.<ref name="aitopics.org"/> ||
|-
| 1963 || || Ivan Sutherland's MIT dissertation on Sketchpad introduces the concept of interactive graphics into the field of computing.<ref name="aitopics.org"/> || {{w|United States}}
|-
| 1963 || || Edward A. Feigenbaum and Julian Feldman publish ''Computers and Thought'', which is the first collection of articles focused on artificial intelligence.<ref name="aitopics.org"/> ||
|-
| 1964 || || Daniel Bobrow completes his MIT PhD dissertation titled ''Natural Language Input for a Computer Problem Solving System'' and creates STUDENT, a computer program for natural language understanding.<ref name="forbes.coms"/> ||
|-
| 1964 || || The {{w|Society for the Study of Artificial Intelligence and the Simulation of Behaviour}} is founded. It is the oldest AI society in the world. || {{w|United Kingdom}}
|-
| 1964 || || Danny Bobrow's MIT dissertation demonstrates that computers can understand natural language well enough to correctly solve algebra word problems.<ref name="aitopics.org"/> ||
|-
| 1964 || || Bert Raphael's MIT dissertation on the SIR program showcases the effectiveness of a logical knowledge representation for question-answering systems.<ref name="aitopics.org"/> ||
|-
| 1965 || || Herbert Simon predicts in ''The Shape of Automation for Men and Management'' that "machines will be capable, within twenty years, of doing any work a man can do."<ref name="historyextra">{{cite web |title=7 phases of the history of Artificial intelligence |url=https://www.historyextra.com/period/second-world-war/7-phases-of-the-history-of-artificial-intelligence/ |website=historyextra.com |accessdate=21 March 2020}}</ref><ref name="forbes.coms"/> ||
|-
| 1965 || || American philosopher {{w|Hubert Dreyfus}} publishes ''Alchemy and AI'', which argues that the mind is not like a computer and that there are limits beyond which artificial intelligence would not progress.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1965 || || I. J. Good writes in "Speculations Concerning the First Ultraintelligent Machine" that the first ultraintelligent machine could be humanity's last invention, provided that the machine is docile enough to tell us how to keep it under control.<ref name="forbes.coms"/> ||
|-
| 1965 || || Joseph Weizenbaum, a researcher at MIT, creates ELIZA, an interactive program that engages in conversations in English on various subjects. Initially a well-liked application at AI centers on the ARPA-net, a modified version would imitate the conversation style of a psychotherapist. Weizenbaum's objective was to exhibit the superficial nature of communication between humans and machines, but he would be taken aback by the number of individuals attributing human-like emotions to the program.<ref name="forbes.coms"/><ref name="aitopics.org"/> ||
|-
| 1965 || || Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi begin developing DENDRAL at Stanford University. DENDRAL is the first expert system, designed to automate the decision-making and problem-solving tasks performed by organic chemists. Its primary goal is to explore hypothesis formation and the creation of models for empirical induction in scientific research.<ref name="forbes.coms"/><ref name="coe.intf"/> || {{w|United States}}
|-
| 1965 || || J. Alan Robinson develops the Resolution Method, a mechanical proof procedure that enables programs to efficiently work with formal logic as a representation language.<ref name="aitopics.org"/> ||
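|-
| colspan="4" | A minimal sketch of a single resolution step on propositional clauses, illustrating the mechanical procedure described above (the clause encoding and the example clauses are illustrative assumptions, not Robinson's notation):
<syntaxhighlight lang="python">
# Clauses are frozensets of signed literals: ("P", True) means P, ("P", False) means not-P.
def resolve(c1, c2):
    """Return all resolvents obtained by cancelling one complementary literal pair."""
    resolvents = []
    for (name, sign) in c1:
        if (name, not sign) in c2:
            merged = (c1 - {(name, sign)}) | (c2 - {(name, not sign)})
            resolvents.append(frozenset(merged))
    return resolvents

c1 = frozenset({("P", True), ("Q", True)})   # P or Q
c2 = frozenset({("P", False), ("R", True)})  # not-P or R
print(resolve(c1, c2))  # one resolvent: {Q, R}
</syntaxhighlight>
Deriving the empty clause by repeated resolution steps signals that the original clause set is unsatisfiable, which is how resolution-based theorem provers establish proofs by refutation.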
|-
| 1966 || || Shakey the robot is introduced as the first general-purpose mobile robot capable of reasoning about its own actions. An article in Life magazine in 1970 refers to Shakey as the "first electronic person," and Marvin Minsky predicts that within three to eight years, a machine with the general intelligence of an average human would be achieved.<ref name="forbes.coms"/> ||
|-
| 1966 || || Joseph Weizenbaum, a German-American computer scientist at MIT, completes ELIZA, widely described as the first chatbot. ELIZA uses scripts to simulate various conversation partners, such as a psychotherapist, and Weizenbaum is surprised at the simplicity of the means required for ELIZA to create the illusion of a human conversation partner.<ref name="bosch.coms">{{cite web |title=The history of artificial intelligence |url=https://www.bosch.com/stories/history-of-artificial-intelligence/ |website=bosch.com |accessdate=7 February 2020}}</ref><ref name="javatpoint.coma"/> ||
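|-
| colspan="4" | A minimal ELIZA-style sketch, assuming a few hypothetical pattern-and-response rules rather than Weizenbaum's original script; it shows how simple keyword matching and pronoun reflection can create the illusion of a conversation:
<syntaxhighlight lang="python">
import re

RULES = [
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i need (.*)", "Why do you need {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback always matches
]
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text):
    # Swap first and second person so the echo reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(utterance):
    u = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, u)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am sad"))       # How long have you been sad?
print(respond("I need a rest"))  # Why do you need a rest?
</syntaxhighlight>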
|-
| 1966 || || The ALPAC report, known for its skepticism about machine translation research and its call for increased focus on basic computational linguistics research, results in a significant reduction in U.S. government funding for this field. This report, along with the 1973 Lighthill report for the British government, contributes to the onset of the AI winter, a period marked by reduced funding and interest in artificial intelligence research.<ref name="washington.edu"/><ref name="aitopics.org"/> ||
|-
| 1966 || Organization || Canadian engineer [[w:Charles Rosen (scientist)|Charles Rosen]] founds the {{w|Artificial Intelligence Center}}.<ref>{{cite web |title=AIC Timeline |url=http://www.ai.sri.com/timeline/ |website=ai.sri.com |accessdate=6 March 2020}}</ref> ||
|-
| 1966 || || Ross Quillian, in his PhD dissertation at {{w|Carnegie Institute of Technology}}, demonstrates {{w|semantic network}}s, graphic depictions of knowledge composed of nodes and links that show hierarchical relationships between objects.<ref name="aitopics.org"/><ref>{{cite web |title=Semantic Network - an overview {{!}} ScienceDirect Topics |url=https://www.sciencedirect.com/topics/computer-science/semantic-network |website=www.sciencedirect.com |access-date=5 March 2022}}</ref> Semantic networks are an alternative to {{w|first-order logic}} as a form of knowledge representation.<ref>{{cite web |title=Notes on Semantic Nets and Frames |url=http://www.eecs.qmul.ac.uk/~mmh/AINotes/AINotes4.pdf |website=eecs.qmul.ac.uk |access-date=5 March 2022}}</ref> || {{w|United States}}
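|-
| colspan="4" | A minimal sketch of a semantic network with property inheritance over "isa" links, in the spirit of Quillian's networks (the example nodes and properties are invented for illustration):
<syntaxhighlight lang="python">
ISA = {"canary": "bird", "bird": "animal"}  # hierarchical links between nodes
PROPERTIES = {
    "canary": {"color": "yellow"},
    "bird": {"can_fly": True},
    "animal": {"alive": True},
}

def lookup(node, prop):
    # Walk up the isa-hierarchy until the property is found (inheritance).
    while node is not None:
        if prop in PROPERTIES.get(node, {}):
            return PROPERTIES[node][prop]
        node = ISA.get(node)
    return None

print(lookup("canary", "can_fly"))  # True, inherited from "bird"
print(lookup("canary", "alive"))    # True, inherited from "animal"
</syntaxhighlight>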
|-
| 1966 || || The first Machine Intelligence workshop takes place in Edinburgh, marking the beginning of an influential annual series of workshops organized by Donald Michie and others.<ref name="aitopics.org"/> || {{w|United Kingdom}}
|-
| 1967 || || The Dendral program, developed by Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, and Georgia Sutherland at Stanford University, successfully demonstrates the interpretation of mass spectra on organic chemical compounds. This achievement marks the first successful knowledge-based program for scientific reasoning.<ref name="aitopics.org"/> ||
|-
| 1967 || || Joel Moses, during his PhD work at MIT, demonstrates the effectiveness of symbolic reasoning for integration problems through the Macsyma program. This marks a significant milestone as the first successful knowledge-based program in mathematics.<ref name="aitopics.org"/> ||
|-
| 1967 || || Richard Greenblatt at MIT develops MacHack, a knowledge-based chess-playing program that achieves a class-C rating in tournament play, a notable advancement in computer chess.<ref name="aitopics.org"/> ||
|-
| 1968 || || Stanley Kubrick's film ''{{w|2001: A Space Odyssey}}'' is released, featuring HAL 9000, a sentient computer that raises questions about the sophistication, benefits, and dangers of AI. While not a scientific contribution, the film would play a significant role in popularizing AI themes and ethical questions. Science fiction authors like Philip K. Dick also explore the idea of machines experiencing emotions, contributing to the discourse around AI.<ref name="forbes.coms"/><ref name="coe.intf"/> ||
|-
| 1968 || || American computer scientist {{w|Terry Winograd}} creates SHRDLU, a pioneering natural language understanding system capable of manipulating and reasoning about a simulated world of blocks based on user instructions. Users interact with SHRDLU in English, giving commands and queries about the arrangement and manipulation of blocks. The system demonstrates significant progress in natural language understanding and semantic interpretation, laying groundwork for future work in human-computer interaction and AI reasoning.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1969 || || {{w|Arthur E. Bryson}} and Yu-Chi Ho describe {{w|backpropagation}} as a multi-stage dynamic system optimization method. While it doesn't gain prominence immediately, this learning algorithm for multi-layer artificial neural networks would later play a significant role in the success of deep learning during the 2000s and 2010s, as computing power advances to enable the training of large neural networks.<ref name="forbes.coms"/> ||
|-
| 1969 || || Marvin Minsky and Seymour Papert publish ''Perceptrons: An Introduction to Computational Geometry'', which highlights the limitations of simple neural networks called perceptrons. In an expanded edition in 1988, they would argue that their 1969 conclusions did not significantly reduce funding for neural network research; rather, progress had already stalled by the mid-1960s because, despite many experiments with perceptrons, no adequate basic theory explained why the networks could recognize certain kinds of patterns and not others.<ref name="forbes.coms"/> ||
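|-
| colspan="4" | A minimal sketch of the point Minsky and Papert analyzed, using Rosenblatt's perceptron learning rule (the epoch count and learning rate are arbitrary choices): a single-layer perceptron learns the linearly separable AND function but cannot learn XOR, which is not linearly separable:
<syntaxhighlight lang="python">
def train(samples, epochs=25, lr=0.1):
    w0 = w1 = b = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - out          # perceptron update rule
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return lambda x0, x1: 1 if w0 * x0 + w1 * x1 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("AND", AND), ("XOR", XOR)]:
    predict = train(data)
    ok = all(predict(*x) == t for x, t in data)
    print(name, "learned" if ok else "not learned (not linearly separable)")
</syntaxhighlight>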
|-
| 1969 || Conference || The first {{w|International Joint Conference on Artificial Intelligence}} (IJCAI) is held in {{w|Washington, D.C.}}<ref name="aitopics.org"/> || {{w|United States}}
|-
| 1969 || || The SRI robot [[w:Shakey the robot|Shakey]] demonstrates the ability to combine locomotion, perception, and problem solving. Equipped with a television camera, a laser range finder, and bump sensors, Shakey can see its surroundings, measure distances to objects, and detect obstacles, while its problem-solving system lets it plan movements and solve simple problems. A major breakthrough in robotics, Shakey shows that a robot can interact with its environment in a meaningful way, paving the way for the development of more advanced mobile robots.<ref name="aitopics.org"/> || {{w|United States}}
|-
| 1969 || || Roger Schank, a researcher at {{w|Stanford University}}, introduces the conceptual dependency model for natural language understanding. This model would be further developed for applications in story understanding by Robert Wilensky and Wendy Lehnert during their PhD dissertations at Yale University. Additionally, Janet Kolodner would expand its use in understanding memory.<ref name="aitopics.org"/> || {{w|United States}}
|-
| 1970 || Literature || Journal ''[[w:Artificial Intelligence (journal)|Artificial Intelligence]]'' is first published by {{w|Elsevier}}.<ref>{{cite web |title=Artificial Intelligence Journal Division of IJCAI |url=https://www.ijcai.org/aijd |website=ijcai.org |accessdate=6 March 2020}}</ref> || {{w|Netherlands}}
|-
| 1970 || || Waseda University in Japan creates the WABOT-1, the first anthropomorphic robot. This robot features limb control, a vision system, and a conversation system, marking a significant advancement in robotics.<ref name="forbes.coms"/> || {{w|Japan}}
|-
| 1970 || || {{w|Marvin Minsky}} expresses optimism to Life Magazine, suggesting that within three to eight years, a machine with the general intelligence of an average human being would be developed. However, despite the progress made in basic principles, there is still a considerable distance to cover before achieving goals like natural language processing, abstract thinking, and self-recognition in AI.<ref name="harvard.edu d"/> ||
|-
| 1970 || || Jaime Carbonell, Sr. develops SCHOLAR, an interactive program for computer-aided instruction based on semantic nets as the representation of knowledge.<ref name="aitopics.org"/> SCHOLAR is perhaps the first intelligent tutoring system.<ref>{{cite book |last1=Harris |first1=Randy Allen |title=Voice Interaction Design: Crafting the New Conversational Speech Systems |date=31 December 2004 |publisher=Elsevier |isbn=978-0-08-047480-9 |url=https://books.google.com.ar/books?id=92ISybAfXagC&pg=PA154&lpg=PA154&dq=Jaime+Carbonell+scholar+1970&source=bl&ots=SaN0AIjkG2&sig=ACfU3U2d2asn3HHm4EUodK8XSFIyzg5DZA&hl=en&sa=X&ved=2ahUKEwjBuNqzyNj2AhVsg5UCHS8vBckQ6AF6BAgQEAM#v=onepage&q=Jaime%20Carbonell%20scholar%201970&f=false |language=en}}</ref> || {{w|United States}}
|-
| 1970 || || Bill Woods describes {{w|Augmented Transition Networks}} (ATN) as a representation for natural language understanding.<ref name="aitopics.org"/> The ATN is a formalism for writing parsing grammars that would be widely used in artificial intelligence and {{w|computational linguistics}}.<ref>{{cite journal |last1=Shapiro |first1=Stuart C. |title=Generalized augmented transition network grammars for generation from semantic networks |journal=Computational Linguistics |date=1 January 1982 |volume=8 |issue=1 |pages=12–25 |doi=10.5555/972923.972925 |url=https://dl.acm.org/doi/10.5555/972923.972925 |issn=0891-2017}}</ref> ||
|-
| 1970 || || In his PhD work at MIT, Patrick Winston develops ARCH, a program that learns concepts from examples in the world of children's building blocks.<ref name="aitopics.org"/> ||
|-
| 1971 || || Terry Winograd's MIT PhD thesis showcases computers' capacity to comprehend English sentences within a limited context involving children's building blocks. He achieves this by integrating his language comprehension program, SHRDLU, with a robot arm that executes instructions provided in English text.<ref name="aitopics.org"/> ||
|-
| 1972 || {{w|Expert system}} || Work begins at Stanford University on MYCIN, an early expert system designed to diagnose severe blood infections, identify the bacteria responsible, and recommend suitable antibiotics based on reported symptoms and medical test results. MYCIN represents a pioneering application of artificial intelligence in medicine, utilizing rules, formulas, and a knowledge database to assist in diagnosing and treating illnesses.<ref name="harvard.edu d"/><ref name="coe.intf"/><ref name="bosch.coms"/><ref name="britannica.coms"/> ||
|-
| 1972 || || The WABOT-1, developed by a team of researchers at {{w|Waseda University}} in Tokyo led by Ichiro Kato, becomes the first full-scale humanoid intelligent robot built in the world. It is able to walk, talk, and interact with people in a limited way. A major breakthrough in robotics, the WABOT-1 shows that it is possible to build a robot that can interact with humans in a meaningful way, and the research behind it would help pave the way for more advanced humanoid robots, such as Honda's ASIMO.<ref name="javatpoint.coma"/> || {{w|Japan}}
|-
| 1972 || Literature || {{w|Hubert Dreyfus}} publishes ''What Computers Can't Do''.<ref>{{cite web |last1=Dreyfus |first1=Hubert L. |title=What Computers Still Can't Do: A Critique of Artificial Reason |url=https://mitpress.mit.edu/books/what-computers-still-cant-do |website=mitpress.mit.edu |publisher=MIT Press |access-date=21 March 2022 |language=en |date=30 October 1992}}</ref> ||
|-
| 1972 || || French computer scientist {{w|Alain Colmerauer}}, at the University of Aix-Marseille, develops {{w|Prolog}} (''Programmation en Logique''), a {{w|programming language}} commonly used for artificial intelligence and symbolic reasoning.<ref name="aitopics.org"/> Prolog would be further developed by Robert Kowalski, a logician at the University of Edinburgh. The language employs resolution, the theorem-proving technique invented by British logician Alan Robinson, to determine the logical validity of statements, and it would be widely used in AI research, particularly in Europe and Japan.<ref name="britannica.coms"/> ||
|-
| 1972 || || Alan Kay, Dan Ingalls, and Adele Goldberg at Xerox PARC introduce the Smalltalk programming language. Smalltalk is a groundbreaking, purely object-oriented language primarily created for teaching programming to young individuals. It emphasizes the message-passing paradigm, marking a significant development in object-oriented programming and icon-oriented interfaces.<ref name="aitopics.org"/><ref>{{cite web |last1=Eng |first1=Richard Kenneth |title=Celebrating 50 Years of Smalltalk |url=https://itnext.io/celebrating-50-years-of-smalltalk-172d4e664d30 |website=Medium |access-date=9 September 2023 |language=en |date=23 July 2022}}</ref> ||
|-
| 1973 || || James Lighthill is commissioned by the head of the British Science Research Council, Brian Flowers, to evaluate requests for support in AI research. His report, "Artificial Intelligence: A General Survey," published in 1973, concludes that the discoveries made in the field of AI research have not lived up to the earlier promises of major impact. This pessimistic prognosis would result in reduced government funding for AI research, and the report would become commonly known as the "Lighthill report."<ref name="harvard.edu d"/><ref name="washington.edu"/> || {{w|United Kingdom}}
|-
| 1973 || || DARPA initiates the development of protocols known as TCP/IP.<ref name="Mijwil"/> ||
|-
| 1974 || Conference || The first {{w|European Conference on Artificial Intelligence}} (ECAI) is held.<ref>{{cite web |title=ECAI 2010 |url=https://www.iospress.nl/book/ecai-2010/ |website=iospress.nl |accessdate=6 March 2020}}</ref> ||
|-
| 1974 || || Ted Shortliffe's PhD dissertation at Stanford University showcases the effectiveness of rule-based systems in the realm of medical diagnosis and treatment, specifically focusing on MYCIN. This work is often regarded as a pioneering example of an expert system in the field of artificial intelligence.<ref name="aitopics.org"/> ||
|-
| 1974 || || Earl Sacerdoti develops ABSTRIPS, one of the earliest planning programs, and introduces techniques for hierarchical planning that would have a substantial impact on AI planning systems.<ref name="aitopics.org"/> ||
|-
| 1975 || || Marvin Minsky publishes a highly influential article on ''Frames'' as a knowledge representation. This work brings together various ideas related to schemas and semantic links, contributing significantly to the field of artificial intelligence and knowledge representation.<ref name="aitopics.org"/> ||
|-
| 1975 || || The Meta-Dendral learning program achieves a significant milestone by generating new findings in chemistry, specifically in the realm of mass spectrometry. These results mark the first instance of scientific discoveries made by a computer that are published in a peer-reviewed journal.<ref name="aitopics.org"/> ||
|-
| 1976 || || Computer scientist Raj Reddy publishes a seminal paper titled ''Speech Recognition by Machine: A Review'' in the Proceedings of the IEEE. This paper provides a comprehensive overview of the early developments in Natural Language Processing (NLP) and speech recognition by machines.<ref name="forbes.coms"/> ||
|-
| 1976 || || AI research faces challenges as processing power fails to match the promising theoretical advancements made by computer scientists. Roboticist Hans Moravec asserts that computers are "still millions of times too weak to exhibit intelligence," highlighting the limitations in computational capabilities during that era.<ref name="futureoftech.org"/> ||
|-
| 1976 || || Doug Lenat's AM program, which is the subject of his Stanford PhD dissertation, showcases the discovery model, involving a loosely-guided search for intriguing conjectures.<ref name="aitopics.org"/> ||
|-
| 1976 || || Randall Davis demonstrates the significance of meta-level reasoning through his PhD dissertation at Stanford University.<ref name="aitopics.org"/> ||
|-
| Mid-1970s || || American computer scientist {{w|Barbara J. Grosz}} at SRI establishes the limits of traditional AI approaches to discourse modeling. Her subsequent work with Bonnie Webber and Candace Sidner introduces the concept of "centering," which would become important in determining discourse focus and managing anaphoric references in natural language processing (NLP).<ref name="aitopics.org"/> || {{w|United States}}
|-
| Mid-1970s || || British neuroscientist [[w:David Marr (neuroscientist)|David Marr]] and his colleagues at MIT propose a theory of visual perception that includes the concept of the "primal sketch," a low-level representation of the visual world based on the edges and textures of surfaces. The primal sketch is the first step in Marr's hierarchical model of how the brain processes visual information.<ref name="aitopics.org"/> ||
|-
| 1977 || Organization || {{w|iLabs}} is founded.<ref>{{cite web |title=ILabs |url=https://www.semanticscholar.org/topic/ILabs/1906300 |website=semanticscholar.org |accessdate=6 March 2020}}</ref> || {{w|Italy}}
|-
| 1978 || Expert system || The XCON (eXpert CONfigurer) program, which is a rule-based expert system, is developed at Carnegie Mellon University. XCON aims to assist in the ordering of DEC's VAX computers by automatically selecting the components based on the customer's specific requirements. This marks an important milestone in the development of expert systems, showcasing their ability to automate complex decision-making processes.<ref name="forbes.coms"/> ||
|-
| 1978 || || Japan's Ministry of International Trade and Industry (MITI) initiates a study to explore the future of computers. Three years later, MITI would embark on a project to develop fifth-generation computers, aiming to achieve a significant advancement in computer technology. These new computers are intended to surpass existing technology, relying on multiprocessor machines specialized in logic programming instead of standard microprocessors. The goal is to position Japan as a technological leader in information processing and artificial intelligence, betting on high-power logic machines to catalyze these advancements.<ref name="washington.edu"/> ||
|-
| 1978 || || Herbert Simon is awarded the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of artificial intelligence.<ref name="Mijwil"/><ref name="aitopics.org"/> ||
|-
| 1978 || || Tom Mitchell, based at Stanford, introduces the concept of Version Spaces, a framework for describing the search space in concept formation programs.<ref name="aitopics.org"/> ||
|-
| 1978 || || The MOLGEN program, developed by Mark Stefik and Peter Friedland at Stanford, showcases the utility of an object-oriented knowledge representation for planning gene-cloning experiments.<ref name="aitopics.org"/> ||
|-
| 1979 || || The Stanford Cart achieves the significant milestone of autonomously navigating a room filled with chairs, completing the task in approximately five hours. This accomplishment marks one of the early instances of an autonomous vehicle demonstrating its capabilities.<ref name="forbes.coms"/> ||
|-
| 1979 || || The {{w|Association for the Advancement of Artificial Intelligence}} is founded.<ref>{{cite web |title=The Association for the Advancement of Artificial Intelligence (AAAI) |url=https://www.omicsonline.org/societies/the-association-for-the-advancement-of-artificial-intelligence-aaai/ |website=www.omicsonline.org |access-date=21 March 2022}}</ref> || {{w|United States}}
|-
| 1979 || || The MYCIN program, initially developed as Ted Shortliffe's Ph.D. dissertation at Stanford, is demonstrated to perform at the level of experts. Another significant development is Bill VanMelle's Ph.D. dissertation at Stanford, which showcases the generality of MYCIN's knowledge representation and reasoning style in his EMYCIN program. EMYCIN serves as a model for many commercial expert system "shells," marking a milestone in the field of artificial intelligence and expert systems.<ref name="aitopics.org"/> ||
|-
| 1979 || || Jack Myers and Harry Pople at the University of Pittsburgh develop INTERNIST, a knowledge-based medical diagnosis program that leverages Dr. Myers' clinical expertise, a significant advancement in the application of artificial intelligence to medical diagnosis.<ref name="aitopics.org"/> || {{w|United States}}
|-
| 1979 || || Cordell Green, David Barstow, Elaine Kant, and their team at Stanford demonstrate the CHI system, which is designed for automatic programming. This system marks a notable development in the field of artificial intelligence and its applications in automating programming tasks.<ref name="aitopics.org"/> ||
|-
| 1979 || || Drew McDermott and Jon Doyle at MIT, along with John McCarthy at Stanford, begin publishing research on non-monotonic logics and formal aspects of truth maintenance. Their work in this area would contribute to advancing the understanding and development of logic-based systems in artificial intelligence.<ref name="aitopics.org"/> ||
|-
| Late 1970s || || Stanford's SUMEX-AIM resource, led by Ed Feigenbaum and Joshua Lederberg, showcases the potential of the ARPAnet for facilitating scientific collaboration, highlighting the impact of computer networks on research and information sharing in the field of artificial intelligence and beyond.<ref name="aitopics.org"/> ||
|-
| 1980 || || Computer scientist Edward Feigenbaum plays a pivotal role in rekindling AI research by championing the development of "expert systems." These systems learn by consulting experts in a particular domain to gather responses for various situations. Once these expert responses are collected and compiled for a wide range of scenarios in that domain, the expert system can offer specialized guidance to non-experts in that field, marking a significant advancement in AI research.<ref name="futureoftech.org">{{cite web |title=The History of Artificial Intelligence |url=https://www.futureoftech.org/artificial-intelligence/5-history-of-ai/ |website=futureoftech.org |accessdate=9 March 2020}}</ref>
|-
| 1980 || Expert system || After the AI winter period, AI experiences a resurgence with the introduction of "expert systems", programs designed to replicate the decision-making capabilities of human experts in narrow domains.<ref name="javatpoint.coma"/> AI research benefits from an expansion of funding and algorithmic tools.<ref name="data-flair.training">{{cite web |title=History of Artificial Intelligence – AI of the past, present and the future! |url=https://data-flair.training/blogs/history-of-artificial-intelligence/ |website=data-flair.training |accessdate=4 March 2020}}</ref> ||
|-
| 1980 || || Waseda University in Japan develops Wabot-2, a humanoid musician robot able to interact with humans, read musical scores, and play moderately complex tunes on an electronic organ.<ref name="forbes.coms"/> || {{w|Japan}}
|-
| 1980 || Expert system || Digital Equipment Corporation (DEC) implements an expert system called XCON to assist its sales team in placing customer orders. DEC sells a broad range of computer components, but its sales force is not especially knowledgeable about the products; XCON helps streamline the ordering process and improve customer service.<ref name="dataversity.netw"/> ||
|-
| 1980 || Conference || The first national conference of the American Association of Artificial Intelligence (AAAI) is held at Stanford University.<ref name="javatpoint.coma"/><ref name="aitopics.org"/> || {{w|United States}}
|-
| 1980 || || Lee Erman, Rick Hayes-Roth, Victor Lesser, and Raj Reddy publish the first description of the blackboard model, which serves as the framework for the HEARSAY-II speech understanding system.<ref name="aitopics.org"/> ||
|-
| 1980 || || The term "strong AI" is introduced by philosopher John Searle of the University of California at Berkeley to categorize a specific area of AI research.<ref name="britannica.coms">{{cite web |title=Artificial intelligence |url=https://www.britannica.com/technology/artificial-intelligence/Methods-and-goals-in-AI |website=britannica.com |accessdate=21 March 2020}}</ref> ||
|-
| 1981 || || An expert system called SID (Synthesis of Integral Design) is able to design 93% of the VAX 9000 CPU logic gates. This system, consisting of 1,000 hand-written rules, completes the CPU design in just 3 hours, surpassing human experts in various aspects. For instance, it produces a faster 64-bit adder than the manually designed one and achieves a significantly lower bug rate, reducing it from approximately 1 bug per 200 gates in human-designed systems to about 1 bug per 20,000 gates in the final output of the SID system.<ref name="dev.to">{{cite web |title=A Short History of Artificial Intelligence |url=https://dev.to/lschultebraucks/a-short-history-of-artificial-intelligence-7hm |website=dev.to |accessdate=9 March 2020}}</ref> ||
|-
| 1981 || || Danny Hillis designs the Connection Machine, a massively parallel architecture that significantly boosts the capabilities of artificial intelligence and computing in general. This development would ultimately lead to the founding of Thinking Machines Corporation.<ref name="aitopics.org"/> ||
|-
| 1981 || || Following its 1978 study on the future of computers, Japan's Ministry of International Trade and Industry allocates $850 million for the Fifth Generation Computer project. Rather than standard microprocessors, the project's machines are to be multiprocessor systems specialized in logic programming, with the ambitious aim of developing computers capable of carrying on conversations, translating languages, interpreting images, and reasoning like human beings, thereby propelling Japan to the forefront of information processing and artificial intelligence.<ref name="forbes.coms"/><ref name="washington.edu"/> || {{w|Japan}}
|-
|-
| 1982 || Organization || The {{w|European Association for Artificial Intelligence}} is founded. ||
|-
| 1983 || Organization || The {{w|Turing Institute}} is founded in {{w|Glasgow}}, {{w|Scotland}}, as an [[w:Artificial intelligence|artificial intelligence]] laboratory. The company would undertake basic and applied research, working directly with large companies across {{w|Europe}}, the {{w|United States}}, and {{w|Japan}}, developing software as well as providing training, consultancy and information services.<ref name="books.google">{{cite book|last=Lamb|first=John|title=Making Friends with Intelligence|url=https://books.google.com/books?id=BMaVDEwRhpcC&q=Machine+learning+conference+glasgow+turing+institute&pg=PA30|work=The New Scientist|accessdate=10 December 2013| date=August 1985 }}</ref> From 1989 onwards, the company would face financial difficulties, and it would close in 1994.<ref>{{cite web|title=Column 468: The Turing Institute|url=https://publications.parliament.uk/pa/cm199394/cmhansrd/1994-06-14/Writtens-15.html|publisher=UK Parliament|accessdate=2 March 2022}}</ref> || {{w|United Kingdom}}
|-
| 1983 || || John Laird and Paul Rosenbloom, under the guidance of Allen Newell, complete their dissertations at Carnegie Mellon University on the SOAR project.<ref name="aitopics.org"/> ||
|-
| 1983 || || [[w:James F. Allen (computer scientist)|James Allen]] introduces what would later be called {{w|Allen's interval algebra}}, the first widely used formalization of temporal events.<ref name="aitopics.org"/><ref>{{cite book |last1=Aydin |first1=Berkay |last2=Angryk |first2=Rafal A. |title=Spatiotemporal Frequent Pattern Mining from Evolving Region Trajectories |date=15 October 2018 |publisher=Springer |isbn=978-3-319-99873-2 |url=https://books.google.com.ar/books?id=3aVyDwAAQBAJ&pg=PA18&dq=James+Allen+1983+algebra&hl=en&sa=X&ved=2ahUKEwiooZOGi8H2AhVJq5UCHWBHCcQQ6AF6BAgHEAI#v=onepage&q=James%20Allen%201983%20algebra&f=false |language=en}}</ref><ref>{{cite book |last1=Liang-Jie |first1=Zhang |last2=Yishuang |first2=Ning |title=Innovative Solutions and Applications of Web Services Technology |date=19 October 2018 |publisher=IGI Global |isbn=978-1-5225-7269-5 |url=https://books.google.com.ar/books?id=GK9wDwAAQBAJ&pg=PA172&dq=James+Allen+1983+algebra&hl=en&sa=X&ved=2ahUKEwiooZOGi8H2AhVJq5UCHWBHCcQQ6AF6BAgJEAI#v=onepage&q=James%20Allen%201983%20algebra&f=false |language=en}}</ref> Also called Allen's Interval Calculus, it is certainly the most well-known qualitative temporal calculus in {{w|artificial intelligence}}.<ref>{{cite web |title=Qualitative Spatio-Temporal Reasoning with RCC-8 and Allen’s Interval Calculus: Computational Complexity |url=https://gki.informatik.uni-freiburg.de/papers/gerevini-nebel-ecai02.pdf |website=gki.informatik.uni-freiburg.de |access-date=12 March 2022}}</ref> ||
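|-
| colspan="4" | A minimal sketch that computes the Allen base relation between two time intervals given as (start, end) pairs with start &lt; end; the thirteen relation names follow Allen's algebra, while the function itself is an illustrative implementation:
<syntaxhighlight lang="python">
def allen_relation(a, b):
    (s1, e1), (s2, e2) = a, b
    if e1 < s2: return "before"
    if e2 < s1: return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

print(allen_relation((1, 3), (3, 5)))  # meets
print(allen_relation((1, 4), (2, 6)))  # overlaps
print(allen_relation((2, 3), (1, 5)))  # during
</syntaxhighlight>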
|-
| 1984 || || The film ''Electric Dreams'' is released, depicting a love triangle between a man, a woman, and a personal computer.<ref name="forbes.coms"/> ||
|-
| 1984 || || At the annual meeting of AAAI (American Association for Artificial Intelligence), Roger Schank and Marvin Minsky warn of the impending "AI Winter." They predict a downturn in AI investment and research funding, similar to the reduction that had occurred in the mid-1970s. This prediction would indeed materialize three years later when AI research faces a decline in support and interest.<ref name="forbes.coms"/> ||
|-
| 1984 || || The CYC project is initiated as a significant endeavor in symbolic AI. This project is launched under the sponsorship of the Microelectronics and Computer Technology Corporation, a consortium consisting of computer, semiconductor, and electronics manufacturers.<ref name="britannica.coms"/> ||
|-
| 1985 || || Harold Cohen demonstrates the autonomous drawing program called Aaron at the AAAI National Conference. Aaron, which was developed over more than a decade, showcases significant advancements in autonomous drawing capabilities.<ref name="aitopics.org"/> ||
|-
| 1986 || || A team of researchers at the {{w|Bundeswehr University Munich}}, led by {{w|Ernst Dickmanns}}, builds the first driverless car, a Mercedes-Benz van equipped with cameras and sensors that allow it to navigate empty streets at speeds of up to 55 mph, following road markings, avoiding obstacles, and even changing lanes. A major milestone in the development of self-driving cars, this research would help pave the way for today's autonomous vehicles.<ref name="forbes.coms"/> || {{w|Germany}}
|-
| 1986 || Literature || {{w|Hubert Dreyfus}} publishes ''Mind over Machine''. ||
|-
| 1986 || || A notable connectionist experiment at the {{w|University of California in San Diego}}, led by David Rumelhart and James McClelland, involves training a {{w|neural network}} comprising 920 artificial neurons arranged in two layers (460 neurons each) to generate past tenses for English verbs. The root forms of verbs, like "come," "look," and "sleep," are fed into the input layer. A supervisory computer program compares the output layer's response with the desired response (e.g., "came") and adjusts network connections accordingly. After approximately 400 verb presentations, repeated 200 times, the network can correctly generate past tenses for both familiar and unfamiliar verbs.<ref name="britannica.coms"/> || {{w|United States}}
|-
| 1986 (October) || Organization || The {{w|Centre for Artificial Intelligence and Robotics}} is founded in {{w|Bangalore}} as a laboratory of the {{w|Defence Research & Development Organization}}.<ref>{{cite web |title=Centre for Artificial Intelligence and Robotics (CAIR) |url=https://www.epicos.com/company/13386/centre-artificial-intelligence-and-robotics-cair |website=epicos.com |accessdate=6 March 2020}}</ref> || {{w|India}}
|-
| 1986 (October) || || David Rumelhart, Geoffrey Hinton, and Ronald Williams publish a groundbreaking paper titled "Learning representations by back-propagating errors." This paper introduces a novel learning procedure known as back-propagation, designed for networks of neuron-like units. Back-propagation would later become a fundamental technique in training artificial neural networks, contributing significantly to the success of deep learning in subsequent decades.<ref name="forbes.coms"/> ||
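|-
| colspan="4" | A minimal backpropagation sketch in the spirit of the 1986 procedure: a small sigmoid network (the size, learning rate, epoch count, and seed are arbitrary choices) trained by propagating error derivatives backwards, here on the XOR function that a single-layer perceptron cannot represent:
<syntaxhighlight lang="python">
import math, random

random.seed(0)
sig = lambda x: 1.0 / (1.0 + math.exp(-x))
H, LR = 3, 0.5  # hidden units and learning rate

W_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]  # 2 inputs + bias
W_o = [random.uniform(-1, 1) for _ in range(H + 1)]                  # H inputs + bias
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x0, x1):
    h = [sig(w[0] * x0 + w[1] * x1 + w[2]) for w in W_h]
    return h, sig(sum(W_o[i] * h[i] for i in range(H)) + W_o[H])

for _ in range(5000):
    for (x0, x1), t in DATA:
        h, y = forward(x0, x1)
        d_y = (y - t) * y * (1 - y)                                 # output error term
        d_h = [d_y * W_o[i] * h[i] * (1 - h[i]) for i in range(H)]  # back-propagated
        for i in range(H):
            W_o[i] -= LR * d_y * h[i]
            for j, xj in enumerate((x0, x1, 1.0)):
                W_h[i][j] -= LR * d_h[i] * xj
        W_o[H] -= LR * d_y

for (x0, x1), t in DATA:
    print((x0, x1), round(forward(x0, x1)[1], 2), "target", t)  # outputs should approach targets
</syntaxhighlight>
With small networks, backpropagation can occasionally settle in a poor local minimum, one reason its large-scale successes had to wait for more computing power and data.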
|-
| 1986 || || Terrence J. Sejnowski and Charles Rosenberg introduce the NETtalk program, which gives a computer a voice for the first time. NETtalk learns to speak by processing sample sentences and phoneme chains; it can read words aloud, pronounce them correctly, and apply what it has learned to words it has never encountered before. The program is one of the early examples of artificial neural networks, which learn from extensive datasets and whose structure and function resemble those of the human brain.<ref name="bosch.coms"/> ||
|-
| 1986 || Conference || The {{w|International Conference on User Modeling, Adaptation, and Personalization}} is first held. ||
|-
| 1987 || || A video titled "Knowledge Navigator" is presented during Apple CEO John Sculley's keynote speech at Educom. This video depicts a futuristic vision in which "knowledge applications would be accessed by smart agents working over networks connected to massive amounts of digitized information."<ref name="forbes.coms"/> ||
|-
| 1987 || Literature || The journal ''{{w|AI & Society}}'' is first published. ||
|-
| 1987 || Literature || The ''{{w|International Journal of Pattern Recognition and Artificial Intelligence}}'' is first published. ||
|-
| 1987 || Literature || Marvin Minsky publishes ''The Society of Mind'', a theoretical work that describes the mind as a collection of cooperating agents.<ref name="aitopics.org"/> ||
|-
| 1988 || || Judea Pearl publishes ''Probabilistic Reasoning in Intelligent Systems'', laying the foundation for processing information under uncertainty. His pioneering work includes the invention of Bayesian networks, a mathematical formalism for defining complex probability models, as well as the principal algorithms used for inference in these models. This work would revolutionize artificial intelligence and become an important tool in many branches of engineering and the natural sciences; Pearl would later receive the Turing Award for these contributions.<ref name="forbes.coms"/> ||
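|-
| colspan="4" | A minimal sketch of the kind of inference Pearl's formalism systematizes, using a hypothetical two-node network Rain → WetGrass with made-up probabilities; the posterior is computed by enumeration and Bayes' rule:
<syntaxhighlight lang="python">
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}  # P(WetGrass=true | Rain)

def posterior_rain_given_wet():
    # Joint probabilities P(Rain=r, WetGrass=true) for both values of r.
    joint = {r: (P_RAIN if r else 1 - P_RAIN) * P_WET_GIVEN_RAIN[r] for r in (True, False)}
    evidence = sum(joint.values())           # P(WetGrass=true)
    return joint[True] / evidence            # Bayes' rule

print(round(posterior_rain_given_wet(), 3))  # 0.18 / 0.26 = 0.692
</syntaxhighlight>
In a full Bayesian network the same enumeration generalizes to many variables, and Pearl's algorithms make such inference tractable by exploiting the network's conditional-independence structure.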
|-
| 1988 || Organization || The {{w|Dalle Molle Institute for Artificial Intelligence Research}} is founded. || {{w|Switzerland}}
|-
| 1988 || || Rollo Carpenter develops Jabberwacky, a chat-bot aimed at simulating natural human chat in an entertaining and humorous manner. This marks an early attempt at using human interaction to create artificial intelligence.<ref name="forbes.coms"/> ||
 
|-
| 1988 || || Members of the IBM T.J. Watson Research Center publish a paper titled "A statistical approach to language translation". This marks a shift from rule-based to probabilistic methods of machine translation, and reflects a broader transition towards "machine learning" based on statistical analysis of known examples rather than comprehension of the task at hand. IBM's project Candide, which successfully translates between English and French, relies on a dataset of 2.2 million pairs of sentences, primarily from the bilingual proceedings of the Canadian parliament.<ref name="forbes.coms"/> ||
|-
| 1988 || || {{w|German Research Centre for Artificial Intelligence}} || {{w|Germany}}
|-
| 1989 || || Marvin Minsky and Seymour Papert publish an expanded edition of their 1969 book ''Perceptrons''. In a new prologue, "A View from 1988", they argue that progress in the field has been slow because researchers unfamiliar with its history have continued to make many of the same mistakes as their predecessors.<ref name="forbes.coms"/> ||
|-
| 1989 || || Yann LeCun and a team of researchers at AT&T Bell Labs successfully apply a backpropagation algorithm to a multi-layer neural network to recognize handwritten ZIP codes. Given the hardware limitations of the time, training the network takes approximately three days, still a significant improvement over earlier efforts.<ref name="forbes.coms"/> || {{w|United States}}
|-
| 1989 || Literature || ''{{w|Journal of Experimental and Theoretical Artificial Intelligence}}'' ||
|-
| 1989 (November 9) || Literature || ''{{w|The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics}}'' ||
|-
| 1989 || || Dean Pomerleau at {{w|Carnegie Mellon University}} develops ALVINN (An Autonomous Land Vehicle in a Neural Network). This system would evolve into the technology that enables a car to be driven across the United States under computer control, with human intervention required for only about 50 of the 2,850 miles of the journey.<ref name="aitopics.org"/> ||
|-
| 1990 || || Rodney Brooks publishes "Elephants Don't Play Chess", advocating a novel approach to AI: building intelligent systems, particularly robots, from the ground up, on the basis of ongoing physical interaction with the environment. This approach treats the real world as its own best model and highlights the need for appropriate and frequent sensory perception.<ref name="forbes.coms"/> ||
|-
| 1991 || || {{w|European Neural Network Society}}<ref>{{cite book |last1=Taylor |first1=J.G. |title=The Promise of Neural Networks |url=https://books.google.com.ar/books?id=GbnkBwAAQBAJ&pg=PA63&lpg=PA63&dq=1991+European+Neural+Network+Society&source=bl&ots=o-ZMzEz2eC&sig=ACfU3U0g5hGyXuqYPyp4I5XUQr2ZwW3YlQ&hl=en&sa=X&ved=2ahUKEwig7_iokYboAhWgIbkGHcZrC-kQ6AEwA3oECAYQAQ#v=onepage&q=1991%20European%20Neural%20Network%20Society&f=false}}</ref><ref>{{cite book |title=Artificial Neural Networks and Machine Learning – ICANN 2017: 26th International Conference on Artificial Neural Networks, Alghero, Italy, September 11-14, 2017, Proceedings, Part 1 |edition=Alessandra Lintas, Stefano Rovetta, Paul F.M.J. Verschure, Alessandro E.P. Villa |url=https://books.google.com.ar/books?id=ozU7DwAAQBAJ&pg=PR5&lpg=PR5&dq=1991+European+Neural+Network+Society&source=bl&ots=9T2UfbE_J0&sig=ACfU3U3fExCFGSypH9eCD2Sjj9I_k_4vrQ&hl=en&sa=X&ved=2ahUKEwig7_iokYboAhWgIbkGHcZrC-kQ6AEwBHoECAoQAQ#v=onepage&q=1991%20European%20Neural%20Network%20Society&f=false}}</ref> ||
|-
| 1991 || || American philanthropist Hugh Loebner starts the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program would come close to passing an undiluted Turing test.<ref name="britannica.coms"/> ||
|-
| 1992 || Literature || ''{{w|International Journal on Artificial Intelligence Tools}}''<ref>{{cite web |title=International Journal on Artificial Intelligence Tools |url=https://www.letpub.com/index.php?journalid=3920&page=journalapp&view=detail |website=letpub.com |accessdate=6 March 2020}}</ref> ||
|-
| 1993 || Literature || ''{{w|Journal of Artificial Intelligence Research}}''<ref>{{cite web |title=Journal of Artificial Intelligence Research |url=https://www.jair.org/index.php/jair |website=jair.org |accessdate=6 March 2020}}</ref> ||
|-
| 1993 || || Vernor Vinge publishes "The Coming Technological Singularity", in which he predicts that within thirty years humanity will have the technological means to create superhuman intelligence, and that shortly afterwards the human era will end.<ref name="forbes.coms"/> ||
|-
| 1994 (September) || Conference || The first {{w|Artificial Evolution Conference}} is held in Toulouse, France. It is the first international conference dedicated to the field of artificial evolution.<ref>{{cite web |title=Artificial Evolution 2019 (EA-2019) |url=https://iscpif.fr/evenements/conferenceae-inria-oct2019/ |website=iscpif.fr |accessdate=6 March 2020}}</ref> The conference is organized by the French Artificial Evolution Society (Société Française d'Évolution Artificielle) together with the European Neural Networks Society. Its main topics are genetic algorithms, evolutionary programming, and evolutionary strategies. || {{w|France}}
|-
| 1995 || || Richard Wallace develops the chatbot A.L.I.C.E (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program. A.L.I.C.E incorporates natural language sample data collected on an unprecedented scale, made possible by the advent of the World Wide Web.<ref name="forbes.coms"/> ||
|-
| 1995 || || A computer program called Chinook defeats the world checkers champion, Marion Tinsley, in a series of matches. Chinook uses a brute-force approach to checkers, evaluating possible moves exhaustively and selecting the best one. This approach is computationally expensive but ultimately proves successful.<ref name=Leigh>{{cite book |last1=Leigh |first1=Andrew |title=What's the Worst That Could Happen?: Existential Risk and Extreme Politics |date=9 November 2021 |publisher=MIT Press |isbn=978-0-262-36661-8 |url=https://books.google.com.ar/books/about/What_s_the_Worst_That_Could_Happen.html?id=siMZEAAAQBAJ&redir_esc=y |language=en}}</ref> ||
|-
| 1995 || || AltaVista becomes the first search engine to incorporate natural language processing into its functionality, enabling users to search for information using more human-like language and queries.<ref name="omnius.com">{{cite web |title=A SHORT HISTORY OF ARTIFICIAL INTELLIGENCE: MAKING MYTHOLOGY A REALITY |url=https://omnius.com/blog/a-short-history-of-artificial-intelligence-making-mythology-a-reality/ |website=omnius.com |accessdate=20 March 2020}}</ref> ||
|-
| 1996 || || The EQP theorem prover at Argonne National Labs successfully proves the Robbins Conjecture in mathematics.<ref name="aitopics.org"/> ||
|-
| 1997 || || Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of recurrent neural network that is widely used today in applications such as handwriting recognition and speech recognition.<ref name="forbes.coms"/> ||
|-
| 1997 || || IBM's Deep Blue chess computer defeats the reigning world chess champion, Garry Kasparov, in a six-game match. This is a major milestone in the field of artificial intelligence, as it shows that machines can now compete with humans at the highest level of chess.<ref name=Leigh/><ref name="harvard.edu d"/><ref name="forbes.coms"/> ||
|-
| 1997 || || Speech recognition software developed by Dragon Systems is implemented on Windows, marking significant progress in the field of spoken language interpretation.<ref name="harvard.edu d"/> ||
|-
| 1998 || || {{w|Furby}}, the first domestic or pet robot, is created by Dave Hampton and Caleb Chung and released by Tiger Electronics, marking the first successful introduction of AI technology into a domestic environment.<ref name="harvard.edu d"/><ref name="sutori.comd"/> ||
|-
| 1998 || Literature || ''{{w|Autonomous Agents and Multi-Agent Systems}}''<ref>{{cite web |title=Autonomous Agents and Multi-Agent Systems |url=https://www.springer.com/journal/10458 |website=springer.com |accessdate=6 March 2020}}</ref> ||
|-
| 1998 || || Yann LeCun, Yoshua Bengio, and other researchers publish papers on the application of neural networks to handwriting recognition and the optimization of backpropagation. These contributions are instrumental in advancing the field of neural network-based handwriting recognition.<ref name="harvard.edu d"/> ||
|-
| 1998 || || Amazon introduces "collaborative filtering" to provide recommendations for millions of customers, a significant advancement in personalized recommendation systems.<ref name="econsultancy.com">{{cite web |title=A brief history of artificial intelligence in advertising |url=https://econsultancy.com/a-brief-history-of-artificial-intelligence-in-advertising/ |website=econsultancy.com |accessdate=20 March 2020}}</ref> ||
|-
| Late 1990s || || Web crawlers and other AI-based information extraction programs become essential tools for the widespread use of the World Wide Web.<ref name="sutori.comd"/> ||
|-
| 1990s || || MIT's AI Lab demonstrates an Intelligent Room and Emotional Agents, showcasing advancements in intelligent environments and emotionally responsive agents. This period also marks the initiation of work on the Oxygen Architecture, which aims to connect mobile and stationary computers in an adaptive network, contributing to the development of pervasive computing.<ref name="ocw.uc3m.es"/> ||
|-
| 2000 || || MIT researcher Cynthia Breazeal develops Kismet, a robot capable of recognizing and simulating emotions, marking a significant advancement in emotional AI and human-robot interaction.<ref name="harvard.edu d"/><ref name="ocw.uc3m.es">{{cite web |title=Tema 1 Brief History of Artificial Intelligence |url=http://ocw.uc3m.es/ingenieria-telematica/inteligencia-en-redes-de-comunicaciones/material-de-clase-1/01a-brief-history-of-ai |website=ocw.uc3m.es |accessdate=21 March 2020}}</ref> ||
|-
| 2000 || || Honda's ASIMO robot, a humanoid robot endowed with artificial intelligence, achieves the capability to walk at a human-like speed and serve trays to customers in a restaurant setting, demonstrating significant progress in robotics and AI technology.<ref name="harvard.edu d"/> ||
|-
| 2000 || Conference || {{w|Mexican International Conference on Artificial Intelligence}}<ref>{{cite web |title=MICAI 2000: Advances in Artificial Intelligence |url=https://www.springer.com/gp/book/9783540673545 |website=springer.com |accessdate=6 March 2020}}</ref> || {{w|Mexico}}
|-
| 2001 || || Steven Spielberg's film ''A.I. Artificial Intelligence'' is released, telling the story of David, a childlike android uniquely programmed with the ability to love.<ref name="harvard.edu d"/> ||
|-
| 2001 || || {{w|Artificial General Intelligence Research Institute}}<ref>{{cite web |title=Artificial General Intelligence Research Institute |url=https://www.morebooks.de/store/gb/book/artificial-general-intelligence-research-institute/isbn/978-613-1-38428-8 |website=morebooks.de |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2002 || || AI technology enters people's homes with the introduction of Roomba, an autonomous robotic vacuum cleaner. This marks a significant development in the application of AI to consumer products for everyday use.<ref name="javatpoint.coma"/> ||
|-
| 2002 || Conference || {{w|RuleML Symposium}}<ref>{{cite book |last1=Bikakis |first1=Antonis |last2=Fodor |first2=Paul |last3=Roman |first3=Dumitru |title=Rules on the Web: From Theory to Applications: 8th International Symposium, RuleML 2014, Co-located with the 21st European Conference on Artificial Intelligence, ECAI 2014, Prague, Czech Republic, August 18-20, 2014, Proceedings |url=https://books.google.com.ar/books?id=gWwqBAAAQBAJ&pg=PR5&lpg=PR5&dq=2002+Conference+RuleML+Symposium&source=bl&ots=mVz8Giu6iT&sig=ACfU3U0jCi8DyHI2LZYnJkLwDFGSMFNMuw&hl=en&sa=X&ved=2ahUKEwiGzYOUloboAhXzHrkGHXZYAHoQ6AEwBHoECAwQAQ#v=onepage&q=2002%20Conference%20RuleML%20Symposium&f=false}}</ref> ||
|-
| 2003 || || Geoffrey Hinton, Yoshua Bengio, and Yann LeCun initiate a research program aimed at advancing neural networks. Experiments conducted in collaboration with Microsoft, Google, and IBM, with support from the Toronto laboratory led by Hinton, demonstrate significant improvements in speech recognition, effectively reducing error rates by half. Similar progress is achieved by Hinton's team in the field of image recognition. This marks a significant milestone in the development of neural network-based AI technologies.<ref name="coe.intf"/> ||
|-
| 2003 || || {{w|MIT Computer Science and Artificial Intelligence Laboratory}}<ref>{{cite web |title=Mission & History |url=https://www.csail.mit.edu/about/mission-history |website=csail.mit.edu |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2004 || || The first DARPA Grand Challenge, a prize competition for autonomous vehicles, takes place in the Mojave Desert. None of the autonomous vehicles manages to complete the 150-mile route.<ref name="harvard.edu d"/> ||
|-
| 2004 || Conference || {{w|International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics}}<ref>{{cite web |title=INTERNATIONAL MEETING ON COMPUTATIONAL INTELLIGENCE METHODS FOR BIOINFORMATICS AND BIOSTATISTICS |url=https://person.dibris.unige.it/masulli-francesco/conferences/CIBB04-cfp.html |website=person.dibris.unige.it |accessdate=6 March 2020}}</ref> || {{w|Italy}}
|-
| 2006 || || Oren Etzioni, Michele Banko, and Michael Cafarella introduce the term "machine reading," defining it as the autonomous understanding of text without the need for human supervision.<ref name="harvard.edu d"/> ||
|-
| 2006 || || Geoffrey Hinton publishes a paper titled "Learning Multiple Layers of Representation," which summarizes ideas related to multilayer neural networks with top-down connections. This work represents a new approach to deep learning, focusing on training networks to generate sensory data rather than just classifying it.<ref name="harvard.edu d"/> ||
|-
| 2006 || || AI begins to make its presence felt in the business world, with companies like Facebook, Twitter, and Netflix starting to utilize AI technologies for various purposes.<ref name="javatpoint.coma"/> ||
|-
| 2006 || || The first unassisted robotic surgery conducted by an AI doctor is performed on a 34-year-old male to correct {{w|heart arrhythmia}}. The results are rated as better than those of an above-average human surgeon. The machine has a {{w|database}} of 10,000 similar operations, and so, in the words of its designers, is "more than qualified to operate on any patient".<ref>{{cite news|url=https://www.engadget.com/2006/05/19/robot-surgeon-performs-worlds-first-unassisted-operation|title=Autonomous Robotic Surgeon performs surgery on first live human|date=19 May 2006|publisher=[[Engadget]]}}</ref><ref>{{cite web |url=http://www.physorg.com/news67222790.html |title=Robot surgeon carries out 9-hour operation by itself|publisher=[[Phys.Org]]}}</ref> ||
|-
| 2006 || Conference || {{w|AI@50}}, also known as the ''Dartmouth Artificial Intelligence Conference: The Next Fifty Years'', takes place, marking the 50th anniversary of the 1956 Dartmouth workshop that launched the field. It features five of the original ten attendees, including Marvin Minsky and John McCarthy. The conference, sponsored by Dartmouth College, General Electric, and the Frederick Whittemore Foundation, receives a $200,000 grant from DARPA. Its goals include assessing AI's progress, identifying future challenges, and relating these to other fields. Conference topics range from emotion in machines to machine learning, vision, reasoning, and ethics.<ref>{{cite web |title=Dartmouth Artificial Intelligence Conference |url=https://www.dartmouth.edu/~ai50/homepage.html |website=dartmouth.edu |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2007 || || Fei Fei Li and her team at Princeton University initiate the creation of ImageNet, a substantial database of annotated images intended to support research in visual object recognition software.<ref name="harvard.edu d"/> || {{w|United States}}
|-
| 2008 || || {{w|Eliezer Yudkowsky}} calls for the creation of “[[w:Friendly artificial intelligence|friendly AI]]” to mitigate {{w|existential risk from advanced artificial intelligence}}. Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."<ref>[[Eliezer Yudkowsky]] (2008) in ''[http://intelligence.org/files/AIPosNegFactor.pdf Artificial Intelligence as a Positive and Negative Factor in Global Risk]''</ref> || {{w|United States}}
|-
| 2008 || Conference || {{w|Conference on Artificial General Intelligence}}<ref>{{cite web |title=Artificial General Intelligence 2008 |url=https://www.iospress.nl/book/artificial-general-intelligence-2008/ |website=iospress.nl |accessdate=6 March 2020}}</ref> ||
|-
| 2009 || || Rajat Raina, Anand Madhavan, and Andrew Ng publish "Large-scale Deep Unsupervised Learning using Graphics Processors". They argue that modern graphics processors far surpass the computational capabilities of multicore CPUs and have the potential to revolutionize the applicability of deep unsupervised learning methods.<ref name="harvard.edu d"/> ||
|-
| 2009 || || Google initiates the development of a driverless car project, which is kept confidential. By 2014, it would achieve a significant milestone by becoming the first to pass a self-driving test in the U.S. state of Nevada.<ref name="harvard.edu d"/> ||
|-
| 2009 || || Computer scientists at Northwestern University's Intelligent Information Laboratory develop Stats Monkey, a program capable of autonomously generating sports news articles without any human involvement.<ref name="harvard.edu d"/> ||
|-
| 2010 || || The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is launched as an annual competition focused on AI object recognition.<ref name="harvard.edu d"/> ||
|-
| 2010 || || {{w|DeepMind}} is established in the United Kingdom, focusing on developing cutting-edge AI technologies and advancing the field through research and innovation.<ref>{{cite web |title=Expanding our knowledge, finding new answers |url=https://deepmind.com/about |website=deepmind.com |accessdate=6 March 2020}}</ref> Known for significant contributions such as the creation of {{w|AlphaGo}}, an AI program that would defeat a world champion Go player, DeepMind would position itself at the forefront of AI research. It would be acquired by Google in 2014.<ref>{{cite web |last1=Bray |first1=Chad |title=Google Acquires British Artificial Intelligence Developer |url=https://archive.nytimes.com/dealbook.nytimes.com/2014/01/27/google-acquires-british-artificial-intelligence-developer/ |website=DealBook |access-date=17 June 2024 |language=en |date=27 January 2014}}</ref><ref>{{cite web |title=A Brief History of Artificial Intelligence |url=https://www.kdnuggets.com/2017/04/brief-history-artificial-intelligence.html |website=kdnuggets.com |accessdate=9 March 2020}}</ref> || {{w|United Kingdom}}
|-
| 2011 || || A convolutional neural network wins the German Traffic Sign Recognition competition with an accuracy of 99.46%, surpassing the 99.22% achieved by human participants.<ref name="harvard.edu d"/> ||
|-
| 2011 || || IBM's question-answering system, Watson, achieves a significant milestone by winning the quiz show "Jeopardy!", defeating the reigning champions, Brad Rutter and Ken Jennings.<ref name="livescience.coms"/><ref name="harvard.edu d"/> ||
|-
| 2011 || || A talking computer chatbot named Eugene Goostman gains attention for successfully deceiving judges into believing it is a genuine human during a Turing test.<ref name="livescience.coms"/> ||
|-
| 2011 || || Researchers at the IDSIA in Switzerland report a 0.27% error rate in handwriting recognition using convolutional neural networks, a significant improvement over the 0.35%-0.40% error rate of previous years.<ref name="harvard.edu d"/> ||
|-
| 2011 || || Apple's Siri is released as part of the iPhone 4S, becoming the first widely available voice-activated personal assistant. Technology leaps in hardware and software, such as powerful processors and graphics cards in computers, smartphones, and tablets, pave the way for AI programs like Siri to enter everyday life.<ref name="bosch.coms"/> ||
|-
| 2012 (June) || || Jeff Dean and Andrew Ng conduct an experiment in which they expose a massive neural network to 10 million unlabeled images randomly sourced from YouTube videos. During the experiment, one of the artificial neurons within the network learns to respond strongly to images of cats, an unexpected and amusing result.<ref name="harvard.edu d"/> ||
|-
| 2012 (July 13) || Literature || ''{{w|The Machine Question: Critical Perspectives on AI, Robots, and Ethics}}'' ||
|-
| 2012 || || Researchers at the {{w|University of Toronto}} develop a convolutional neural network that achieves an error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge, a significant improvement over the previous year's best entry, which had an error rate of 25%.<ref name="forbes.coms"/> || {{w|Canada}}
|-
| 2012 || || The security market is flooded with computer vision start-ups.<ref name="daxueconsulting.com">{{cite web |title=The history of Artificial Intelligence (AI) in China |url=https://daxueconsulting.com/history-china-artificial-intelligence/ |website=daxueconsulting.com |accessdate=21 March 2020}}</ref> ||
|-
| 2013 || || {{w|Boston Dynamics}} unveils [[w:Atlas (robot)|Atlas]], an advanced humanoid robot designed for various search-and-rescue tasks. The robot is developed for the DARPA Robotics Challenge, a competition to develop robots that can perform tasks in disaster zones.<ref name="futureoftech.org"/><ref>{{cite web |title=Atlas |url=https://www.bostondynamics.com/atlas |website=bostondynamics.com |accessdate=9 March 2020}}</ref> || {{w|United States}}
|-
| 2013 || || Automated Insights publishes 300 million pieces of content, which Mashable reports is greater than the output of all major media companies combined. In 2014, the company's software would generate one billion stories, and by 2016 it would publish over 1.5 billion pieces of content.<ref name="econsultancy.com"/> ||
|-
| 2014 || || Google's driverless car, in development in secret since 2009, becomes the first to pass a self-driving test in the U.S. state of Nevada.<ref name="harvard.edu d"/> ||
|-
| 2014 || || {{w|Allen Institute for AI}}<ref>{{cite web |title=Allen Institute for AI |url=https://www.glassdoor.com.ar/Descripci%C3%B3n-general/Trabajar-en-Allen-Institute-for-AI-EI_IE851958.12,34.htm?countryRedirect=true |website=glassdoor.com.ar |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Allen Institute for AI (AI2) |url=https://www.linkedin.com/company/allen-ai/ |website=linkedin.com |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2014 || || A research team from the Chinese University of Hong Kong (CUHK) develops a facial recognition system that achieves a human-level accuracy of 97.53%. The system can identify faces from a variety of angles and lighting conditions, and can even identify faces that have been obscured by sunglasses or a mask.<ref name=Leigh/> || {{w|China}} ({{w|Hong Kong}})
|-
| 2014 || || Microsoft introduces Cortana, a virtual assistant software. Cortana is first released for Windows Phone 8.1, and would later be released for Windows 10, Windows 10 Mobile, Xbox One, and Android.<ref name="bosch.coms"/> || {{w|United States}}
|-
| 2014 || || {{w|Future of Life Institute}}<ref>{{cite web |title=Future of Life Institute |url=https://www.linkedin.com/company/future-of-life-institute/ |website=linkedin.com |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2014 || || {{w|Squirrel AI}}<ref>{{cite web |title=Adaptive Learning Startup Squirrel AI Raises CN¥1B |url=https://medium.com/syncedreview/adaptive-learning-startup-squirrel-ai-raises-cn-1b-df275cbce068 |website=medium.com |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Squirrel AI Learning |url=https://www.crunchbase.com/organization/yixue-squirrel-ai |website=crunchbase.com |accessdate=6 March 2020}}</ref> || {{w|China}}
|-
| 2014 || || {{w|Kiev Laboratory for Artificial Intelligence}}<ref>{{cite web |title=Kiev Laboratory for Artificial Intelligence |url=https://www.semanticscholar.org/topic/Kiev-Laboratory-for-Artificial-Intelligence/8853881 |website=semanticscholar.org |accessdate=6 March 2020}}</ref> || {{w|Ukraine}}
|-
| 2014 || || Ian Goodfellow introduces Generative Adversarial Networks (GAN), a revolutionary concept in artificial intelligence that involves two neural networks, a generator and a discriminator, engaged in a competitive learning process to generate realistic data.<ref name="qbi.uq.edu.au">{{cite web |title=History of Artificial Intelligence |url=https://qbi.uq.edu.au/brain/intelligent-machines/history-artificial-intelligence |website=qbi.uq.edu.au |accessdate=9 March 2020}}</ref> ||
|-
| 2014 || || The rise of programmatic ad buying popularizes artificial intelligence-based ad purchasing. This innovation eliminates the need for time-consuming manual tasks such as market research, budgeting, insertion orders, and complex analytics tracking, making the ad buying process more efficient and cost-effective.<ref name="econsultancy.com"/> ||
|-
| 2015 || || Amazon introduces the Alexa service. The first device to use Alexa is the Amazon Echo, a smart speaker released in June. Alexa is a cloud-based voice service that can be used to control smart home devices, play music, get news and weather updates, set alarms, and more. It would go on to become one of the most popular voice assistants in the world, with over 300 million active users.<ref name="bosch.coms"/> ||
|-
| 2015 (March) || || The diffusion algorithm that would later serve as the foundation for text-to-image tools is first introduced by researchers from Stanford and Berkeley. ||
|-
| 2015 || || The Chinese Congress on Artificial Intelligence 2015 takes place in Beijing, setting the direction for AI-related industries in China.<ref name="daxueconsulting.com"/> || {{w|China}}
|-
| 2015 || || {{w|Open Letter on Artificial Intelligence}}<ref>{{cite web |title=Elon Musk, Stephen Hawking warn of artificial intelligence dangers |url=https://mashable.com/2015/01/13/elon-musk-stephen-hawking-artificial-intelligence/ |website=mashable.com |accessdate=6 March 2020}}</ref> ||
|-
| 2015 (September 22) || Literature || ''{{w|The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World}}'' ||
|-
| 2015 || || Google launches RankBrain, an advanced artificial intelligence algorithm. RankBrain would revolutionize search query interpretation by effectively understanding the user's search intent, resulting in more relevant search results.<ref name="econsultancy.com"/> ||
|-
| 2016 (March) || || Google DeepMind's AlphaGo defeats Go champion Lee Sedol. This is a major milestone in the development of artificial intelligence, as Go is a much more complex game than chess.<ref name="harvard.edu d"/> ||
|-
| 2016 (March) || || Microsoft releases the Tay chatbot, but quickly takes it offline after it begins producing offensive output, including Holocaust denial. ||
|-
| 2016 || || A team of researchers from Google AI and the University of Washington develops a machine learning model that can transcribe telephone calls with 97% accuracy. This is a significant improvement over previous methods, which achieved an accuracy of around 85%.<ref name=Leigh/> ||
|-
| 2016 || || A team of researchers from the University of Oxford develops a machine learning model that can lipread with 94% accuracy. This is a significant improvement over previous methods, which achieved an accuracy of around 80%.<ref name=Leigh/> ||
|-
| 2016 || || {{w|Center for Human-Compatible Artificial Intelligence}}<ref>{{cite web |title=UC Berkeley launches Center for Human-Compatible Artificial Intelligence |url=https://news.berkeley.edu/2016/08/29/center-for-human-compatible-artificial-intelligence/ |website=news.berkeley.edu |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2016 (February 16) || || {{w|Active Intelligence Pte Ltd}}<ref>{{cite web |title=ACTIVE INTELLIGENCE PTE. LTD. |url=https://www.sgpbusiness.com/company/Active-Intelligence-Pte-Ltd |website=sgpbusiness.com |accessdate=6 March 2020}}</ref> || {{w|Singapore}}
|-
| 2016 (September 28) || || {{w|Partnership on AI}} (full name Partnership on Artificial Intelligence to Benefit People and Society) is established. It is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society.<ref>{{cite web |title=Exploring The Partnership on AI |url=https://medium.com/@alexmoltzau/exploring-the-partnership-on-ai-9495ff845a39 |website=medium.com |accessdate=6 March 2020}}</ref><ref>{{cite web |title=About |url=https://partnershiponai.org/about/#:~:text=Partnership%20on%20AI%20(PAI)%20is,collective%20wisdom%20to%20make%20change. |website=Partnership on AI |access-date=3 March 2022}}</ref> Its founding members are [[w:Amazon.com|Amazon]], {{w|Facebook}}, {{w|Google}}, {{w|DeepMind}}, {{w|Microsoft}}, and {{w|IBM}}, with interim co-chairs {{w|Eric Horvitz}} of {{w|Microsoft Research}} and {{w|Mustafa Suleyman}} of DeepMind.<ref>{{cite web |title='Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft |url=https://www.theguardian.com/technology/2016/sep/28/google-facebook-amazon-ibm-microsoft-partnership-on-ai-tech-firms |website=the Guardian |access-date=3 March 2022 |language=en |date=28 September 2016}}</ref> [[w:Apple Inc.|Apple]] would join the consortium as a founding member in January 2017.<ref>{{cite web|title=Partnership on AI Update|url=https://www.partnershiponai.org/2017/01/partnership-ai-update/|website=Partnership on AI|accessdate=3 March 2022}}</ref> By 2019, more than 100 partners from academia, civil society, industry, and nonprofits would be member organizations.<ref>{{cite web |title=New Partners To Bolster Perspective For Responsible AI |url=https://partnershiponai.org/new-partners-to-bolster-perspective-for-responsible-ai/ |website=Partnership on AI |access-date=3 March 2022 |date=24 September 2019}}</ref> ||
|-
| 2016 || || A real-time online tool called Swarm AI successfully predicts the winner of the Kentucky Derby horse race. This demonstrates the potential of collective intelligence and real-time collaboration among users to make accurate predictions.<ref name="futureoftech.org"/> ||
|-
| 2017 || || {{w|OpenAI Five}}<ref>{{cite web |title=OpenAI Five |url=https://openai.com/projects/five/ |website=openai.com |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2017 || || {{w|DeepMind}} releases AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirms that existing algorithms perform poorly, which is "unsurprising" because the algorithms "are not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".<ref>{{cite news|title=DeepMind Has Simple Tests That Might Prevent Elon Musk’s AI Apocalypse|url=https://www.bloomberg.com/news/articles/2017-12-11/deepmind-has-simple-tests-that-might-prevent-elon-musk-s-ai-apocalypse|accessdate=5 March 2020|work=Bloomberg.com|date=11 December 2017}}</ref><ref>{{cite news|title=Alphabet's DeepMind Is Using Games to Discover If Artificial Intelligence Can Break Free and Kill Us All|url=http://fortune.com/2017/12/12/alphabet-deepmind-ai-safety-musk-games/|accessdate=5 March 2020|work=Fortune|language=en}}</ref><ref>{{cite web|title=Specifying AI safety problems in simple environments {{!}} DeepMind|url=https://deepmind.com/blog/specifying-ai-safety-problems/|website=DeepMind|accessdate=5 March 2020}}</ref> ||
|-
| 2017 || Conference || The {{w|Asilomar Conference on Beneficial AI}} is held, focusing on the potential risks and benefits associated with artificial intelligence and on how to ensure that AI is developed in a way that benefits humanity.<ref>{{cite web |title=Video: Superintelligence Panel at Beneficial AI 2017 (FLI) |url=https://medium.com/aifromscratch/video-superintelligence-panel-at-beneficial-ai-2017-fli-5b0f0a64e82 |website=medium.com |accessdate=6 March 2020}}</ref> ||
|-
| 2017 || || The first {{w|AI for Good}} Global Summit takes place.<ref>{{cite web |title=AI for Good Global Summit 2017 |url=https://www.itu.int/en/ITU-T/AI/Pages/201706-default.aspx |website=ITU |access-date=10 March 2023}}</ref> ||
|-
| 2017 || Organization || {{w|AI Now Institute}} is founded. It is an American research institute studying the social implications of artificial intelligence.<ref>{{cite web |title=NYU Law and NYU’s AI Now Institute analyze the ways emerging technology imposes upon civil liberties |url=https://www.law.nyu.edu/news/ai-now-institute-artificial-Intelligence-dirty-data-policing |website=law.nyu.edu |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2017 || || The AI market, including both hardware and software, reaches a total value of $8 billion.<ref name="dev.to"/> ||
|-
| 2017 || || Google's DeepMind AI achieves the remarkable feat of teaching itself how to walk autonomously.<ref name="futureoftech.org"/> ||
|-
| 2017 || || AI is included in the Chinese government's work report as a national strategy.<ref name="daxueconsulting.com"/> || {{w|China}}
|-
| 2018 || || Artificial intelligence showcases its abilities in different ways. IBM's 'Project Debater' engages in complex debates with human master debaters and performs impressively. Meanwhile, Google demonstrates at a conference how its 'Duplex' AI phones a hairdresser and conversationally makes an appointment, without the woman on the other end of the line noticing that she is talking to a machine.<ref name="bosch.coms"/> ||
|-
| 2018 || || A machine learning algorithm called BioMind outperforms radiologists in interpreting breast cancer scans. The algorithm is trained on a dataset of over 100,000 scans and identifies cancer with a 99% accuracy rate, compared to 96% for radiologists.<ref name=Leigh/> ||
|-
 
|-
 
| 2018 || || {{w|European Laboratory for Learning and Intelligent Systems}}<ref>{{cite web |title=European Laboratory for Learning and Intelligent Systems (ELLIS) launched with Informatics researchers on board |url=https://www.ed.ac.uk/informatics/news-events/stories/2018/ellis-launched-informatics-researchers |website=ed.ac.uk |accessdate=9 March 2020}}</ref> ||  
 
| 2018 || || {{w|European Laboratory for Learning and Intelligent Systems}}<ref>{{cite web |title=European Laboratory for Learning and Intelligent Systems (ELLIS) launched with Informatics researchers on board |url=https://www.ed.ac.uk/informatics/news-events/stories/2018/ellis-launched-informatics-researchers |website=ed.ac.uk |accessdate=9 March 2020}}</ref> ||  
 
|-
 
|-
 
| 2018 (April 26) || || {{w|Innovation Center for Artificial Intelligence}}<ref>{{cite web |title=Innovation Center for Artificial Intelligence officially launched |url=https://www.uva.nl/en/content/news/press-releases/2018/04/innovation-center-for-artificial-intelligence-officially-launched.html |website=uva.nl |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Ahold Delhaize Helps Launch AI Innovation Center |url=https://consumergoods.com/ahold-delhaize-helps-launch-ai-innovation-center |website=consumergoods.com |accessdate=6 March 2020}}</ref> || {{w|Netherlands}}
 
| 2018 (April 26) || || {{w|Innovation Center for Artificial Intelligence}}<ref>{{cite web |title=Innovation Center for Artificial Intelligence officially launched |url=https://www.uva.nl/en/content/news/press-releases/2018/04/innovation-center-for-artificial-intelligence-officially-launched.html |website=uva.nl |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Ahold Delhaize Helps Launch AI Innovation Center |url=https://consumergoods.com/ahold-delhaize-helps-launch-ai-innovation-center |website=consumergoods.com |accessdate=6 March 2020}}</ref> || {{w|Netherlands}}
|-
| 2018 || || The artificial intelligence market in China amounts to 33.9 billion RMB.<ref name="daxueconsulting.com"/> || {{w|China}}
|-
| 2018 || || Astronomers harness the power of AI to identify and locate approximately 6,000 new craters on the Moon's surface, enhancing our understanding of lunar geology.<ref name="futureoftech.org"/><ref>{{cite web |title=New technique uses AI to locate and count craters on the moon |url=https://phys.org/news/2018-03-technique-ai-craters-moon.html |website=phys.org |accessdate=9 March 2020}}</ref> ||
|-
| 2018 || || Paul Rad, assistant director of the University of Texas at San Antonio Open Cloud Institute, and Nicole Beebe, director of the university's Cyber Center for Security and Analytics, introduce a novel cloud-based learning platform for AI. The platform aims to teach machines to learn in a manner similar to human learning processes.<ref name="futureoftech.org"/><ref>{{cite web |title=UTSA researchers want to teach computers to learn like humans |url=https://www.utsa.edu/today/2018/03/story/Artificial_Intelligence.html |website=utsa.edu |accessdate=9 March 2020}}</ref> ||
|-
| 2018 || || Google showcases Duplex AI, a digital assistant capable of making appointments via telephone calls with live humans. Duplex uses natural language understanding, deep learning, and text-to-speech technologies to grasp conversational context and nuance, achieving a level of sophistication unmatched by other digital assistants.<ref name="futureoftech.org"/> ||
|-
| 2018 || || AI ushers in its first year of commercial application in China, with more than 1,000 AI-related companies in the country by this time.<ref name="daxueconsulting.com"/> || {{w|China}}
|-
| 2018 || || The AI Now Report finds harmful inaccuracies in AI-driven technology, an alarming lack of accountability, and, in some cases, systems built on racial discrimination or used for human rights violations.<ref name="looklisten.com">{{cite web |title=Rise of the Machines: The History of Artificial Intelligence |url=https://www.looklisten.com/blog/rise-of-the-machines-the-history-of-artificial-intelligence/ |website=looklisten.com |accessdate=21 March 2020}}</ref> ||
|-
| 2019 || Organization || The {{w|Center for Security and Emerging Technology}} is founded.<ref>{{cite web |title=Center for Security and Emerging Technology |url=https://cset.georgetown.edu/about-us/ |website=cset.georgetown.edu |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Center for Security and Emerging Technology |url=https://www.linkedin.com/company/georgetown-cset/ |website=linkedin.com |accessdate=6 March 2020}}</ref> || {{w|United States}}
|-
| 2019 || Organization || Google opens its first African AI research centre, the {{w|Google AI Centre in Ghana}}.<ref>{{cite web |title=Google takes on ‘Africa’s challenges’ with first AI centre in Ghana |url=https://www.thestar.com.my/tech/tech-news/2019/04/15/google-takes-on-africas-challenges-with-first-ai-centre-in-ghana/ |website=thestar.com.my |accessdate=6 March 2020}}</ref><ref>{{cite web |title=How Google is driving artificial intelligence for Africa by Africans |url=https://techpoint.africa/2019/04/18/google-ai-accra-centre/ |website=techpoint.africa |accessdate=6 March 2020}}</ref> || {{w|Ghana}}
|-
| 2019 || || OpenAI Five, a team of five AI bots developed by OpenAI, defeats a team of professional Dota 2 players in a best-of-three match. This is a significant achievement, as Dota 2 is a complex multiplayer game that requires a high degree of teamwork and strategy.<ref name=Leigh/> ||
|-
| 2019 || Competition || The {{w|AI Artathon}}, an AI art competition, is held in Riyadh.<ref>{{cite web |title=About the Global AI Summit |url=https://www.theglobalaisummit.com/ |website=theglobalaisummit.com |accessdate=6 March 2020}}</ref><ref>{{cite web |title=Riyadh to host AI art competition |url=https://www.arabnews.jp/en/arts-culture/article_7339/ |website=arabnews.jp |accessdate=6 March 2020}}</ref> || {{w|Saudi Arabia}}
|-
| 2020 || || Agent57, an AI developed by DeepMind, surpasses the standard human benchmark on all 57 Atari 2600 games. This is a significant achievement, as the Atari 2600 catalogue spans a wide range of challenging games.<ref name=Leigh/> ||
|-
| 2020 (June) || || OpenAI reveals GPT-3, but releases it only to a small pool of users. ||
|}

===What the timeline is still missing===

* https://every.to/p/a-short-history-of-artificial-intelligence?fbclid=IwAR32SgIgUUBYuqbiq2LIiCoGbmLFbyBk8vQ-djpR7JeWABDY_UBg_xQekak
* [https://www.researchgate.net/publication/322234922_History_of_Artificial_Intelligence]
* [http://mediangroup.org/docs/AI_insights.pdf]
* [https://www.analyticsinsight.net/artificial-intelligence-the-history-now-and-future/]
* [https://www.kdnuggets.com/2017/04/brief-history-artificial-intelligence.html]
* [https://www.futureoftech.org/artificial-intelligence/5-history-of-ai/]
* [https://info.aiim.org/aiim-blog/a-brief-history-of-artificial-intelligence]
* [http://world-information.org/wio/infostructure/100437611663/100438659360]
* [https://blog.hurree.co/blog/the-history-of-artificial-intelligence-infographic]
* [https://www.phocuswire.com/A-brief-history-of-artificial-intelligence]
* [https://aitopics.org/misc/brief-history]
* [https://www.bbc.co.uk/teach/ai-15-key-moments-in-the-story-of-artificial-intelligence/zh77cqt]
* [https://www.codementor.io/@paulwarren/a-brief-history-of-artificial-intelligence-1956-to-now-mgoracvnx]
* [https://omnius.com/blog/a-short-history-of-artificial-intelligence-making-mythology-a-reality/]
* [https://www.wsj.com/articles/test-your-knowledge-about-the-history-of-ai-11571018521]
* [https://www.business2community.com/tech-gadgets/brief-history-artificial-intelligence-02004150]
* [https://econsultancy.com/a-brief-history-of-artificial-intelligence-in-advertising/]
* [https://amt-lab.org/blog/2017/3/a-brief-history-of-artificial-intelligence-wxn6d]
* [https://www.sutori.com/story/the-history-of-artificial-intelligence--4qEzQz1PPuA9Wo4mBkv2a9BX]
* [https://www.britannica.com/technology/artificial-intelligence]
* [http://people.idsia.ch/~juergen/ai.html]
* [https://www.atariarchives.org/deli/artificial_intelligence.php]
* [https://daxueconsulting.com/history-china-artificial-intelligence/]
* [https://www.dummies.com/software/other-software/history-artificial-intelligence/]
* [http://blog.bccresearch.com/a-short-history-of-artificial-intelligence]
* [https://www.marktechpost.com/2018/07/18/15-moments-that-defined-the-history-of-artificial-intelligence/]
* [https://artint.info/2e/html/ArtInt2e.Ch1.S2.html]
* [https://becominghuman.ai/the-curious-history-of-artificial-intelligence-an-african-perspective-46002515934e]
* [https://www.historyextra.com/period/second-world-war/7-phases-of-the-history-of-artificial-intelligence/]
* [http://ocw.uc3m.es/ingenieria-telematica/inteligencia-en-redes-de-comunicaciones/material-de-clase-1/01a-brief-history-of-ai]
* [https://medium.com/datadriveninvestor/evolution-of-ai-past-present-future-6f995d5f964a]
* [https://matthewljones.github.io/historyai2019/]
* [http://www.inf.ed.ac.uk/about/AIhistory.html]
* [https://www.looklisten.com/blog/rise-of-the-machines-the-history-of-artificial-intelligence/]
* [http://www.historyofinformation.com/detail.php?id=4289]

* {{w|Category:Artificial intelligence applications}}
* {{w|Category:Artificial intelligence publications}}

Latest revision as of 11:03, 5 August 2024

This is a timeline of artificial intelligence, which refers to the development and implementation of computer systems or machines that can perform tasks that typically require human intelligence.

Sample questions

The following are some interesting questions that can be answered by reading this timeline:

Big picture

Summary by year

Time period Development summary More details
1940s-1950s Early work This period sees the first explorations of AI, including the development of artificial neurons, learning rules for adjusting neuron connections, and the concept of connectionism.[1][2] Expert systems, which are a type of AI, are first introduced in the early 1950s. Allen Newell and Herbert A. Simon create the first artificial intelligence program. In 1956, the term "Artificial Intelligence" is first adopted.[1] Many consider John Von Neumann and Alan Turing to be the founding fathers of the technology behind AI. They pioneer the transition from 19th century decimal logic to binary logic in computer architecture. This transition leads to the development of modern computers and their ability to execute programs based on Boolean algebra. They also demonstrate that computers are universal machines capable of performing a wide range of tasks based on programming.[3] By the 1950s, a group of scientists, mathematicians, and philosophers already become familiar with the concept of artificial intelligence (AI).[4]
1960s-1970s Knowledge-based AI During this time, AI researchers focus on developing rule-based systems that can reason and make decisions based on knowledge representations. Around this period, AI experiences significant growth. The availability and affordability of computers increase, allowing for more data storage and faster processing. Additionally, machine learning algorithms improve, and people become more knowledgeable about which algorithm to use for specific problems.
1974–1980 AI winter After criticism of the lack of progress in artificial intelligence (AI), government funding and interest in the field decrease during this period. Research efforts focus on neural networks, but progress is limited, and functional programs can only handle simple problems. AI researchers had been overly optimistic in setting their goals and had made naive assumptions about the challenges they would face; when they fail to deliver the promised results, funding is cut.[5][2]
1980–1987 A boom of AI Following the AI winter, the field of artificial intelligence makes a comeback with the introduction of expert systems, which are designed to mimic the decision-making abilities of a human expert through programming.[1] AI is reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularize "deep learning" techniques which allow computers to learn using experience.[4] Funding from the United States and Britain resumes to compete with Japan's "fifth generation" computer project and its goal of becoming the global leader in computer technology.[5][2][6]
1987–1993 Second AI winter Investors and governments stop funding AI research due to high costs and inefficient results, leading to another major AI winter. This coincides with the decline of early general-purpose computers and reduced government funding. Expert systems such as XCON are initially cost-effective but become too expensive to maintain compared to desktop computers. At the same time, DARPA concludes that AI will not be the next big thing and redirects funds to other projects. Nevertheless, by the end of the 1980s, over half of the Fortune 500 companies are involved in either developing or maintaining expert systems.[1][5][2][6]
1993–2011 Emergence of intelligent agents AI research shifts its focus to intelligent agents, which are used for news retrieval, online shopping, and web browsing. Despite a lack of government funding and hype, AI thrives during the 1990s and 2000s, achieving many landmark goals. Neural networks become financially successful in the 1990s when used for optical character recognition and speech pattern recognition.[2] Major advancements are made in all areas of AI, with significant demonstrations in machine learning, natural language understanding, vision, and other fields.[7]
2011-onward Massive data and new computing power: deep learning, big data, and artificial general intelligence In 2011, IBM's Watson wins Jeopardy!, showcasing its ability to understand natural language and solve complex questions quickly. The AI field experiences a new boom in the early 2010s due to the availability of massive amounts of data and the discovery that computer graphics card processors can greatly accelerate learning algorithms. These advancements enable significant progress at a lower financial cost.[1][3]

Summary by country

Full timeline

Year Event type Details Country/location
4th century B.C. Greek philosopher Aristotle invents syllogistic logic, the first formal deductive reasoning system.[8]
1st century AD Greek mathematician and engineer Hero of Alexandria creates automatons that operate with mechanical mechanisms powered by water and steam.[9]
1206 Ismail al-Jazari, whom some consider a pioneer of cybernetic science, creates water-powered, automatically controlled machines.[9]
1308 Catalan poet Ramon Llull publishes "Ars generalis ultima" (The Ultimate General Art). This work improves his method of using mechanical tools made of paper to generate new ideas by combining different concepts.[10]
1623 German professor Wilhelm Schickard invents a calculating machine capable of four operations.[11][9] Germany
1642 Blaise Pascal creates the Pascaline, one of the first mechanical digital calculating machines.[8]
1666 German polymath Gottfried Leibniz releases his work Dissertatio de arte combinatoria (On the Combinatorial Art). In this work, he follows Ramon Llull's idea of suggesting an alphabet of human thought and argues that all ideas are merely combinations of a small number of simple concepts.[10]
1672 Gottfried Leibniz in Paris develops a binary counting system that forms the abstract basis of modern computers.[12][9] France
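As a worked illustration of the binary system Leibniz described (the example number is ours, not his), the decimal number 13 is written in binary as
 $13 = 1 \cdot 2^3 + 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 1101_2$
so any number can be expressed using only the digits 0 and 1, which is what makes the system suitable for machines built from two-state components.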
1703 Gottfried Leibniz foresees how binary arithmetic could be suited to automatic calculation.[12]
1726 Jonathan Swift publishes Gulliver's Travels, which contains a portrayal of the Engine, a contraption situated on the island of Laputa that satirizes Llull's concepts. The Engine is described as "a Project for improving speculative Knowledge by practical and mechanical Operations." According to the depiction, using this device, even an uneducated individual could produce books on various subjects, such as Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with minimal assistance from creativity or education, but with some physical effort and at a reasonable cost.[10]
1763 English statistician Thomas Bayes develops a framework for reasoning about the probability of events. Bayesian inference would become a leading approach in machine learning.[10][13] United Kingdom (Kingdom of Great Britain)
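The core of this framework is what is now written as Bayes' theorem (stated here in modern notation, not Bayes' original formulation), relating the probability of a hypothesis $H$ given evidence $E$ to the reverse conditional probability:
 $P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$
In machine learning terms, a prior belief $P(H)$ is updated by the likelihood of the observed data to yield the posterior $P(H \mid E)$.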
1801 Joseph-Marie Jacquard invents the Jacquard loom, the first programmable machine, with instructions on punched cards.[8]
1854 Self-taught English mathematician, philosopher, and logician George Boole claims that logical reasoning can be systematically carried out, similar to solving a system of equations. He develops a binary algebra that represents some "laws of thought," which is published in his work titled The Laws of Thought (1854).[8]
1863 English novelist Samuel Butler suggests that Darwinian evolution also applies to machines, and speculates that they will one day become conscious and eventually supplant humanity.[14] United Kingdom
1879 German philosopher, logician, and mathematician Gottlob Frege develops modern propositional logic in his work Begriffsschrift. This would later be clarified and expanded by Russell, Tarski, Gödel, Church, and others.[8] Germany
1898 Nikola Tesla showcases the world's first remote-controlled boat at an electrical exhibition in the newly built Madison Square Garden. Tesla describes the vessel as having "a borrowed mind."[10]
1910 Principia Mathematica is published by Bertrand Russell and Alfred North Whitehead. This book would have a significant impact on formal logic. Russell, along with Ludwig Wittgenstein and Rudolf Carnap, would pave the way for a logical analysis of knowledge in philosophy.[8] United Kingdom
1912 Leonardo Torres y Quevedo constructs a chess machine called the "Ajedrecista," which uses electromagnets located beneath the board to play out the endgame of a rook and king against a lone king. This creation is believed to be the earliest example of a computer game.[8]
1914 Leonardo Torres y Quevedo, a Spanish engineer, presents a chess-playing device that can play endgames with just a king and rook against a king without any human involvement.[10]
1921 The term "robot" is first introduced by Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots). The word is derived from "robota," which means "work" in Czech. The play explores the idea of artificial workers who ultimately turn against their human creators.[10]
1925 U.S. electrical engineer Francis P. Houdina demonstrates a radio-controlled car called the "American Wonder" on the streets of New York City. The car is able to travel at speeds of up to 20 mph, turn corners, and stop on command, and it can avoid obstacles such as pedestrians and other cars. The demonstration generates a lot of interest in the concept of driverless cars. However, the technology is not yet advanced enough to make driverless cars practical, and the American Wonder would never be put into production.[10] United States
1929 The first robot ever built in Japan is designed by Makoto Nishimura and named Gakutensoku, meaning "learning from the laws of nature." The robot can alter its facial expression and move its head and hands by means of an air pressure mechanism.[10]
1931 Kurt Gödel publishes his incompleteness theorems, laying foundations that would prove fundamental to theoretical computer science and AI.[9][15]
1936 Konrad Zuse creates a programmable computer called the Z1, with a 64-word memory.[9]
1936–1937 English mathematician Alan Turing proposes the universal Turing machine.[8] United Kingdom
1943 Warren McCulloch, a neurophysiologist at the University of Illinois, and Walter Pitts, a mathematician at the University of Chicago, release a significant publication regarding neural networks and automatons. They suggest that each neuron in the brain functions as a basic digital processor and that the entire brain is a type of computerized machine. This concept would have a significant impact on the field of artificial intelligence and would provide a theoretical foundation for the use of neural networks in modern technology.[1][16]
1943 Concept development Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "cybernetics" in a paper. Wiener would publish a popular book by that name in 1948.[8]
1943 Emil Post demonstrates that production systems are a universal computational mechanism; his work on completeness, inconsistency, and proof theory is also significant. Production systems would later find wide application in artificial intelligence, notably as the basis of rule-based expert systems.[8]
1945 Literature Hungarian American mathematician George Polya publishes his best-selling book on thinking heuristically, How to Solve It. This book introduces the term 'heuristic' into modern thinking and would influence many AI scientists.[8] United States
1945 Literature American engineer Vannevar Bush publishes As We May Think, a prescient vision of the future in which computers assist humans in many activities.[8] United States
1946 ENIAC (Electronic Numerical Integrator and Computer), one of the first general-purpose electronic computers, becomes operational. It is so large that it occupies an entire room, and it weighs 30 tons.[9]
1949 American computer scientist Edmund Berkeley writes a book titled Giant Brains: Or Machines That Think, where he discusses the emergence of news about large machines with the ability to handle vast amounts of information at a great speed and with great skill. According to him, these machines are comparable to a brain made of wires and hardware instead of flesh and nerves. In his opinion, machines are capable of thinking because they are capable of performing logical operations, making conclusions, and decisions based on information.[10] United States
1949 Donald Hebb publishes Organization of Behavior: A Neuropsychological Theory, which proposes a theory of learning based on the ability of synapses in neural networks to strengthen or weaken over time. Hebb demonstrates an updating rule for modifying the connection strength between neurons, which would later be known as Hebbian learning.[10][1]
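In modern notation (a later formalization, not Hebb's own wording), the Hebbian update for the weight $w_{ij}$ of the connection between neurons $i$ and $j$, with activities $x_i$ and $x_j$, is often written as
 $\Delta w_{ij} = \eta\, x_i\, x_j$
where $\eta$ is a learning rate: neurons that fire together strengthen their connection.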
1950 In an article for Scientific American, Claude Shannon argues that a computer could be programmed to play chess at a high level. He points out that the number of possible move sequences in a chess game is so vast that neither a human nor a machine could consider all of them; a program would instead have to combine search with heuristic evaluation to select strong moves. Shannon's article would become a landmark in the history of computer chess, helping to lay the foundation for the first chess-playing programs of the 1950s and 1960s. Today, AI programs play chess at a level far superior to any human player.[10][8][17]
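The infeasibility point can be made concrete with the rough arithmetic Shannon used: assuming about $10^3$ possibilities per pair of moves and a typical game of about 40 move pairs, the game tree contains on the order of
 $(10^3)^{40} = 10^{120}$
variations (the so-called Shannon number), far beyond what exhaustive enumeration could ever cover.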
1950 Concept development Alan Turing publishes his article "Computing Machinery and Intelligence", which introduces the concept of the Turing Test, also known as the imitation game. This game involves a human judge trying to distinguish between a human and a machine in a teletype conversation. Turing's article is the first to raise the question of whether a machine could exhibit intelligence.[3]
1951 Marvin Minsky and Dean Edmunds build SNARC (Stochastic Neural Analog Reinforcement Calculator), the first artificial neural network, using 3000 vacuum tubes to simulate a network of 40 neurons.[10]
1951 The first artificial intelligence programs for the Harvard Mark I device are written.[9] United States
1952 American computer scientist Arthur Samuel develops the first computer checkers-playing program and the first computer program to learn on its own.[10] United States
1952 Alan Hodgkin and Andrew Huxley publish a paper in The Journal of Physiology describing a mathematical model of the electrical activity of neurons. The model, later known as the Hodgkin-Huxley model, is a set of nonlinear differential equations that describe how the membrane potential of a neuron changes over time. It becomes a major breakthrough in neuroscience, helping to lay the foundation for our understanding of how neurons work, and would be used to study a wide range of phenomena, including the generation and propagation of action potentials and the integration of synaptic inputs. Although a simplified model of the neuron, it remains a very powerful tool.[2]
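In its standard textbook form, the model's central current-balance equation reads
 $C_m \frac{dV}{dt} = I_{ext} - \bar{g}_{Na}\, m^3 h\,(V - E_{Na}) - \bar{g}_K\, n^4\,(V - E_K) - \bar{g}_L\,(V - E_L)$
where $V$ is the membrane potential, the $\bar{g}$ terms are maximal conductances, the $E$ terms are reversal potentials, and the gating variables $m$, $h$, and $n$ each obey their own first-order differential equation.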
1953 Arthur Prior, a philosopher at the University of Canterbury, first introduces tense logic, which would be used by languages to express time-dependent data. Tense logic helps in locating statements in the flow of time.[16]
1954 The Georgetown-IBM experiment becomes the first demonstration of machine translation (MT). Conducted by a team of researchers from Georgetown University and IBM, the experiment uses an IBM 701 computer to translate 60 Russian sentences, all related to organic chemistry, into English with an accuracy of 85%. The experiment becomes a major milestone in the history of MT: it shows that machine translation is a real possibility, and it paves the way for the development of more advanced MT systems.[6] United States
1954 Belmont Farley and Wesley Clark of MIT achieve a significant milestone by running the first artificial neural network. Although limited by computer memory to 128 neurons, they are able to train the network to recognize simple patterns. They also discover that damaging up to 10 percent of the neurons does not affect the network's performance, similar to the brain's ability to tolerate limited damage. The network exemplifies the fundamental concepts of connectionism.[16]
1955 On August 31, 1955, a proposal for a "2 month, 10 man study of artificial intelligence" is submitted by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. This proposal introduces the term "artificial intelligence." The resulting workshop, held in July and August 1956, would be widely regarded as the official birth of the field of artificial intelligence.[10]
1955 (December) Herbert Simon and Allen Newell introduce the Logic Theorist, recognized as the first artificial intelligence program. This program achieves a remarkable feat by proving 38 out of the initial 52 theorems found in Whitehead and Russell's Principia Mathematica. Additionally, it discovers new and more elegant proofs for some of these theorems.[10][1]
1955–1956 Allen Newell, J. Clifford Shaw, and Herbert Simon create the Logic Theorist, a groundbreaking program aimed at proving theorems from Principia Mathematica by Whitehead and Russell. The Logic Theorist, as it comes to be known, is capable of producing more elegant proofs than those found in the original books, marking a significant achievement in this field.[16]
1956 The inaugural "Artificial Intelligence" conference takes place at Dartmouth College in Hanover, New Hampshire. The term "artificial intelligence" had been coined in a proposal submitted by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in August 1955, and the field is officially born during the workshop held in July and August 1956. This summer conference, funded by the Rockefeller Foundation, is considered the foundation of the discipline. Remarkably, it is a workshop rather than a conventional conference, with only six participants, including McCarthy and Minsky, remaining consistently engaged in developing the field, primarily through formal logic.[18][10][3][6][1] United States
1956 Newell and Simon present the Logic Theorist program, an early AI system designed to discover proofs in propositional logic, at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), the conference hosted by John McCarthy and Marvin Minsky at which the term "artificial intelligence" is coined. Often considered the first AI program, Logic Theorist aims to simulate human problem-solving skills and is funded by the RAND Corporation; its presentation marks the inception of artificial intelligence as a field.[19][4][5][9]
1957 Frank Rosenblatt creates the Perceptron, one of the first artificial neural networks, which enables pattern recognition through a two-layer computer learning system. The New York Times describes the Perceptron as the embryo of an electronic computer that the Navy expects will eventually be able to walk, talk, see, write, reproduce itself, and be conscious of its existence. The New Yorker characterizes it as an extraordinary machine capable of what amounts to thought.[10]
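The error-correction rule at the heart of Rosenblatt's machine survives essentially unchanged in textbooks. The following is a minimal sketch of that rule in Python (a modern software illustration, not Rosenblatt's original Mark I hardware), shown learning the logical AND function:

 def train_perceptron(samples, epochs=20, lr=0.1):
     """Train a single perceptron with the classic error-correction rule."""
     n = len(samples[0][0])
     w = [0.0] * n   # one weight per input
     b = 0.0         # bias term
     for _ in range(epochs):
         for x, target in samples:
             # Threshold activation: fire if the weighted sum exceeds zero.
             y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
             err = target - y
             # Update weights only when the prediction is wrong.
             w = [wi + lr * err * xi for wi, xi in zip(w, x)]
             b += lr * err
     return w, b

 # Logical AND is linearly separable, so a single perceptron can learn it.
 data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
 w, b = train_perceptron(data)
 print(w, b)  # the learned weights and bias classify all four cases correctly

Minsky and Papert's 1969 critique (see below) turns on exactly this setup: a single such unit can only learn linearly separable functions, so it can learn AND but not XOR.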
1957 Herbert Simon, an economist and sociologist, predicts that artificial intelligence will be able to defeat a human at chess within a decade. AI research would subsequently go through a period of dormancy, and although Simon's prediction ultimately would come true, it would take about 30 years rather than ten.[3]
1957 Allen Newell, Cliff Shaw, and Herbert Simon demonstrate the General Problem Solver (GPS). This program, developed over about a decade, is capable of solving a wide range of puzzles through a trial-and-error approach, showcasing significant problem-solving abilities.[16][8]
1958 American computer scientist John McCarthy develops the Lisp programming language, a functional language well-suited to artificial intelligence applications. Lisp supports recursion and symbolic data structures such as lists, making it a powerful tool for representing the knowledge used in AI systems. Lisp would be used in a wide variety of AI applications, including natural language processing, machine learning, and robotics, and it remains one of the most popular programming languages in AI research.[10][9]
1958 Herbert Gelernter's "geometry machine" becomes the first advanced AI program to prove geometric theorems, marking a significant milestone in artificial intelligence development.[20]
1959 Arthur Samuel coins the term "machine learning" while reporting his work on programming a computer to improve its checkers game-playing skills beyond the capabilities of its human programmer.[10]
1959 Oliver Selfridge publishes Pandemonium: A paradigm for learning, which describes a model in which computers can recognize patterns that have not been pre-specified. This work lays the foundation for pattern recognition and learning in AI.[10]
1959 John McCarthy publishes Programs with Common Sense, in which he introduces the concept of the "Advice Taker," a program designed for problem-solving and common-sense reasoning.[10]
1959 Building on his earlier checkers program, Samuel designs, by the late 1950s, a version that can learn how to play checkers.[19]
1960 American psychologist and computer scientist J. C. R. Licklider describes the human-machine relationship in his influential paper Man-Computer Symbiosis.[9] United States
1961 Unimate, the first industrial robot, starts working on an assembly line in a General Motors plant in New Jersey.[10][21] United States
1961 James Slagle in his PhD dissertation writes in Lisp the first symbolic integration program, SAINT, which solves calculus problems at the college freshman level.[8]
1961 American computer scientist James Robert Slagle develops SAINT (Symbolic Automatic INTegrator), a heuristic program designed to solve symbolic integration problems typically found in freshman calculus.[10] United States
1963 Reed C. Lawlor, a member of the California Bar, authors an article titled What Computers Can Do: Analysis and Prediction of Judicial Decisions. The article explores the potential of computers in analyzing and predicting judicial decisions.[3]
1963 Thomas Evans develops a program called ANALOGY as part of his MIT PhD work. This program demonstrates that computers are capable of solving analogy problems similar to those found on IQ tests.[8]
1963 Ivan Sutherland's MIT dissertation on Sketchpad introduces the concept of interactive graphics into the field of computing.[8] United States
1963 Edward A. Feigenbaum and Julian Feldman publish Computers and Thought, which is the first collection of articles focused on artificial intelligence.[8]
1964 Daniel Bobrow completes his MIT PhD dissertation titled Natural Language Input for a Computer Problem Solving System and creates STUDENT, a computer program for natural language understanding.[10]
1964 The Society for the Study of Artificial Intelligence and the Simulation of Behaviour is founded. It is the oldest AI society in the world. United Kingdom
1964 Danny Bobrow's MIT dissertation demonstrates that computers can understand natural language well enough to correctly solve algebra word problems.[8]
1964 Bert Raphael's MIT dissertation on the SIR program showcases the effectiveness of a logical knowledge representation for question-answering systems.[8]
1965 Herbert Simon predicts in The Shape of Automation for Men and Management that "machines will be capable, within twenty years, of doing any work a man can do."[22][10]
1965 American philosopher Hubert Dreyfus publishes Alchemy and AI, which argues that the mind is not like a computer and that there are limits beyond which artificial intelligence would not progress.[10] United States
1965 I.J. Good writes in "Speculations Concerning the First Ultraintelligent Machine" that the first ultraintelligent machine would be the last invention humanity need ever make, provided that the machine is docile enough to tell us how to keep it under control.[10]
1965 Joseph Weizenbaum creates ELIZA, an interactive program that carries on conversations in English about various subjects. Weizenbaum's objective is to expose the superficial nature of communication between humans and machines, but he would be taken aback by the number of people attributing human-like emotions to the computer program.[10]
1965 Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi begin developing DENDRAL at Stanford University. DENDRAL is the first expert system, designed to automate the decision-making and problem-solving tasks performed by organic chemists. Its primary goal is to explore hypothesis formation and the creation of models for empirical induction in scientific research.[10][3] United States
1965 J. Alan Robinson develops the Resolution Method, a mechanical proof procedure that enables programs to efficiently work with formal logic as a representation language.[8]
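A one-step illustration of the rule (a standard textbook example, not drawn from Robinson's paper): from two clauses containing a complementary pair of literals, resolution derives a new clause with that pair removed:
 $\frac{A \lor B \qquad \lnot B \lor C}{A \lor C}$
Repeated application of this single rule, after formulas are converted to clausal form, is refutation-complete for first-order logic.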
1965 Joseph Weizenbaum, a researcher at MIT, develops ELIZA, an interactive program that carries on conversations in English on various subjects. Initially a popular application at AI centers on the ARPA-net, it would later be given a modified script that imitates the conversational style of a psychotherapist.[8]
1966 Shakey the robot is introduced as the first general-purpose mobile robot capable of reasoning about its own actions. An article in Life magazine in 1970 would refer to Shakey as the "first electronic person," and Marvin Minsky predicts that a machine with the general intelligence of an average human will be achieved within three to eight years.[10]
1966 Joseph Weizenbaum, a German-American computer scientist at MIT, creates the first chatbot named ELIZA. ELIZA uses scripts to simulate conversations with humans, including the role of a psychotherapist. This development highlights the early focus on algorithm development for mathematical problem-solving.[23][1]
1966 The ALPAC report, known for its skepticism about machine translation research and its call for increased focus on basic computational linguistics research, results in a significant reduction in U.S. government funding for this field. This report, along with the 1973 Lighthill report for the British government, contribute to the onset of the AI winter, a period marked by reduced funding and interest in artificial intelligence research.[6][8]
1966 Organization Canadian engineer Charles Rosen founds the Artificial Intelligence Center.[24]
1966 Ross Quillian in his PhD dissertation at Carnegie Institute of Technology demonstrates semantic networks,[8] graphic depictions of knowledge composed of nodes and links that show hierarchical relationships between objects.[25] Semantic networks are an alternative to first-order logic as a form of knowledge representation.[26] United States
1966 The first Machine Intelligence workshop takes place in Edinburgh, marking the beginning of an influential annual series of workshops organized by Donald Michie and others.[8] United Kingdom
1967 The Dendral program, developed by Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, and Georgia Sutherland at Stanford University, successfully demonstrates the interpretation of mass spectra on organic chemical compounds. This achievement marks the first successful knowledge-based program for scientific reasoning.[8]
1967 Joel Moses, during his PhD work at MIT, demonstrates the effectiveness of symbolic reasoning for integration problems through the Macsyma program. This marks a significant milestone as the first successful knowledge-based program in mathematics.[8]
1967 Richard Greenblatt at MIT develops MacHack, a knowledge-based chess-playing program that achieved a class-C rating in tournament play. This achievement marks a notable advancement in computer chess.[8]
1967 Daniel Bobrow's STUDENT program demonstrates the ability to solve high school algebra problems expressed in natural language, showcasing early advancements in natural language understanding by computers.[19]
1968 Stanley Kubrick's film 2001: A Space Odyssey is released, featuring HAL 9000, a sentient computer that raises questions about the sophistication, benefits, and dangers of AI. While not a scientific contribution, the film would play a significant role in popularizing AI themes and ethical questions. Science fiction authors like Philip K. Dick also explore the idea of machines experiencing emotions, contributing to the discourse around AI. [10][3]
1968 American computer scientist Terry Winograd creates SHRDLU, a groundbreaking multimodal artificial intelligence system capable of manipulating and reasoning about a simulated world of blocks based on user instructions. SHRDLU showcases advanced natural language processing capabilities, enabling users to interact with the system in English to give commands and queries regarding the arrangement and manipulation of blocks. This pioneering work demonstrates significant progress in the field of artificial intelligence, particularly in natural language understanding and semantic interpretation, laying the groundwork for future developments in human-computer interaction and AI reasoning systems.[10] United States
1969 Arthur E. Bryson and Yu-Chi Ho describe backpropagation as a multi-stage dynamic system optimization method. While it doesn't gain prominence immediately, this learning algorithm for multi-layer artificial neural networks would later play a significant role in the success of deep learning during the 2000s and 2010s, as computing power advances to enable the training of large neural networks.[10]
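In its modern form, the method amounts to gradient descent on a loss function $L$ via the chain rule: each weight $w$ in the network is adjusted as
 $w \leftarrow w - \eta\, \frac{\partial L}{\partial w}$
with the partial derivatives computed layer by layer, propagating error terms backwards from the output (hence the name).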
1969 Marvin Minsky and Seymour Papert publish Perceptrons: An Introduction to Computational Geometry, which highlights the limitations of simple neural networks called perceptrons. An expanded edition in 1988 would clarify that their 1969 conclusions did not significantly reduce funding for neural network research; rather, they would argue that progress had stalled by the mid-1960s because, despite many experiments with perceptrons, adequate basic theories were lacking. The book emphasizes the need for a deeper understanding of why certain patterns could be recognized by neural networks while others could not.[10]
1969 Conference The first International Joint Conference on Artificial Intelligence (IJCAI) is held in Washington, D.C.[8] United States
1969 The SRI robot Shakey demonstrates the ability to combine locomotion, perception, and problem solving. Equipped with a television camera, a laser range finder, and a bump sensor, Shakey can see its surroundings, measure the distance to objects, and detect obstacles, while its problem-solving system allows it to plan its movements and solve simple problems. Shakey's success is a major breakthrough in robotics: it shows that it is possible to build a robot that interacts with its environment in a meaningful way, paving the way for more advanced mobile robots.[8] United States
1969 Roger Schank, a researcher at Stanford University, introduces the conceptual dependency model for natural language understanding. This model would be further developed for applications in story understanding by Robert Wilensky and Wendy Lehnert during their PhD dissertations at Yale University. Additionally, Janet Kolodner would expand its use in understanding memory.[8] United States
1970 Literature Journal Artificial Intelligence is first published by Elsevier.[27] Netherlands
1970 Waseda University in Japan creates the WABOT-1, the first anthropomorphic robot. This robot features limb control, a vision system, and a conversation system, marking a significant advancement in robotics.[10]
1970 Marvin Minsky expresses optimism to Life Magazine, suggesting that within three to eight years, a machine with the general intelligence of an average human being would be developed. However, despite the progress made in basic principles, there is still a considerable distance to cover before achieving goals like natural language processing, abstract thinking, and self-recognition in AI.[4]
1970 Uruguayan American Jaime Carbonell develops SCHOLAR, an interactive program for computer-aided instruction based on semantic nets as the representation of knowledge.[8] SCHOLAR is perhaps the first intelligent tutoring system.[28] United States
1970 Bill Woods describes Augmented Transition Networks (ATN) as a representation for natural language understanding.[8] The ATN is a formalism for writing parsing grammars that would be much used in artificial intelligence and computational linguistics.[29]
1970 Patrick Winston's PhD work at MIT produces ARCH, a program that learns concepts from examples in the domain of children's building blocks.[8]
1971 Terry Winograd's MIT PhD thesis showcases computers' capacity to comprehend English sentences within a limited context involving children's building blocks. He achieves this by integrating his language comprehension program, SHRDLU, with a robot arm that executes instructions provided in English text.[8]
1972 Expert system Stanford University introduces MYCIN, one of the early expert systems designed for diagnosing severe infections, identifying bacteria responsible, and recommending suitable antibiotics. MYCIN represents a pioneering application of artificial intelligence in the medical field, serving as an expert system that utilized rules, formulas, and a knowledge database to assist in diagnosing and treating illnesses.[4][3][23]
1972 The WABOT-1 becomes the first full-scale humanoid intelligent robot built in the world. It is developed by a team of researchers at Waseda University in Tokyo, Japan, led by Ichiro Kato, and is able to walk, talk, and interact with people in a limited way. A major breakthrough in robotics, it shows that it is possible to build a robot that can interact with humans in a meaningful way. The research done on the WABOT-1 would help pave the way for more advanced humanoid robots, such as Honda's ASIMO.[1] Japan
1972 Literature Hubert Dreyfus publishes What Computers Can't Do.[30]
1972 French computer scientist Alain Colmerauer develops Prolog, a programming language commonly used for artificial intelligence and symbolic reasoning.[8]
1972 Work commences on MYCIN, an expert system designed to diagnose blood infections. Developed at Stanford University, MYCIN aims to diagnose patients by analyzing their reported symptoms and medical test results.[16]
1972 Alan Kay, Dan Ingalls, and Adele Goldberg at Xerox PARC introduce the Smalltalk programming language. Smalltalk is a groundbreaking, purely object-oriented language primarily created for teaching programming to young individuals. It emphasizes the message-passing paradigm, marking a significant development in object-oriented programming and icon-oriented interfaces.[8][31]
1973 James Lighthill is commissioned by the head of the British Science Research Council, Brian Flowers, to evaluate requests for support in AI research. His report, "Artificial Intelligence: A General Survey," published in 1973, concludes that the discoveries made in the field of AI research had not lived up to the earlier promises of major impact. This pessimistic prognosis by Lighthill would result in reduced government funding for AI research, and his report would be commonly referred to as the "Lighthill report."[4][6]
1973 Alain Colmerauer at the University of Aix-Marseille, France, conceives the logic programming language PROLOG (Programmation en Logique), which is first implemented that same year. PROLOG would be further developed by Robert Kowalski, a logician at the University of Edinburgh. The language employs a powerful theorem-proving technique called resolution, invented in 1963 by British logician Alan Robinson. PROLOG is capable of determining the logical validity of statements, making it widely used in AI research, particularly in Europe and Japan.[16]
1973 DARPA initiates the development of protocols known as TCP/IP.[9]
1974 Conference European Conference on Artificial Intelligence[32]
1974 Ted Shortliffe's PhD dissertation at Stanford University showcases the effectiveness of rule-based systems in the realm of medical diagnosis and treatment, specifically focusing on MYCIN. This work is often regarded as a pioneering example of an expert system in the field of artificial intelligence.[8]
1974 Earl Sacerdoti makes significant advancements in artificial intelligence by developing ABSTRIPS, one of the earliest planning programs. His work also introduces techniques for hierarchical planning, which would have a substantial impact on AI planning systems.[8]
1975 Marvin Minsky publishes a highly influential article on Frames as a knowledge representation. This work brings together various ideas related to schemas and semantic links, contributing significantly to the field of artificial intelligence and knowledge representation.[8]
1975 The Meta-Dendral learning program achieves a significant milestone by generating new findings in chemistry, specifically in the realm of mass spectrometry. These results mark the first instance of scientific discoveries made by a computer that are published in a peer-reviewed journal.[8]
1976 Computer scientist Raj Reddy publishes a seminal paper titled Speech Recognition by Machine: A Review in the Proceedings of the IEEE. This paper provides a comprehensive overview of the early developments in Natural Language Processing (NLP) and speech recognition by machines.[10]
1976 AI research faces challenges as processing power fails to match the promising theoretical advancements made by computer scientists. Roboticist Hans Moravec asserts that computers are "still millions of times too weak to exhibit intelligence," highlighting the limitations in computational capabilities during that era.[33]
1976 Doug Lenat's AM program, which is the subject of his Stanford PhD dissertation, showcases the discovery model, involving a loosely-guided search for intriguing conjectures.[8]
1976 Randall Davis demonstrates the significance of meta-level reasoning through his PhD dissertation at Stanford University.[8]
Mid-1970s American computer scientist Barbara J. Grosz at SRI sets limits to traditional AI approaches in discourse modeling. Her subsequent work, along with Bonnie Webber and Candace Sidner, introduces the concept of "centering," which would become important in determining discourse focus and managing anaphoric references in Natural Language Processing (NLP).[8] United States
Mid-1970s British neuroscientist David Marr and his colleagues at MIT propose a theory of visual perception built around the concept of the "primal sketch," a low-level representation of the visual world based on the edges and textures of surfaces. It is the first step in Marr's hierarchical theory of visual perception, which describes how the brain processes visual information.[8]
1977 iLabs[34] Italy
1978 Expert system The XCON (eXpert CONfigurer) program, which is a rule-based expert system, is developed at Carnegie Mellon University. XCON aims to assist in the ordering of DEC's VAX computers by automatically selecting the components based on the customer's specific requirements. This marks an important milestone in the development of expert systems, showcasing their ability to automate complex decision-making processes.[10]
1978 Japan's Ministry of International Trade and Industry (MITI) initiates a study to explore the future of computers. Three years later, MITI would embark on a project to develop fifth-generation computers, aiming to achieve a significant advancement in computer technology. These new computers are intended to surpass existing technology, relying on multiprocessor machines specialized in logic programming instead of standard microprocessors. The goal is to position Japan as a technological leader in information processing and artificial intelligence, betting on high-power logic machines to catalyze these advancements.[6]
1978 Herbert Simon is awarded the Nobel Memorial Prize in Economics for his pioneering work on bounded rationality, a significant contribution to the field of artificial intelligence.[9][8]
1978 Tom Mitchell, based at Stanford, introduces the concept of Version Spaces, a framework for describing the search space in concept formation programs.[8]
1978 The MOLGEN program, developed by Mark Stefik and Peter Friedland at Stanford, showcases the utility of an object-oriented knowledge representation for planning gene-cloning experiments.[8]
1979 The Stanford Cart achieves the significant milestone of autonomously navigating a room filled with chairs, completing the task in approximately five hours. This accomplishment marks one of the early instances of an autonomous vehicle demonstrating its capabilities.[10]
1979 The Association for the Advancement of Artificial Intelligence is founded.[35] United States
1979 The MYCIN program, initially developed as Ted Shortliffe's Ph.D. dissertation at Stanford, is demonstrated to perform at the level of experts. Another significant development is Bill VanMelle's Ph.D. dissertation at Stanford, which showcases the generality of MYCIN's knowledge representation and reasoning style in his EMYCIN program. EMYCIN serves as a model for many commercial expert system "shells," marking a milestone in the field of artificial intelligence and expert systems.[8]
1979 Jack Myers and Harry Pople at the University of Pittsburgh develop INTERNIST, a knowledge-based medical diagnosis program that leveraged Dr. Myers' clinical expertise. This program represents a significant advancement in the application of artificial intelligence to the field of medical diagnosis.[8]
1979 Cordell Green, David Barstow, Elaine Kant, and their team at Stanford demonstrate the CHI system, which is designed for automatic programming. This system marks a notable development in the field of artificial intelligence and its applications in automating programming tasks.[8]
1979 Drew McDermott and Jon Doyle at MIT, along with John McCarthy at Stanford, begin publishing research on non-monotonic logics and formal aspects of truth maintenance. Their work in this area would contribute to advancing the understanding and development of logic-based systems in artificial intelligence.[8]
Late 1970s Stanford's SUMEX-AIM resource, led by Ed Feigenbaum and Joshua Lederberg, showcases the potential of the ARPAnet for facilitating scientific collaboration, highlighting the impact of computer networks on research and information sharing in the field of artificial intelligence and beyond.[8]
1980 Computer scientist Edward Feigenbaum plays a pivotal role in rekindling AI research by championing the development of "expert systems." These systems learn by consulting experts in a particular domain to gather responses for various situations. Once these expert responses are collected and compiled for a wide range of scenarios in that domain, the expert system can offer specialized guidance to non-experts in that field, marking a significant advancement in AI research.[33]
1980 Expert system After the AI winter period, AI experiences a resurgence with the introduction of "Expert Systems." These systems are designed to replicate the decision-making capabilities of human experts, signifying a significant revival in the field of artificial intelligence.[1] AI research experiences a resurgence with increased funding and the development of algorithmic tools, including deep learning techniques, which allow computers to learn from user experiences.[36]
1980 Waseda University in Japan develops Wabot-2, a humanoid musician robot. This robot has the ability to interact with humans, read musical scores, and play moderately complex tunes on an electronic organ.[10]
1980 Expert system Digital Equipment Corporation (DEC) implements an expert system called XCON to assist its sales team in placing customer orders. DEC, which sells a wide range of computer components, adopts XCON because its sales force lacks in-depth knowledge of the products being sold. The move helps streamline the ordering process and improve customer service.[2]
1980 The American Association for Artificial Intelligence (AAAI) holds its first national conference at Stanford University.[1][8]
1980 Lee Erman, Rick Hayes-Roth, Victor Lesser, and Raj Reddy publish the first description of the blackboard model, which serves as the framework for the HEARSAY-II speech understanding system.[8]
1980 The term "strong AI" is introduced by philosopher John Searle of the University of California at Berkeley to categorize a specific area of AI research.[16]
1981 An expert system called SID (Synthesis of Integral Design) is able to design 93% of the VAX 9000 CPU logic gates. This system, consisting of 1,000 hand-written rules, completes the CPU design in just 3 hours, surpassing human experts in various aspects. For instance, it produces a faster 64-bit adder than the manually designed one and achieves a significantly lower bug rate, reducing it from approximately 1 bug per 200 gates in human-designed systems to about 1 bug per 20,000 gates in the final output of the SID system.[37]
1981 Danny Hillis designs the Connection Machine, a massively parallel architecture that significantly boosts the capabilities of artificial intelligence and computing in general. This development ultimately leads to the founding of Thinking Machines, Inc.[8]
1981 Following a study it commissions on the future of computers, Japan's Ministry of International Trade and Industry (MITI) allocates a budget of $850 million for the Fifth Generation Computer project. Rather than relying on standard microprocessors, the planned machines are to be multiprocessor systems specialized in logic programming, capable of engaging in conversations, translating languages, interpreting images, and reasoning like human beings. The project aims to propel Japan to the forefront of technology by catalyzing advancements in information processing and realizing artificial intelligence capabilities.[10][6] Japan
1982 Organization The European Association for Artificial Intelligence (originally the European Coordinating Committee for Artificial Intelligence) is founded.
1983 Organization The Turing Institute is founded in Glasgow, Scotland, as an artificial intelligence laboratory. The company would undertake basic and applied research, working directly with large companies across Europe, the United States, and Japan, developing software as well as providing training, consultancy, and information services.[38] From 1989 onwards, the company would face financial difficulties, and it would close in 1994.[39] United Kingdom
1983 John Laird and Paul Rosenbloom, under the guidance of Allen Newell, complete their dissertations at Carnegie Mellon University on the SOAR project.[8]
1983 James Allen introduces what would later be called Allen's interval algebra, the first widely used formalization of temporal events.[8][40][41] Also called Allen's interval calculus, it is among the best-known qualitative temporal calculi in artificial intelligence.[42]
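For illustration, the following sketch (not from the cited sources; the interval values are invented) classifies the relation between two intervals into one of Allen's thirteen base relations by comparing their endpoints.

```python
# Illustrative sketch of Allen's interval algebra: classify the relation
# between intervals (s1, e1) and (s2, e2) into one of the 13 base
# relations, named after Allen's 1983 formulation.

def allen_relation(s1, e1, s2, e2):
    assert s1 < e1 and s2 < e2, "intervals must have positive length"
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    return "overlaps" if s1 < s2 else "overlapped-by"

print(allen_relation(1, 3, 3, 5))  # meets
print(allen_relation(2, 4, 1, 5))  # during
```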
1984 The film "Electric Dreams" is released, depicting a love triangle between a man, a woman, and a personal computer.[10]
1984 At the annual meeting of AAAI (American Association for Artificial Intelligence), Roger Schank and Marvin Minsky warn of the impending "AI Winter," predicting a downturn in AI investment and research funding similar to the reduction of the mid-1970s. The prediction would materialize three years later, when AI research again suffers a decline in support and interest.[10]
1984 The CYC project is initiated as a significant endeavor in symbolic AI. This project is launched under the sponsorship of the Microelectronics and Computer Technology Corporation, a consortium consisting of computer, semiconductor, and electronics manufacturers.[16]
1985 Harold Cohen demonstrates the autonomous drawing program called Aaron at the AAAI National Conference. Aaron, which was developed over more than a decade, showcases significant advancements in autonomous drawing capabilities.[8]
1986 A team of researchers at the Bundeswehr University Munich, Germany, led by Ernst Dickmanns, builds the first driverless car, a Mercedes-Benz van equipped with cameras and sensors that allow it to navigate empty streets at speeds of up to 55 mph. The car can follow road markings, avoid obstacles, and even change lanes. This is a major milestone, showing that a car can drive itself safely on public roads, and the research would help pave the way for today's self-driving cars.[10]
1986 Literature Hubert Dreyfus publishes Mind over Machine.
1986 A notable connectionist experiment at the University of California, San Diego, led by David Rumelhart and James McClelland, involves training a neural network comprising 920 artificial neurons arranged in two layers (460 neurons each) to generate past tenses for English verbs. The root forms of verbs, like "come," "look," and "sleep," are fed into the input layer. A supervisory computer program observes the output layer's response and the desired response (e.g., "came") and adjusts network connections accordingly. After approximately 400 verb presentations, repeated 200 times, the network can correctly generate past tenses for both familiar and unfamiliar verbs.[16] United States
1986 (October) Organization The Centre for Artificial Intelligence and Robotics is founded in Bangalore as a laboratory of the Defence Research & Development Organization.[43] India
1986 (October) David Rumelhart, Geoffrey Hinton, and Ronald Williams publish a groundbreaking paper titled "Learning representations by back-propagating errors." This paper introduces a novel learning procedure known as back-propagation, designed for networks of neuron-like units. Back-propagation would later become a fundamental technique in training artificial neural networks, contributing significantly to the success of deep learning in subsequent decades.[10]
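To make the procedure concrete, here is a minimal sketch of back-propagation on a toy two-layer network learning XOR. The task, network size, and learning rate are illustrative choices, not details from the paper.

```python
# Back-propagation sketch: a 2-4-1 sigmoid network learns XOR by
# propagating the output error backwards and descending the gradient.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backward pass: output layer
    d_h = (d_out @ W2.T) * h * (1 - h)      # error propagated to hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```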
1986 Terrence J. Sejnowski and Charles Rosenberg introduce the 'NETtalk' program, a significant achievement in the development of artificial intelligence. 'NETtalk' performs speech synthesis, allowing a computer to speak for the first time. It learns by processing sample sentences and phoneme chains, can read words aloud with correct pronunciation, and can generalize to words it has never encountered before. The program is an early example of an artificial neural network, which learns from extensive data in a manner loosely analogous to the human brain.[23]
1986 Conference International Conference on User Modeling, Adaptation, and Personalization
1987 A video titled "Knowledge Navigator" is presented during Apple CEO John Sculley's keynote speech at Educom. This video depicts a futuristic vision in which "knowledge applications would be accessed by smart agents working over networks connected to massive amounts of digitized information."[10]
1987 Literature AI & Society
1987 Literature Applied Artificial Intelligence
1987 Literature International Journal of Pattern Recognition and Artificial Intelligence
1987 Marvin Minsky publishes "The Society of Mind," a theoretical work that describes the mind as a collection of cooperating agents.[8]
1988 Judea Pearl publishes Probabilistic Reasoning in Intelligent Systems, laying the foundation for processing information under uncertainty. His pioneering work includes the invention of Bayesian networks and algorithms for inference in these models, which revolutionized artificial intelligence and found applications in various engineering and scientific fields. He would later be awarded the Turing Award for his contributions.[10]
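As a toy illustration of the kind of model Pearl formalized (with invented probabilities, not an example from the book), the sketch below queries a three-node Bayesian network — Rain influences Sprinkler, and both influence WetGrass — by brute-force enumeration.

```python
# Tiny Bayesian network queried by enumeration: P(Rain | WetGrass).
P_r = {True: 0.2, False: 0.8}                    # P(Rain)
P_s = {True: 0.01, False: 0.4}                   # P(Sprinkler=T | Rain)
P_w = {(True, True): 0.99, (True, False): 0.8,   # P(Wet=T | Sprinkler, Rain)
       (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    ps = P_s[r] if s else 1 - P_s[r]
    pw = P_w[(s, r)] if w else 1 - P_w[(s, r)]
    return P_r[r] * ps * pw

# condition on WetGrass=True, summing out the hidden variable Sprinkler
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 3))  # posterior probability that it rained
```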
1988 Organization The Dalle Molle Institute for Artificial Intelligence Research is founded. Switzerland
1988 Rollo Carpenter develops Jabberwacky, a chatbot aimed at simulating natural human chat in an entertaining and humorous manner. This marks an early attempt at using human interaction to create artificial intelligence.[10]
1988 Members of the IBM T.J. Watson Research Center publish a paper titled A statistical approach to language translation. This marks a shift from rule-based to probabilistic methods of machine translation. It reflects a broader transition towards "machine learning" based on statistical analysis of known examples rather than a deep understanding of the task. IBM's project Candide, which successfully translates between English and French, relies on a massive dataset of 2.2 million pairs of sentences, primarily from the bilingual proceedings of the Canadian parliament.[10]
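The statistical idea can be sketched in a few lines: estimate word-translation probabilities from nothing but aligned sentence pairs. The toy corpus and the single, simplified estimation step below (in the spirit of IBM's later alignment models) are illustrative, not the Candide system itself.

```python
# Crude word-translation estimates from a toy parallel corpus: each
# foreign word's count is spread uniformly over the English words of
# its sentence, then normalized per English word.
from collections import defaultdict

pairs = [("the house", "la maison"),
         ("the book", "le livre"),
         ("a house", "une maison")]

counts = defaultdict(float)
totals = defaultdict(float)
for en, fr in pairs:
    es, fs = en.split(), fr.split()
    for f in fs:
        for e in es:
            counts[(e, f)] += 1 / len(es)   # uniform alignment assumption
            totals[e] += 1 / len(es)

t = {(e, f): c / totals[e] for (e, f), c in counts.items()}
print(round(t[("house", "maison")], 2))  # 0.5, already above t(house -> la)
```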
1988 Organization The German Research Centre for Artificial Intelligence (DFKI) is founded. Germany
1989 Marvin Minsky and Seymour Papert publish an expanded edition of their 1969 book Perceptrons. In a prologue added to the expanded edition, they point out that progress in the field of artificial intelligence has been slow, in part because researchers repeat past mistakes out of unfamiliarity with the field's history.[10]
1989 Yann LeCun and a team of researchers at AT&T Bell Labs achieve success by applying a backpropagation algorithm to a multi-layer neural network. This network is used to recognize handwritten ZIP codes. Despite hardware limitations at the time, the training of the network takes approximately three days, marking a significant improvement compared to earlier efforts.[10] United States
1989 Literature Journal of Experimental and Theoretical Artificial Intelligence
1989 (November 9) Literature The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics
1989 Dean Pomerleau at Carnegie Mellon University develops ALVINN (An Autonomous Land Vehicle in a Neural Network). This system would evolve into the technology that enables a car to be driven across the United States under computer control, with human intervention only required for about 50 of the 2850 miles of the journey.[8]
1990 Rodney Brooks publishes Elephants Don't Play Chess, advocating a novel approach to AI: construct intelligent systems, particularly robots, from the ground up, letting them learn through continuous physical interaction with their environment. This approach emphasizes the real world as a model for intelligence and highlights the need for effective and frequent sensory perception.[10]
1991 Organization The European Neural Network Society is founded.[44][45]
1991 American philanthropist Hugh Loebner starts the annual Loebner Prize competition, promising a $100,000 payout to the first computer to pass the Turing test and awarding $2,000 each year to the best effort. However, no AI program would come close to passing an undiluted Turing test.[16]
1992 Literature International Journal on Artificial Intelligence Tools[46]
1993 Literature Journal of Artificial Intelligence Research[47]
1993 Vernor Vinge publishes "The Coming Technological Singularity," in which he forecasts that within thirty years, humanity would possess the technology to generate superhuman intelligence. He further anticipates that shortly after achieving this, the era of human dominance would come to an end.[10]
1994 (September) Conference The first Artificial Evolution Conference is held in Toulouse, France. It is the first international conference dedicated to the field of artificial evolution.[48] The conference is organized by the French Society for Artificial Evolution (Société Française d'Évolution Artificielle) and the European Neural Networks Society. Its main topics are genetic algorithms, evolutionary programming, and evolutionary strategies. France
1995 Richard Wallace develops the chatbot A.L.I.C.E (Artificial Linguistic Internet Computer Entity), inspired by Joseph Weizenbaum's ELIZA program. A.L.I.C.E incorporates natural language sample data collected on an unprecedented scale, made possible by the advent of the World Wide Web.[10]
1995 A computer program called Chinook defeats the world checkers champion, Marion Tinsley, in a series of matches. Chinook uses a brute-force approach to checkers, evaluating all possible moves and selecting the best one. The approach is computationally expensive but ultimately successful.[49]
1995 AltaVista becomes the first search engine to incorporate natural language processing into its functionality, enabling users to search for information using more human-like language and queries.[20]
1996 The EQP theorem prover at Argonne National Labs successfully proves the Robbins Conjecture in mathematics.[8]
1997 Sepp Hochreiter and Jürgen Schmidhuber propose Long Short-Term Memory (LSTM), a type of recurrent neural network that is widely used today in applications such as handwriting recognition and speech recognition.[10]
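A minimal sketch of a single LSTM cell step is shown below, in modern notation that includes the forget gate added to the design shortly after the 1997 paper; the sizes and random weights are illustrative.

```python
# One LSTM cell step: gates decide what to write to, keep in, and read
# from the cell state c, which is what lets the network carry information
# across long time gaps.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
W = rng.normal(0, 0.1, (4 * n_hid, n_in + n_hid))  # stacked gates: i, f, o, g
b = np.zeros(4 * n_hid)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + b
    i = sigmoid(z[:n_hid])                 # input gate
    f = sigmoid(z[n_hid:2 * n_hid])        # forget gate
    o = sigmoid(z[2 * n_hid:3 * n_hid])    # output gate
    g = np.tanh(z[3 * n_hid:])             # candidate values
    c = f * c + i * g                      # update the persistent cell state
    h = o * np.tanh(c)                     # hidden state for the next step
    return h, c

h = c = np.zeros(n_hid)
for x in rng.normal(0, 1, (5, n_in)):      # run over a 5-step input sequence
    h, c = lstm_step(x, h, c)
print(h.round(3))
```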
1997 IBM's Deep Blue chess computer defeats the reigning world chess champion, Garry Kasparov, in a six-game match. This is a major milestone in the field of artificial intelligence, as it shows that machines can now compete with humans at the highest level of chess.[49][4][10]
1997 Speech recognition software developed by Dragon Systems is implemented on Windows, marking significant progress in the field of spoken language interpretation.[4]
1998 Furby, created by Dave Hampton and Caleb Chung and released by Tiger Electronics, becomes the first widely successful domestic or pet robot, marking an early introduction of AI technology into the home.[4][14]
1998 Literature Autonomous Agents and Multi-Agent Systems[50]
1998 Yann LeCun, Yoshua Bengio, and other researchers publish papers on the application of neural networks to handwriting recognition and the optimization of backpropagation. These contributions are instrumental in advancing the field of neural network-based handwriting recognition.[4]
1998 Amazon introduces "collaborative filtering" to provide recommendations for millions of customers, a significant advancement in personalized recommendation systems.[51]
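The underlying idea can be sketched with an invented rating matrix: score items by how similar their rating columns are to an item the user already liked. This is an illustration of item-to-item collaborative filtering, not Amazon's production algorithm.

```python
# Item-to-item collaborative filtering on a toy user-item rating matrix
# (rows = users, columns = items; 0 means "not rated").
import numpy as np

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# similarity of every item to item 0; a recommender would suggest the
# most similar item the user has not rated yet
sims = [cosine(R[:, 0], R[:, j]) for j in range(R.shape[1])]
print([round(s, 2) for s in sims])  # item 1 is co-rated most like item 0
```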
Late 1990s Web crawlers and other AI-based information extraction programs become essential tools for the widespread use of the World Wide Web.[14]
1990s MIT's AI Lab demonstrates an Intelligent Room and Emotional Agents, showcasing advancements in intelligent environments and emotionally responsive agents. This period also marks the initiation of work on the Oxygen Architecture, which aims to connect mobile and stationary computers in an adaptive network, contributing to the development of pervasive computing.[7]
2000 MIT researcher Cynthia Breazeal develops Kismet, a robot capable of recognizing and simulating emotions, marking a significant advancement in emotional AI and human-robot interaction.[4][7]
2000 Honda's ASIMO robot, a humanoid robot endowed with artificial intelligence, achieves the capability to walk at a human-like speed and serve trays to customers in a restaurant setting, demonstrating significant progress in robotics and AI technology.[4]
2000 Conference Mexican International Conference on Artificial Intelligence[52] Mexico
2001 Organization The Artificial General Intelligence Research Institute is founded.[53] United States
2002 AI technology enters people's homes with the introduction of Roomba, an autonomous robotic vacuum cleaner. This marks a significant development in the application of AI to consumer products for everyday use.[1]
2002 Conference RuleML Symposium[54]
2003 Geoffrey Hinton, Yoshua Bengio, and Yann LeCun initiate a research program aimed at advancing neural networks. Experiments conducted in collaboration with Microsoft, Google, and IBM, with support from the Toronto laboratory led by Hinton, demonstrate significant improvements in speech recognition, effectively reducing error rates by half. Similar progress is achieved by Hinton's team in the field of image recognition. This marks a significant milestone in the development of neural network-based AI technologies.[3]
2003 Organization The MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) is formed.[55] United States
2004 The first DARPA Grand Challenge takes place, featuring a prize competition for autonomous vehicles. None of the entrants is able to complete the challenging 150-mile route in the Mojave Desert.[4]
2004 Conference International Conference on Computational Intelligence Methods for Bioinformatics and Biostatistics[56] Italy
2006 Oren Etzioni, Michele Banko, and Michael Cafarella introduce the term "machine reading," defining it as the autonomous understanding of text without the need for human supervision.[4]
2006 Geoffrey Hinton publishes a paper titled "Learning Multiple Layers of Representation," which summarizes ideas related to multilayer neural networks with top-down connections. This work represents a new approach to deep learning, focusing on training networks to generate sensory data rather than just classifying it.[4]
2006 AI begins to make its presence felt in the business world, with companies like Facebook, Twitter, and Netflix starting to utilize AI technologies for various purposes.[1]
2006 The first unassisted robotic surgery, conducted by an AI "doctor", is performed on a 34-year-old male to correct heart arrhythmia. The results are rated as better than those of an above-average human surgeon. The machine has a database of 10,000 similar operations and so, in the words of its designers, is "more than qualified to operate on any patient".[57][58]
2006 Conference AI@50, also known as the Dartmouth Artificial Intelligence Conference: The Next Fifty Years, takes place, marking the 50th anniversary of the Dartmouth workshop that initiated AI history. It features five of the original ten attendees, including Marvin Minsky and John McCarthy. The conference, sponsored by Dartmouth College, General Electric, and the Frederick Whittemore Foundation, receives a $200,000 grant from DARPA. Its goals include assessing AI's progress, identifying future challenges, and relating these to other fields. Conference topics range from emotion in machines to machine learning, vision, reasoning, and ethics.[59] United States
2007 Fei-Fei Li and her team at Princeton University initiate the creation of ImageNet, a substantial database of annotated images intended to support research in visual object recognition software.[4] United States
2008 Eliezer Yudkowsky calls for the creation of “friendly AI” to mitigate existential risk from advanced artificial intelligence. Yudkowsky explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[60] United States
2008 Conference Conference on Artificial General Intelligence[61]
2009 Rajat Raina, Anand Madhavan, and Andrew Ng publish Large-scale Deep Unsupervised Learning using Graphics Processors. They assert that modern graphics processors have significantly greater computational power than multicore CPUs and the potential to revolutionize the use of deep unsupervised learning methods.[4]
2009 Google initiates the development of a driverless car project, which is kept confidential. By 2014, it would achieve a significant milestone by becoming the first to pass a self-driving test in the U.S. state of Nevada.[4]
2009 Computer scientists at Northwestern University's Intelligent Information Laboratory develop Stats Monkey, a program capable of autonomously generating sports news articles without any human involvement.[4]
2010 The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is launched as an annual competition focused on AI object recognition.[4]
2010 DeepMind is established in the United Kingdom, focusing on developing cutting-edge AI technologies and advancing the field through research and innovation.[62] Known for significant contributions such as AlphaGo, an AI program that would defeat a world champion Go player, DeepMind would position itself at the forefront of AI research. It would be acquired by Google in 2014.[63][64] United Kingdom
2011 A convolutional neural network (CNN) wins the German Traffic Sign Recognition competition with an accuracy rate of 99.46%, surpassing the human participants, who score 99.22%.[4]
2011 IBM's question-answering system, Watson, achieves a significant milestone by winning the quiz show "Jeopardy!" This victory occurs when Watson defeats the reigning champions, Brad Rutter and Ken Jennings.[5][4]
2011 A talking computer chatbot named Eugene Goostman gains attention for deceiving judges into believing it to be a genuine human during a Turing test.[5]
2011 Researchers at IDSIA in Switzerland report a 0.27% error rate in handwriting recognition using convolutional neural networks, a significant improvement over the 0.35%–0.40% error rates of previous years.[4]
2011 Apple's Siri is first released as part of the iPhone 4S. It is a major breakthrough in the field of artificial intelligence, as it is the first widely available voice-activated personal assistant.[23]
2012 (June) Jeff Dean and Andrew Ng conduct an experiment where they expose a massive neural network to 10 million unlabeled images randomly sourced from YouTube videos. Surprisingly, during this experiment, one of the artificial neurons within the network learns to respond strongly to images of cats, leading to an unexpected and amusing result.[4]
2012 (July 13) Literature The Machine Question: Critical Perspectives on AI, Robots, and Ethics
2012 Researchers at the University of Toronto develop a convolutional neural network that achieves a remarkable error rate of only 16% in the ImageNet Large Scale Visual Recognition Challenge. This marks a significant improvement compared to the previous year's best entry, which has an error rate of 25%.[10] Canada
2012 The security market is flooded by computer vision start-ups.[65]
2013 Boston Dynamics unveils Atlas, an advanced humanoid robot designed for various search-and-rescue tasks. The robot is developed for the DARPA Robotics Challenge, a competition to develop robots that can perform tasks in disaster zones.[33][66] United States
2013 Automated Insights publishes 300 million pieces of content, which Mashable reports is greater than the output of all major media companies combined. In 2014, the company's software would generate one billion stories. In 2016, Automated Insights would publish over 1.5 billion pieces of content.[51]
2014 Google continues developing its self-driving car in secret under the name "Project Chauffeur"; the project would later be spun out as "Waymo".[4]
2014 Organization The Allen Institute for AI (AI2) is founded by Microsoft co-founder Paul Allen.[67][68] United States
2014 A research team from the Chinese University of Hong Kong (CUHK) develops a facial recognition system that achieves a human-level accuracy of 97.53%. The system can identify faces from a variety of angles and lighting conditions, even faces that have been obscured by sunglasses or a mask.[49] China (Hong Kong)
2014 Microsoft introduces Cortana, a virtual assistant software. Cortana is first released for Windows Phone 8.1, and it is later released for Windows 10, Windows 10 Mobile, Xbox One, and Android.[23] United States
2014 Organization The Future of Life Institute is founded.[69] United States
2014 Organization The Kiev Laboratory for Artificial Intelligence is founded.[70] Ukraine
2014 Ian Goodfellow introduces Generative Adversarial Networks (GANs), a revolutionary concept in artificial intelligence that involves two neural networks, a generator and a discriminator, engaged in a competitive learning process to generate realistic data.[71]
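The adversarial game can be sketched with scalar models: a linear generator tries to mimic samples from a fixed Gaussian while a logistic discriminator learns to tell real from fake. Everything below (data distribution, learning rate, model sizes) is an illustrative toy, not the deep networks of the original paper.

```python
# Toy GAN on 1-D data: G(z) = a*z + b vs. logistic D(x) = sigmoid(w*x + c).
# D ascends log D(real) + log(1 - D(fake)); G ascends log D(fake)
# (the non-saturating variant suggested in the original paper).
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 0.0            # generator parameters
w, c = 0.1, 0.0            # discriminator parameters
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.01

for _ in range(20000):
    x_real = rng.normal(4, 0.5)            # one real sample
    x_fake = a * rng.normal() + b          # one generated sample
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)   # discriminator step
    c += lr * ((1 - d_real) - d_fake)
    z = rng.normal()                       # generator step on a fresh sample
    x_fake = a * z + b
    g = (1 - sigmoid(w * x_fake + c)) * w
    a += lr * g * z
    b += lr * g

print(round(b, 1))  # the generator's mean typically drifts toward 4
```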
2014 The rise of programmatic ad buying popularizes artificial intelligence-based ad purchasing. This innovation eliminates the need for time-consuming manual tasks such as market research, budgeting, insertion orders, and complex analytics tracking, making the ad buying process more efficient and cost-effective.[51]
2015 Amazon introduces the Alexa service. The first device to use Alexa is the Amazon Echo, a smart speaker released in June. Alexa is a cloud-based voice service that can control smart home devices, play music, deliver news and weather updates, set alarms, and more. It would go on to become one of the most popular voice assistants in the world, with over 300 million active users.[23]
2015 (March) The diffusion algorithm that would later serve as the foundation for text-to-image tools is introduced by researchers from Stanford and Berkeley.
2015 The Chinese Congress on Artificial Intelligence 2015 takes place in Beijing, setting the direction for AI-related industries in China.[65] China
2015 Open Letter on Artificial Intelligence[72]
2015 (September 22) Literature The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World
2015 Google launches RankBrain, an advanced artificial intelligence algorithm. RankBrain would revolutionize search query interpretation by effectively understanding the user's search intent, resulting in more relevant search results.[51]
2016 (March) Google DeepMind's AlphaGo defeats Go champion Lee Sedol. This is a major milestone in the development of artificial intelligence, as Go is a much more complex game than chess.[4]
2016 (March) Microsoft releases the Tay chatbot, but quickly takes it offline after it begins producing offensive output, including Holocaust denial.
2016 A team of researchers from Google AI and the University of Washington develops a machine learning model that can transcribe telephone calls with 97% accuracy, a significant improvement over previous methods, which have an accuracy of around 85%.[49]
2016 A team of researchers from the University of Oxford develops a machine learning model that can lipread with 94% accuracy, a significant improvement over previous methods, which have an accuracy of around 80%.[49]
2016 Organization The Center for Human-Compatible Artificial Intelligence is launched at the University of California, Berkeley.[73] United States
2016 (February 16) Organization Active Intelligence Pte Ltd is incorporated in Singapore.[74] Singapore
2016 (September 28) Partnership on AI (full name Partnership on Artificial Intelligence to Benefit People and Society) is established. It is a non-profit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society.[75][76] Its founding members are Amazon, Facebook, Google, DeepMind, Microsoft, and IBM, with interim co-chairs Eric Horvitz of Microsoft Research and Mustafa Suleyman of DeepMind.[77][78] Apple would join the consortium as a founding member in January 2017.[79] By 2019, more than 100 partners from academia, civil society, industry, and nonprofits would be member organizations.[80]
2016 A real-time online tool called Swarm AI successfully predicts the winner of the Kentucky Derby horse race. This demonstrates the potential of collective intelligence and real-time collaboration among users to make accurate predictions.[33]
2017 OpenAI begins work on OpenAI Five, a team of neural-network agents trained to play the video game Dota 2.[81] United States
2017 DeepMind releases AI Safety Gridworlds, which evaluate AI algorithms on nine safety features, such as whether the algorithm wants to turn off its own kill switch. DeepMind confirms that existing algorithms perform poorly, which is "unsurprising" because the algorithms "are not designed to solve these problems"; solving such problems might require "potentially building a new generation of algorithms with safety considerations at their core".[82][83][84]
2017 Conference The Asilomar Conference on Beneficial AI is held, focusing on the potential risks and benefits of artificial intelligence (AI) and how to ensure that AI is developed in a way that benefits humanity.[85]
2017 The first AI for Good Global Summit takes place.[86]
2017 Organization AI Now Institute is founded. It is an American research institute studying the social implications of artificial intelligence.[87] United States
2017 The AI market, including both hardware and software, reaches a total value of $8 billion.[37]
2017 Google's DeepMind AI achieves the remarkable feat of teaching itself how to walk autonomously.[33]
2017 AI is included in the Chinese government's report as a national strategy.[65] China
2018 Artificial intelligence showcases its abilities in different ways. IBM's 'Project Debater' engages in complex debates with human master debaters and performs impressively. Meanwhile, Google's 'Duplex' AI demonstrates its conversational skills by making a hairdressing appointment over the phone without the recipient realizing they are talking to a machine. These examples illustrate AI's capacity to tackle advanced tasks and engage in natural conversations.[23]
2018 A machine learning algorithm called BioMind is able to outperform radiologists in interpreting breast cancer scans. The algorithm is trained on a dataset of over 100,000 scans, and is able to identify cancer with a 99% accuracy rate, compared to 96% for radiologists.[49]
2018 Organization The European Laboratory for Learning and Intelligent Systems (ELLIS) is launched.[88]
2018 (April 26) Organization The Innovation Center for Artificial Intelligence is officially launched.[89][90] Netherlands
2018 The artificial intelligence market in China amounts to 33.9 billion RMB.[65] China
2018 Astronomers harness the power of AI to identify and locate approximately 6,000 new craters on the moon's surface, enhancing our understanding of lunar geology.[33][91]
2018 Paul Rad, assistant director of the University of Texas-San Antonio Open Cloud Institute, and Nicole Beebe, director of the university's Cyber Center for Security and Analytics, introduce a novel cloud-based learning platform for AI. This platform aims to teach machines to learn in a manner similar to human learning processes.[33][92]
2018 Google showcases Duplex AI, a digital assistant capable of making appointments via telephone calls with live humans. Duplex utilizes natural language understanding, deep learning, and text-to-speech technologies to grasp conversational context and nuances, achieving a level of sophistication unmatched by other digital assistants.[33]
2018 AI ushers in its first year of commercial applications in China, with more than 1,000 AI-related companies in the country by this time.[65] China
2018 The AI Now Report finds harmful inaccuracies in AI-driven technology, plus an alarming lack of accountability and, in some cases, systems built on racial discrimination or used for human rights violations.[93]
2019 Organization The Center for Security and Emerging Technology is founded at Georgetown University.[94][95] United States
2019 Google opens its first African AI research centre, in Accra, Ghana.[96][97] Ghana
2019 A team of five AI bots developed by OpenAI, called OpenAI Five, defeats a team of professional Dota 2 players in a best-of-three match. This is a significant achievement, as Dota 2 is a complex multiplayer game that requires a high degree of teamwork and strategy.[49]
2019 The AI Artathon, an artificial intelligence art competition, is held in Saudi Arabia.[98][99] Saudi Arabia
2020 An AI called Agent57, developed by DeepMind, is able to surpass the human baseline across all 57 Atari 2600 games. This is a significant achievement, as the Atari 2600 catalog spans a wide range of challenging games.[49]
2020 (June) OpenAI reveals GPT-3, but releases it only to a small pool of users.

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by User:Sebastian.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

  • FIXME

What the timeline is still missing

Timeline update strategy

See also

External links

References

  1. "History of Artificial Intelligence". javatpoint.com. Retrieved 7 February 2020. 
  2. "A Brief History of Artificial Intelligence". dataversity.net. Retrieved 7 February 2020. 
  3. "History of Artificial Intelligence". coe.int. Retrieved 7 February 2020. 
  4. "The History of Artificial Intelligence". harvard.edu. Retrieved 7 February 2020. 
  5. "A Brief History of Artificial Intelligence". livescience.com. Retrieved 7 February 2020. 
  6. "The History of Artificial Intelligence" (PDF). washington.edu. Retrieved 7 February 2020. 
  7. "Tema 1 Brief History of Artificial Intelligence". ocw.uc3m.es. Retrieved 21 March 2020. 
  8. "A Brief History of AI". aitopics.org. Retrieved 20 March 2020. 
  9. Mijwil, Maad M. "History of Artificial Intelligence". Retrieved 9 March 2020. 
  10. "A Very Short History Of Artificial Intelligence (AI)". forbes.com. Retrieved 7 February 2020. 
  11. Mehta, Dhaval; Ranadive, Dr Amol (31 January 2021). What Gamers Want: A Framework to Predict Gaming Habits. OrangeBooks Publication. 
  12. Bloch, Laurent. "Informatics in the light of some Leibniz's works" (PDF). laurentbloch.net. Retrieved 9 March 2022. 
  13. Kumar, Ajitesh (17 September 2021). "12 Bayesian Machine Learning Applications Examples". Data Analytics. Retrieved 7 March 2022. 
  14. "The History Of Artificial Intelligence". sutori.com. Retrieved 20 March 2020. 
  15. "Artificial Intelligence". people.idsia.ch. Retrieved 21 March 2020. 
  16. "Artificial intelligence". britannica.com. Retrieved 21 March 2020. 
  17. "A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE". atariarchives.org. Retrieved 21 March 2020. 
  18. "History of Artificial Intelligence". researchgate.net. Retrieved 9 March 2020. 
  19. "1.2 A Brief History of Artificial Intelligence". artint.info. Retrieved 21 March 2020. 
  20. "A SHORT HISTORY OF ARTIFICIAL INTELLIGENCE: MAKING MYTHOLOGY A REALITY". omnius.com. Retrieved 20 March 2020. 
  21. Engineers: From the Great Pyramids to the Pioneers of Space Travel. Penguin. 16 April 2012. ISBN 978-1-4654-0682-8. 
  22. "7 phases of the history of Artificial intelligence". historyextra.com. Retrieved 21 March 2020. 
  23. "The history of artificial intelligence". bosch.com. Retrieved 7 February 2020. 
  24. "AIC Timeline". ai.sri.com. Retrieved 6 March 2020. 
  25. "Semantic Network - an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved 5 March 2022. 
  26. "Notes on Semantic Nets and Frames" (PDF). eecs.qmul.ac.uk. Retrieved 5 March 2022. 
  27. "Artificial Intelligence Journal Division of IJCAI". ijcai.org. Retrieved 6 March 2020. 
  28. Harris, Randy Allen (31 December 2004). Voice Interaction Design: Crafting the New Conversational Speech Systems. Elsevier. ISBN 978-0-08-047480-9. 
  29. Shapiro, Stuart C. (1 January 1982). "Generalized augmented transition network grammars for generation from semantic networks". Computational Linguistics. 8 (1): 12–25. ISSN 0891-2017. doi:10.5555/972923.972925. 
  30. Dreyfus, Hubert L. (30 October 1992). "What Computers Still Can't Do: A Critique of Artificial Reason". mitpress.mit.edu. MIT Press. Retrieved 21 March 2022. 
  31. Eng, Richard Kenneth (23 July 2022). "Celebrating 50 Years of Smalltalk". Medium. Retrieved 9 September 2023. 
  32. "ECAI 2010". iospress.nl. Retrieved 6 March 2020. 
  33. "The History of Artificial Intelligence". futureoftech.org. Retrieved 9 March 2020. 
  34. "ILabs". semanticscholar.org. Retrieved 6 March 2020. 
  35. "The Association for the Advancement of Artificial Intelligence (AAAI)". www.omicsonline.org. Retrieved 21 March 2022. 
  36. "History of Artificial Intelligence – AI of the past, present and the future!". data-flair.training. Retrieved 4 March 2020. 
  37. "A Short History of Artificial Intelligence". dev.to. Retrieved 9 March 2020. 
  38. Lamb, John (August 1985). Making Friends with Intelligence. The New Scientist. Retrieved 10 December 2013. 
  39. "Column 468: The Turing Institute". UK Parliament. Retrieved 2 March 2022. 
  40. Aydin, Berkay; Angryk, Rafal A. (15 October 2018). Spatiotemporal Frequent Pattern Mining from Evolving Region Trajectories. Springer. ISBN 978-3-319-99873-2. 
  41. Liang-Jie, Zhang; Yishuang, Ning (19 October 2018). Innovative Solutions and Applications of Web Services Technology. IGI Global. ISBN 978-1-5225-7269-5. 
  42. "Qualitative Spatio-Temporal Reasoning with RCC-8 and Allen's Interval Calculus: Computational Complexity" (PDF). gki.informatik.uni-freiburg.de. Retrieved 12 March 2022. 
  43. "Centre for Artificial Intelligence and Robotics (CAIR)". epicos.com. Retrieved 6 March 2020. 
  44. Taylor, J.G. The Promise of Neural Networks. 
  45. Artificial Neural Networks and Machine Learning – ICANN 2017: 26th International Conference on Artificial Neural Networks, Alghero, Italy, September 11-14, 2017, Proceedings, Part 1 (Alessandra Lintas, Stefano Rovetta, Paul F.M.J. Verschure, Alessandro E.P. Villa ed.). 
  46. "International Journal on Artificial Intelligence Tools". letpub.com. Retrieved 6 March 2020. 
  47. "Journal of Artificial Intelligence Research". jair.org. Retrieved 6 March 2020. 
  48. "Artificial Evolution 2019 (EA-2019)". iscpif.fr. Retrieved 6 March 2020. 
  49. Leigh, Andrew (9 November 2021). What's the Worst That Could Happen?: Existential Risk and Extreme Politics. MIT Press. ISBN 978-0-262-36661-8. 
  50. "Autonomous Agents and Multi-Agent Systems". springer.com. Retrieved 6 March 2020. 
  51. "A brief history of artificial intelligence in advertising". econsultancy.com. Retrieved 20 March 2020. 
  52. "MICAI 2000: Advances in Artificial Intelligence". springer.com. Retrieved 6 March 2020. 
  53. "Artificial General Intelligence Research Institute". morebooks.de. Retrieved 6 March 2020. 
  54. Bikakis, Antonis; Fodor, Paul; Roman, Dumitru. Rules on the Web: From Theory to Applications: 8th International Symposium, RuleML 2014, Co-located with the 21st European Conference on Artificial Intelligence, ECAI 2014, Prague, Czech Republic, August 18-20, 2014, Proceedings. 
  55. "Mission & History". csail.mit.edu. Retrieved 6 March 2020. 
  56. "INTERNATIONAL MEETING ON COMPUTATIONAL INTELLIGENCE METHODS FOR BIOINFORMATICS AND BIOSTATISTICS". person.dibris.unige.it. Retrieved 6 March 2020. 
  57. "Autonomous Robotic Surgeon performs surgery on first live human". Engadget. 19 May 2006. 
  58. "Robot surgeon carries out 9-hour operation by itself". Phys.Org. 
  59. "Dartmouth Artificial Intelligence Conference". dartmouth.edu. Retrieved 6 March 2020. 
  60. Eliezer Yudkowsky (2008) in Artificial Intelligence as a Positive and Negative Factor in Global Risk
  61. "Artificial General Intelligence 2008". iospress.nl. Retrieved 6 March 2020. 
  62. "Expanding our knowledge, finding new answers". deepmind.com. Retrieved 6 March 2020. 
  63. Bray, Chad (27 January 2014). "Google Acquires British Artificial Intelligence Developer". DealBook. Retrieved 17 June 2024. 
  64. "A Brief History of Artificial Intelligence". kdnuggets.com. Retrieved 9 March 2020. 
  65. "The history of Artificial Intelligence (AI) in China". daxueconsulting.com. Retrieved 21 March 2020. 
  66. "Atlas". bostondynamics.com. Retrieved 9 March 2020. 
  67. "Allen Institute for AI". glassdoor.com.ar. Retrieved 6 March 2020. 
  68. "Allen Institute for AI (AI2)". linkedin.com. Retrieved 6 March 2020. 
  69. "Future of Life Institute". linkedin.com. Retrieved 6 March 2020. 
  70. "Kiev Laboratory for Artificial Intelligence". semanticscholar.org. Retrieved 6 March 2020. 
  71. "History of Artificial Intelligence". qbi.uq.edu.au. Retrieved 9 March 2020. 
  72. "Elon Musk, Stephen Hawking warn of artificial intelligence dangers". mashable.com. Retrieved 6 March 2020. 
  73. "UC Berkeley launches Center for Human-Compatible Artificial Intelligence". news.berkeley.edu. Retrieved 6 March 2020. 
  74. "ACTIVE INTELLIGENCE PTE. LTD.". sgpbusiness.com. Retrieved 6 March 2020. 
  75. "Exploring The Partnership on AI". medium.com. Retrieved 6 March 2020. 
  76. "About". Partnership on AI. Retrieved 3 March 2022. 
  77. "About". Partnership on AI. Retrieved 3 March 2022. 
  78. "'Partnership on AI' formed by Google, Facebook, Amazon, IBM and Microsoft". the Guardian. 28 September 2016. Retrieved 3 March 2022. 
  79. "Partnership on AI Update". Partnership on AI. Retrieved 3 March 2022. 
  80. "New Partners To Bolster Perspective For Responsible AI". Partnership on AI. 24 September 2019. Retrieved 3 March 2022. 
  81. "OpenAI Five". openai.com. Retrieved 6 March 2020. 
  82. "DeepMind Has Simple Tests That Might Prevent Elon Musk's AI Apocalypse". Bloomberg.com. 11 December 2017. Retrieved 5 March 2020. 
  83. "Alphabet's DeepMind Is Using Games to Discover If Artificial Intelligence Can Break Free and Kill Us All". Fortune. Retrieved 5 March 2020. 
  84. "Specifying AI safety problems in simple environments | DeepMind". DeepMind. Retrieved 5 March 2020. 
  85. "Video: Superintelligence Panel at Beneficial AI 2017 (FLI)". medium.com. Retrieved 6 March 2020. 
  86. "AI for Good Global Summit 2017". ITU. Retrieved 10 March 2023. 
  87. "NYU Law and NYU's AI Now Institute analyze the ways emerging technology imposes upon civil liberties". law.nyu.edu. Retrieved 6 March 2020. 
  88. "European Laboratory for Learning and Intelligent Systems (ELLIS) launched with Informatics researchers on board". ed.ac.uk. Retrieved 9 March 2020. 
  89. "Innovation Center for Artificial Intelligence officially launched". uva.nl. Retrieved 6 March 2020. 
  90. "Ahold Delhaize Helps Launch AI Innovation Center". consumergoods.com. Retrieved 6 March 2020. 
  91. "New technique uses AI to locate and count craters on the moon". phys.org. Retrieved 9 March 2020. 
  92. "UTSA researchers want to teach computers to learn like humans". utsa.edu. Retrieved 9 March 2020. 
  93. "Rise of the Machines: The History of Artificial Intelligence". looklisten.com. Retrieved 21 March 2020. 
  94. "Center for Security and Emerging Technology". cset.georgetown.edu. Retrieved 6 March 2020. 
  95. "Center for Security and Emerging Technology". linkedin.com. Retrieved 6 March 2020. 
  96. "Google takes on 'Africa's challenges' with first AI centre in Ghana". thestar.com.my. Retrieved 6 March 2020. 
  97. "How Google is driving artificial intelligence for Africa by Africans". techpoint.africa. Retrieved 6 March 2020. 
  98. "About the Global AI Summit". theglobalaisummit.com. Retrieved 6 March 2020. 
  99. "Riyadh to host AI art competition". arabnews.jp. Retrieved 6 March 2020.