Timeline of AI timelines

From Timelines

This is a timeline of AI timelines: notable predictions about advances in artificial intelligence, in particular about when artificial general intelligence (AGI) will be created.

Big picture

Time period Development summary More details

Full timeline

Year of prediction Predicted year Concept Predictor Details
1950 2000 Strong AI Alan Turing In his 1950 paper "Computing Machinery and Intelligence", often regarded as an AI manifesto, English mathematician Alan Turing foresees a future in which computers, by the year 2000, would be able to respond to questions posed by humans in a manner indistinguishable from human responses.[1]
1956 The Dartmouth Summer Research Project on Artificial Intelligence marks the formal birth of artificial intelligence as a research field. Organized by John McCarthy, the meeting brings together a small group of scientists who propose that every aspect of learning and intelligence can, in principle, be precisely described and simulated by machines. The conference establishes a shared research agenda, introduces the term “artificial intelligence,” and lays conceptual foundations for studying machine cognition and human intelligence systematically.[2]
1965 1985 Strong AI Herbert A. Simon AI pioneer Herbert A. Simon writes: "machines will be capable, within twenty years, of doing any work a man can do."[3]
1965 2000 Superintelligence Irving John Good Irving John Good predicts an ultraintelligent machine by 2000.
1988 2010 Strong AI Hans Moravec In his 1988 book Mind Children, Hans Moravec projects that strong AI will be achieved around 2010. A later satirical dialogue by Eliezer Yudkowsky voices the prediction as:

Behold my book Mind Children. Within, I project that, in 2010 or thereabouts, we shall achieve strong AI. I am not calling it “Artificial General Intelligence” because this term will not be coined for another 15 years or so.[4]

1997 2020s Strong AI Hans Moravec Hans Moravec argues that continued exponential growth in computing power would make inexpensive hardware comparable to the human brain by the 2020s, based on extrapolations of hardware performance trends. The prediction does not rely on detailed models of human cognition, and comparisons between hardware capability and human intelligence remain uncertain due to wide variation in estimates of the brain’s computational capacity and differences between training and operational compute requirements in artificial intelligence systems.[5]
1997 Artificial general intelligence Mark Gubrud The term "artificial general intelligence" is probably first used by Mark Gubrud in a discussion of the implications of fully automated military production and operations.[6]
2001 2023, 2049, 2059 Artificial general intelligence Ray Kurzweil Ray Kurzweil writes:

I have calculated that matching the intelligence of a human brain requires 2 × 10^16 ops/sec, and this will become available in a $1000 computer in 2023. 26 years after that, in 2049, a $1000 computer will have ten billion times more computing power than a human brain; and in 2059, that computer will cost one cent.[4]

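The arithmetic implicit in the quote can be sanity-checked: going from brain parity in 2023 to ten billion times the brain's power in 2049 implies a doubling time well under a year. A minimal Python check (illustrative arithmetic only; the figures are Kurzweil's claims, not established facts):

```python
import math

# Kurzweil's figures: a $1000 computer matches the brain (2e16 ops/sec)
# in 2023 and is ten billion times more powerful by 2049.
factor = 1e10          # claimed growth in compute per dollar
years = 2049 - 2023    # 26 years

doublings = math.log2(factor)        # ~33.2 doublings over the period
doubling_time = years / doublings    # implied doubling time in years

print(f"{doublings:.1f} doublings -> one every {doubling_time:.2f} years")
# -> 33.2 doublings -> one every 0.78 years
```

A doubling time of roughly 0.78 years is markedly faster than the classic ~2-year Moore's-law figure, consistent with Kurzweil's accelerating-returns thesis rather than plain Moore's law.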
2011 (January) 2050 High-level machine intelligence (HLMI) Future of Humanity Institute (survey) The Future of Humanity Institute conducts the Winter Intelligence Survey during its AGI impacts conference. Participants' median estimate for a 50% probability of human-level machine intelligence is 2050, with responses indicating approximately a 10% chance in the 2015–2030 range, a 50% chance around 2040–2080, and a 90% chance between 2100 and 2250. The participants are primarily experts in AI and related fields, and their responses reflect a wide range of expectations regarding AI development timelines.[7][8]
2011 (August) <2030 Artificial general intelligence (AGI) AGI-11 Conference participants (survey) The AGI-11 survey is conducted at the AGI-11 conference with 60 participants, indicating that nearly half of respondents believe artificial general intelligence would be achieved before 2030. The survey also finds that almost 90% anticipate AGI appearing before 2100 and that around 85% expect it to benefit humanity. The survey consists of two questions and is conducted by James Barrat and Ben Goertzel.[9][10]
2012–2013 2040 High-level machine intelligence (HLMI) Vincent Müller and Nick Bostrom (survey) Vincent Müller and Nick Bostrom of the Future of Humanity Institute conduct a survey of four groups of AI experts between 2012 and 2013. The respondents estimate a 10% probability of achieving human-level machine intelligence by 2022 and a 50% probability by 2040.[11]
2016 (March) >2041 Superintelligence Association for the Advancement of Artificial Intelligence fellows (survey) Oren Etzioni conducts a survey among 193 fellows of the Association for the Advancement of Artificial Intelligence, asking about the likelihood and timing of achieving superintelligence, defined as intellect surpassing human capabilities across multiple fields. Of the 80 respondents (a 41% response rate), 67.5% expect superintelligence to be achieved, but not within 25 years; none expect it within the next 10 years, 7.5% estimate a timeframe of 10–25 years, and 25% believe it would never occur.[12]
2016 (June 9) High-level machine intelligence (HLMI) Bill Gates Bill Gates states that achieving human-level artificial intelligence would take at least five times longer than the timeline proposed by Ray Kurzweil.[13]
2017 (January–February) 2026 High-level machine intelligence (HLMI) Toby Walsh (survey) Toby Walsh conducts a survey involving 849 participants, including AI experts, robotics experts, and non-experts. Respondents classify 70 occupations by automation risk and estimate timelines for the arrival of high-level machine intelligence, defined as computers performing human professions as well as humans. Non-expert respondents estimate a 10% probability of HLMI by 2026.[14]
2017 (January–February) 2109 High-level machine intelligence (HLMI) AI experts (Walsh 2017 survey) AI expert respondents in the Toby Walsh 2017 survey estimate a 90% probability of achieving high-level machine intelligence by 2109.[14]
2017 (January–February) 2118 High-level machine intelligence (HLMI) IEEE Robotics & Automation Society fellows (survey) Robotics expert respondents in the Toby Walsh 2017 survey estimate a 90% probability of achieving high-level machine intelligence by 2118. The group comprises 101 participants who are either Fellows of the Institute of Electrical and Electronics Engineers Robotics & Automation Society or contributors to the 2016 IEEE International Conference on Robotics and Automation (ICRA).[14]
2017 High-level machine intelligence (HLMI) AI researchers (survey) A study finds that 50% of AI researchers assign a 50% probability to achieving high-level machine intelligence by 2040, while 20% estimate that this probability will not be reached until 2100 or later. The aggregated probability distribution places the interquartile (25th–75th percentile) range between 2040 and well past 2100.[15][16]
2020 2050 Transformative AI (TAI) Ajeya Cotra Ajeya Cotra's compute-based forecasting framework estimates a median date of around 2050 for the point at which some actor is willing and able to train a transformative AI (TAI) system, defined as AI with economic and societal impact comparable to the Industrial Revolution. The estimate results from modeling trends in compute cost, algorithmic efficiency, and willingness to spend, anchored to biological and machine-learning analogies. Cotra emphasizes substantial uncertainty: TAI could arrive earlier via cheaper pathways or later due to non-compute bottlenecks, regulation, or deployment delays.[17][18]
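The structure of such a compute-based forecast can be illustrated with a toy model. Everything below is a hypothetical sketch with made-up numbers, not Cotra's actual model: a fixed training-compute requirement, exponential growth in effective FLOP per dollar, exponential growth in willingness to spend, and the forecast as the crossover year.

```python
# Toy illustration of a compute-based TAI forecast (hypothetical numbers
# throughout; NOT Cotra's actual model). Training becomes feasible once
# affordable compute (FLOP/$ times budget) crosses a fixed requirement.
TAI_FLOP = 1e34         # assumed training requirement in FLOP
flop_per_dollar = 1e17  # assumed effective FLOP per dollar in 2020
hw_growth = 1.6         # assumed yearly gain in FLOP/$ (incl. algorithms)
spend = 1e8             # assumed maximum training budget in 2020, dollars
spend_growth = 1.25     # assumed yearly growth in willingness to spend

year = 2020
while flop_per_dollar * spend < TAI_FLOP:
    year += 1
    flop_per_dollar *= hw_growth
    spend *= spend_growth

print(year)  # -> 2050 under these assumptions
```

The point of the exercise is the sensitivity: because both curves are exponential, modest changes in the growth rates or the assumed FLOP requirement shift the crossover year by decades, which is why the framework reports wide uncertainty rather than a point estimate.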
2020 2020s Scaling hypothesis / pathway to AGI Gwern Branwen In a detailed analysis following the release of GPT-3, Gwern Branwen articulates the “scaling hypothesis,” arguing that increasingly large neural networks trained on massive datasets naturally acquire more general, human-like capabilities. Drawing on empirical scaling laws and historical forecasts by Hans Moravec, he contends that sub-human general intelligence becomes feasible in the 2020s, with continued scaling plausibly leading toward artificial general intelligence without fundamentally new algorithms, though with substantial uncertainty regarding exact timelines.[19]
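The "empirical scaling laws" referenced here take the form of power laws in model size, data, and compute. A minimal illustration using the parameter-count law from Kaplan et al. (2020), L(N) = (N_c/N)^α_N, with approximate published constants (a sketch of the functional form, not a capability forecast):

```python
# Power-law scaling of language-model test loss with parameter count,
# in the form reported by Kaplan et al. (2020): L(N) = (N_c / N) ** a_N.
# Constants are approximate and ignore data/compute bottlenecks.
N_C = 8.8e13      # parameter-scale constant (approximate)
ALPHA_N = 0.076   # scaling exponent for parameters (approximate)

def loss(n_params: float) -> float:
    """Predicted test loss (nats per token) at a given parameter count."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss(n):.2f}")
```

The law's key property for the scaling hypothesis is its smoothness: each 10× increase in parameters cuts the predicted loss by a constant factor (about 16% here), with no visible wall, which is what grounds extrapolations toward increasingly general capability.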
2020 (August) 2047 Artificial general intelligence (AGI) LessWrong community (aggregate) An aggregation of probabilistic forecasts from the LessWrong “Forecasting Thread: AI Timelines,” weighted by normalized community votes, yields a median estimate of June 20, 2047 for the arrival of human-level AGI. The distribution reflects a wide range of views, with substantial probability mass both before 2040 and after 2100, highlighting deep uncertainty and methodological disagreement among forecasters.[20]
2020 (September) 2031–2036 Transformative AI (TAI) Ajeya Cotra Based on her probabilistic model of training compute affordability, Ajeya Cotra estimates roughly a 10% probability of transformative artificial intelligence arriving by the early-to-mid 2030s, approximately between 2031 and 2036, depending on assumptions about algorithmic efficiency, model architecture, and training horizon length.[22]
2021 (August 17) 2036, 2060, 2100 Transformative AI (PASTA-like) Holden Karnofsky Holden Karnofsky predicts a greater than 10% chance of PASTA-like transformative AI within 15 years (by 2036), an approximately 50% chance within 40 years (by 2060), and an estimated two-thirds probability within the 21st century (by 2100).[21]
2022 (May 30) 2029 Artificial general intelligence (AGI) Elon Musk Elon Musk states via a tweet to Jack Dorsey that artificial general intelligence would be achieved by 2029.[23] The prediction is criticized by Gary Marcus, who challenges Musk to a $100,000 bet.[24]
2022 (June–August) 2059 High-level machine intelligence (HLMI) AI Impacts (survey) The 2022 Expert Survey on Progress in AI (ESPAI), conducted by AI Impacts, analyzes responses from 738 machine learning researchers regarding progress toward high-level machine intelligence. The median estimate places a 50% probability of HLMI at approximately 37 years in the future, around 2059, representing a decrease of about eight years compared with the 2016 survey. Respondents also estimate a median 5% probability of extremely negative outcomes, such as human extinction, and 69% indicate that AI safety research should be prioritized more than current efforts.[25]
2023 (October) 2040 Artificial general intelligence (AGI) AI Impacts (survey) AI Impacts conducts a large expert survey of 2,778 AI researchers to estimate timelines for achieving artificial general intelligence. Using questions closely aligned with its 2022 survey, the results indicate a median estimate placing the arrival of AGI around 2040, reflecting expert assessments of progress toward high-level machine intelligence and remaining technical challenges.[26]
2023 (November 29) 2026 Superintelligence Elon Musk In an interview, Elon Musk states that artificial intelligence would surpass the intelligence of the smartest human within approximately three years.[27]
2024 (February 17) 2029 Superintelligence Eliezer Yudkowsky A Guardian article examines a rising current of AI scepticism, from existential-risk theorist Eliezer Yudkowsky to neo-Luddite critics of labour and surveillance technologies. Yudkowsky argues that humanity's remaining timeline may be closer to five years than 50, warning that superintelligent AI could rapidly escape human control and cause extinction. Others reject apocalyptic urgency and instead emphasise concrete harms: job displacement, workplace monitoring, environmental costs, and corporate power. Contemporary neo-Luddites frame their position not as anti-technology, but as a demand to scrutinise technological progress, resist disempowering uses of AI, and reassert social and democratic control over its development.[28]

Meta information on the timeline

How the timeline was built

The initial version of the timeline was written by Sebastian.

Funding information for this timeline is available.

Feedback and comments

Feedback for the timeline can be provided at the following places:

  • FIXME

What the timeline is still missing



Timeline update strategy

See also

References

  1. "3 Things Alan Turing Never Imagined". CMSWire.com. Retrieved 26 October 2023.
  2. "Artificial Intelligence (AI) Coined at Dartmouth". Dartmouth College. Retrieved 15 January 2026.
  3. Simon 1965, p. 96 quoted in Crevier 1993, p. 109
  4. 4.0 4.1 "Biology-Inspired AGI Timelines: The Trick That Never Works". www.greaterwrong.com. Retrieved 16 July 2022.
  5. Henshall, Will (January 19, 2024). "When Might AI Outsmart Us? It Depends Who You Ask". TIME. Retrieved January 19, 2024.
  6. Gubrud 1997
  7. "FHI Winter Intelligence Survey". AI Impacts. 29 December 2014. Retrieved 26 October 2023.
  8. "Machine Intelligence Survey" (PDF). fhi. Retrieved 12 August 2022.
  9. "AGI-11 survey". AI Impacts. 10 November 2018. Retrieved 26 October 2023.
  10. "The Fourth Conference on Artificial General Intelligence". agi-conference.org. Retrieved 26 October 2023.
  11. "Müller and Bostrom AI Progress Poll". AI Impacts. 29 December 2014. Retrieved 12 August 2022.
  12. "The AAAI Fellows Program". AAAI. Retrieved 26 October 2023.
  13. "Bill Gates on AI timelines". lukemuehlhauser.com. Retrieved 12 August 2022.
  14. 14.0 14.1 14.2 14.3 "Walsh 2017 survey". AI Impacts. 24 December 2019. Retrieved 26 October 2023.
  15. "When will the first weakly general AI system be devised, tested, and publicly announced?". www.metaculus.com. 18 January 2020. Retrieved 12 August 2022.
  16. "Expert and Non-Expert Opinion about Technological Unemployment" (PDF). arxiv.org. Retrieved 12 August 2022.
  17. Cotra, Ajeya (18 September 2020). "Draft report on AI timelines". GreaterWrong. Retrieved 15 January 2026.
  18. Ho, Anson (6 June 2022). "Grokking "Forecasting TAI with biological anchors"". GreaterWrong. Retrieved 15 January 2026.
  19. Branwen, Gwern. "The Scaling Hypothesis". gwern.net. Retrieved 15 January 2026.
  20. "Forecasting Thread: AI Timelines". GreaterWrong. 21 August 2020. Retrieved 15 January 2026.
  21. "A public prediction by Holden Karnofsky". www.metaculus.com. 2 January 2022. Retrieved 12 August 2022.
  22. Cotra, Ajeya (18 September 2020). "Draft report on AI timelines". GreaterWrong. Retrieved 15 January 2026.
  23. Musk, Elon [@elonmusk] (30 May 2022). Tweet. Twitter. twitter.com/elonmusk/status/1531328534169493506. Retrieved 12 August 2022.
  24. Daws, Ryan (1 June 2022). "Gary Marcus criticises Elon Musk's AGI prediction". AI News. Retrieved 12 August 2022.
  25. "2022 Expert Survey on Progress in AI". AI Impacts. 4 August 2022. Retrieved 12 August 2022.
  26. AI Multiple. "Artificial General Intelligence (AGI) & Singularity Timing: Expert Surveys and Predictions". AI Multiple. Retrieved 15 January 2026.
  27. Kastrenakes, Jacob (29 November 2023). "Musk thinks we're three years from super intelligent AI". The Verge. Retrieved 20 April 2024.
  28. Lamont, Tom (17 February 2024). "'Humanity's remaining timeline? It looks more like five years than 50': meet the neo-luddites warning of an AI apocalypse". The Guardian. Retrieved 15 January 2026.