Timeline of Center for Security and Emerging Technology

From Timelines
Jump to: navigation, search

This is a timeline of the Center for Security and Emerging Technology (CSET).

Sample questions

This section provides some sample questions for readers who may not have a clear goal when browsing the timeline. It serves as a guide to help readers approach the page with more purpose and understand the timeline’s significance.

Here are some interesting questions this timeline can answer:

  • What are CSET's main research focuses since its inception?
  • Which key reports and publications have been released by CSET that shape policy discussions on emerging technology?
  • How has CSET contributed to policy recommendations related to AI and national security?
  • What collaborations and partnerships has CSET formed to advance its mission?

For more information on evaluating the timeline's coverage, see Representativeness of events in timelines.

Big picture

Time period Development summary More details

Full timeline

Here are the inclusion criteria for various event types:

  • For "Publication", the intention is to include the most notable publications. This usually means that if a publication has been featured by FHI itself or has been discussed by some outside sources, it is included. There are too many publications to include all of them.
  • For "Website", the intention is to include all websites associated with FHI. There are not that many such websites, so this is doable.
  • For "Staff", the intention is to include all Research Fellows and leadership positions (so far, Nick Bostrom has been the only director so not much to record here).
  • For "Workshop" and "Conference", the intention is to include all events organized or hosted by FHI, but not events where FHI staff only attended or only helped with organizing.
  • For "Internal review", the intention is to include all annual review documents.
  • For "External review", the intention is to include all reviews that seem substantive (judged by intuition). For mainstream media articles, only ones that treat FHI/Bostrom at length are included.
  • For "Financial", the intention is to include all substantial (say, over $10,000) donations, including aggregated donations and donations of unknown amounts.
  • For "Nick Bostrom", the intention is to include events sufficient to give a rough overview of Bostrom's development prior to the founding of FHI.
  • For "Social media", the intention is to include all social media account creations (where the date is known) and Reddit AMAs.
  • Events about FHI staff giving policy advice (to e.g. government bodies) are not included, as there are many such events and it is difficult to tell which ones are more important.
  • For "Project Announcement" or "Intiatives", the intention is to include announcements of major initiatives and research programs launched by FHI, especially those aimed at training researchers or advancing existential risk mitigation.
  • For "Collaboration", the intention is to include significant collaborations with other institutions where FHI co-authored reports, conducted joint research, or played a major role in advising.
Year Month and date Event type Details
2014 June Background Jason Gaverick Matheny becomes the Director of IARPA, a research arm within the U.S. Office of the Director of National Intelligence. His role at IARPA involves overseeing advanced research programs in artificial intelligence, cybersecurity, and intelligence technologies. His work here provides foundational experience in addressing the implications of emerging tech for national security, which later informs his vision for CSET.[1][2]
2015 March IARPA Leadership As Director, Matheny begins leading several critical initiatives, including the Janus program, which advances multi-angle facial recognition technology, and projects under the BRAIN Initiative, which apply neuroscience in national security settings. His focus on AI and intelligence innovations strengthens his commitment to the responsible development of emerging technologies, setting a trajectory that later inspires CSET’s mission.[3][4]
2015 October AI Policy Matheny co-leads the National AI R&D Strategic Plan with the White House, setting out a framework for ethical AI research in the U.S. This plan emphasizes responsible AI development, transparency, and risk management, and influences Matheny’s future policy-focused work at CSET, as he observes the need for institutions dedicated to AI safety and ethics.[5]
2016 Recognition Matheny receives the National Intelligence Superior Service Medal and is named one of Foreign Policy's “Top 50 Global Thinkers” for his innovative work on AI, biosecurity, and emerging tech policy. These recognitions mark him as an influential figure in the field, highlighting his leadership in technology policy and setting the stage for his later role at CSET.[6][7]
2016–2017 Background Rising global concerns around AI ethics and safety spur discussions at institutions like the Future of Humanity Institute (FHI) at Oxford and the Center for Human-Compatible AI (CHAI) at Berkeley. These dialogues emphasize the dual-use nature of AI technologies and underscore a need for U.S.-based policy research focused on these risks. Matheny envisions the Center for Security and Emerging Technology (CSET) as a response, with a mission to analyze and mitigate the potential threats of emerging tech.[8][9]
2018 Pre-founding Recognizing critical gaps in the understanding of AI's societal impacts, Matheny collaborates with academics, policymakers, and tech leaders to build a case for a U.S.-focused think tank. This collaboration includes outlining the potential for CSET, designed to inform policymakers on security threats and ethical considerations associated with emerging technologies, specifically AI and biotechnology.[10][11]
2019 January 15 Founding The Center for Security and Emerging Technology (CSET) is officially launched at Georgetown University’s Walsh School of Foreign Service, with a $55 million grant from Open Philanthropy. Matheny, as the founding director, envisions CSET as a center that translates cutting-edge technology research into insights for U.S. national security, aiming to bridge the gap between technical advancements and policy needs.[12][13]
2019 January Leadership Matheny brings in Dewey Murdick, a former director of data science from the Office of the Director of National Intelligence, as CSET’s Director of Data Science. Murdick’s expertise in data analytics enhances CSET’s ability to provide detailed, data-driven policy insights across AI, cybersecurity, and emerging technologies, helping establish CSET’s reputation as a leader in technology policy.[14]
2019 July 1 First Report CSET publishes its first report, "The Global AI Talent Landscape," an analysis of global AI talent distribution and its implications for U.S. national security. The report garners attention from U.S. policymakers, marking CSET as a pivotal player in AI policy and intelligence research, with this report highlighting the strategic importance of retaining talent to remain competitive.[15][16]
2020 March 10 Expansion CSET significantly broadens its research portfolio to address emerging challenges in AI talent migration, global investment trends in AI, and applications of AI in biotechnology. This expansion marks CSET as a pivotal think tank at the intersection of technology and national security, offering critical insights into areas where the U.S. faces competition from global tech leaders, especially China.[17][18]
2020 July 15 National Security Advisory CSET publishes an influential report, "China’s AI Development: Implications for the United States," analyzing China’s rapid advancements in AI and the strategic threat it poses to U.S. dominance in AI technology. The report urges U.S. policymakers to prioritize AI research and development to maintain a competitive edge. This analysis garners significant attention from the Department of Defense and other national security agencies, emphasizing the urgency of AI advancements in defense strategy.[19][20]
2020 September 15 COVID-19 Response In the wake of the COVID-19 pandemic, CSET releases "Artificial Intelligence and Pandemics: Using AI to Predict and Combat Disease." This report explores how AI can forecast disease outbreaks, monitor infection rates, and enhance healthcare logistics. It highlights AI’s potential in global health crisis management and the importance of integrating AI into pandemic response strategies. The report is widely cited by public health organizations and sparks discussions about AI’s role in healthcare resilience.[21][22]
2020 November 30 Policy Influence CSET’s research on AI safety and national security starts influencing U.S. policy discussions, particularly around AI ethics and governance. The center's work contributes to early drafts of AI policy frameworks that emphasize the importance of transparency, accountability, and ethical considerations in AI development for government use. CSET’s influence in these areas underscores its role as a thought leader in shaping federal approaches to responsible AI.[23][24]
2020 December 20 AI and National Security Strategy CSET collaborates with the National Security Commission on Artificial Intelligence (NSCAI) to draft recommendations for a comprehensive U.S. AI strategy. This work includes strategic investment recommendations in AI talent, technology, and infrastructure to ensure U.S. leadership in critical AI capabilities, with a strong focus on defense and economic competitiveness.[25][26]
2021 January 20 Leadership Transition Jason Matheny steps down as Director of CSET to join the National Security Commission on Artificial Intelligence (NSCAI), which advises the U.S. government on strategic approaches to AI in national security. Dewey Murdick, formerly CSET’s Director of Data Science, succeeds Matheny as Director. Murdick aims to continue CSET’s mission, focusing on the role of AI in advancing U.S. national interests, cybersecurity, and defense policies.[27][28]
2021 February 1 National Security Commission on AI Matheny plays a key role in producing the NSCAI’s final report, which recommends a comprehensive U.S. strategy for maintaining AI leadership. The report emphasizes talent retention, robust funding for AI research, and the need for ethical guidelines in defense applications of AI. These recommendations influence U.S. policymakers and underscore the strategic importance of AI in national security.[29][30]
2021 August 1 International AI Collaboration CSET collaborates with international organizations to assess global trends in AI, particularly focusing on the use of AI in military applications. The resulting report, "AI in Military Use: Trends and Ethical Considerations," highlights the increased use of AI in autonomous weapon systems and urges governments worldwide to establish ethical guidelines to prevent misuse. This publication establishes CSET’s influence on international AI policy.[31][32]
2022 February 20 Major Report CSET releases "Keeping Top Talent: AI Talent Flows to and from the U.S.," which examines global AI talent migration patterns and stresses the need for U.S. policies to retain top AI researchers. The report, highlighting competition from China and Europe, draws attention to the critical role of talent in maintaining U.S. technological leadership.[33][34]
2022 July 12 AI and National Security CSET publishes "Harnessing AI: How China’s AI Ambitions Pose Challenges to U.S. Leadership," a report suggesting that China’s advancements in AI could threaten U.S. dominance in defense and intelligence. The report calls for increased U.S. investment in AI and advocates for proactive policy measures to safeguard U.S. interests. This publication further establishes CSET’s influence on AI-related defense strategies.[35][36]
2022 October 18 Responsible AI Responding to growing concerns about ethical AI, CSET publishes "Responsible AI in Defense: Recommendations for U.S. Policy." The report advocates for clear ethical frameworks to ensure accountability and transparency in autonomous military systems. It receives attention from the Department of Defense and contributes to ongoing discussions on responsible AI governance.[37][38]
2023 February 15 U.S. Policy Impact CSET’s research informs the Biden administration’s Executive Order on AI Risk Management. The order, influenced by CSET’s work, establishes federal guidelines to promote responsible AI development, emphasizing AI safety, transparency, and accountability. This policy milestone underscores CSET’s impact on shaping federal AI governance.[39][40]
2023 August 10 Military AI CSET releases a detailed report on AI’s role in autonomous weapons systems, emphasizing the need for international collaboration to prevent misuse. The report, "AI and Autonomous Weapons Systems," advocates for new international treaties to govern the ethical use of military AI, contributing to a critical dialogue on AI’s role in defense.[41][42]
2023 October 15 Ongoing Research CSET publishes new findings on global AI policy trends, focusing on generative AI and its implications for national security. This research positions CSET as a leader in the analysis of emerging AI applications, highlighting both opportunities and risks as generative AI becomes more pervasive in various sectors.[43][44]
2024 February 20 Cybersecurity Research CSET releases the report "Securing Critical Infrastructure in the Age of AI," which addresses the growing risks of AI-driven cyber threats to essential systems like energy grids and transportation networks. The report, informed by a CSET-led workshop with cross-sector experts in June 2024, highlights strategies for mitigating AI vulnerabilities in infrastructure, emphasizing the role of AI in identifying and countering cyber threats.[45][46]
2024 March 19 AI and Elections In light of the upcoming election cycle, CSET publishes a report titled "How AI-Generated Images Are Used for Audience Growth on Facebook," led by Josh A. Goldstein. This report analyzes the rise of AI-generated content for audience manipulation, highlighting the challenges posed by deepfakes and AI-driven misinformation campaigns. It raises awareness among policymakers about AI’s role in elections, sparking discussions on regulatory strategies for social media platforms.[47][48]
2024 September 26 AI Red-Teaming At DEF CON 2024, CSET releases findings on advancements in AI red-teaming, an approach to testing AI safety through adversarial prompts and simulations. The report discusses how red-teaming integrates human expertise with automated tools to identify vulnerabilities, recommending a mix of testing strategies for robust AI safety practices.[49][50]
2024 October 16 Ukraine Defense Technology CSET participates in the NATO-Ukraine Defense Innovators Forum and publishes a blog discussing the evolving role of drones and counter-UAV systems in Ukraine’s defense strategy. The blog outlines recent technological advancements in drone warfare, underscoring the importance of AI in modern military strategy and battlefield operations.[51][52]

Numerical and visual data

Google Scholar

The following table summarizes per-year mentions on Google Scholar as of November 5, 2024.

Year "Center for Security and Emerging Technology"
2019 28
2020 205
2021 285
2022 450
2023 610
CSET Google Scholar mentions.png

Google Trends

Google Ngram Viewer

Wikipedia pageviews for CSET page

External links

References

  1. "Jason Gaverick Matheny's Profile at CNAS". Retrieved October 31, 2024. 
  2. "Jason Matheny Bio at Potomac Officers Club". Retrieved October 31, 2024. 
  3. "IARPA Program History". Retrieved October 31, 2024. 
  4. "Georgetown Announces CSET Launch". Retrieved October 31, 2024. 
  5. "National AI R&D Strategic Plan". Retrieved October 31, 2024. 
  6. "Jason Matheny at CNAS". Retrieved October 31, 2024. 
  7. "Jason Matheny's Achievements". Retrieved October 31, 2024. 
  8. "Georgetown Announces CSET Launch". Retrieved October 31, 2024. 
  9. "FHI and CHAI Statements on AI Safety". Retrieved October 31, 2024. 
  10. "Georgetown Announces CSET Launch". Retrieved October 31, 2024. 
  11. "AI and National Security - CNAS Report". Retrieved October 31, 2024. 
  12. "About CSET". Retrieved October 31, 2024. 
  13. "Open Philanthropy Grant for CSET". Retrieved October 31, 2024. 
  14. "Dewey Murdick Bio". Retrieved October 31, 2024. 
  15. "The Global AI Talent Landscape Report". Retrieved October 31, 2024. 
  16. "White House Response to CSET's AI Talent Report". Retrieved October 31, 2024. 
  17. "CSET Research Areas". Retrieved October 31, 2024. 
  18. "CSET Expands Research Focus". Retrieved October 31, 2024. 
  19. "China's AI Development: Implications for the U.S.". Retrieved October 31, 2024. 
  20. "Defense Department's Response to CSET's China AI Report". Retrieved October 31, 2024. 
  21. "Artificial Intelligence and Pandemics". Retrieved October 31, 2024. 
  22. "WHO Report on AI in Pandemic Preparedness". Retrieved October 31, 2024. 
  23. "CSET Research Portal". Retrieved October 31, 2024. 
  24. "National Security Commission on AI Reports". Retrieved October 31, 2024. 
  25. "U.S. AI Strategy Recommendations". Retrieved October 31, 2024. 
  26. "NSCAI White Paper on U.S. AI Strategy". Retrieved October 31, 2024. 
  27. "Dewey Murdick Bio at CSET". Retrieved October 31, 2024. 
  28. "NSCAI Official Site". Retrieved October 31, 2024. 
  29. "NSCAI Final Report". Retrieved October 31, 2024. 
  30. "Defense Department AI Strategy". Retrieved October 31, 2024. 
  31. "AI and Military Use Report". Retrieved October 31, 2024. 
  32. "NATO on AI in Military Applications". Retrieved October 31, 2024. 
  33. "Keeping Top Talent: AI Talent Flows to and from the U.S.". Retrieved October 31, 2024. 
  34. "Brookings on AI Talent Retention". Retrieved October 31, 2024. 
  35. "Harnessing AI: How China's AI Ambitions Pose Challenges to U.S. Leadership". Retrieved October 31, 2024. 
  36. "CNAS on AI and National Security". Retrieved October 31, 2024. 
  37. "Responsible AI in Defense". Retrieved October 31, 2024. 
  38. "DoD on Responsible AI Use". Retrieved October 31, 2024. 
  39. "Executive Order on AI Risk Management". Retrieved October 31, 2024. 
  40. "CSET Research Portal". Retrieved October 31, 2024. 
  41. "AI and Autonomous Weapons Systems Report". Retrieved October 31, 2024. 
  42. "UN Convention on Certain Conventional Weapons". Retrieved October 31, 2024. 
  43. "CSET Research Portal". Retrieved October 31, 2024. 
  44. "NATO Policy on Generative AI". Retrieved October 31, 2024. 
  45. "Securing Critical Infrastructure in the Age of AI". Retrieved October 31, 2024. 
  46. "CISA on AI Cybersecurity Initiatives". Retrieved October 31, 2024. 
  47. "How AI-generated Images Are Used on Facebook for Audience Growth". Retrieved October 31, 2024. 
  48. "FEC Report on AI and Elections". Retrieved October 31, 2024. 
  49. "Revisiting AI Red-Teaming". Retrieved October 31, 2024. 
  50. "DEF CON 2024 on AI Red-Teaming". Retrieved October 31, 2024. 
  51. "The Future of Drones in Ukraine". Retrieved October 31, 2024. 
  52. "NATO on Defense Innovations". Retrieved October 31, 2024.