Timeline of AI policy


This is a timeline of AI policy and legislation, which gives an overview of changes in international and national laws around AI and AI safety.

Full timeline

Year | Month and date | Region | Event type | Details
2018 | May 25 | European Union | International Policy | The European Union's General Data Protection Regulation (GDPR), the strongest and most comprehensive attempt yet to regulate personal data, takes effect. The GDPR lays out a set of rules intended to strengthen protection for personal data in response to rapid developments in the technology sector.[1] Although the GDPR is focused on privacy, it states that individuals have the right to a human review of results from automated decision-making systems.[2] Fines for violating the GDPR are high and apply to any organization that offers services to EU citizens.[1]
2019 | February 11 | United States | National Policy | President Trump signs Executive Order 13859, "Maintaining American Leadership in Artificial Intelligence." The Order directs federal agencies to prioritize AI research and development and to promote American leadership in the AI space.[3] The Order does not provide details on how its policies will be put into effect, and does not allocate any federal funding toward executing its vision.[4]
2023 | November 1 – 2 | International Conference | International Policy | The first AI Safety Summit is held at Bletchley Park, Milton Keynes, in the United Kingdom. It leads to an agreement known as the Bletchley Declaration, signed by the 28 governments participating in the summit, including the United States, the United Kingdom, China, and the European Union.[5] The summit receives some commentary on LessWrong, which views it as a partial step in the right direction,[6] including a lengthy blog post by Zvi Mowshowitz, a frequent commentator on AI developments from an AI safety lens.[7]
2023 | November 1 | United States | Organization | United States Vice President Kamala Harris announces the U.S. AI Safety Institute (USAISI) at the AI Safety Summit in the United Kingdom. The launch of USAISI builds on President Biden's executive order on AI, signed two days earlier (October 30).[8]
2023 | November 2 | United Kingdom | Organization | The United Kingdom government announces the launch of the UK AI Safety Institute. The UK AI Safety Institute is to be formed from the Frontier AI Taskforce, which in turn had previously been called the Foundation Model Taskforce. Ian Hogarth serves as its chair.[9]
2024 | February 8 | United States | Organization | U.S. Secretary of Commerce Gina Raimondo announces "the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI)." AISIC is to be housed under the U.S. AI Safety Institute and includes over 200 member organizations.[10] The member organizations were recruited through a notice published in the Federal Register asking interested organizations to submit a letter of interest during a 75-day period (between November 2, 2023, and January 15, 2024).[11][12]
2024 | March 7 (anticipation), April 16 (official announcement) | United States | Organization | U.S. Secretary of Commerce Gina Raimondo announces additional members of the executive leadership team of the U.S. AI Safety Institute (AISI); one of these is Paul Christiano as head of AI safety.[13] A month earlier, when the appointment was anticipated, VentureBeat had reported dissatisfaction with the idea of appointing Christiano, citing "employees who fear that Christiano's association with EA and longtermism could compromise the institute's objectivity and integrity."[14][15]

See also


References

  1. 1.0 1.1 "What is GDPR|". GDPR.EU. Retrieved 28 August 2024. 
  2. "The EU General Data Protection Regulation|". HRW.org. 6 June 2018. Retrieved 28 August 2024. 
  3. "Maintaining American Leadership in Artificial Intelligence|". Federalregister.gov. Retrieved 30 August 2024. 
  4. Metz, Cade (11 February 2019). "Trump Signs Executive Order Promoting Artificial Intelligence". nytimes.com. Retrieved 30 August 2024. 
  5. "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023". GOV.UK. November 1, 2023. Retrieved May 19, 2024. 
  6. Soares, Nate (October 31, 2023). "Thoughts on the AI Safety Summit company policy requests and responses". LessWrong. Retrieved May 19, 2024. 
  7. Mowshowitz, Zvi (November 7, 2023). "On the UK Summit". LessWrong. Retrieved May 19, 2024. 
  8. "FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence". White House. November 1, 2023. Retrieved July 6, 2024. 
  9. "Prime Minister launches new AI Safety Institute. World's first AI Safety Institute launched in UK, tasked with testing the safety of emerging types of AI.". GOV.UK. November 2, 2023. Retrieved July 6, 2024. 
  10. "Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety". U.S. Department of Commerce. February 8, 2024. Retrieved July 6, 2024. 
  11. "Artificial Intelligence Safety Institute Consortium". Federal Register. November 2, 2023. Retrieved July 6, 2024. 
  12. "Artificial Intelligence Safety Institute Consortium (AISIC)". National Institute of Standards and Technology. Retrieved July 6, 2024. 
  13. "U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team". U.S. Department of Commerce. April 16, 2024. Retrieved July 6, 2024. 
  14. Goldman, Sharon (March 7, 2024). "NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute". VentureBeat. Retrieved July 6, 2024. 
  15. "NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute (linkpost)". Effective Altruism Forum. March 8, 2024. Retrieved July 6, 2024.