Timeline of AI policy

This is a timeline of AI policy and legislation, which attempts to give an overview of changes in international and local laws around AI and AI safety.

Full timeline

{| class="sortable wikitable"
 
{| class="sortable wikitable"
! Year !! Month and date !! Region !! Event type !! Details
+
! Year !! Month and date !! Region !! Name !! Event type !! Details
 
|-
 
|-
| 2018 || {{dts|May 5}} || European Union || International Policy || The European Union effects the {{w|General Data Protection Regulation}} (GDPR), the strongest and most comprehensive attempt yet to regulate personal data. The GDPR outlines a set of rules that aims to strengthen protection for personal data in response to increasing development in the tech world.<ref name="What is GDPR">{{cite web |title=What is GDPR{{!}} |url=https://gdpr.eu/what-is-gdpr/ |website=GDPR.EU |access-date=28 August 2024 |language=en}}</ref> Although the GDPR is focused on privacy, it states that individuals have the right to a human review of results from automated decision-making systems.<ref name="HRW">{{cite web |title=The EU General Data Protection Regulation{{!}} |url=https://www.hrw.org/news/2018/06/06/eu-general-data-protection-regulation?gad_source=1&gclid=CjwKCAjwuMC2BhA7EiwAmJKRrBN_g5ZGkeki0aGCIe8R3eVgFxEl8jsIzE9NIngd__KZ_P8vpiYV7RoC4qYQAvD_BwE |website=HRW.org |date=6 June 2018|access-date=28 August 2024 |language=en}}</ref> The fine for violating the GDPR is high and extends to any organization that offers services to EU citizens.<ref name="What is GDPR">{{cite web |title=What is GDPR{{!}} |url=https://gdpr.eu/what-is-gdpr/ |website=GDPR.EU |access-date=28 August 2024 |language=en}}</ref>  
+
| 2018 || {{dts|May 25}} || European Union || GDPR || International Policy || The European Union's {{w|General Data Protection Regulation}} (GDPR) takes effect, the strongest and most comprehensive attempt yet to regulate personal data. The GDPR outlines a set of rules aimed at strengthening protection for personal data in response to rapid development in the tech sector.<ref name="What is GDPR">{{cite web |title=What is GDPR |url=https://gdpr.eu/what-is-gdpr/ |website=GDPR.EU |access-date=28 August 2024 |language=en}}</ref> Although the GDPR is focused on privacy, it also gives individuals the right to a human review of results from automated decision-making systems.<ref name="HRW">{{cite web |title=The EU General Data Protection Regulation |url=https://www.hrw.org/news/2018/06/06/eu-general-data-protection-regulation?gad_source=1&gclid=CjwKCAjwuMC2BhA7EiwAmJKRrBN_g5ZGkeki0aGCIe8R3eVgFxEl8jsIzE9NIngd__KZ_P8vpiYV7RoC4qYQAvD_BwE |website=HRW.org |date=6 June 2018|access-date=28 August 2024 |language=en}}</ref> Fines for violating the GDPR are steep and apply to any organization that offers services to people in the EU.<ref name="What is GDPR">{{cite web |title=What is GDPR |url=https://gdpr.eu/what-is-gdpr/ |website=GDPR.EU |access-date=28 August 2024 |language=en}}</ref>
|-
| 2018 || {{dts|June 28}} || United States || {{w|California Consumer Privacy Act}} || State Policy || The {{w|California Consumer Privacy Act}} (CCPA) is signed into law, heightening consumer control over personal information. The law, which would go into effect on January 1, 2020, grants consumers the right to know about, opt out of the sharing of, and delete personal information.<ref name="Office of the Attorney General">{{cite web |title=California Consumer Privacy Act (CCPA) |url=https://oag.ca.gov/privacy/ccpa#:~:text=The%20California%20Consumer%20Privacy%20Act,how%20to%20implement%20the%20law|website=oag.ca.gov |access-date=30 August 2024 |language=en}}</ref> The Act would influence personal data usage by giving consumers the right to opt out of automated decision-making and by compelling businesses to inform customers how and for what purposes they use personal information.<ref>{{cite web |last1=Ocampo |first1=Danielle |title=CCPA and the EU AI ACT|url=https://calawyers.org/privacy-law/ccpa-and-the-eu-ai-act/#:~:text=The%20CCPA%20would%20give%20individuals,and%20the%20purposes%20of%20processing |website=calawyers.org |access-date=30 August 2024 |language=en |date= June 2024}}</ref> These regulations would require businesses to disclose if and how they use personal information for AI training.
|-
| 2019 || {{dts|February 11}} || United States || {{w|Executive Order 13859}} || National Policy || President Trump signs {{w|Executive Order 13859}}, "Maintaining American Leadership in Artificial Intelligence". The Order directs federal agencies to prioritize AI research and development and to promote American leadership in the AI space.<ref name="Federal Register">{{cite web |title=Maintaining American Leadership in Artificial Intelligence |url=https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence|website=Federalregister.gov |access-date=30 August 2024 |language=en}}</ref> The Order does not provide details on how the new policies are to be put into effect, and does not allocate any federal funding towards executing its vision.<ref>{{cite web |last1=Metz |first1=Cade |title=Trump Signs Executive Order Promoting Artificial Intelligence|url=https://www.nytimes.com/2019/02/11/business/ai-artificial-intelligence-trump.html |website=nytimes.com |access-date=30 August 2024 |language=en |date=11 February 2019}}</ref>
|-
| 2019 || {{dts|April 10}} || United States || Algorithmic Accountability Act || National Policy || The Algorithmic Accountability Act is introduced in the House of Representatives. The bill would require commercial entities to “conduct assessments of high-risk systems that involve personal information or make automated decisions, such as systems that use artificial intelligence or machine learning.”<ref name="H.R.2231">{{cite web |title=116th Congress |url=https://www.congress.gov/bill/116th-congress/house-bill/2231|website=Congress.gov |access-date=30 August 2024 |language=en}}</ref> It aims to minimize bias, discrimination, and inaccuracy in automated decision systems by compelling companies to assess their impacts. The bill does not itself establish binding regulations but asks the {{w|Federal Trade Commission}} to establish rules for evaluating highly sensitive automated systems.<ref>{{cite web |last1=Robertson |first1=Adi |title=A new bill would force companies to check their algorithms for bias|url=https://www.theverge.com/2019/4/10/18304960/congress-algorithmic-accountability-act-wyden-clarke-booker-bill-introduced-house-senate |website=theverge.com |access-date=30 August 2024 |language=en |date=10 April 2019}}</ref> The legislation would be introduced in the Senate in 2022<ref name="S.3572">{{cite web |title=117th Congress |url=https://www.congress.gov/bill/117th-congress/senate-bill/3572|website=Congress.gov |access-date=30 August 2024 |language=en}}</ref> but would still not be law as of 2024.
|-
| 2020 || {{dts|October 10}} || Europe || {{w|Artificial Intelligence Act}} || International Policy || {{w|European Union}} leaders meet to discuss the digital transition. They invite the {{w|European Commission}}, the executive branch of the EU, to increase private and public tech investment, ensure closer coordination between European research centers, and construct a clear definition of artificial intelligence.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
|-
| 2021 || {{dts|April 21}} || Europe || {{w|Artificial Intelligence Act}} || International Policy || The {{w|European Commission}} proposes the {{w|Artificial Intelligence Act}}, a regulatory framework that aims to improve trust in AI and foster its development.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
|-
| 2023 || {{dts|October 30}} || United States || {{w|Executive Order 14110}} || National Policy || President Biden signs {{w|Executive Order 14110}}, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". The Order establishes new standards for AI safety and security. It compels developers to share test results with the US government and create tools to ensure AI system safety, protects Americans from AI fraud and deception, sets up a cybersecurity program to develop AI tools and fix vulnerabilities, and orders the development of a national security memorandum that directs future AI security measures.<ref name="The White House">{{cite web |title=FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence |url=https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/|website=whitehouse.gov |access-date=30 August 2024 |language=en}}</ref> The Order also directs the National Institute of Standards and Technology (NIST) to develop standards for evaluation and red-teaming and to provide testing environments for AI systems. The general reaction to the Order is cautious optimism.<ref name="NIST Statement">{{cite web |title=NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence |url=https://www.nist.gov/news-events/news/2023/12/nist-calls-information-support-safe-secure-and-trustworthy-development-and|website=nist.gov |access-date=30 August 2024 |language=en |date=19 December 2023}}</ref> As LessWrong writer Zvi Mowshowitz reports, some worry that this is the first step in a slippery slope of heightened regulation that could dampen innovation and development.<ref>{{cite web |last1=Mowshowitz |first1=Zvi |title=Reactions to the Executive Order|url=https://www.lesswrong.com/posts/G8SsspgAYEHHiDGNP/reactions-to-the-executive-order |website=lesswrong.com |access-date=30 August 2024 |language=en |date=1 November 2023}}</ref> A detailed timeline of the Executive Order's implementation is maintained by the Bipartisan Policy Center.<ref name="AI EO Timeline">{{cite web |title=AI Executive Order Timeline |url=https://bipartisanpolicy.org/blog/ai-eo-timeline/|website=bipartisanpolicy.org|date=13 December 2023 |access-date=30 August 2024 |language=en}}</ref>
|-
| 2023 || {{dts|November 1}}{{snd}}2 || International Conference || {{w|AI Safety Summit}} || International Policy || The first {{w|AI Safety Summit}} is held at {{w|Bletchley Park}}, {{w|Milton Keynes}}, in the United Kingdom. It leads to an agreement known as the Bletchley Declaration, signed by the 28 countries participating in the summit, including the United States, the United Kingdom, and China, as well as the European Union.<ref>{{cite web|url = https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023|title = The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023|date = November 1, 2023|accessdate = May 19, 2024|publisher = GOV.UK}}</ref> It receives some commentary on LessWrong that views it as a partial step in the right direction,<ref>{{cite web|url = https://www.lesswrong.com/posts/ms3x8ngwTfep7jBue/thoughts-on-the-ai-safety-summit-company-policy-requests-and|title = Thoughts on the AI Safety Summit company policy requests and responses|last = Soares|first = Nate|date = October 31, 2023|accessdate = May 19, 2024|publisher = LessWrong}}</ref> including a lengthy blog post by Zvi Mowshowitz, a frequent commentator on AI developments from an AI safety lens.<ref>{{cite web|url = https://www.lesswrong.com/posts/zbrvXGu264u3p8otD/on-the-uk-summit|title = On the UK Summit|last = Mowshowitz|first = Zvi|date = November 7, 2023|accessdate = May 19, 2024|publisher = LessWrong}}</ref>
|-
| 2023 || {{dts|November 1}} || United States || US {{w|AI Safety Institute}} || Organization || United States Vice President {{w|Kamala Harris}} announces the U.S. AI Safety Institute (USAISI) at the AI Safety Summit in the United Kingdom. The launch of USAISI builds on Biden's executive order issued two days earlier (October 30).<ref>{{cite web|url = https://www.whitehouse.gov/briefing-room/statements-releases/2023/11/01/fact-sheet-vice-president-harris-announces-new-u-s-initiatives-to-advance-the-safe-and-responsible-use-of-artificial-intelligence/|title = FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence|date = November 1, 2023|accessdate = July 6, 2024|publisher = White House}}</ref>
|-
| 2023 || {{dts|November 2}} || United Kingdom || UK {{w|AI Safety Institute}} || Organization || The United Kingdom government announces the launch of the UK AI Safety Institute. The institute is formed from the Frontier AI Taskforce, which had previously been called the Foundation Model Taskforce, and Ian Hogarth serves as its chair.<ref>{{cite web|url = https://www.gov.uk/government/news/prime-minister-launches-new-ai-safety-institute|title = Prime Minister launches new AI Safety Institute. World's first AI Safety Institute launched in UK, tasked with testing the safety of emerging types of AI.|date = November 2, 2023|accessdate = July 6, 2024|publisher = GOV.UK}}</ref>
|-
| 2023 || {{dts|December 9}} || Europe || {{w|Artificial Intelligence Act}} || International Policy || The {{w|European Council}} and the {{w|European Parliament}} reach a provisional agreement on the {{w|Artificial Intelligence Act}}. The Act is expected to go into effect in 2026.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
|-
| 2024 || {{dts|February 8}} || United States || US AI Safety Institute Consortium (AISIC) || Organization || U.S. Secretary of Commerce {{w|Gina Raimondo}} announces "the creation of the U.S. AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI)." AISIC is to be housed under the U.S. AI Safety Institute, and includes over 200 member organizations.<ref>{{cite web|url = https://www.commerce.gov/news/press-releases/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated|title = Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety|publisher = U.S. Department of Commerce|date = February 8, 2024|accessdate = July 6, 2024}}</ref> The member organizations were recruited through a notice published in the ''Federal Register'' asking interested organizations to submit a letter of interest over a period of 75 days (between November 2, 2023, and January 15, 2024).<ref>{{cite web|url = https://www.federalregister.gov/documents/2023/11/02/2023-24216/artificial-intelligence-safety-institute-consortium|title = Artificial Intelligence Safety Institute Consortium|date = November 2, 2023|accessdate = July 6, 2024|publisher = Federal Register}}</ref><ref>{{cite web|url = https://www.nist.gov/aisi/artificial-intelligence-safety-institute-consortium-aisic|title = Artificial Intelligence Safety Institute Consortium (AISIC)|publisher = National Institute of Standards and Technology|accessdate = July 6, 2024}}</ref>
|-
| 2024 || {{dts|March 7}} (anticipation), {{dts|April 16}} (official announcement) || United States || US {{w|AI Safety Institute}} || Organization || U.S. Secretary of Commerce Gina Raimondo announces additional members of the executive leadership of the U.S. AI Safety Institute (AISI); one of these is Paul Christiano as head of AI safety.<ref>{{cite web|url = https://www.commerce.gov/news/press-releases/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety|title = U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team|publisher = U.S. Department of Commerce|date = April 16, 2024|accessdate = July 6, 2024}}</ref> A month prior, when the appointment was anticipated, VentureBeat had reported dissatisfaction with the idea of appointing Christiano among "employees who fear that Christiano’s association with EA and longtermism could compromise the institute’s objectivity and integrity."<ref>{{cite web|url = https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/|title = NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute|last = Goldman|first = Sharon|publisher = VentureBeat|date = March 7, 2024|accessdate = July 6, 2024}}</ref><ref>{{cite web|url = https://forum.effectivealtruism.org/posts/9QLJgRMmnD6adzvAE/nist-staffers-revolt-against-expected-appointment-of|title = NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute (linkpost)|publisher = Effective Altruism Forum|date = March 8, 2024|accessdate = July 6, 2024}}</ref>
|-
| 2024 || {{dts|May 21}} || Europe || {{w|Artificial Intelligence Act}} || International Policy || The {{w|European Council}} approves the {{w|Artificial Intelligence Act}}.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref> The Act is the first of its kind and takes a risk-based approach: the higher the risk to society, the stricter the rules.
|}

See also


References

  1. "What is GDPR". GDPR.EU. Retrieved 28 August 2024. 
  2. "The EU General Data Protection Regulation". HRW.org. 6 June 2018. Retrieved 28 August 2024. 
  3. "California Consumer Privacy Act (CCPA)". oag.ca.gov. Retrieved 30 August 2024. 
  4. Ocampo, Danielle (June 2024). "CCPA and the EU AI ACT". calawyers.org. Retrieved 30 August 2024. 
  5. "Maintaining American Leadership in Artificial Intelligence". Federalregister.gov. Retrieved 30 August 2024. 
  6. Metz, Cade (11 February 2019). "Trump Signs Executive Order Promoting Artificial Intelligence". nytimes.com. Retrieved 30 August 2024. 
  7. "116th Congress". Congress.gov. Retrieved 30 August 2024. 
  8. Robertson, Adi (10 April 2019). "A new bill would force companies to check their algorithms for bias". theverge.com. Retrieved 30 August 2024. 
  9. "117th Congress". Congress.gov. Retrieved 30 August 2024. 
  10. "European Council - Council of the European Union". consilium.europa.eu. Retrieved 30 August 2024. 
  11. "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". whitehouse.gov. Retrieved 30 August 2024. 
  12. "NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence". nist.gov. 19 December 2023. Retrieved 30 August 2024. 
  13. Mowshowitz, Zvi (1 November 2023). "Reactions to the Executive Order". lesswrong.com. Retrieved 30 August 2024. 
  14. "AI Executive Order Timeline". bipartisanpolicy.org. 13 December 2023. Retrieved 30 August 2024. 
  15. "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023". GOV.UK. November 1, 2023. Retrieved May 19, 2024. 
  16. Soares, Nate (October 31, 2023). "Thoughts on the AI Safety Summit company policy requests and responses". LessWrong. Retrieved May 19, 2024. 
  17. Mowshowitz, Zvi (November 7, 2023). "On the UK Summit". LessWrong. Retrieved May 19, 2024. 
  18. "FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence". White House. November 1, 2023. Retrieved July 6, 2024. 
  19. "Prime Minister launches new AI Safety Institute. World's first AI Safety Institute launched in UK, tasked with testing the safety of emerging types of AI.". GOV.UK. November 2, 2023. Retrieved July 6, 2024. 
  20. "Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety". U.S. Department of Commerce. February 8, 2024. Retrieved July 6, 2024. 
  21. "Artificial Intelligence Safety Institute Consortium". Federal Register. November 2, 2023. Retrieved July 6, 2024. 
  22. "Artificial Intelligence Safety Institute Consortium (AISIC)". National Institute of Standards and Technology. Retrieved July 6, 2024. 
  23. "U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team". U.S. Department of Commerce. April 16, 2024. Retrieved July 6, 2024. 
  24. Goldman, Sharon (March 7, 2024). "NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute". VentureBeat. Retrieved July 6, 2024. 
  25. "NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute (linkpost)". Effective Altruism Forum. March 8, 2024. Retrieved July 6, 2024.