Timeline of AI policy

This is a timeline of AI policy and legislation, which charts the progression of international and local AI and {{w|AI safety}} policies. Various countries have released national AI strategies, guidelines, and regulations. International organizations focused on AI governance, such as the Global Partnership on Artificial Intelligence (GPAI) and the AI Governance Alliance (AIGA), have also contributed to a growing body of AI regulation.
===Caveats===

* The timeline only includes policies; it does not cover incidents of policy violations or AI-related human rights abuses (see [[Timeline of AI ethics violations]]).
* The timeline has been updated through August 2024.

==Big picture==

===Overall summary===

{| class="wikitable"
! Year !! Details
|-
| {{dts|2017}} || Canada is the first country to release a National AI Strategy. China releases guidelines on AI development shortly after, and Finland releases a National AI Strategy towards the end of the year.
|-
| {{dts|2018}} || France, India, and Germany sequentially release National AI Strategies. The European Union enacts the {{w|General Data Protection Regulation}} and California the {{w|California Consumer Privacy Act}}, both of which strengthen personal privacy in the age of AI.
|-
| {{dts|2019}} || The {{w|Center for Security and Emerging Technology}} (CSET) is established in the United States, followed by {{w|Executive Order 13859}}, a vague directive for the US to become a leading AI economy. Japan and Australia release principles on ethical AI development. The United States introduces the Algorithmic Accountability Act to combat bias and discrimination in automated decision-making systems. The {{w|OECD}} releases AI Principles to shape global AI policies. Singapore, South Korea, and the Netherlands release AI Strategies.
|-
| {{dts|2020}} || The {{w|Global Partnership on Artificial Intelligence}}, hosted by the {{w|OECD}}, is established to foster international AI policy collaboration. {{w|European Union}} leaders discuss the {{w|Artificial Intelligence Act}}. Finland and Germany update their National AI Strategies, and Switzerland releases National AI Guidelines.
|-
| {{dts|2021}} || The {{w|European Commission}} proposes the {{w|Artificial Intelligence Act}}. China releases an AI Ethics Code. The UK and Brazil publish National AI Strategies.
|-
| {{dts|2022}} || Japan releases AI Governance Guidelines. China releases provisions on algorithmic recommendations. The {{w|Quadrilateral Security Dialogue}} releases a collaborative AI report.
|-
| {{dts|2023}} || China releases generative AI measures and provisions on deep synthesis technologies. The {{w|World Economic Forum}} launches the AI Governance Alliance (AIGA) to guide responsible AI development. The White House hosts leading AI companies to discuss a voluntary AI safety agreement. {{w|Anthropic}} releases its Responsible Scaling Policy in the United States. The US releases {{w|Executive Order 14110}}, establishing AI safety standards. The first international {{w|AI Safety Summit}} is held in the UK. The US and the UK both establish {{w|AI Safety Institute}}s. Singapore updates its National AI Strategy. The EU reaches a provisional agreement on the {{w|Artificial Intelligence Act}}. Israel releases an AI Ethics Policy. {{w|OpenAI}} publishes its Preparedness Framework.
|-
| {{dts|2024}} || The AIGA releases AI guidelines to guide international AI policies. The United States establishes the US AI Safety Institute Consortium to unite AI leaders. The US bolsters the {{w|AI Safety Institute}} leadership. France publishes recommendations on AI policy in line with the {{w|General Data Protection Regulation}}. The EU approves the {{w|Artificial Intelligence Act}}. The African Union endorses a Continental AI Strategy.
|}
  
 
==Full timeline==
 
===Inclusion criteria===
Rows were included based on the following criteria:
* Flagship policies of countries that are ranked in the top 10 of various AI readiness ranking indexes (Government AI Readiness Index,<ref name="Government AI Readiness Index 2023">{{cite web |title=Government AI Readiness Index 2023{{!}} |url=https://oxfordinsights.com/ai-readiness/ai-readiness-index/|website=oxfordinsights.com |access-date=23 October 2024 |language=en |date=2023}}</ref> The Global AI Index 2024,<ref name="Global AI Index 2024">{{cite web |title=Global AI Index 2024{{!}} |url=https://www.tortoisemedia.com/intelligence/global-ai/#rankings|website=tortoisemedia.com |access-date=23 October 2024 |language=en |date=2024}}</ref> Techopedia<ref name="Techopedia">{{cite web |title=Top 10 Countries Leading in AI Research & Technology in 2024{{!}} |url=https://www.techopedia.com/top-10-countries-leading-in-ai-research-technology|website=techopedia.com |access-date=23 October 2024 |language=en |date=2024}}</ref>)
* Representation from each continent to ensure a diverse range of perspectives.
* Notable international AI agreements and conferences.
* National policies and regulations on AI development, deployment, and governance.
* Key milestones in the development of AI technologies, such as the release of new AI frameworks or significant advancements in areas like natural language processing.

===Timeline of AI policy===
  
 
{| class="sortable wikitable"
! Year !! Month and date !! Region !! Name !! Event type !! Details
|-
| 2017 || {{dts|June}} || {{w|Canada}} || Pan-Canadian Artificial Intelligence Strategy || National Policy || Canada releases the world’s first National AI Strategy, aiming to have the most robust AI ecosystem in the world by 2030.<ref name="CIFAR">{{cite web |title=Canada is a global AI leader{{!}} |url=https://cifar.ca/ai/|website=cifar.ca |access-date=18 September 2024 |language=en |date=2017}}</ref> The Strategy is a collaborative effort, spanning government, academia, and industry and headed by the {{w|Canadian Institute for Advanced Research}} (CIFAR).<ref name="Pan-Canadian AI Strategy">{{cite web |title=Pan-Canadian Artificial Intelligence Strategy{{!}} |url=https://dig.watch/resource/pan-canadian-artificial-intelligence-strategy#:~:text=The%20Pan%2DCanadian%20Artificial%20Intelligence,(AI)%20research%20and%20innovation|website=dig.watch |access-date=18 September 2024 |language=en |date=June 2017}}</ref> Canada names the {{w|Vector Institute (Canada)}}, {{w|Mila (research institute)}}, and {{w|Amii (research institute)}} as national AI institutes and contributors to the nation’s AI progress.<ref name="CIFAR">{{cite web |title=Canada is a global AI leader{{!}} |url=https://cifar.ca/ai/|website=cifar.ca |access-date=18 September 2024 |language=en |date=2017}}</ref> This strategy would go on to enhance Canada’s global standing in AI research and innovation.
|-
| 2017 || {{dts|July 20}} || {{w|China}} || Guidelines on AI Development || National Policy || The {{w|State Council of the People's Republic of China}} issues guidelines on developing AI by embedding AI into the socioeconomic landscape and the country’s basic functioning. The council lays out plans to be a world leader in AI by 2030, aiming for the total output of the AI industry to be 1 trillion yuan ($147.8 billion).<ref name="China Issues ">{{cite web |title=China issues guideline on artificial intelligence development{{!}} |url=https://english.www.gov.cn/policies/latest_releases/2017/07/20/content_281475742458322.htm|website=english.gov.cn |access-date=6 September 2024 |language=en |date=20 July 2017}}</ref>
|-
| 2017 || {{dts|October}} || {{w|Finland}} || National AI Strategy || National Policy || The Finnish {{w|Ministry of Economic Affairs and Employment}} releases Finland’s Age of Artificial Intelligence, providing policy recommendations, laying out the current state of AI, and outlining ways AI may transform society.<ref name="Finnish AI Watch">{{cite web |title=National strategies on Artificial Intelligence A European perspective in 2019{{!}} |url=https://knowledge4policy.ec.europa.eu/sites/default/files/finland-ai-strategy-report.pdf|website=knowledge4policy.ec.europa.eu |access-date=30 October 2024 |language=en |date=25 February 2020}}</ref> The Strategy outlines adopting an open data policy and creating adequate conditions for prosperous AI.<ref name="Finnish AI Watch">{{cite web |title=National strategies on Artificial Intelligence A European perspective in 2019{{!}} |url=https://knowledge4policy.ec.europa.eu/sites/default/files/finland-ai-strategy-report.pdf|website=knowledge4policy.ec.europa.eu |access-date=30 October 2024 |language=en |date=25 February 2020}}</ref> The goals are to increase the competitiveness of the Finnish AI industry, provide high-quality public services, improve public sector efficiency, and ensure a well-functioning society.<ref name="Finnish AI Watch">{{cite web |title=National strategies on Artificial Intelligence A European perspective in 2019{{!}} |url=https://knowledge4policy.ec.europa.eu/sites/default/files/finland-ai-strategy-report.pdf|website=knowledge4policy.ec.europa.eu |access-date=30 October 2024 |language=en |date=25 February 2020}}</ref>
|-
| 2018 || {{dts|March 29}} || {{w|France}} || National AI Strategy || National Policy || {{w|Emmanuel Macron}} announces the National French AI Strategy, planning to spend 1.5 billion euros on AI during his term as president.<ref>{{cite web |last1=Bareis |first1=Jascha |last2=Katzenbach |first2=Christian |title=Global AI race: States aiming for the top|url=https://www.hiig.de/en/global-ai-race-nations-aiming-for-the-top/ |website=hiig.de |access-date=11 October 2024 |language=en |date=29 November 2018}}</ref> The Strategy states France’s intent to strengthen public research institutes, double the number of students trained in AI, and bolster data protection and confidentiality.<ref>{{cite web |last1=Bareis |first1=Jascha |last2=Katzenbach |first2=Christian |title=Global AI race: States aiming for the top|url=https://www.hiig.de/en/global-ai-race-nations-aiming-for-the-top/ |website=hiig.de |access-date=11 October 2024 |language=en |date=29 November 2018}}</ref> The proposed sectors to benefit from AI are health (specifically disease detection and prevention), transportation, environmental policies, and defense.<ref>{{cite web |last1=Bareis |first1=Jascha |last2=Katzenbach |first2=Christian |title=Global AI race: States aiming for the top|url=https://www.hiig.de/en/global-ai-race-nations-aiming-for-the-top/ |website=hiig.de |access-date=11 October 2024 |language=en |date=29 November 2018}}</ref> The 5-year plan aims to improve AI education, attract AI talent, establish an open data policy for AI implementation, and create an ethical framework for the transparent and fair use of AI.<ref name="France AI Strategy">{{cite web |title=France AI Strategy Report{{!}} |url=https://ai-watch.ec.europa.eu/countries/france/france-ai-strategy-report_en|website=ai-watch.ec.europa.eu |access-date=11 October 2024 |language=en |date=1 September 2021}}</ref> The regulatory proposal is to create a digital technology and AI ethics committee to lead discussions on AI transparently.<ref name="France AI Strategy">{{cite web |title=France AI Strategy Report{{!}} |url=https://ai-watch.ec.europa.eu/countries/france/france-ai-strategy-report_en|website=ai-watch.ec.europa.eu |access-date=11 October 2024 |language=en |date=1 September 2021}}</ref>
|-
| 2020 || {{dts|October 10}} || {{w|European Union}} || {{w|Artificial Intelligence Act}} || International Policy || {{w|European Union}} leaders meet to discuss the digital transition. They invite the {{w|European Commission}}, the executive branch of the EU, to increase private and public tech investment, ensure elevated coordination between European research centers, and construct a clear definition of Artificial Intelligence.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union{{!}} |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
|-
| 2018 || {{dts|May 5}} || {{w|European Union}} || {{w|General Data Protection Regulation}} || International Policy || The European Union effects the {{w|General Data Protection Regulation}} (GDPR), the strongest and most comprehensive attempt yet to regulate personal data. The GDPR outlines a set of rules that aims to strengthen protection for personal data in response to increasing technological development.<ref name="What is GDPR">{{cite web |title=What is GDPR{{!}} |url=https://gdpr.eu/what-is-gdpr/ |website=GDPR.EU |access-date=28 August 2024 |language=en}}</ref> Although the GDPR is focused on privacy, it states that individuals have the right to a human review of results from automated decision-making systems.<ref name="HRW">{{cite web |title=The EU General Data Protection Regulation{{!}} |url=https://www.hrw.org/news/2018/06/06/eu-general-data-protection-regulation?gad_source=1&gclid=CjwKCAjwuMC2BhA7EiwAmJKRrBN_g5ZGkeki0aGCIe8R3eVgFxEl8jsIzE9NIngd__KZ_P8vpiYV7RoC4qYQAvD_BwE |website=HRW.org |date=6 June 2018|access-date=28 August 2024 |language=en}}</ref> The fine for violating the GDPR is high and extends to any organization that offers services to EU citizens.<ref name="What is GDPR">{{cite web |title=What is GDPR{{!}} |url=https://gdpr.eu/what-is-gdpr/ |website=GDPR.EU |access-date=28 August 2024 |language=en}}</ref>
|-
| 2021 || {{dts|April 21}} || {{w|European Union}} || {{w|Artificial Intelligence Act}} || International Policy || The {{w|European Commission}} proposes the {{w|Artificial Intelligence Act}}, releasing a proposal for AI regulation that aims to improve trust in AI and foster its development.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union{{!}} |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
|-
| 2018 || {{dts|June}} || {{w|India}} || National AI Strategy || National Policy || {{w|NITI Aayog}}, India’s public policy think tank, releases a National AI Strategy (#AIforAll).<ref name="India AI Strategy Overview">{{cite web |title=National Strategy for Artificial Intelligence #AIForAll (2018) Overview of the strategy{{!}} |url=https://datagovhub.elliott.gwu.edu/india-ai-strategy/|website=datagovhub.elliott.gwu |access-date=11 October 2024 |language=en |date=2018}}</ref> The Strategy suggests that India harness the power of AI through research, application, training, acceleration of its adoption, and responsible development.<ref name="India AI Strategy Overview">{{cite web |title=National Strategy for Artificial Intelligence #AIForAll (2018) Overview of the strategy{{!}} |url=https://datagovhub.elliott.gwu.edu/india-ai-strategy/|website=datagovhub.elliott.gwu |access-date=11 October 2024 |language=en |date=2018}}</ref> The sectors predicted to benefit the most from advancing AI are healthcare, agriculture, education, infrastructure, and mobility.<ref name="India’s National Strategy for AI">{{cite web |title=National Strategy for AI{{!}} |url=https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf|website=niti.gov.in |access-date=11 October 2024 |language=en |date= June 2018}}</ref> The barriers to actualizing the Strategy’s goals are a lack of AI expertise, a deficiency in data ecosystems, and limited collaboration.<ref name="India’s National Strategy for AI">{{cite web |title=National Strategy for AI{{!}} |url=https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf|website=niti.gov.in |access-date=11 October 2024 |language=en |date= June 2018}}</ref>
|-
| 2018 || {{dts|June 28}} || {{w|United States}} || {{w|California Consumer Privacy Act}} || Regional Policy || The {{w|California Consumer Privacy Act}} is signed into law, heightening consumer control over personal information. The law would go into effect on January 1, 2020, and grants consumers the right to know about, opt out of the sharing of, and delete personal information.<ref name="Office of the Attorney General">{{cite web |title=California Consumer Privacy Act (CCPA){{!}} |url=https://oag.ca.gov/privacy/ccpa#:~:text=The%20California%20Consumer%20Privacy%20Act,how%20to%20implement%20the%20law|website=oag.ca.gov |access-date=30 August 2024 |language=en}}</ref> The Act would influence personal data usage by giving consumers the right to opt out of automated decision-making systems and by compelling businesses to inform customers on how and for what purpose they use personal information.<ref>{{cite web |last1=Ocampo |first1=Danielle |title=CCPA and the EU AI ACT|url=https://calawyers.org/privacy-law/ccpa-and-the-eu-ai-act/#:~:text=The%20CCPA%20would%20give%20individuals,and%20the%20purposes%20of%20processing |website=calawyers.org |access-date=30 August 2024 |language=en |date= June 2024}}</ref> These regulations require businesses to disclose if and how they use personal information for AI training.
|-
| 2018 || {{dts|November 15}} || {{w|Germany}} || National AI Strategy || National Policy || Germany releases a National AI Strategy developed by the {{w|Federal Ministry of Education and Research (Germany)}}, {{w|Federal Ministry for Economic Affairs and Climate Action}}, and Federal Ministry of Labour and Social Affairs.<ref name="Germany AI Strategy Report 2018">{{cite web |title=Germany AI Strategy Report{{!}}|url=https://ai-watch.ec.europa.eu/countries/germany/germany-ai-strategy-report_en|website=ai-watch.ec |access-date=11 October 2024 |language=en |date=21 September 2024}}</ref> The stated goals are to increase Germany’s competitiveness, become an international leading AI entity, ensure responsible development and deployment of AI for human good, and ethically integrate AI.<ref name="Germany AI Strategy Report 2018">{{cite web |title=Germany AI Strategy Report{{!}}|url=https://ai-watch.ec.europa.eu/countries/germany/germany-ai-strategy-report_en|website=ai-watch.ec |access-date=11 October 2024 |language=en |date=21 September 2024}}</ref> The action items are to strengthen research, streamline research results into industry, increase the accessibility of experts, create data infrastructure, encourage EU cooperation, and foster AI dialogue in society.<ref name="Germany’s AI Strategy">{{cite web |title=Artificial Intelligence Strategy AI - a brand for Germany{{!}} |url=https://www.bundesregierung.de/breg-en/service/archive/ai-a-brand-for-germany-1551432|website=bundesregierung.de |access-date=11 October 2024 |language=en |date=15 November 2018}}</ref> The government is set to provide 3 billion euros to implement the strategy until 2025.<ref name="Germany’s AI Strategy">{{cite web |title=Artificial Intelligence Strategy AI - a brand for Germany{{!}} |url=https://www.bundesregierung.de/breg-en/service/archive/ai-a-brand-for-germany-1551432|website=bundesregierung.de |access-date=11 October 2024 |language=en |date=15 November 2018}}</ref>
|-
| 2019 || {{dts|January}} || {{w|United States}} || CSET Formation || National Organization || {{w|Open Philanthropy}} grants the {{w|Walsh School of Foreign Service}} at {{w|Georgetown University}} $55 million to establish the {{w|Center for Security and Emerging Technology}} (CSET), a think tank dedicated to the policy analysis of international security and emerging tech.<ref name="Open Philanthropy Grant for CSET">{{cite web |title=Georgetown University — Center for Security and Emerging {{!}} |url=https://www.openphilanthropy.org/grants/georgetown-university-center-for-security-and-emerging-technology/|website=openphilanthropy.org |access-date=27 September 2024 |language=en |date=January 2019}}</ref> CSET will provide high-quality advice to policymakers to combat AI risks by assessing global technological developments with a focus on the USA and related policy communities, generating written products for policymakers, and training people for roles in the policy community.<ref name="Open Philanthropy Grant for CSET">{{cite web |title=Georgetown University — Center for Security and Emerging {{!}} |url=https://www.openphilanthropy.org/grants/georgetown-university-center-for-security-and-emerging-technology/|website=openphilanthropy.org |access-date=27 September 2024 |language=en |date=January 2019}}</ref> Silicon Valley entrepreneur {{w|Dustin Moskovitz}}, co-founder of {{w|Facebook}}, primarily funds the grant after recognizing a demand for policy analysis.<ref>{{cite web |last1=Anderson |first1=Nick |title=Georgetown launches think tank on security and emerging technology|url=https://www.washingtonpost.com/local/education/georgetown-launches-think-tank-on-security-and-emerging-technology/2019/02/27/d6dabc62-391f-11e9-a2cd-307b06d0257b_story.html|website=washingtonpost.com |access-date=8 March 2019 |language=en |date=28 February 2019}}</ref> CSET would go on to influence AI policy and be named a member of the Biden Administration’s AI Safety Institute Consortium in 2024.<ref name="CSET News Release">{{cite web |title=CSET Named Member of Biden Administration’s AI Safety Institute Consortium{{!}} |url=https://cset.georgetown.edu/article/cset-named-member-of-biden-administrations-ai-safety-institute-consortium/|website=cset.georgetown.edu |access-date=27 September 2024 |language=en |date=8 February 2024}}</ref>
|-
| 2019 || {{dts|February 11}} || {{w|United States}} || {{w|Executive Order 13859}} || National Policy || President Trump signs {{w|Executive Order 13859}} to maintain American leadership in Artificial Intelligence. The Order directs federal agencies to prioritize AI research and development and promote American leadership in the AI space.<ref name="Federal Register">{{cite web |title=Maintaining American Leadership in Artificial Intelligence{{!}} |url=https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence|website=Federalregister.gov |access-date=30 August 2024 |language=en}}</ref> The Order does not provide details on how it plans to put the new policies in effect, and does not allocate any federal funding towards executing its vision.<ref>{{cite web |last1=Metz |first1=Cade |title=Trump Signs Executive Order Promoting Artificial Intelligence|url=https://www.nytimes.com/2019/02/11/business/ai-artificial-intelligence-trump.html |website=nytimes.com |access-date=30 August 2024 |language=en |date=11 February 2019}}</ref>
|-
| 2019 || {{dts|March 29}} || {{w|Japan}} || Social Principles of Human-Centered AI || National Policy || The Japanese government releases the Social Principles of Human-Centered AI, a set of guidelines for implementing AI in society based on the philosophies of human dignity, diversity, inclusion, and sustainability, which the government will continuously revise.<ref name="Social Principles of Human-Centric AI">{{cite web |title=Human Centric AI{{!}} |url=https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf|website=cas.go.jp |access-date=15 September 2024 |language=en |date=29 March 2019}}</ref> The Social Principles are a broad ethical framework of Japan's vision for AI in society. Japan provides nonbinding guidelines on AI and imposes transparency obligations on some large digital platforms.<ref>{{cite web |last1=Habuka |first1=Hiroki |title=Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency|url=https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency#:~:text=In%202019%2C%20the%20Japanese%20government,diversity%20and%20inclusion%2C%20and%20sustainability |website=CSIS.org |access-date=15 September 2024 |language=en |date=14 February 2023}}</ref> Japan aims to achieve social goals through the use of AI rather than restriction.
|-
| 2019 || {{dts|April 10}} || {{w|United States}} || Algorithmic Accountability Act || National Policy || The Algorithmic Accountability Act is introduced into the House of Representatives. Commercial entities must “conduct assessments of high-risk systems that involve personal information or make automated decisions, such as systems that use artificial intelligence or machine learning.”<ref name="H.R.2231">{{cite web |title=116th Congress{{!}} |url=https://www.congress.gov/bill/116th-congress/house-bill/2231|website=Congress.gov |access-date=30 August 2024 |language=en}}</ref> The Bill aims to minimize bias, discrimination, and inaccuracy in automated decision systems by compelling companies to assess their impacts. The Act does not establish binding regulations but asks the {{w|Federal Trade Commission}} to establish rules for evaluating highly sensitive automated systems.<ref>{{cite web |last1=Robertson |first1=Adi |title=A new bill would force companies to check their algorithms for bias|url=https://www.theverge.com/2019/4/10/18304960/congress-algorithmic-accountability-act-wyden-clarke-booker-bill-introduced-house-senate |website=theverge.com |access-date=30 August 2024 |language=en |date=10 April 2019}}</ref> The legislation would be introduced into the Senate in 2022<ref name="S.3572">{{cite web |title=117th Congress{{!}} |url=https://www.congress.gov/bill/117th-congress/senate-bill/3572|website=Congress.gov |access-date=30 August 2024 |language=en}}</ref> but would still not be signed into law through 2024.
|-
| 2019 || {{dts|May 29}} || International || {{w|OECD}} AI Principles || International Policy || The Organisation for Economic Co-operation and Development ({{w|OECD}}) issues AI principles to shape policies, create an AI risk framework, and to foster global communication and understanding across jurisdictions. The European Union, Council of Europe, United Nations, and the United States would use these principles in their AI legislation.<ref name="OECD AI Principles overview>{{cite web |title=OECD AI Principles overview{{!}} |url=https://oecd.ai/en/ai-principles|website=oecd.ai |access-date=11 September 2024 |language=en}}</ref> The Principles aim to be values-based and include the following categories: sustainable development, human rights, transparency and explainability, security, and accountability.<ref name="OECD AI Principles">{{cite web |title=AI Principles{{!}} |url=https://www.oecd.org/en/topics/sub-issues/ai-principles.html|website=oecd.org |access-date=11 September 2024 |language=en}}</ref> The principles would be updated again in May 2024 in consideration of new technology and policy developments.<ref name="OECD AI Principles overview>{{cite web |title=OECD AI Principles overview{{!}} |url=https://oecd.ai/en/ai-principles|website=oecd.ai |access-date=11 September 2024 |language=en}}</ref>
|-
| 2019 || {{dts|October 8}} || {{w|Netherlands}} || National AI Strategy || National Policy || The Dutch government releases a strategic action plan for AI. The plan includes a list of initiatives to foster AI-driven economic growth through education, research and innovation, and policy development.<ref name="Netherlands AI Watch">{{cite web |title=National strategies on Artificial Intelligence A European perspective in 2019{{!}} |url=https://knowledge4policy.ec.europa.eu/sites/default/files/netherlands-ai-strategy-report.pdf|website=knowledge4policy.ec.europa.eu |access-date=30 October 2024 |language=en |date=25 February 2020}}</ref> Its three pillars are capitalizing on social and economic opportunities (e.g., adopting and using AI across sectors), creating the right conditions for AI to thrive, and strengthening ethical foundations.<ref name="Netherlands AI Watch"/> The annual government budget for AI innovation and research is around 45 million euros, and the strategy will be reviewed yearly.<ref name="Netherlands AI Watch"/>
|-
| 2019 || {{dts|November}} || {{w|Singapore}} || National AI Strategy || National Policy ||  Singapore releases a National AI Strategy, produced by the National AI Office under the {{w|Smart Nation}} and Digital Government Office, that aims to advance the digital transformation across multiple sectors of urban life.<ref name="Singapore NAIS">{{cite web |title=Raising Standards: Data and Artificial Intelligence in Southeast Asia{{!}} |url=https://asiasociety.org/policy-institute/raising-standards-data-ai-southeast-asia/ai/singapore|website=asiasociety.org |access-date=18 October 2024 |language=en |date=July 2022}}</ref> Singapore hopes to develop into a global AI hub, generate new business models, deliver life-improving services, and equip the workforce to adapt to an AI economy.<ref>{{cite web |last1=Ho |first1=Ming Yin |title=Singapore’s National Strategy in the Global Race for AI|url=https://www.kas.de/en/web/politikdialog-asien/digital-asia/detail/-/content/singapore-s-national-strategy-in-the-global-race-for-ai |website=kas.de |access-date=18 October 2024 |language=en |date=26 February 2024}}</ref> The strategy identifies five national AI projects in healthcare, municipal solutions, education, customs, and logistics.<ref>{{cite web |last1=Ho |first1=Ming Yin |title=Singapore’s National Strategy in the Global Race for AI|url=https://www.kas.de/en/web/politikdialog-asien/digital-asia/detail/-/content/singapore-s-national-strategy-in-the-global-race-for-ai |website=kas.de |access-date=18 October 2024 |language=en |date=26 February 2024}}</ref> It also lists five enablers for a thriving AI ecosystem: multi-stakeholder partnerships across sectors, data architecture, a trusted environment, talent and education, and international collaboration.<ref>{{cite web |last1=Ho |first1=Ming Yin |title=Singapore’s National Strategy in the Global Race for AI|url=https://www.kas.de/en/web/politikdialog-asien/digital-asia/detail/-/content/singapore-s-national-strategy-in-the-global-race-for-ai |website=kas.de |access-date=18 October 2024 |language=en |date=26 February 2024}}</ref> The document also provides private sector organizations with voluntary guidance on key ethical and governance issues, including ensuring AI decisions are explainable, human involvement in AI-augmented decision-making, and stakeholder communication.<ref>{{cite web |last1=Ho |first1=Ming Yin |title=Singapore’s National Strategy in the Global Race for AI|url=https://www.kas.de/en/web/politikdialog-asien/digital-asia/detail/-/content/singapore-s-national-strategy-in-the-global-race-for-ai |website=kas.de |access-date=18 October 2024 |language=en |date=26 February 2024}}</ref>
|-
| 2019 || {{dts|November 7}} || {{w|Australia}} || Artificial Intelligence Ethics Framework || National Policy || The Australian Government releases an Artificial Intelligence Ethics Framework to ensure safe, secure, and reliable AI. The framework includes eight voluntary, nonbinding principles to complement existing AI practices: human well-being, human-centered values, fairness, privacy protection, reliability, transparency and explainability, contestability, and accountability.<ref name="Australia Government on AI Ethics">{{cite web |title=Australia’s AI Ethics Principles{{!}} |url=https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework/australias-ai-ethics-principles#:~:text=Principles%20at%20a%20glance&text=Fairness%3A%20AI%20systems%20should%20be,ensure%20the%20security%20of%20data.|website=industry.gov.au |access-date=18 September 2024 |language=en |date=7 November 2019}}</ref> The principles are consistent with those set forth by the {{w|OECD}} and the {{w|World Economic Forum}}, and are set to be trialed by {{w|National Australia Bank}}, {{w|Commonwealth Bank}}, {{w|Telstra}}, {{w|Microsoft}}, and Flamingo AI.<ref>{{cite web |last1=Tonkin |first1=Casey |title=AI ethics framework being put to the test|url=https://ia.acs.org.au/article/2019/ai-ethics-framework-being-put-to-the-test.html |website=ia.acs.org.au |access-date=18 September 2024 |language=en |date=7 November 2019}}</ref>
|-
| 2019 || {{dts|December 17}} || {{w|South Korea}} || National AI Strategy || National Policy ||  South Korea establishes its National Strategy for Artificial Intelligence. It outlines Korea’s vision and strategy for the AI era, aiming to grow from an IT leader into an AI leader.<ref name="OECD: Korea National AI Strategy">{{cite web |title=NATIONAL STRATEGY FOR AI{{!}}|url=https://oecd.ai/en/dashboards/policy-initiatives/http:%2F%2Faipo.oecd.org%2F2021-data-policyInitiatives-26497|website=oecd.ai |access-date=20 September 2024 |language=en |date=5 July 2024}}</ref> All of Korea’s ministries jointly develop the Strategy, which is significant for setting a unified direction for the government’s AI policies.<ref>{{cite web |last1=Kyul |first1=Han |title=Korean Government Announces the “National AI Strategy,” Jointly Developed by All Ministries|url=https://www.kimchang.com/en/insights/detail.kc?sch_section=4&idx=20865 |website=kimchang.com |access-date=20 September 2024 |language=en |date=13 January 2020}}</ref> The goals outlined include ranking third in global digital competitiveness by 2030, generating 455 trillion Korean won in economic effects from AI, and reaching the top 10 countries for quality of life.<ref name="Korean National Strategy for AI">{{cite web |title=National Strategy for Artificial Intelligence{{!}} |url=https://doc.msit.go.kr/SynapDocViewServer/viewer/doc.html?key=343cb1a8f42c452e8a18ec9f89fbfca0&convType=img&convLocale=ko_KR&contextPath=/SynapDocViewServer|website=msit.go.kr |access-date=20 September 2024 |language=en |date=17 December 2019}}</ref>
|-
| 2020 || {{dts|June}} || International || {{w|Global Partnership on Artificial Intelligence}} || International Organization ||  The {{w|Global Partnership on Artificial Intelligence}} (GPAI) is established to share multidisciplinary research, identify key issues in AI, and facilitate international collaboration.<ref name="GPAI">{{cite web |title=About GPAI{{!}} |url=https://www.gpai.ai/about/#:~:text=Launched%20in%20June%202020%2C%20GPAI,research%20and%20applied%20activities%20on|website=gpai.ai |access-date=20 September 2024 |language=en |date=June 2020}}</ref> The {{w|OECD}} hosts the GPAI secretariat, and the partnership is based on the shared commitment of {{w|G7}} countries to the {{w|OECD}} AI Principles.<ref name="GPAI"/> The partnership is a multi-stakeholder initiative to foster international cooperation on AI and includes working groups on Responsible AI, Data Governance, the Future of Work, and Innovation and Commercialization.<ref name="GPAI"/> The United States, wary of joining any international AI panel due to overregulation concerns, joins the partnership to counter China’s increasing international AI presence.<ref>{{cite web |last1=O’Brien |first1=Matt |title=US joins G7 artificial intelligence group to counter China|url=https://www.defensenews.com/global/the-americas/2020/05/29/us-joins-g7-artificial-intelligence-group-to-counter-china/ |website=defensenews.com |access-date=20 September 2024 |language=en |date=29 May 2020}}</ref>
|-
| 2020 || {{dts|October 10}} || {{w|European Union}} || {{w|Artificial Intelligence Act}} || International Policy || {{w|European Union}} leaders meet to discuss the digital transition. They invite the {{w|European Commission}}, the executive branch of the EU, to increase private and public tech investment, ensure closer coordination among European research centers, and construct a clear definition of artificial intelligence.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union{{!}} |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
|-
| 2020 || {{dts|November}} || {{w|Finland}} || National AI Strategy Update || National Policy || Finland updates its 2017 national strategy through the Artificial Intelligence 4.0 program, which promotes the digitalization of Finnish business.<ref name="Finland AI 4.0">{{cite web |title=Artificial Intelligence 4.0 programme accelerates business digitalisation{{!}} |url=https://tem.fi/en/-/artificial-intelligence-4.0-programme-to-speed-up-digitalisation-of-business|website=tem.fi |access-date=30 October 2024 |language=en |date=November 2020}}</ref> The program aims for Finland to be sustainable, clean, and digitally efficient with the help of AI by 2030.<ref name="Finland AI 4.0"/>
|-
| 2020 || {{dts|November 25}} || {{w|Switzerland}} || National AI Guidelines || National Policy || Switzerland releases Guidelines on Artificial Intelligence, intended to act as a general frame of reference on the use of AI in the Federal Administration.<ref name="Guidelines on Swiss AI">{{cite web |title=Guidelines on Artificial Intelligence for the Confederation{{!}}|url=https://www.bakom.admin.ch/bakom/en/homepage/digital-switzerland-and-internet/strategie-digitale-schweiz/data-policy/ai.html|website=bakom.admin.ch |access-date=30 October 2024 |language=en |date=November 2020}}</ref> The Swiss Federal Council adopts the guidelines, developed by the interdepartmental Working Group on AI under the leadership of the {{w|Federal Department of Economic Affairs, Education and Research}} (EAER).<ref name="Swiss Government Council">{{cite web |title="Artificial Intelligence" – Adoption of Guidelines for the Federal Administration{{!}}|url=https://www.admin.ch/gov/fr/accueil/documentation/communiques.msg-id-81319.html|website=admin.ch |access-date=30 October 2024 |language=en |date=25 November 2020}}</ref> The Guidelines include prioritizing people, defining regulatory conditions for developing and applying AI, establishing transparency, traceability, and explainability, ensuring accountability and safety, shaping AI governance, and involving all relevant stakeholders.<ref name="Guidelines on Swiss AI"/> They guide government tasks such as developing AI strategies, introducing regulations for sectors affected by AI policy, developing AI systems in the Federal Administration, and helping shape AI regulation.<ref name="Guidelines on Swiss AI"/> Switzerland’s general approach is informed by the {{w|OECD}} Principles: rather than regulating AI directly, legislation guides the direction of safe AI development.<ref name="Switzerland AI Watch">{{cite web |title=Switzerland AI Strategy Report{{!}}|url=https://ai-watch.ec.europa.eu/countries/switzerland/switzerland-ai-strategy-report_en|website=ai-watch.ec.europa.eu |access-date=30 October 2024 |language=en |date=1 September 2021}}</ref>
|-
| 2020 || {{dts|December}} || {{w|Germany}} || National AI Strategy Update || National Policy ||  Germany releases an updated, more detailed National AI Strategy, integrating the technological developments since the 2018 Strategy. Germany plans to train more AI specialists, establish a robust research structure and AI ecosystem, create a human-centric regulatory framework, and support civil society AI networking for the common good.<ref name="2020 Update: Germany’s AI Strategy">{{cite web |title=Artificial Intelligence Strategy for the German Government: 2020 Update{{!}} |url=https://www.ki-strategie-deutschland.de/files/downloads/Fortschreibung_KI-Strategie_engl.pdf|website=ki-strategie-deutschland.de |access-date=11 October 2024 |language=en |date=December 2020}}</ref> The AI priorities include favorable working conditions for science and increasing AI expertise.<ref name="2020 Update: Germany’s AI Strategy"/> The research priorities are to strengthen national centers, encourage international research cooperation, and incentivize interdisciplinary AI research in healthcare, mobility, environmental protection, and aerospace.<ref name="2020 Update: Germany’s AI Strategy"/> The regulatory priorities are to create solid conditions for safe and trustworthy AI applications, adaptively regulate AI in work settings, strengthen information security, and protect the public against AI misuse.<ref name="2020 Update: Germany’s AI Strategy"/>
|-
| 2021 || {{dts|April 21}} || {{w|European Union}} || {{w|Artificial Intelligence Act}} || International Policy || The {{w|European Commission}} proposes the {{w|Artificial Intelligence Act}}, a regulation aiming to improve trust in AI and foster its development.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union{{!}} |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
|-
| 2021 || {{dts|September 21}} || {{w|China}} || New Generation Artificial Intelligence Code of Ethics || National Policy || The {{w|Ministry of Science and Technology (China)|Ministry of Science and Technology}} publishes the New Generation AI Code of Ethics. Its three main provisions are the improvement of human well-being, the promotion of fairness and justice, and the protection of privacy and security. The Ministry encourages organizations to build upon the code.<ref>{{cite web |last1=Kachra |first1=Ashyana-Jasmine |title=Making Sense of China’s AI Regulations|url=https://www.holisticai.com/blog/china-ai-regulation |website=holisticai.com |access-date=28 October 2024 |language=en |date=12 February 2024}}</ref>
|-
| 2021 || {{dts|September 22}} || {{w|United Kingdom}} || National AI Strategy || National Policy || The UK government releases its National AI Strategy, a ten-year plan that outlines how to invest in and plan for the long-term needs of the AI ecosystem, support the transition to an AI-enabled economy, and ensure the UK succeeds in AI governance.<ref name="UK National AI Strategy">{{cite web |title=National AI Strategy{{!}} |url=https://assets.publishing.service.gov.uk/media/614db4d1e90e077a2cbdf3c4/National_AI_Strategy_-_PDF_version.pdf|website=gov.uk |access-date=15 September 2024 |language=en |date=22 September 2021}}</ref> The strategy comes a few months after the EU proposed its {{w|Artificial Intelligence Act}}. The {{w|Alan Turing Institute}}, established in 2015, is the national research center for AI and one of the organizations that will help implement the strategy.<ref>{{cite web |last1=Marr |first1=Bernard |title=The Future Role Of AI And The UK National AI Strategy – Insights From Professor Mark Girolami|url=https://www.forbes.com/sites/bernardmarr/2021/11/03/the-future-role-of-ai-and-the-uk-national-ai-strategy--insights-from-professor-mark-girolami/ |website=forbes.com |access-date=15 September 2024 |language=en |date=3 November 2021}}</ref>
|-
| 2021 || {{dts|September 30}} || {{w|Brazil}} || National AI Strategy || National Policy || The Brazilian Government approves the Brazilian Strategy for Artificial Intelligence, a document to guide research, innovation, and development of ethical AI solutions.<ref>{{cite web |last1=Roman |first1=Juliana |title=Artificial Intelligence in Brazil: the Brazilian Strategy for Artificial Intelligence (BSAI/EBIA) and Bill No. 21/2020|url=https://irisbh.com.br/en/artificial-intelligence-in-brazil-the-brazilian-strategy-for-artificial-intelligence-bsai-ebia-and-bill-no-21-2020/#:~:text=21%2F2020%2C%20approved%20by%20the,for%20the%20development%20and%20use |website=irisbh.com.br |access-date=18 September 2024 |language=en |date=4 October 2021}}</ref> The strategy is based on the {{w|OECD}} AI Principles and aims to develop ethical AI principles, guide AI use, remove barriers to innovation, improve cross-sector collaboration, develop AI skills, promote AI investment, and advance Brazilian technology overseas.<ref>{{cite web |last1=Lowe |first1=Josh |title=Brazil launches national AI strategy |url=https://www.globalgovernmentforum.com/brazil-launches-national-ai-strategy/|website=globalgovernmentforum.com |access-date=18 September 2024 |language=en |date=13 April 2021}}</ref> The strategy would face criticism for its lack of specifics on regulation.<ref>{{cite web |last1=Roman |first1=Juliana |title=Artificial Intelligence in Brazil: the Brazilian Strategy for Artificial Intelligence (BSAI/EBIA) and Bill No. 21/2020|url=https://irisbh.com.br/en/artificial-intelligence-in-brazil-the-brazilian-strategy-for-artificial-intelligence-bsai-ebia-and-bill-no-21-2020/#:~:text=21%2F2020%2C%20approved%20by%20the,for%20the%20development%20and%20use |website=irisbh.com.br |access-date=18 September 2024 |language=en |date=4 October 2021}}</ref>
|-
| 2022 || {{dts|January 28}} || {{w|Japan}} || AI Governance Guidelines || National Policy ||  Japan’s {{w|Ministry of Economy, Trade and Industry}} (METI) releases Governance Guidelines for Implementation of AI Principles Ver. 1.1.<ref name="OneTrust DataGuidance">{{cite web |title=Japan: METI releases updated version of Governance Guidelines on AI Principles{{!}} |url=https://www.dataguidance.com/news/japan-meti-releases-updated-version-governance|website=dataguidance.com |access-date=25 September 2024 |language=en |date=31 January 2023}}</ref> The guidelines cover areas such as conditions and risk analysis, goal setting, implementation, and evaluation.<ref name="OneTrust DataGuidance"/> They consider the social acceptance of AI system development and operation and companies’ AI proficiency, and suggest reducing incident-related harms to users by emphasizing prevention and early response.<ref name="OneTrust DataGuidance"/> These guidelines are practical and action-oriented, following Japan’s Social Principles on AI from 2019, which focus on ethics. In January 2021, METI released Ver. 1.0 of the guidelines, outlining AI trends overseas and locally. The current guidelines are the result of METI receiving public comments and holding meetings on Japan’s AI governance and how to operationalize the Social Principles.<ref name="METI">{{cite web |title=Call for Public Comments on "AI Governance Guidelines for Implementation of AI Principles Ver. 1.0" Opens{{!}} |url=https://www.meti.go.jp/english/press/2021/0709_004.html|website=meti.go.jp |access-date=25 September 2024 |language=en |date=9 July 2021}}</ref> Japan maintains the view that, in a rapidly changing AI landscape, regulation can hamper innovation. METI concludes that the government should respect companies’ voluntary efforts at AI governance by providing nonbinding guidance.<ref>{{cite web |last1=Habuka |first1=Hiroki |title=Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency|url=https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency |website=CSIS.org |access-date=25 September 2024 |language=en |date=14 February 2023}}</ref>
|-
| 2022 || {{dts|March 1}} || {{w|China}} || Internet Information Service Algorithmic Recommendation Management Provisions || National Policy ||  The Internet Information Service Algorithmic Recommendation Management Provisions go into effect in China, regulating AI in the context of content recommendation technologies. The covered technologies are defined as the generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling-related decision-making of content.<ref>{{cite web |last1=Xu |first1=Hui |last2=Lee |first2=Bianca |title=China’s New AI Regulations|url=https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf |website=lw.com |access-date=28 October 2024 |language=en |date=16 August 2023}}</ref> The regulations apply to service providers, who are now prohibited from offering different prices based on personal characteristics, promoting addictive content, manipulating traffic numbers, or pushing fake news.<ref>{{cite web |last1=Kachra |first1=Ashyana-Jasmine |title=Making Sense of China’s AI Regulations|url=https://www.holisticai.com/blog/china-ai-regulation |website=holisticai.com |access-date=28 October 2024 |language=en |date=12 February 2024}}</ref> Companies using AI-based personalized recommendations must now uphold user rights; protect minors, elders, and workers from harm; maintain transparency; and present information in line with mainstream socialist values.<ref>{{cite web |last1=Kachra |first1=Ashyana-Jasmine |title=Making Sense of China’s AI Regulations|url=https://www.holisticai.com/blog/china-ai-regulation |website=holisticai.com |access-date=28 October 2024 |language=en |date=12 February 2024}}</ref> Companies face a warning and possibly a fine for noncompliance.<ref>{{cite web |last1=Xu |first1=Hui |last2=Lee |first2=Bianca |title=China’s New AI Regulations|url=https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf |website=lw.com |access-date=28 October 2024 |language=en |date=16 August 2023}}</ref>
|-
| 2022 || {{dts|May}} || International || Quad AI || International Organization ||  The {{w|Quadrilateral Security Dialogue}}, known as the Quad, releases the report “Assessing AI-related Collaboration between the United States, Australia, India, and Japan” as an effort to cooperate on critical and emerging technology and as an alternative to China’s techno-authoritarian development model (including surveillance and censorship).<ref name="Quad AI">{{cite web |title=Assessing AI-related Collaboration between the United States, Australia, India, and Japan{{!}} |url=https://cset.georgetown.edu/publication/quad-ai/|website=cset.georgetown.edu |access-date=9 October 2024 |language=en |date=May 2022}}</ref> The document aims to ensure that tech innovation is aligned with the Quad members’ shared democratic values and respect for human rights.<ref name="Quad AI"/> The Quad began as a loose partnership between the United States, Australia, India, and Japan after the 2004 Indian Ocean tsunami, providing humanitarian aid to the affected region. It fell dormant after Australia grew concerned about antagonizing China.<ref>{{cite web |last1=Madhani |first1=Aamer |last2=Miller |first2=Zeke |title=EXPLAINER: What’s the 4-nation Quad, where did it come from?|url=https://apnews.com/article/nato-shinzo-abe-japan-india-australia-c579b7eb5ea53fb8cc50097de85e6b14 |website=apnews.com |access-date=27 September 2024 |language=en |date=24 May 2022}}</ref> The Quad was revived in 2017 and held its first formal summit in 2021.<ref>{{cite web |last1=Madhani |first1=Aamer |last2=Miller |first2=Zeke |title=EXPLAINER: What’s the 4-nation Quad, where did it come from?|url=https://apnews.com/article/nato-shinzo-abe-japan-india-australia-c579b7eb5ea53fb8cc50097de85e6b14 |website=apnews.com |access-date=27 September 2024 |language=en |date=24 May 2022}}</ref>
|-
| 2023 || {{dts|January 10}} || {{w|China}} || Deep Synthesis Provisions || National Policy || China implements the Deep Synthesis Provisions to increase government supervision over these technologies, becoming one of the first countries to regulate deepfakes. The government defines deep synthesis as technology that utilizes generative and/or synthetic algorithms to produce text, audio, video, or scenes.<ref>{{cite web |last1=Kachra |first1=Ashyana-Jasmine |title=Making Sense of China’s AI Regulations|url=https://www.holisticai.com/blog/china-ai-regulation |website=holisticai.com |access-date=28 October 2024 |language=en |date=12 February 2024}}</ref> The Provisions define the red line for deepfake services, prohibiting companies with deepfake technology from disseminating illegal information and requiring a “Generated by AI” label.<ref>{{cite web |last1=Li |first1=Barbara |last2=Zhou |first2=Amaya |title=Navigating the Complexities of AI Regulation in China|url=https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china|website=reedsmith.com |access-date=28 October 2024 |language=en |date=7 August 2024}}</ref> The Provisions apply to service providers, tech supporters, users, and other entities involved in deepfake services, such as online app distribution platforms.<ref>{{cite web |last1=Xu |first1=Hui |last2=Lee |first2=Bianca |title=China’s New AI Regulations|url=https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf |website=lw.com |access-date=28 October 2024 |language=en |date=16 August 2023}}</ref> The services must also adhere to China’s political ideology; however, penalties for noncompliance are not explicitly stated.<ref>{{cite web |last1=Xu |first1=Hui |last2=Lee |first2=Bianca |title=China’s New AI Regulations|url=https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf |website=lw.com |access-date=28 October 2024 |language=en |date=16 August 2023}}</ref>
|-
| 2023 || {{dts|June 15}} || International || AI Governance Alliance (AIGA) Established || International Organization ||  The {{w|World Economic Forum}} launches the AI Governance Alliance (AIGA) to guide the responsible development and deployment of AI systems.<ref>{{cite web |last1=Tedeneke |first1=Alem |title=World Economic Forum Launches AI Governance Alliance Focused on Responsible Generative AI|url=https://www.weforum.org/press/2023/06/world-economic-forum-launches-ai-governance-alliance-focused-on-responsible-generative-ai/ |website=weforum.org |access-date=20 September 2024 |language=en |date=15 June 2023}}</ref> The Alliance prioritizes safe systems and technology, promoting sustainable applications and transformation, and contributing to resilient governance and regulation.<ref>{{cite web |last1=Tedeneke |first1=Alem |title=World Economic Forum Launches AI Governance Alliance Focused on Responsible Generative AI|url=https://www.weforum.org/press/2023/06/world-economic-forum-launches-ai-governance-alliance-focused-on-responsible-generative-ai/ |website=weforum.org |access-date=20 September 2024 |language=en |date=15 June 2023}}</ref> Its members are drawn from industry, government, and civil society worldwide.
|-
| 2023 || {{dts|July 12}} || {{w|United States}} || White House Meets with AI Companies || National Policy || President {{w|Joe Biden}} and Vice President {{w|Kamala Harris}} host leading AI companies {{w|Amazon (company)|Amazon}}, {{w|Anthropic}}, {{w|Google}}, {{w|Inflection AI}}, {{w|Meta Platforms|Meta}}, {{w|Microsoft}}, and {{w|OpenAI}} at the White House and secure their voluntary commitments to prioritize safe, secure, and transparent AI development.<ref name="White House Fact Sheet on AI Leader Meeting">{{cite web |title=FACT SHEET: Biden-⁠Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI{{!}} |url=https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/|website=whitehouse.gov |access-date=11 September 2024 |language=en |date=21 July 2023}}</ref> The companies promise to ensure product safety before public release, build secure systems, and earn public trust. Congress has yet to pass contemporary AI bills, so this voluntary, nonbinding agreement serves as the primary guidance on AI concerns.<ref>{{cite web |last1=Chatterjee |first1=Mohar |title=White House notches AI agreement with top tech firms|url=https://www.politico.com/news/2023/07/21/biden-notches-voluntary-deal-with-7-ai-developers-00107509 |website=politico.com |access-date=11 September 2024 |language=en |date=21 July 2023}}</ref>
|-
| 2023 || {{dts|August 15}} || {{w|China}} || Generative AI Measures || National Policy ||  The final version of China's Generative AI Measures takes effect, making China one of the first countries to regulate generative artificial intelligence technology.<ref>{{cite web |last1=Li |first1=Barbara |last2=Zhou |first2=Amaya |title=Navigating the Complexities of AI Regulation in China|url=https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china|website=reedsmith.com |access-date=28 October 2024 |language=en |date=7 August 2024}}</ref> The Measures require service transparency and focus on the privacy of pre-training data, and they apply to services offered to the people of China regardless of the provider’s location.<ref>{{cite web |last1=Li |first1=Barbara |last2=Zhou |first2=Amaya |title=Navigating the Complexities of AI Regulation in China|url=https://www.reedsmith.com/en/perspectives/2024/08/navigating-the-complexities-of-ai-regulation-in-china|website=reedsmith.com |access-date=28 October 2024 |language=en |date=7 August 2024}}</ref> China also requires GenAI content to reflect socialist values, prohibiting content that could harm national interests or discriminate against Chinese citizens.<ref name="China's New AI Regulations">{{cite web |last1=Xu |first1=Hui |last2=Lee |first2=Bianca |title=China’s New AI Regulations|url=https://www.lw.com/admin/upload/SiteAttachments/Chinas-New-AI-Regulations.pdf |website=lw.com |access-date=28 October 2024 |language=en |date=16 August 2023}}</ref> The Measures do not require users to provide their real identities when creating content with GenAI, and they apply only to services offered to the public (as opposed to privately used technology).<ref name="China's New AI Regulations"/> The penalty for violating the Measures is a warning followed by a possible fine.<ref name="China's New AI Regulations"/>
|-
| 2023 || {{dts|September 19}} || {{w|United States}} || Anthropic’s Responsible Scaling Policy || Company Policy ||  {{w|Anthropic}} publishes its Responsible Scaling Policy (RSP), a series of technical and organizational protocols to guide risk management in the development of increasingly powerful AI systems.<ref name="Anthropic’s RSP">{{cite web |title=Anthropic's Responsible Scaling Policy {{!}} |url=https://www.anthropic.com/news/anthropics-responsible-scaling-policy|website=anthropic.com |access-date=11 September 2024 |language=en |date=19 September 2023}}</ref> The RSP delineates AI Safety Levels 1-4, loosely based on the US government's biosafety levels, to address catastrophic risk.<ref name="Anthropic’s RSP"/> By publishing the policy, Anthropic aims in part to create competition in the AI safety space.<ref name="Anthropic’s RSP"/> The Institute for AI Policy and Strategy offers critiques of Anthropic’s RSP: the risk thresholds should be based on absolute rather than relative risk, the risk-level thresholds should be set lower than Anthropic defines them, and Anthropic should outline when it will alert authorities to identified risks and commit to outside scrutiny and evaluations.<ref name="IAPS on Responsible Scaling">{{cite web |title=Responsible Scaling: Comparing Government Guidance and Company Policy {{!}} |url=https://www.iaps.ai/research/responsible-scaling|website=iaps.ai |access-date=11 September 2024 |language=en |date=11 March 2024}}</ref>
 
|-  
 
| 2023 || {{dts|October 30}} || {{w|United States}} || {{w|Executive Order 14110}} || National Policy || Biden signs {{w|Executive Order 14110}}, titled "Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". The Order establishes new standards for AI safety and security: it compels developers to share test results with the US government and create tools to ensure AI system safety, protects Americans from AI fraud and deception, sets up a cybersecurity program to develop AI tools and fix vulnerabilities, and orders the development of a national security memorandum directing future AI security measures.<ref name="The White House">{{cite web |title=FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence{{!}} |url=https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/|website=whitehouse.gov |access-date=30 August 2024 |language=en}}</ref> The Order also directs the National Institute of Standards and Technology (NIST) to develop standards for evaluation and red-teaming and to provide testing environments for AI systems. The general reaction to the Order is cautious optimism.<ref name="NIST Statement">{{cite web |title=NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence{{!}} |url=https://www.nist.gov/news-events/news/2023/12/nist-calls-information-support-safe-secure-and-trustworthy-development-and|website=nist.gov |access-date=30 August 2024 |language=en |date=19 December 2023}}</ref> As LessWrong blogger Zvi Mowshowitz reports, some worry that this is the first step down a slippery slope of heightened regulation that could dampen innovation and development.<ref>{{cite web |last1=Mowshowitz |first1=Zvi |title=Reactions to the Executive Order|url=https://www.lesswrong.com/posts/G8SsspgAYEHHiDGNP/reactions-to-the-executive-order |website=lesswrong.com |access-date=30 August 2024 |language=en |date=1 November 2023}}</ref> The Bipartisan Policy Center maintains a complete timeline and outlook of the Executive Order.<ref name="AI EO Timeline">{{cite web |title=AI Executive Order Timeline{{!}} |url=https://bipartisanpolicy.org/blog/ai-eo-timeline/|website=bipartisanpolicy.org|date=13 December 2023 |access-date=30 August 2024 |language=en}}</ref>
 
|-
 
| 2023 || {{dts|November 1}}{{snd}}2 || International ||{{w|AI Safety Summit}} || International Policy || The first {{w|AI Safety Summit}} is held at {{w|Bletchley Park}}, {{w|Milton Keynes}}, in the United Kingdom. It leads to an agreement known as the Bletchley Declaration, signed by the 28 countries participating in the summit, including the United States, United Kingdom, China, and the European Union.<ref>{{cite web|url = https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023|title = The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023|date = November 1, 2023|accessdate = May 19, 2024|publisher = GOV.UK}}</ref> The summit receives some commentary on LessWrong, where it is viewed as a partial step in the right direction,<ref>{{cite web|url = https://www.lesswrong.com/posts/ms3x8ngwTfep7jBue/thoughts-on-the-ai-safety-summit-company-policy-requests-and|title = Thoughts on the AI Safety Summit company policy requests and responses|last = Soares|first = Nate|date = October 31, 2023|accessdate = May 19, 2024|publisher = LessWrong}}</ref> including a lengthy blog post by Zvi Mowshowitz, a frequent commentator on AI developments from an AI safety lens.<ref>{{cite web|url = https://www.lesswrong.com/posts/zbrvXGu264u3p8otD/on-the-uk-summit|title = On the UK Summit|last = Mowshowitz|first = Zvi|date = November 7, 2023|accessdate = May 19, 2024|publisher = LessWrong}}</ref>
 
|-
 
| 2023 || {{dts|November 1}} || {{w|United States}} || {{w|AI Safety Institute}} || Organization || United States Vice President {{w|Kamala Harris}} announces the US AI Safety Institute (USAISI) at the AI Safety Summit in the United Kingdom. The launch of USAISI builds on Biden's executive order signed two days earlier (October 30).<ref>{{cite web|url = https://www.whitehouse.gov/briefing-room/statements-releases/2023/11/01/fact-sheet-vice-president-harris-announces-new-u-s-initiatives-to-advance-the-safe-and-responsible-use-of-artificial-intelligence/|title = FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence|date = November 1, 2023|accessdate = July 6, 2024|publisher = White House}}</ref>
 
|-
 
| 2023 || {{dts|November 2}} || {{w|United Kingdom}} || {{w|AI Safety Institute}} || Organization || The United Kingdom government announces the launch of the UK AI Safety Institute. The UK AI Safety Institute is to be formed from the Frontier AI Taskforce, which in turn had previously been called the Foundation Model Taskforce. Ian Hogarth serves as its chair.<ref>{{cite web|url = https://www.gov.uk/government/news/prime-minister-launches-new-ai-safety-institute|title = Prime Minister launches new AI Safety Institute. World's first AI Safety Institute launched in UK, tasked with testing the safety of emerging types of AI.|date = November 2, 2023|accessdate = July 6, 2024|publisher = GOV.UK}}</ref>
 
|-
 
| 2023 || {{dts|December 4}} || {{w|Singapore}} || National AI Strategy Update|| National Policy || Singapore releases its updated strategy, “AI for the Public Good: For Singapore and the World,” in response to technological developments since its 2019 strategy, offering broader coverage, more concrete goals, and a rhetorical shift that treats AI as necessary rather than merely nice to have.<ref name="KAS Singapore AI">{{cite web |last1=Ho |first1=Ming Yin |title=Singapore’s National Strategy in the Global Race for AI|url=https://www.kas.de/en/web/politikdialog-asien/digital-asia/detail/-/content/singapore-s-national-strategy-in-the-global-race-for-ai |website=kas.de |access-date=18 October 2024 |language=en |date=26 February 2024}}</ref> The strategy's main goals are excellence and empowerment, and its core differences from the previous strategy are moves from opportunity to necessity, from local to global, and from projects to systems.<ref name="Singapore National AI Strategy">{{cite web |title=National Artificial Intelligence Strategy 2 to uplift Singapore's social and economic potential{{!}} |url=https://www.smartnation.gov.sg/media-hub/press-releases/04122023/|website=smartnation.gov.sg |access-date=18 October 2024 |language=en |date=10 October 2024}}</ref> The strategy suggests directing efforts to three systems via ten enablers: Activity Drivers (industry, government, research), People and Communities (talent, capabilities, placemaking), and Infrastructure and Environment (compute, data, trusted environment, leading thought and action).<ref name="Singapore National AI Strategy"/> In the following days, the Ministry of Communications and Information (MCI), {{w|Smart Nation}}, and the Topos Institute host the inaugural Singapore Conference on AI, gathering experts from academia, industry, and government to discuss critical AI questions.<ref name="Singapore National AI Strategy"/> Singapore shows little indication of setting hard rules for AI, opting to promote responsible AI practices through collaborative initiatives.<ref name="KAS Singapore AI"/> Singapore ranks highly in leading AI indices such as the AI Government Readiness Index (2023), The Global AI Index (2023), and the Asia Pacific AI Readiness Index (2023).<ref name="KAS Singapore AI"/> In its 2024 National Budget, Singapore would later announce an investment of more than S$1 billion (US$743 million) in AI over the next five years.<ref>{{cite web |last1=Aziz |first1=Muhamad |title=Singapore’s Ambitious AI Investment Plan|url=https://www.aseanbriefing.com/news/singapores-ambitious-ai-investment-plan/ |website=aseanbriefing.com |access-date=18 October 2024 |language=en |date=13 March 2024}}</ref>
 
|-
 
| 2023 || {{dts|December 9}} || {{w|European Union}} || {{w|Artificial Intelligence Act}} || International Policy || The {{w|European Council}} and {{w|European Parliament}} reach a provisional agreement on the {{w|Artificial Intelligence Act}}. The Act is expected to take effect in 2026.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union{{!}} |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref>
 
|-
 
| 2023 || {{dts|December 17}} || {{w|Israel}} || AI Ethics Policy || National Policy || Israel releases its first comprehensive AI ethics policy, a collaborative effort of the Ministry of Innovation, Science, and Technology, the Office of Legal Counsel and Legislative Affairs, and the Ministry of Justice.<ref name="Israeli AI Ethics">{{cite web |title=Israel's Policy on Artificial Intelligence Regulation and Ethics{{!}} |url=https://www.gov.il/en/pages/ai_2023|website=gov.il |access-date=18 October 2024 |language=en |date=17 December 2023}}</ref> The policy identifies challenges in private-sector use of AI (discrimination, human oversight, explainability, disclosure of AI interactions, safety, accountability, and privacy) and suggests collaborative development, alignment of policy principles with the OECD AI recommendations, and responsible innovation.<ref name="Israeli AI Ethics"/> The document also recommends fortifying human-centric innovation, policy coordination, internal collaboration, tools for responsible AI, and public participation.<ref name="Israeli AI Ethics"/> Israel is not expected to enact binding AI legislation; instead, it opts for guidelines in the hope that regulation will not impede its global positioning.<ref>{{cite web |last1=Or-Hof |first1=Dan |title=Proactive caution: Israel’s approach to AI regulation|url=https://iapp.org/news/a/proactive-caution-israels-approach-to-ai-regulation |website=iapp.org |access-date=18 October 2024 |language=en |date=10 January 2024}}</ref>
 
|-
 
| 2023 || {{dts|December 18}} || {{w|United States}} || {{w|OpenAI}} Publishes Preparedness Framework || Company Policy || {{w|OpenAI}} releases its “Preparedness Framework,” a living document positing that a “robust approach to AI catastrophic risk safety requires proactive, science-based determinations of when and how it is safe to proceed with development and deployment.”<ref name="OpenAi Preparedness Framework">{{cite web |title=OpenAi Preparedness Framework (Beta){{!}} |url=https://cdn.openai.com/openai-preparedness-framework-beta.pdf|website=openai.com|access-date=11 September 2024 |language=en |date=18 December 2023}}</ref> The framework's elements include tracking catastrophic risk with evaluations, seeking out unknown unknowns, establishing safety baselines, tasking preparedness teams with on-the-ground work, and creating a cross-functional advisory board.<ref name="Safer AI Comparative Analysis">{{cite web |title=Is OpenAI's Preparedness Framework better than its competitors' "Responsible Scaling Policies"? A Comparative Analysis{{!}} |url=https://www.safer-ai.org/post/is-openais-preparedness-framework-better-than-its-competitors-responsible-scaling-policies-a-comparative-analysis|website=safer-ai.org |access-date=11 September 2024 |language=en |date=19 January 2024}}</ref> The document is released a few months after Anthropic’s RSP, and SaferAI notes its improvements over that document, including calls for more safety tests, allowing the board to veto CEO decisions, adding risk identification and analysis, and forecasting risks.<ref name="Safer AI Comparative Analysis"/> Elements included in the RSP but not in the Preparedness Framework are a commitment to publicizing evaluation results, incident-reporting mechanisms, and detailed commitments on infosecurity and cybersecurity.<ref name="Safer AI Comparative Analysis"/>
|-
| 2024 || {{dts|January 17}} || International || AIGA Releases AI Guidelines || International Policy ||  At the 2024 annual {{w|World Economic Forum}} meeting in {{w|Davos}}, the AIGA, in collaboration with {{w|IBM}} Consulting and {{w|Accenture}}, releases three reports on AI regulation and governance.<ref name="AIGA Reports">{{cite web |last1=Sibahle |first1=Malinga |title=AI Governance Alliance debuts research reports on AI guidelines|url=https://www.itweb.co.za/article/ai-governance-alliance-debuts-research-reports-on-ai-guidelines/WnxpE74YK6oMV8XL |website=itweb.co.za |access-date=20 September 2024 |language=en |date=18 January 2024}}</ref> The reports, “Presidio AI Framework: Towards Safe Generative AI Models,” “Unlocking Value from Generative AI: Guidance for Responsible Transformation,” and “Generative AI Governance: Shaping Our Collective Global Future,” aim to address the digital divide and to apply and mobilize AI resources in sectors like healthcare and education.<ref name="AIGA Reports"/>
|-
| 2024 || {{dts|February 8}} || {{w|United States}} || US AI Safety Institute Consortium || Organization || US Secretary of Commerce {{w|Gina Raimondo}} announces "the creation of the US AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI)." AISIC is to be housed under the US AI Safety Institute and includes over 200 member organizations.<ref>{{cite web|url = https://www.commerce.gov/news/press-releases/2024/02/biden-harris-administration-announces-first-ever-consortium-dedicated|title = Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety|publisher = U.S. Department of Commerce|date = February 8, 2024|accessdate = July 6, 2024}}</ref> The member organizations were recruited through a notice published in the ''Federal Register'' asking interested organizations to submit a letter of interest over a period of 75 days (between November 2, 2023, and January 15, 2024).<ref>{{cite web|url = https://www.federalregister.gov/documents/2023/11/02/2023-24216/artificial-intelligence-safety-institute-consortium|title = Artificial Intelligence Safety Institute Consortium|date = November 2, 2023|accessdate = July 6, 2024|publisher = Federal Register}}</ref><ref>{{cite web|url = https://www.nist.gov/aisi/artificial-intelligence-safety-institute-consortium-aisic|title = Artificial Intelligence Safety Institute Consortium (AISIC)|publisher = National Institute of Standards and Technology|accessdate = July 6, 2024}}</ref>
|-
| 2024 || {{dts|March 7}} (anticipation), {{dts|April 16}} (official announcement) || {{w|United States}} || {{w|AI Safety Institute}} Leadership || Organization || US Secretary of Commerce Gina Raimondo announces additional members of the executive leadership of the US AI Safety Institute (AISI), among them Paul Christiano as head of AI safety.<ref>{{cite web|url = https://www.commerce.gov/news/press-releases/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety|title = U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team|publisher = U.S. Department of Commerce|date = April 16, 2024|accessdate = July 6, 2024}}</ref> A month earlier, when the appointment was anticipated, VentureBeat had reported dissatisfaction with the idea of appointing Christiano from "employees who fear that Christiano’s association with [effective altruism] and longtermism could compromise the institute’s objectivity and integrity."<ref>{{cite web|url = https://venturebeat.com/ai/nist-staffers-revolt-against-potential-appointment-of-effective-altruist-ai-researcher-to-us-ai-safety-institute/|title = NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute|last = Goldman|first = Sharon|publisher = VentureBeat|date = March 7, 2024|accessdate = July 6, 2024}}</ref><ref>{{cite web|url = https://forum.effectivealtruism.org/posts/9QLJgRMmnD6adzvAE/nist-staffers-revolt-against-expected-appointment-of|title = NIST staffers revolt against expected appointment of ‘effective altruist’ AI researcher to US AI Safety Institute (linkpost)|publisher = Effective Altruism Forum|date = March 8, 2024|accessdate = July 6, 2024}}</ref>
 +
|-
 +
| 2024 || {{dts|April 8}} || {{w|France}} || French Data Protection Authority Recommendations on AI || National Policy || The French data protection authority, the {{w|Commission nationale de l'informatique et des libertés}} (CNIL), publishes its recommendations on AI, requesting public consultation and comments.<ref name="France: CNIL Recommendations">{{cite web |title=France: CNIL requests comments on new AI recommendations{{!}} |url=https://www.dataguidance.com/news/france-cnil-requests-comments-new-ai-recommendations|website=dataguidance.com |access-date=11 October 2024 |language=en |date=2 July 2024}}</ref> The recommendations focus on the development phase of AI systems, aiming to guide developers who process personal data in the context of the {{w|General Data Protection Regulation}} (GDPR).<ref name="WSGR: Seven Takeaways">{{cite web |last1=Padova |first1=Yann |last2=Burton |first2=Cedric |title=French Data Protection Authority Publishes Recommendations on the Development of AI Systems: Seven Takeaways|url=https://www.wsgr.com/en/insights/french-data-protection-authority-publishes-recommendations-on-the-development-of-ai-systems-seven-takeaways.html |website=wsgr.com |access-date=11 October 2024 |language=en |date=23 April 2024}}</ref> The CNIL encourages developers to define the purpose of the systems they develop, establish a legal basis, ensure they have the right to reuse personal data, conduct impact assessments, minimize data collection, and plan data lifecycles.<ref name="WSGR: Seven Takeaways"/> The CNIL also urges caution regarding data scraping for model training, open-source AI models, human rights, and the application of the GDPR to AI models trained with personal data.<ref name="France: CNIL Recommendations"/>
|-
| 2024 || {{dts|May 21}} || {{w|European Union}} || {{w|Artificial Intelligence Act}} || International Policy || The {{w|Council of the European Union}} gives final approval to the {{w|Artificial Intelligence Act}}.<ref name="Timeline - Artificial intelligence">{{cite web |title=European Council - Council of the European Union{{!}} |url=https://www.consilium.europa.eu/en/policies/artificial-intelligence/timeline-artificial-intelligence/|website=consilium.europa.eu |access-date=30 August 2024 |language=en}}</ref> The Act is the first comprehensive AI law of its kind and takes a risk-based approach: the higher the risk to society, the stricter the rules.
|-
| 2024 || {{dts|July 19}} || {{w|African Union}} || African Union Executive Council: Continental AI Strategy || International Policy || The {{w|Executive Council of the African Union}} endorses the Continental AI Strategy, a commitment to an Africa-centric development approach to AI.<ref name="African Union AI Strategy">{{cite web |title=African Union committed to developing AI capabilities in Africa{{!}} |url=https://au.int/en/pressreleases/20240828/african-union-committed-developing-ai-capabilities-africa#:~:text=With%20profound%20impacts%20across%20economics,Sustainable%20Development%20Goals%20(SDGs)|website=au.int |access-date=11 October 2024 |language=en |date=28 August 2024}}</ref> The Strategy provides a unified approach while encouraging African countries to develop contextually specific national AI policies.<ref name="African Union AI Strategy"/> The 5-year implementation plan includes supporting African Union member states in creating national AI strategies, fostering AI talent in Africa, nurturing AI partnerships and investments, adopting AI in priority sectors, building AI infrastructure, promoting research, and developing legal frameworks.<ref name="African Union AI Strategy"/> Benin, Egypt, Ghana, Mauritius, Rwanda, Senegal, and Tunisia all have National AI Strategies.<ref>{{cite web |last1=Okolo |first1=Chinasa |title=Reforming data regulation to advance AI governance in Africa|url=https://www.brookings.edu/articles/reforming-data-regulation-to-advance-ai-governance-in-africa/#:~:text=With%20only%20seven%20African%20nations,frameworks%20on%20the%20African%20continent. |website=brookings.edu |access-date=11 October 2024 |language=en |date=15 March 2024}}</ref>
 
|}
 
  
 
==See also==

* [[Timeline of AI ethics violations]]
* [[Timeline of AI safety]]
* [[Timeline of machine learning]]
* [[Timeline of OpenAI]]
* [[Timeline of large language models]]

==References==

{{reflist|30em}}

Latest revision as of 10:19, 6 November 2024

This is a timeline of AI policy and legislation, which attempts to overview the progression of international and local AI and AI safety policies. Various countries have released National AI strategies, guidelines, and regulations. International organizations focused on AI governance, such as the Global Partnership on Artificial Intelligence (GPAI) and the AI Governance Alliance (AIGA), have also contributed to a growing body of AI regulation.

Caveats

  • The timeline only includes policies; it does not include incidents of policy violations or AI-related human rights abuses (see Timeline of AI ethics violations).
  • The timeline has been updated through August 2024.

Big picture

Overall summary

Year Details
2017 Canada is the first country to release a National AI Strategy. China releases Guidelines on AI development shortly after, and Finland releases a National AI Strategy towards the end of the year.
2018 France, India, and Germany sequentially release National AI Strategies. The European Union's General Data Protection Regulation takes effect, and California enacts the California Consumer Privacy Act; both strengthen personal privacy in the age of AI.
2019 The Center for Security and Emerging Technology (CSET) is established in the United States, followed by Executive Order 13859, a vague directive for maintaining US leadership in AI. Japan and Australia release Principles on ethical AI development. The United States introduces the Algorithmic Accountability Act to combat bias and discrimination in automated decision-making systems. The OECD releases AI Principles to shape global AI policies. Singapore, South Korea, and the Netherlands release AI Strategies.
2020 The Global Partnership on Artificial Intelligence, hosted by OECD, is established to foster international AI policy collaboration. European Union leaders discuss the Artificial Intelligence Act. Finland and Germany update their National AI Strategies, and Switzerland releases National AI Guidelines.
2021 The European Commission proposes the Artificial Intelligence Act. China releases an AI Ethics Code. The UK and Brazil publish National AI Strategies.
2022 Japan releases AI Governance Guidelines. China releases provisions on algorithmic recommendations. The Quadrilateral Security Dialogue releases a collaborative AI report.
2023 China releases generative AI Measures and Provisions on deep synthesis technologies. The World Economic Forum launches the AI Governance Alliance (AIGA) to guide responsible AI development. The White House hosts leading AI companies to discuss a voluntary AI safety agreement. Anthropic releases its Responsible Scaling Policy in the United States. The US releases Executive Order 14110, establishing AI safety standards. The first international AI Safety Summit is held in the UK. The US and the UK both establish AI Safety Institutes. Singapore updates its National AI Strategy. The EU reaches a provisional agreement on the Artificial Intelligence Act. Israel releases an AI Ethics Policy. OpenAI publishes its Preparedness Framework.
2024 The AIGA releases AI Guidelines to guide international AI policies. The United States establishes the US AI Safety Institute Consortium to unite AI leaders. The US bolsters the AI Safety Institute leadership. France publishes recommendations on AI policy in line with the General Data Protection Regulation. The EU approves the Artificial Intelligence Act. The African Union endorses a Continental AI Strategy.

Full timeline

Inclusion criteria

The following criteria determined which rows were included:

  • Flagship policies of countries ranked in the top 10 of various AI readiness indexes (Government AI Readiness Index,[1] The Global AI Index 2024,[2] Techopedia[3])
  • Representation from each continent to ensure a diverse range of perspectives.
  • Notable international AI agreements and conferences.
  • National policies and regulations on AI development, deployment, and governance.
  • Key milestones in the development of AI technologies, such as the release of new AI frameworks or significant advancements in areas like natural language processing.


Timeline of AI policy

Year Month and date Region Name Event type Details
2017 June Canada Pan-Canadian Artificial Intelligence Strategy National Policy Canada releases the world’s first National AI Strategy, aiming to have the most robust AI ecosystem in the world by 2030.[4] The Strategy is a collaborative effort spanning government, academia, and industry, headed by the Canadian Institute for Advanced Research (CIFAR).[5] Canada names the Vector Institute, Mila, and Amii as national AI institutes and contributors to the nation’s AI progress.[4] This strategy would go on to enhance Canada’s global standing in AI research and innovation.
2017 July 20 China Guidelines on AI Development National Policy The State Council of the People's Republic of China issues guidelines on developing AI by embedding AI into the socioeconomic landscape and the country’s basic functioning. The council lays out plans to be a world leader in AI by 2030, aiming for the total output of the AI industry to be 1 trillion yuan ($147.8 billion).[6]
2017 October Finland National AI Strategy National Policy The Finnish Ministry of Economic Affairs and Employment releases Finland’s Age of Artificial Intelligence, providing policy recommendations, laying out the current state of AI, and possible ways AI will transform society.[7] The Strategy outlines adopting an open data policy and creating adequate conditions for prosperous AI.[7] The goals are to increase the competitiveness of Finnish AI industry, provide high-quality public services, improve public sector efficiency, and ensure a well-functioning society.[7]
2018 March 29 France National AI Strategy National Policy Emmanuel Macron announces the National French AI Strategy, planning to spend 1.5 billion euros on AI during his term as president.[8] The Strategy states France’s intent to strengthen public research institutes, double the number of students trained in AI, and bolster data protection and confidentiality.[9] The proposed sectors to benefit from AI are health (specifically disease detection and prevention), transportation, environmental policies, and defense.[10] The 5-year plan aims to improve AI education, attract AI talent, establish an open data policy for AI implementation, and create an ethical framework for the transparent and fair use of AI.[11] The regulatory proposal is to create a digital technology and AI ethics committee to lead discussions on AI transparently.[11]
2018 May 25 European Union General Data Protection Regulation International Policy The General Data Protection Regulation (GDPR), the strongest and most comprehensive attempt yet to regulate personal data, takes effect in the European Union. The GDPR outlines a set of rules that aims to strengthen protection for personal data in response to increasing technological development.[12] Although the GDPR is focused on privacy, it states that individuals have the right to human review of results from automated decision-making systems.[13] Fines for violating the GDPR are high and extend to any organization that offers services to EU citizens.[12]
2018 June India National AI Strategy National Policy NITI Aayog, India’s public policy think tank, releases a National AI Strategy (#AIforAll).[14] The Strategy suggests that India harness the power of AI through research, application, training, acceleration of its adoption, and responsible development.[14] The sectors predicted to benefit the most from advancing AI are healthcare, agriculture, education, infrastructure, and mobility.[15] The barriers to actualizing the Strategy’s goals are a lack of AI expertise, a deficiency in data ecosystems, and limited collaboration.[15]
2018 June 28 United States California Consumer Privacy Act Regional Policy The California Consumer Privacy Act is signed into law, heightening consumer control over personal information. The law, which would go into effect on January 1, 2020, grants consumers the right to know about, opt out of the sharing of, and delete personal information.[16] The Act would influence personal data usage by giving consumers the right to opt out of automated decision-making systems and by compelling businesses to inform customers how and for what purpose they use personal information.[17] These regulations require businesses to disclose if and how they use personal information for AI training.
2018 November 15 Germany National AI Strategy National Policy Germany releases a National AI Strategy developed by the Federal Ministry of Education and Research, the Federal Ministry for Economic Affairs and Climate Action, and the Federal Ministry of Labour and Social Affairs.[18] The stated goals are to increase Germany’s competitiveness, become a leading international AI entity, ensure responsible development and deployment of AI for human good, and ethically integrate AI.[18] The action items are to strengthen research, channel research results into industry, increase the accessibility of experts, create data infrastructure, encourage EU cooperation, and foster societal dialogue on AI.[19] The government is set to provide 3 billion euros to implement the strategy through 2025.[19]
2019 January United States CSET Formation National Organization Open Philanthropy grants the Walsh School of Foreign Service at Georgetown University $55 million to establish the Center for Security and Emerging Technology (CSET), a think tank dedicated to policy analysis of international security and emerging technology.[20] CSET will provide high-quality advice to policymakers to combat AI risks by assessing global technological developments with a focus on the US and related policy communities, generating written products for policymakers, and training people for roles in the policy community.[20] Open Philanthropy, funded primarily by Facebook co-founder Dustin Moskovitz, makes the grant after recognizing a demand for policy analysis.[21] CSET would go on to influence AI policy and be named a member of the Biden Administration’s AI Safety Institute Consortium in 2024.[22]
2019 February 11 United States Executive Order 13859 National Policy President Trump signs Executive Order 13859 to maintain American leadership in artificial intelligence. The Order directs federal agencies to prioritize AI research and development and to promote American leadership in the AI space.[23] The Order does not provide details on how it plans to put the new policies into effect, and does not allocate any federal funding towards executing its vision.[24]
2019 March 29 Japan Social Principles of Human-Centered AI National Policy The Japanese government releases the Social Principles of Human-Centered AI, a set of guidelines, which the government will continuously revise, for implementing AI in society according to the philosophies of human dignity, diversity and inclusion, and sustainability.[25] The Social Principles are a broad ethical framework of Japan's vision for AI in society. Japan provides nonbinding guidelines on AI and imposes transparency obligations on some large digital platforms.[26] Japan aims to achieve social goals through the use of AI rather than through restriction.
2019 April 10 United States Algorithmic Accountability Act National Policy The Algorithmic Accountability Act is introduced in the House of Representatives. Commercial entities must “conduct assessments of high-risk systems that involve personal information or make automated decisions, such as systems that use artificial intelligence or machine learning.”[27] The Bill aims to minimize bias, discrimination, and inaccuracy in automated decision systems by compelling companies to assess their impacts. The Act does not establish binding regulations but asks the Federal Trade Commission to establish rules for evaluating highly sensitive automated systems.[28] The legislation would be introduced in the Senate in 2022[29] but would still not be signed into law through 2024.
2019 May 29 International OECD AI Principles International Policy The Organisation for Economic Co-operation and Development (OECD) issues AI principles to shape policies, create an AI risk framework, and foster global communication and understanding across jurisdictions. The European Union, Council of Europe, United Nations, and the United States would use these principles in their AI legislation.[30] The Principles aim to be values-based and include the following categories: sustainable development, human rights, transparency and explainability, security, and accountability.[31] The principles would be updated in May 2024 in consideration of new technology and policy developments.[30]
2019 October 8 Netherlands National AI Strategy National Policy The Dutch government releases a strategic action for AI. The Strategy includes a list of initiatives to foster AI economic growth through education, research and innovation, and policy development.[32] The Strategy’s three pillars include capitalizing on social and economic opportunities (e.g., adopting and using AI across sectors), creating the right conditions for AI to thrive, and strengthening ethical foundations.[32] The annual government budget for AI innovation and research is around 45 million euros and the Strategy will be reviewed yearly.[32]
2019 November Singapore National AI Strategy National Policy Singapore releases a National AI Strategy, produced by the Smart Nation and Digital Government Office’s National AI Office, aiming to complete the digital transformation across multiple sectors of urban life.[33] Singapore hopes to develop into a global AI hub, generate new business models, deliver life-improving services, and equip the workforce to adapt to an AI economy.[34] The strategy asserts five national AI projects in healthcare, municipal solutions, education, customs, and logistics.[35] The strategy also lists five enablers for a thriving AI ecosystem, including multi-stakeholder partnerships across sectors, data architecture, a trusted environment, talent and education, and international collaboration.[36] The document also provides private sector organizations with voluntary guidance on key ethical and governance issues, including ensuring AI decisions are explainable, human involvement in AI-augmented decision-making, and stakeholder communication.[37]
2019 November 7 Australia Artificial Intelligence Ethics Framework National Policy The Australian Government releases an Artificial Intelligence Ethics Framework to ensure safe, secure, and reliable AI. The framework includes eight voluntary, nonbinding principles to complement existing AI practices: human well-being, human-centered values, fairness, privacy protection, reliability, transparency and explainability, contestability, and accountability.[38] The principles are on par with those set forth by the OECD and the World Economic Forum, and are set to be trialed by National Australia Bank, Commonwealth Bank, Telstra, Microsoft, and Flamingo AI.[39]
2019 December 17 South Korea National AI Strategy National Policy South Korea establishes its National Strategy for Artificial Intelligence. It outlines Korea’s vision and strategy for the AI era, aiming to grow from an IT leader into an AI-focused industry.[40] Jointly developed by all of Korea’s ministries, the Strategy is significant in its focus on providing a direction for the government’s AI policies.[41] The goals outlined include ranking third in global digital competitiveness by 2030, grossing 455 trillion Korean won in AI profit, and reaching the top 10 countries for quality of life.[42]
2020 June International Global Partnership on Artificial Intelligence International Organization The Global Partnership on Artificial Intelligence (GPAI) is established to share multidisciplinary research, identify key issues in AI, and facilitate international collaboration.[43] The OECD hosts the GPAI secretariat, and the partnership is based on the shared commitment of G7 countries to the OECD AI Principles.[43] The partnership is a multi-stakeholder initiative to foster international cooperation on AI and includes working groups on Responsible AI, Data Governance, the Future of Work, and Innovation and Commercialization.[43] The United States, wary of joining any international AI panel due to overregulation concerns, joins the partnership to counter China’s increasing international AI presence.[44]
2020 October 10 European Union Artificial Intelligence Act International Policy European Union leaders meet to discuss the digital transition. They invite the European Commission, the executive branch of the EU, to increase private and public tech investment, ensure elevated coordination between European research centers, and construct a clear definition of Artificial Intelligence.[45]
2020 November Finland National AI Strategy Update National Policy Finland updates its 2017 national strategy as part of the Artificial Intelligence 4.0 program, promoting the digitalization of Finland.[46] The program aims for Finland to be sustainable, clean, and digitally efficient using AI by 2030.[46]
2020 November 25 Switzerland National AI Guidelines National Policy Switzerland releases Guidelines on Artificial Intelligence, intended to act as a general frame of reference on the use of AI in the Federal Administration.[47] The Swiss Federal Council adopts the guidelines, developed by the interdepartmental Working Group on AI under the leadership of the Federal Department of Economic Affairs, Education and Research (EAER).[48] The Guidelines include prioritizing people, defining regulatory conditions for developing and applying AI, establishing transparency/traceability/explainability, ensuring accountability and safety, shaping AI governance, and involving all relevant stakeholders.[47] The parameters guide government tasks such as developing AI strategies, introducing regulations for sectors affected by AI policy, developing AI systems in the Federal Administration, and helping shape AI regulation.[47] Switzerland’s general approach to AI is informed by the OECD Principles; rather than regulating AI directly, its legislation aims to steer the direction of safe AI development.[49]
2020 December Germany National AI Strategy Update National Policy Germany releases an updated, more detailed National AI Strategy, focusing on integrating the technological developments since the 2018 Strategy. Germany plans to train more AI specialists, establish a robust research structure and AI ecosystem, create a human-centric regulatory framework, and support civil society AI networking for the common good.[50] The AI priorities include favorable working conditions for science and increasing AI expertise.[50] The research priorities are to strengthen national centers, encourage international research cooperation, and incentivize interdisciplinary AI research in healthcare, mobility, environmentalism, and aerospace.[50] The regulatory priorities are to create solid conditions for safe and trustworthy AI applications, adaptively regulate AI in work settings, strengthen information security, and protect the public against AI misuse.[50]
2021 April 21 European Union Artificial Intelligence Act International Policy The European Commission proposes the Artificial Intelligence Act, a regulation aiming to improve trust in AI and foster its development.[45]
2021 September 21 China New Generation Artificial Intelligence Code of Ethics National Policy The Ministry of Science and Technology (China) publishes the New Generation AI Code of Ethics. Its three main AI provisions are the improvement of human well-being, the promotion of fairness and justice, and the protection of privacy and security. The Ministry encourages organizations to build upon the code.[51]
2021 September 22 United Kingdom National AI Strategy National Policy The UK government releases its National AI Strategy - a 10-year plan that outlines how to invest in and plan for long-term AI ecosystem needs, support the transition to an AI-enabled economy, and ensure the UK succeeds in AI governance.[52] This strategy comes a few months after the EU’s proposed Artificial Intelligence Act. The Alan Turing Institute, established in 2015, is the national research center for AI and one of the organizations that will help implement the AI strategy.[53]
2021 September 30 Brazil National AI Strategy National Policy The Brazilian Government approves the Brazilian Strategy for Artificial Intelligence, a document to guide research, innovation, and development of ethical AI solutions.[54] The strategy is based on the OECD AI Principles, aiming to develop ethical AI principles, guide AI use, remove barriers to innovation, improve cross-sector collaboration, develop AI skills, promote AI investment, and advance Brazilian technology overseas.[55] The strategy faced criticism for its lack of specifics on regulation.[56]
2022 January 28 Japan AI Governance Guidelines National Policy Japan’s Ministry of Economy, Trade and Industry (METI) releases Governance Guidelines for Implementation of AI Principles Ver. 1.1.[57] The principles include guidelines on AI such as conditions and risk analysis, goal setting, implementation, and evaluation.[57] They consider the social acceptance of AI system development and operation and company AI proficiency, and suggest reducing incident-related harms to users by emphasizing prevention and early response.[57] These guidelines are practical and action-oriented, following Japan’s Social Principles on AI from 2019, which focus on ethics. In January 2021, METI released Ver. 1.0 of the guidelines, outlining AI trends overseas and locally. The current guidelines are the result of METI receiving public comment and holding meetings discussing Japan’s AI governance and how to operationalize the Social Principles.[58] Japan maintains the ethos that, with a rapidly changing AI landscape, regulation can hamper innovation. METI concludes that the government should respect companies’ voluntary efforts for AI governance by providing nonbinding guidance.[59]
2022 March 1 China Internet Information Service Algorithmic Recommendation Management Provisions National Policy The Internet Information Service Algorithmic Recommendation Management Provisions goes into effect in China, regulating AI in the context of content recommendation technologies. The tech is defined as generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling-related decision-making of content.[60] The regulations apply to service providers, who are now prohibited from offering different prices based on personal characteristics, promoting addictive content, manipulating traffic numbers, or pushing fake news.[61] Companies using AI-based personalized recommendations must now uphold user rights, protect minors, elders, and workers from harm, maintain transparency, and present information in line with mainstream socialist values.[62] Companies will face a warning and possibly a fine for noncompliance.[63]
2022 May International Quad AI International Organization The Quadrilateral Security Dialogue, known as the Quad, releases the report “Assessing AI-related Collaboration between the United States, Australia, India, and Japan” as an effort to cooperate on critical and emerging technology and as an alternative to China’s techno-authoritarian development model (including surveillance and censorship).[64] The document aims to ensure that tech innovation is aligned with the Quad members’ shared democratic values and respect for human rights.[64] The Quad began as a loose partnership between the United States, Australia, India, and Japan after the 2004 Indian Ocean tsunami to provide humanitarian aid to the affected region. It fell dormant after Australian concerns about provoking China.[65] The Quad was resurrected in 2017 and held its first formal summit in 2021.[66]
2023 January 10 China Deep Synthesis Provisions National Policy China implements Deep Synthesis Provisions to increase government supervision over those technologies, becoming one of the first countries to regulate deepfakes. The government defines Deep Synthesis as technology that utilizes generative and/or synthetic algorithms to produce text, audio, video, or scenes.[67] The Provisions define the redline for deepfake services, prohibiting companies with deepfake tech from disseminating illegal information and requiring a “Generated by AI” label.[68] The Provisions apply to service providers, tech supporters, users, or other entities involved in deepfake services, such as online app distribution platforms.[69] The services must also adhere to China’s political ideology. However, penalties for noncompliance are not explicitly stated.[70]
2023 June 15 International AI Governance Alliance (AIGA) Established International Organization The World Economic Forum launches the AI Governance Alliance (AIGA) to guide responsible development and deployment of AI systems.[71] The Alliance prioritizes safe systems and technology, promotes sustainable applications and transformation, and contributes to resilient governance and regulation.[72] Its members come from industry, government, and civil society worldwide.
2023 July 12 United States White House Meets with AI Companies National Policy President Joe Biden and Vice President Kamala Harris host leading AI companies Amazon, Anthropic, Google, Inflection AI, Meta Platforms, Microsoft, and OpenAI at the White House and secure their voluntary commitments to prioritize safe, secure, and transparent AI development.[73] The companies promise to ensure product safety before public introduction, build secure systems, and earn public trust. Congress has yet to pass contemporary AI bills, so this voluntary, nonbinding agreement is the primary guidance around AI concerns.[74]
2023 August 15 China Generative AI Measures National Policy China enacts the final version of its Generative AI Measures, becoming one of the first countries to regulate generative artificial intelligence technology.[75] The Measures require service transparency and focus on the privacy of pre-training data. These requirements apply to services offered to the people of China regardless of the provider’s location.[76] China also requires GenAI content to reflect socialist values, prohibiting content that could harm national interests or discriminate against Chinese citizens.[77] The Measures do not require users to provide their real identities when using GenAI, and they apply only to services offered to the public (as opposed to privately used technology).[78] The penalty for violating the Measures is a warning followed by a possible fine.[79]
2023 September 19 United States Anthropic’s Responsible Scaling Policy Company Policy Anthropic publishes its Responsible Scaling Policy (RSP), a series of technical and organizational protocols to guide risk management and the development of increasingly powerful AI systems.[80] The RSP delineates AI Safety Levels 1–4, loosely modeled on the US government’s biosafety levels, to address catastrophic risk.[80] By publishing the policy, Anthropic aims in part to create competition in the AI safety space.[80] The Institute for AI Policy and Strategy offers critiques of Anthropic’s RSP: the risk thresholds should be based on absolute rather than relative risk, the risk level thresholds should be lower than Anthropic defines them, and Anthropic should outline when it will alert authorities of identified risks and commit to outside scrutiny and evaluations.[81]
2023 October 30 United States Executive Order 14110 National Policy Biden signs Executive Order 14110, titled “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The Order establishes new standards for AI safety and security. It compels developers to share test results with the US government and create tools to ensure AI system safety, protects Americans from AI fraud and deception, sets up a cybersecurity program to develop AI tools and fix vulnerabilities, and orders the development of a national security memorandum that directs future AI security measures.[82] The Order also directs the National Institute of Standards and Technology (NIST) to develop standards for evaluation and red-teaming and to provide testing environments for AI systems. The general reaction to the Order is cautious optimism.[83] As LessWrong blogger Zvi Mowshowitz reports, some worry that this is the first step in a slippery slope of heightened regulation that could dampen innovation and development.[84] The Bipartisan Policy Center publishes a complete timeline and outlook of the Executive Order.[85]
2023 November 1 – 2 International AI Safety Summit International Policy The first AI Safety Summit is held at Bletchley Park, Milton Keynes, in the United Kingdom. It leads to an agreement known as the Bletchley Declaration among the 28 countries participating in the summit, including the United States, United Kingdom, China, and the European Union.[86] Commentary on LessWrong views the summit as a partial step in the right direction,[87] including a lengthy blog post by Zvi Mowshowitz, a frequent commentator on AI developments through an AI safety lens.[88]
2023 November 1 United States AI Safety Institute Organization United States Vice President Kamala Harris announces the US AI Safety Institute (USAISI) at the AI Safety Summit in the United Kingdom. The launch of USAISI builds on Biden's executive order signed two days earlier (October 30).[89]
2023 November 2 United Kingdom AI Safety Institute Organization The United Kingdom government announces the launch of the UK AI Safety Institute. The UK AI Safety Institute is to be formed from the Frontier AI Taskforce, which in turn had previously been called the Foundation Model Taskforce. Ian Hogarth serves as its chair.[90]
2023 December 4 Singapore National AI Strategy Update National Policy Singapore releases its updated strategy, “AI for the Public Good: For Singapore and the World,” in response to technological developments since its 2019 strategy, offering broader coverage, more concrete goals, and rhetoric that shifts AI from nice to have to necessary to have.[91] The Strategy’s main goals are excellence and empowerment. The core differences from Singapore’s last strategy are moves from opportunity to necessity, from local to global, and from projects to systems.[92] The strategy suggests directing efforts to three systems via ten enablers: Activity Drivers (industry, government, research), People and Communities (talent, capabilities, placemaking), and Infrastructure and Environment (compute, data, trusted environment, leading thought and action).[92] The Ministry of Communications and Information (MCI), Smart Nation, and the Topos Institute would host the inaugural Singapore Conference on AI in the following days, gathering experts from academia, industry, and government to discuss critical AI questions.[92] Singapore shows little indication of setting hard rules for AI, opting to promote responsible AI practices through collaborative initiatives.[93] Singapore would rank high in leading AI indexes such as the AI Government Readiness Index (2023), The Global AI Index (2023), and the Asia Pacific AI Readiness Index (2023).[94] In its 2024 National Budget, Singapore would later announce an investment of more than S$1 billion (US$743 million) in AI over the next five years.[95]
2023 December 9 European Union Artificial Intelligence Act International Policy The European Council and European Parliament reach a provisional agreement on the Artificial Intelligence Act. The Act is expected to go into effect in 2026.[45]
2023 December 17 Israel AI Ethics Policy National Policy Israel releases its first comprehensive AI ethics policy through a collaborative effort among the Ministry of Innovation, Science, and Technology; the Office of Legal Counsel and Legislative Affairs; and the Ministry of Justice.[96] The policy identifies challenges in private-sector use of AI (discrimination, human oversight, explainability, disclosure of AI interactions, safety, accountability, and privacy). It suggests collaborative development, aligning policy principles with the OECD AI recommendations, and responsible innovation.[96] The document also recommends fortifying human-centric innovation, policy coordination, internal collaboration, tools for responsible AI, and public participation.[96] Israel is not expected to enact binding AI regulation; instead, it opts for guidelines in the hope that regulation will not impede its global positioning.[97]
2023 December 18 United States OpenAI Publishes Preparedness Framework Company Policy OpenAI releases its “Preparedness Framework,” a living document positing that a “robust approach to AI catastrophic risk safety requires proactive, science-based determinations of when and how it is safe to proceed with development and deployment.”[98] The elements of the framework include tracking catastrophic risk with evaluations, seeking out unknown unknowns, establishing safety baselines, tasking preparedness teams with on-the-ground work, and creating a cross-functional advisory board.[99] The document is released a few months after Anthropic’s RSP. SaferAI comments on OpenAI’s improvements over Anthropic’s safety document, including calling for more safety tests, allowing the board to veto CEO decisions, adding risk identification and analysis, and forecasting risks.[99] Elements included in the RSP but not in the Preparedness Framework are a commitment to publicizing evaluation results, incident-reporting mechanisms, and detailed commitments on infosecurity and cybersecurity.[99]
2024 January 17 International AIGA Releases AI Guidelines International Policy At the 2024 annual World Economic Forum in Davos, the AIGA, in collaboration with IBM Consulting and Accenture, releases three reports on AI regulation and governance.[100] “Presidio AI Framework: Towards Safe Generative AI Models,” “Unlocking Value from Generative AI: Guidance for Responsible Transformation,” and “Generative AI Governance: Shaping Our Collective Global Future,” aim to address the digital divide and to apply and mobilize AI resources in sectors like healthcare and education.[101]
2024 February 8 United States US AI Safety Institute Consortium Organization US Secretary of Commerce Gina Raimondo announces "the creation of the US AI Safety Institute Consortium (AISIC), which will unite AI creators and users, academics, government and industry researchers, and civil society organizations in support of the development and deployment of safe and trustworthy artificial intelligence (AI)." AISIC is housed under the US AI Safety Institute and includes over 200 member organizations.[102] The member organizations were recruited through a notice published in the Federal Register asking interested organizations to submit a letter of interest over a 75-day period (November 2, 2023, to January 15, 2024).[103][104]
2024 March 7 (anticipation), April 16 (official announcement) United States AI Safety Institute Leadership Organization US Secretary of Commerce Gina Raimondo announces additional members of the executive leadership of the US AI Safety Institute (AISI); one of these is Paul Christiano as head of AI safety.[105] A month prior, when this appointment was anticipated, VentureBeat had reported dissatisfaction with the idea of appointing Christiano, from "employees who fear that Christiano’s association with [effective altruism] and longtermism could compromise the institute’s objectivity and integrity."[106][107]
2024 April 8 France French Data Protection Authority Publishes Recommendations on AI National Policy The French data protection authority, the Commission nationale de l'informatique et des libertés (CNIL), publishes its recommendations on AI and requests public consultation and comments.[108] The recommendations focus on the developmental phase of AI systems, aiming to guide developers who process personal data in the context of the General Data Protection Regulation (GDPR).[109] The CNIL encourages developers to understand why they are developing systems, rely on legal bases, ensure their rights to reuse personal data, conduct impact assessments, minimize data, and plan data lifecycles.[110] The CNIL asserts the importance of caution regarding data scraping for model training, open-source AI models, human rights, and the application of the GDPR to AI models trained on personal data.[108]
2024 May 21 European Union Artificial Intelligence Act International Policy The European Council approves the Artificial Intelligence Act.[45] The Act is the first of its kind and takes a risk-based approach: the higher the risk to society, the stricter the rules.
2024 July 19 African Union African Union Executive Council: Continental AI Strategy International Policy The Executive Council of the African Union endorses the Continental AI Strategy, a commitment to an Africa-centric development approach to AI.[111] The Strategy provides a unified approach while encouraging African countries to develop contextually specific national AI policies.[111] The 5-year implementation plan includes supporting African Union member states in creating national AI strategies, fostering AI talent in Africa, nurturing AI partnerships and investments, adopting AI in priority sectors, building AI infrastructure, promoting research, and developing legal frameworks.[111] Benin, Egypt, Ghana, Mauritius, Rwanda, Senegal, and Tunisia all have National AI Strategies.[112]

See also

References

  1. "Government AI Readiness Index 2023". oxfordinsights.com. 2023. Retrieved 23 October 2024. 
  2. "Global AI Index 2024". tortoisemedia.com. 2024. Retrieved 23 October 2023. 
  3. "Top 10 Countries Leading in AI Research & Technology in 2024". techopedia.com. 2024. Retrieved 23 October 2023. 
  4. 4.0 4.1 "Canada is a global AI leader". cifar.ca. 2017. Retrieved 18 September 2024. 
  5. "Pan-Canadian Artificial Intelligence Strategy". dig.watch. June 2017. Retrieved 18 September 2024. 
  6. "China issues guideline on artificial intelligence development". english.gov.cn. 20 July 2017. Retrieved 6 September 2024. 
  7. 7.0 7.1 7.2 "National strategies on Artificial Intelligence A European perspective in 2019" (PDF). knowledge4policy.ec.europa.eu. 25 February 2020. Retrieved 30 October 2024. 
  8. Bareis, Jascha; Katzenbach, Christian (29 November 2018). "Global AI race: States aiming for the top". hiig.de. Retrieved 11 October 2024. 
  9. Bareis, Jascha; Katzenbach, Christian (29 November 2018). "Global AI race: States aiming for the top". hiig.de. Retrieved 11 October 2024. 
  10. Bareis, Jascha; Katzenbach, Christian (29 November 2018). "Global AI race: States aiming for the top". hiig.de. Retrieved 11 October 2024. 
  11. 11.0 11.1 "France AI Strategy Report". ai-watch.ec.europa.eu. 1 September 2021. Retrieved 11 October 2024. 
  12. 12.0 12.1 "What is GDPR". GDPR.EU. Retrieved 28 August 2024. 
  13. "The EU General Data Protection Regulation". HRW.org. 6 June 2018. Retrieved 28 August 2024. 
  14. 14.0 14.1 "National Strategy for Artificial Intelligence #AIForAll (2018) Overview of the strategy". datagovhub.elliott.gwu. 2018. Retrieved 11 October 2024. 
  15. 15.0 15.1 "National Strategy for AI" (PDF). niti.gov.in. June 2018. Retrieved 11 October 2024. 
  16. "California Consumer Privacy Act (CCPA)". oag.ca.gov. Retrieved 30 August 2024. 
  17. Ocampo, Danielle (June 2024). "CCPA and the EU AI ACT". calawyers.org. Retrieved 30 August 2024. 
  18. 18.0 18.1 "Germany AI Strategy Report". ai-watch.ec. 21 September 2024. Retrieved 11 October 2024. 
  19. 19.0 19.1 "Artificial Intelligence Strategy AI - a brand for Germany". bundesregierung.de. 15 November 2018. Retrieved 11 October 2024. 
  20. 20.0 20.1 "Georgetown University — Center for Security and Emerging Technology". openphilanthropy.org. January 2019. Retrieved 27 September 2024. 
  21. Anderson, Nick (28 February 2019). "Georgetown launches think tank on security and emerging technology". washingtonpost.com. Retrieved 8 March 2019. 
  22. "CSET Named Member of Biden Administration's AI Safety Institute Consortium". cset.georgetown.edu. 8 February 2024. Retrieved 27 September 2024. 
  23. "Maintaining American Leadership in Artificial Intelligence". Federalregister.gov. Retrieved 30 August 2024. 
  24. Metz, Cade (11 February 2019). "Trump Signs Executive Order Promoting Artificial Intelligence". nytimes.com. Retrieved 30 August 2024. 
  25. "Human Centric AI" (PDF). cas.co. 29 March 2019. Retrieved 15 September 2024. 
  26. Habuka, Hiroki (14 February 2023). "Japan's Approach to AI Regulation and Its Impact on the 2023 G7 Presidency". CSIS.org. Retrieved 15 September 2024. 
  27. "116th Congress". Congress.gov. Retrieved 30 August 2024. 
  28. Robertson, Adi (10 April 2019). "A new bill would force companies to check their algorithms for bias". theverge.com. Retrieved 30 August 2024. 
  29. "117th Congress". Congress.gov. Retrieved 30 August 2024. 
  30. 30.0 30.1 "OECD AI Principles overview". oecd.ai. Retrieved 11 September 2024. 
  31. "AI Principles". oecd.org. Retrieved 11 September 2024. 
  32. 32.0 32.1 32.2 "National strategies on Artificial Intelligence A European perspective in 2019" (PDF). knowledge4policy.ec.europa.eu. 25 February 2020. Retrieved 30 October 2024. 
  33. "Raising Standards: Data and Artificial Intelligence in Southeast Asia". asiasociety.org. July 2022. Retrieved 18 October 2024. 
  34. Ho, Ming Yin (26 February 2024). "Singapore's National Strategy in the Global Race for AI". kas.de. Retrieved 18 October 2024. 
  35. Ho, Ming Yin (26 February 2024). "Singapore's National Strategy in the Global Race for AI". kas.de. Retrieved 18 October 2024. 
  36. Ho, Ming Yin (26 February 2024). "Singapore's National Strategy in the Global Race for AI". kas.de. Retrieved 18 October 2024. 
  37. Ho, Ming Yin (26 February 2024). "Singapore's National Strategy in the Global Race for AI". kas.de. Retrieved 18 October 2024. 
  38. "Australia's AI Ethics Principles". industry.gov.au. 7 November 2019. Retrieved 18 September 2024. 
  39. Tonkin, Casey (7 November 2019). "AI ethics framework being put to the test". ia.acs.org.au. Retrieved 18 September 2024. 
  40. "NATIONAL STRATEGY FOR AI". oecd.ai. 5 July 2024. Retrieved 20 September 2024. 
  41. Kyul, Han (13 January 2020). "Korean Government Announces the "National AI Strategy," Jointly Developed by All Ministries". kimchang.com. Retrieved 20 September 2024. 
  42. "National Strategy for Artificial Intelligence". msit.go.kr. 17 December 2024. Retrieved 20 September 2024. 
  43. 43.0 43.1 43.2 "About GPAI". gpai.ai. June 2020. Retrieved 20 September 2024. 
  44. O’Brien, Matt (29 May 2020). "US joins G7 artificial intelligence group to counter China". defensenews.com. Retrieved 20 September 2024. 
  45. 45.0 45.1 45.2 45.3 "European Council - Council of the European Union". consilium.europa.eu. Retrieved 30 August 2024. 
  46. 46.0 46.1 "Artificial Intelligence 4.0 programme accelerates business digitalisation". tem.fi. November 2020. Retrieved 30 October 2024. 
  47. 47.0 47.1 47.2 "Guidelines on Artificial Intelligence for the Confederation". bakom.admin.ch. November 2020. Retrieved 30 October 2024. 
  48. ""Artificial Intelligence" – Adoption of Guidelines for the Federal Administration". admin.ch. 25 November 2020. Retrieved 30 October 2024. 
  49. "Switzerland AI Strategy Report". ai-watch.ec.europa.eu. 1 September 2021. Retrieved 30 October 2024. 
  50. 50.0 50.1 50.2 50.3 "Artificial Intelligence Strategy for the German Government: 2020 Update" (PDF). ki-strategie-deutschland.de. December 2020. Retrieved 11 October 2024. 
  51. Kachra, Ashyana-Jasmine (12 February 2024). "Making Sense of China's AI Regulations". holisticai.com. Retrieved 28 October 2024. 
  52. "National AI Strategy" (PDF). gov.uk. 22 September 2021. Retrieved 15 September 2024. 
  53. Marr, Bernard (3 November 2021). "The Future Role Of AI And The UK National AI Strategy – Insights From Professor Mark Girolami". forbes.com. Retrieved 15 September 2024. 
  54. Roman, Juliana (4 October 2021). "Artificial Intelligence in Brazil: the Brazilian Strategy for Artificial Intelligence (BSAI/EBIA) and Bill No. 21/2020". irisbh.com.br. Retrieved 18 September 2024. 
  55. Lowe, Josh (13 April 2021). "Brazil launches national AI strategy". globalgovernmentforum.com. Retrieved 18 September 2024. 
  56. Roman, Juliana (4 October 2021). "Artificial Intelligence in Brazil: the Brazilian Strategy for Artificial Intelligence (BSAI/EBIA) and Bill No. 21/2020". irisbh.com.br. Retrieved 18 September 2024. 
  57. 57.0 57.1 57.2 "Japan: METI releases updated version of Governance Guidelines on AI Principles". dataguidance.com. 31 January 2023. Retrieved 25 September 2024. 
  58. "Call for Public Comments on "AI Governance Guidelines for Implementation of AI Principles Ver. 1.0" Opens". meti.go.jp. 9 July 2021. Retrieved 25 September 2024. 
  59. Habuka, Hiroki (14 February 2023). "Japan's Approach to AI Regulation and Its Impact on the 2023 G7 Presidency". CSIS.org. Retrieved 25 September 2024. 
  60. Xu, Hui; Lee, Bianca (16 August 2023). "China's New AI Regulations" (PDF). lw.com. Retrieved 28 October 2024. 
  61. Kachra, Ashyana-Jasmine (12 February 2024). "Making Sense of China's AI Regulations". holisticai.com. Retrieved 28 October 2024. 
  62. Kachra, Ashyana-Jasmine (12 February 2024). "Making Sense of China's AI Regulations". holisticai.com. Retrieved 28 October 2024. 
  63. Xu, Hui; Lee, Bianca (16 August 2023). "China's New AI Regulations" (PDF). lw.com. Retrieved 28 October 2024. 
  64. 64.0 64.1 "Assessing AI-related Collaboration between the United States, Australia, India, and Japan". cset.georgetown.edu. May 2022. Retrieved 9 October 2024. 
  65. Madhani, Aamer; Miller, Zeke (24 May 2022). "EXPLAINER: What's the 4-nation Quad, where did it come from?". apnews.com. Retrieved 27 September 2024. 
  66. Madhani, Aamer; Miller, Zeke (24 May 2022). "EXPLAINER: What's the 4-nation Quad, where did it come from?". apnews.com. Retrieved 27 September 2024. 
  67. Kachra, Ashyana-Jasmine (12 February 2024). "Making Sense of China's AI Regulations". holisticai.com. Retrieved 28 October 2024. 
  68. Li, Barbara; Zhou, Amaya (7 August 2024). "Navigating the Complexities of AI Regulation in China". reedsmith.com. Retrieved 28 October 2024. 
  69. Xu, Hui; Lee, Bianca (16 August 2023). "China's New AI Regulations" (PDF). lw.com. Retrieved 28 October 2024. 
  70. Xu, Hui; Lee, Bianca (16 August 2023). "China's New AI Regulations" (PDF). lw.com. Retrieved 28 October 2024. 
  71. Tedeneke, Alem (15 June 2023). "World Economic Forum Launches AI Governance Alliance Focused on Responsible Generative AI". weforum.org. Retrieved 20 September 2024. 
  72. Tedeneke, Alem (15 June 2023). "World Economic Forum Launches AI Governance Alliance Focused on Responsible Generative AI". weforum.org. Retrieved 20 September 2024. 
  73. "FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI". whitehouse.gov. 21 July 2023. Retrieved 11 September 2024. 
  74. Chatterjee, Mohar (21 July 2023). "White House notches AI agreement with top tech firms". politico.com. Retrieved 11 September 2024. 
  75. Li, Barbara; Zhou, Amaya (7 August 2024). "Navigating the Complexities of AI Regulation in China". reedsmith.com. Retrieved 28 October 2024. 
  76. Li, Barbara; Zhou, Amaya (7 August 2024). "Navigating the Complexities of AI Regulation in China". reedsmith.com. Retrieved 28 October 2024. 
  77. Xu, Hui; Lee, Bianca (16 August 2023). "China's New AI Regulations" (PDF). lw.com. Retrieved 28 October 2024. 
  78. Xu, Hui; Lee, Bianca (16 August 2023). "China's New AI Regulations" (PDF). lw.com. Retrieved 28 October 2024. 
  79. Xu, Hui; Lee, Bianca (16 August 2023). "China's New AI Regulations" (PDF). lw.com. Retrieved 28 October 2024. 
  80. 80.0 80.1 80.2 "Anthropic's Responsible Scaling Policy". anthropic.com. 19 September 2023. Retrieved 11 September 2024. 
  81. "Responsible Scaling: Comparing Government Guidance and Company Policy". iaps.ai. 11 March 2024. Retrieved 11 September 2024. 
  82. "FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence". whitehouse.gov. Retrieved 30 August 2024. 
  83. "NIST Calls for Information to Support Safe, Secure and Trustworthy Development and Use of Artificial Intelligence". nist.gov. 19 December 2023. Retrieved 30 August 2024. 
  84. Mowshowitz, Zvi (1 November 2023). "Reactions to the Executive Order". lesswrong.com. Retrieved 30 August 2024. 
  85. "AI Executive Order Timeline". bipartisanpolicy.org. 13 December 2023. Retrieved 30 August 2024. 
  86. "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023". GOV.UK. November 1, 2023. Retrieved May 19, 2024. 
  87. Soares, Nate (October 31, 2023). "Thoughts on the AI Safety Summit company policy requests and responses". LessWrong. Retrieved May 19, 2024. 
  88. Mowshowitz, Zvi (November 7, 2023). "On the UK Summit". LessWrong. Retrieved May 19, 2024. 
  89. "FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence". White House. November 1, 2023. Retrieved July 6, 2024. 
  90. "Prime Minister launches new AI Safety Institute. World's first AI Safety Institute launched in UK, tasked with testing the safety of emerging types of AI.". GOV.UK. November 2, 2023. Retrieved July 6, 2024. 
  91. Ho, Ming Yin (26 February 2024). "Singapore's National Strategy in the Global Race for AI". kas.de. Retrieved 18 October 2024. 
  92. 92.0 92.1 92.2 "National Artificial Intelligence Strategy 2 to uplift Singapore's social and economic potential". smartnation.gov.sg. 10 October 2024. Retrieved 18 October 2024. 
  93. Ho, Ming Yin (26 February 2024). "Singapore's National Strategy in the Global Race for AI". kas.de. Retrieved 18 October 2024. 
  94. Ho, Ming Yin (26 February 2024). "Singapore's National Strategy in the Global Race for AI". kas.de. Retrieved 18 October 2024. 
  95. Aziz, Muhamad (13 March 2024). "Singapore's Ambitious AI Investment Plan". aseanbriefing.com. Retrieved 18 October 2024. 
  96. 96.0 96.1 96.2 "Israel's Policy on Artificial Intelligence Regulation and Ethics". gov.il. 17 December 2023. Retrieved 18 October 2024. 
  97. Or-Hof, Dan (10 January 2024). "Proactive caution: Israel's approach to AI regulation". iapp.org. Retrieved 18 October 2024. 
  98. "OpenAI Preparedness Framework (Beta)" (PDF). openai.com. 18 December 2023. Retrieved 11 September 2024. 
  99. 99.0 99.1 99.2 "Is OpenAI's Preparedness Framework better than its competitors' "Responsible Scaling Policies"? A Comparative Analysis". safer-ai.org. 19 January 2024. Retrieved 11 September 2024. 
  100. Sibahle, Malinga (18 January 2024). "AI Governance Alliance debuts research reports on AI guidelines". itweb.co.za. Retrieved 20 September 2024. 
  101. Sibahle, Malinga (18 January 2024). "AI Governance Alliance debuts research reports on AI guidelines". itweb.co.za. Retrieved 20 September 2024. 
  102. "Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety". U.S. Department of Commerce. February 8, 2024. Retrieved July 6, 2024. 
  103. "Artificial Intelligence Safety Institute Consortium". Federal Register. November 2, 2023. Retrieved July 6, 2024. 
  104. "Artificial Intelligence Safety Institute Consortium (AISIC)". National Institute of Standards and Technology. Retrieved July 6, 2024. 
  105. "U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team". U.S. Department of Commerce. April 16, 2024. Retrieved July 6, 2024. 
  106. Goldman, Sharon (March 7, 2024). "NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute". VentureBeat. Retrieved July 6, 2024. 
  107. "NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute (linkpost)". Effective Altruism Forum. March 8, 2024. Retrieved July 6, 2024. 
  108. 108.0 108.1 "France: CNIL requests comments on new AI recommendations". dataguidance.com. 2 July 2024. Retrieved 11 October 2024. 
  109. Padova, Yann; Burton, Cedric (23 April 2024). "French Data Protection Authority Publishes Recommendations on the Development of AI Systems: Seven Takeaways". wsgr.com. Retrieved 11 October 2024. 
  110. Padova, Yann; Burton, Cedric (23 April 2024). "French Data Protection Authority Publishes Recommendations on the Development of AI Systems: Seven Takeaways". wsgr.com. Retrieved 11 October 2024. 
  111. 111.0 111.1 111.2 "African Union committed to developing AI capabilities in Africa". au.int. 28 August 2024. Retrieved 11 October 2024. 
  112. Okolo, Chinasa (15 March 2024). "Reforming data regulation to advance AI governance in Africa". brookings.edu. Retrieved 11 October 2024.