Timeline of AI ethics violations
From Timelines
=Big picture=

STATUS: Unfinished


===Overall summary===

==Full timeline==

===Inclusion criteria===

The timeline covers incidents of human rights violations in which AI played a significant role.
Rows were included according to the following criteria:
* AI involvement: the incident must involve the significant use of AI technologies.
* Human rights impact: the incident must have violated human rights as defined by international law and standards, such as the {{w|Universal Declaration of Human Rights}} (UDHR) and subsequent treaties. Examples of human rights abuses include unlawful killings or injuries; arbitrary detention or torture; discrimination based on race, ethnicity, religion, gender, or other protected characteristics; restrictions on freedom of expression or assembly; and privacy violations.
* State or corporate responsibility: the incident must involve a state or corporate entity that used AI technology to abuse human rights.
* Verifiable evidence: only incidents with credible, verifiable evidence from sources such as news articles, human rights reports, official documents, and academic research are included.
* Geographical range: global.
* Relevance and significance: incidents involving significant human rights violations are prioritized.

===Timeline of AI ethics violations===

{| class="wikitable"
! Onset !! Region !! Perpetrators !! Name !! Details
|-
| {{dts|2016}} || {{w|Xinjiang}}, China || Chinese government || {{w|Mass surveillance in China}} of ethnic minorities || Chinese police and other officials use the AI-powered application {{w|Integrated Joint Operations Platform}} (IJOP) for mass surveillance of the predominantly Turkic Muslim and Uyghur population of Xinjiang.<ref name="China’s Surveillance">{{cite web |title=China’s Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App |url=https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass |website=hrw.org |access-date=23 October 2024 |language=en |date=1 May 2019}}</ref> The IJOP collects personal information, locations, identities, electricity and gas usage, personal relationships, and DNA samples (which can be used to infer ethnicity), then flags suspicious individuals, activities, or circumstances.<ref name="China’s Surveillance"/> The IJOP treats foreign contacts, donations to mosques, lack of socialization with neighbors, and frequent use of the front door as suspicious.<ref name="China’s Surveillance"/> Individuals deemed suspicious are investigated and can be sent to mass political education camps and facilities where millions of Turkic Muslims and Uyghurs are subjected to movement restrictions, political indoctrination, and religious repression.<ref name="China’s Surveillance"/> Techno-authoritarian surveillance occurs throughout China, contrary to the internationally guaranteed right to privacy. China's central bank has adopted a digital currency that allows Beijing to exclude blocklisted individuals from social services and to control financial transactions.<ref>{{cite web |last1=Wang |first1=Maya |title=China’s Techno-Authoritarianism Has Gone Global |url=https://www.hrw.org/news/2021/04/08/chinas-techno-authoritarianism-has-gone-global |website=hrw.org |access-date=23 October 2024 |language=en |date=8 April 2021}}</ref>
|-
| {{dts|October 2023}} || {{w|Gaza Strip}} || {{w|Israel Defense Forces}} || {{w|AI-assisted targeting in the Gaza Strip}} || Israel implemented {{w|AI-assisted targeting in the Gaza Strip}} during the {{w|Israeli bombing of the Gaza Strip}}, which would continue through October 2024.<ref name="Katibah">{{cite web |last1=Katibah |first1=Leila |title=The Genocide Will Be Automated—Israel, AI and the Future of War |url=https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/ |website=merip.org |access-date=18 October 2024 |language=en |date=October 2024}}</ref> The Israeli military uses the surveillance tool Pegasus to locate and collect data on individuals. It feeds this data through automated targeting platforms such as Where’s Daddy, Gospel, and Lavender, which use facial recognition, geolocation, and cloud computing to generate targets, including journalists, human rights defenders, academics, diplomats, union leaders, politicians, and heads of state.<ref name="Katibah"/>
|}
+ | |||
+ | ==See also== | ||
+ | |||
+ | * [[Timeline of AI policy]] | ||
+ | * [[Timeline of AI safety]] | ||
+ | * [[Timeline of machine learning]] | ||
+ | |||
+ | ==References== | ||
+ | |||
+ | {{reflist|30em}} | ||
==See also== | ==See also== |
Revision as of 09:10, 6 November 2024