Timeline of AI ethics violations


Big picture

STATUS: Unfinished


Full timeline

Inclusion criteria

The timeline documents incidents of human rights violations in which AI was involved. The following criteria determine which rows are included:

  • AI involvement: The incident must involve the significant use of AI technologies.
  • Human rights impact: The incident must have violated human rights as defined by international law and standards, such as the Universal Declaration of Human Rights (UDHR) and subsequent treaties. Examples of human rights abuses include unlawful killings or injuries; arbitrary detention or torture; discrimination based on race, ethnicity, religion, gender, or other protected characteristics; restrictions on freedom of expression or assembly; and privacy violations.
  • State or corporate responsibility: The incident must involve a state or corporate entity that has used AI technology to abuse human rights.
  • Verifiable evidence: Only incidents with credible and verifiable evidence from sources such as news articles, human rights reports, official documents, and academic research are included.
  • Geographical range: Global.
  • Relevance or significance: Incidents involving significant human rights violations are prioritized.


Timeline of AI ethics violations

Onset: 2016
Region: Xinjiang, China
Perpetrators: Chinese government
Name: Mass surveillance of ethnic minorities in China
Details: Chinese police and other officials use the AI-powered Integrated Joint Operations Platform (IJOP) for mass surveillance of the predominantly Turkic Muslim and Uyghur population of Xinjiang.[1] The IJOP collects personal information, location data, identities, electricity and gas usage, personal relationships, and DNA samples (which can be used to infer ethnicity), then flags suspicious individuals, activities, or circumstances.[1] The IJOP treats foreign contacts, donations to mosques, lack of socialization with neighbors, and avoiding use of the front door as suspicious.[1] Individuals deemed suspicious are investigated and can be sent to mass political education camps and detention facilities, where millions of Turkic Muslims and Uyghurs are subjected to movement restrictions, political indoctrination, and religious repression.[1] Techno-authoritarian surveillance occurs throughout China, contrary to the internationally guaranteed right to privacy. China's central bank has adopted a digital currency that allows Beijing to exclude blocklisted individuals from social services and to control financial transactions.[2]

Onset: October 2023
Region: Gaza Strip
Perpetrators: Israel
Name: AI-assisted targeting in the Gaza Strip
Details: Israel implemented AI-assisted targeting during its bombing of the Gaza Strip, which began in October 2023 and was ongoing as of November 2024.[3] The Israeli military uses the Pegasus spyware to locate and collect data on individuals, then feeds this data through automated targeting platforms such as Where’s Daddy, Gospel, and Lavender, which use facial recognition, geolocation, and cloud computing to generate targets, including journalists, human rights defenders, academics, diplomats, union leaders, politicians, and heads of state.[4] The Lavender targeting tool has generated a kill list of as many as 37,000 people, and Israeli intelligence officials report that it has a 10% error rate.[5] As of April 2024, the Israeli military hopes to sell its targeting tools to foreign entities.[5]

See also


References

  1. "China's Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App". hrw.org. 1 May 2019. Retrieved 23 October 2024.
  2. Wang, Maya (8 April 2021). "China's Techno-Authoritarianism Has Gone Global". hrw.org. Retrieved 23 October 2024. 
  3. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024. 
  4. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024. 
  5. "'AI-assisted genocide': Israel reportedly used database for Gaza kill lists". aljazeera.com. 4 April 2024. Retrieved 6 November 2024.