Timeline of AI ethics violations
Big picture
STATUS: Unfinished
Full timeline
Inclusion criteria
This timeline outlines incidents of human rights violations in which AI was involved. Rows were included according to the following criteria:
- AI involvement: The incident must involve the significant use of AI technologies.
- Human rights impact: The incident must have violated human rights as defined by international law and standards such as the Universal Declaration of Human Rights (UDHR) and subsequent treaties. Examples of human rights abuses include unlawful killings or injuries, arbitrary detention or torture, discrimination based on race, ethnicity, religion, gender, or other protected characteristics, restrictions on freedom of expression or assembly, and privacy violations.
- State or corporate responsibility: The incident must involve a state or corporate entity that has used AI technology to abuse human rights.
- Verifiable evidence: Only incidents with credible and verifiable evidence from sources such as news articles, human rights reports, official documents, and academic research are included.
- Geographical range: Global.
- Relevance or significance: Incidents with significant human rights violations are prioritized.
Timeline of AI ethics violations
Onset | Region | Perpetrators | Name | Details |
---|---|---|---|---|
2008 | United States | United States Law Enforcement Agencies | Predictive Policing | Predictive policing refers to the use of algorithms to analyze past criminal activity data, identify patterns, and predict and prevent future crimes.[1] However, because police departments can only use data from reported crimes, past prejudices in arrests are accentuated, leading to over-policing of Black and Latinx communities.[2] Predictive policing also threatens the Fourth Amendment to the United States Constitution, which requires reasonable suspicion before a stop.[3] The LA Police Department starts working with federal agencies to explore predictive policing in 2008; the New York and Chicago Police Departments would start testing their systems in 2012.[4] The Chicago Police Department would create the Strategic Subject List (SSL) algorithm in 2012, which assigns individuals a score based on the likelihood of involvement in a future crime.[5] In 2016, the RAND Corporation would find that people on this list were no more or less likely to be involved in a shooting than a control group but were more likely to be arrested for one.[6] By 2018, almost 400,000 people had an SSL risk score, disproportionately men of color.[7] Predictive policing would be shut down in Chicago and LA in 2019 and 2020 due to evidence of its inefficacy.[8] |
2016 | Xinjiang, China | Chinese Government | Mass Surveillance in China of Ethnic Minorities | Chinese police and other officials use the AI-powered application Integrated Joint Operations Platform (IJOP) for mass surveillance of the predominantly Turkic Muslim and Uyghur population of Xinjiang.[9] The IJOP collects personal information, location, identities, electricity and gas usage, personal relationships, and DNA samples (which can be used to infer ethnicity), then flags suspicious individuals, activities, or circumstances.[9] The IJOP defines foreign contacts, donations to mosques, lack of socialization with neighbors, and frequent usage of the front door as suspicious.[9] Individuals deemed suspicious are investigated and can be sent to mass political education camps and facilities where millions of Turkic Muslims and Uyghurs are subjected to movement restriction, political indoctrination, and religious repression.[9] Techno-authoritarian surveillance occurs throughout China, contrary to the internationally guaranteed right to privacy. The Central Bank in China has adopted a digital currency that allows Beijing to exclude blocklisted individuals from social services and control financial transactions.[10] |
August 2017 | Myanmar | Meta Platforms (Facebook) | Facebook's role in the Rohingya genocide | Myanmar security forces begin a campaign of ethnic cleansing against the Rohingya people in Rakhine State, causing 700,000 to flee from the systematic murder, rape, and burning of homes.[11] Meta Platforms (formerly Facebook) is increasingly turning towards AI to detect “hate speech.”[12] However, its detection algorithms proactively amplify content that incites violence against the Rohingya people, who already face long-standing discrimination.[12] Facebook favors inflammatory content in its AI-powered engagement-based algorithmic systems, which power news feeds, ranking, recommendation, and group features.[11] Internal Meta studies dating back to 2012 indicate the corporation’s awareness that its algorithms could lead to real-world harm, and a 2016 study documents acknowledgment that the recommendation system can increase extremism.[11] In 2014, Myanmar authorities even temporarily blocked Facebook due to an outbreak of ethnic violence in Mandalay.[11] Facebook facilitates peer-to-peer interaction affirming harmful narratives targeting the Rohingya, hosts massive misinformation campaigns originated by the Myanmar military, and knowingly proliferates a product that exacerbates political division and the spread of disinformation.[13] A 2022 Global Witness investigation would reveal Meta’s failure to detect blatant anti-Rohingya and anti-Muslim content, even after Mark Zuckerberg promised in 2018 to increase the number of Burmese-speaking content moderators.[12] Facebook’s content-shaping algorithm is designed to maximize user engagement and, therefore, profit, in this case contributing to the genocide of the Rohingya.[12] |
2018 | India | Indian Government | Mass surveillance of Indian citizens with Facial Recognition Technology | The Indian government rolls out Facial Recognition Technology (FRT), beginning with telecommunication companies using data collected by the Unique Identification Authority of India (UIDAI).[14] Before 2009, there was no centralized identification in India, sparking the creation of Aadhaar, a unique 12-digit ID number assigned to over 1 billion Indian citizens.[15] The Aadhaar database includes biometric and demographic information, which law enforcement can use for FRT.[16] FRT using Aadhaar data would be used to access public benefits and services, and FRT would infiltrate India’s telecommunications and travel, policing, public health, welfare programs, education, and elections.[17] These FRT systems are used for racial surveillance[18] and have higher inaccuracy in racially homogenous groups.[19] |
2018 | United States | Amazon | Biased facial recognition software (Amazon Rekognition) | It is reported in 2018 that Amazon’s facial recognition software, Amazon Rekognition, has a disproportionate error rate when identifying women and people of color.[20] The service is offered to the public at a price but heavily marketed towards US law enforcement agencies.[21] Amazon lists the city of Orlando, Florida, and the Washington County Sheriff’s Office in Oregon among its customers.[22] Amazon claims the software can track people in real-time through surveillance cameras, scan body camera footage, and identify up to 100 faces in a single image, which is pertinent in an era of unprecedented protest attendance.[23] In 2019, a Massachusetts Institute of Technology researcher also found higher error rates in classifying darker-skinned women than lighter-skinned men.[24] |
2019 | Iran | Iranian Government | Facial recognition software to target Iranian protesters | The Iranian government integrates AI-based surveillance technologies into its governance framework, enabling the identification and detention of protesters by positioning high-definition surveillance equipment to capture public activity.[2] In 2021, China would become Iran’s biggest technological investor, more than doubling the government’s possession of high-definition surveillance video recorders.[25] In 2022, after the onset of the Mahsa Amini protests, the Iranian government would adopt legislation laying out the use of AI-assisted facial recognition tools to enforce morality codes and identify women not abiding by hijab mandates.[26] More than 20,000 arrests and 500 killings of protesters would follow.[27] In 2024, Iran would make an AI ethics deal with Russia to encourage technological cooperation and investment.[28] Iran has also been accused of analyzing citizens’ social media engagement and creating AI-driven bots and automated social media accounts to flood platforms with regime-sanctioned content.[2] |
2020 | United States | U.S. Immigration and Customs Enforcement | ICE contracts Clearview AI | The American Civil Liberties Union (ACLU) files a Freedom of Information Act (FOIA) request after US Immigration and Customs Enforcement (ICE) purchases Clearview AI technology.[29] Clearview AI is a facial recognition software.[30] The technology, employed by law enforcement agencies and private companies, scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules.[31] Using the controversial data-scraping tool, ICE can now deploy mass surveillance to identify and detain immigrants.[29] United States government agencies have a history of mass surveillance: in 2017, the DHS, ICE, and the Department of Health and Human Services used Palantir technology to tag, track, locate, and arrest 400 people in an operation that targeted family members and caregivers of unaccompanied migrant children.[32] The FBI and ICE have also searched state and federal driver’s license databases to find undocumented immigrants using facial recognition.[33][2][34] Facial recognition technology is proven to be less accurate in identifying women and individuals with darker skin,[2] therefore discriminating against women and minorities. |
March 2020 | Libya | Government of National Accord | Possibly the first wartime autonomous drone kill | Political unrest in Libya leads to conflict between the UN-backed Government of National Accord and forces aligned with Khalifa Haftar.[35] In a March 2020 skirmish, Haftar’s troops are hunted down and engaged by an autonomously capable drone.[36] The device is a Turkish-made STM Kargu-2 loitering drone, possessing the ability to use machine learning-based object classification to select and engage targets, and capable of swarming.[37] While the UN report on the incident doesn’t specifically state the drone was used autonomously and only heavily implies casualties, if confirmed this would be the first incident of battlefield deaths due to autonomous robots.[38] Autonomous weapons could rely on biased data and result in disproportionate battlefield deaths of protected demographics. |
July 2021 | International | NSO Group | Pegasus Project (investigation) | Amnesty International and Forbidden Stories release the Pegasus Project investigation. The investigation found that the Israeli company NSO Group had contracted Pegasus (spyware) to clients in over 50 countries, spying on over 50,000 surveillance targets from 2016 to 2021.[39] The NSO Group’s clients included Azerbaijan, Bahrain, Hungary, India, Kazakhstan, Mexico, Morocco, Rwanda, Saudi Arabia, Togo, and the United Arab Emirates (UAE).[40] The UAE is one of the most active users of Pegasus, targeting 10,000 people, including Ahmed Mansoor.[41] The targets included activists, human rights defenders, academics, businesspeople, lawyers, doctors, diplomats, union leaders, politicians, several heads of state, and at least 180 journalists.[39] The spyware, used by repressive governments to silence dissent, is surreptitiously installed on victims’ phones and allows the perpetrator complete device access (including messages, emails, cameras, microphones, calls, contacts, and media).[40] The NSO Group claims to sell its products to government clients to collect data from the mobile devices of individuals suspected of involvement in serious crimes or terrorism, and that the leaked state surveillance was misuse that would be investigated.[39] The NSO Group did not take action to stop its tools from being used to unlawfully target and surveil citizens, denies any wrongdoing, and claims its company is involved in a lifesaving mission.[40] |
2022 | Ukraine | Russia | Russia’s Use of AI in the Ukraine Invasion | The February 2022 Russian invasion of Ukraine brings a new age of AI in wartime. While cyber-attacks against Ukraine predated the invasion, Russia deploys AI-driven cyber attacks on Ukrainian infrastructure, communications, and allies at an increased rate.[42] The Russian Internet was isolated from the world after the 2019 Sovereign Internet Law, ramping up AI tools for domestic repression and surveillance, content-blocking mechanisms, and sifting through dissent.[2] The isolation gives Russia enhanced censorship and monitoring of the Russian public and information landscape regarding the invasion. There are reports of the Ministry of Defense (Russia) using AI for data analysis and decision-making in the battlespace and prioritizing autonomous weapons research.[43] Russia is suspected of utilizing unmanned aerial vehicles (UAVs) equipped with AI-powered cameras and sensors for reconnaissance missions and using neural networks to identify strike targets.[44][45] OpenAI would report in May 2024 two covert influence operations from Russia using AI to spread content on social media defending the invasion.[46] |
February 2023 | Russia | Roskomnadzor | Russian surveillance of LGBTQ people | The Russian federal agency Roskomnadzor, responsible for monitoring, controlling, and censoring Russian media, launches the AI-based detection system Oculus.[2] The program analyzes over 200,000 photos a day (as opposed to 200 a day by a human analyst) and looks for banned content in online images and videos.[47] The program scans for suicide, pro-drug, and extremist content, as well as calls for protests. It also searches for pro-LGBTQ content, cracking down on the community as part of a framing tactic in the war in Ukraine that claims to defend traditional Russian values.[48] The Kremlin claims the program can identify people under a beard or a mask and determine age, insinuating the ability to easily identify and target LGBTQ content creators.[49] |
October 2023 | Gaza Strip | Israeli Defense Forces | AI-assisted targeting in the Gaza Strip | Israel implements AI-assisted targeting in the Israeli bombing of the Gaza Strip.[50] The IDF itself has acknowledged the use of AI to accelerate targeting, increasing the tempo of operations and the pool of targets for assassination.[51] The Israeli military uses the tool Pegasus to locate and collect data on individuals, feeding this data through automated targeting platforms like Where’s Daddy, Gospel, and Lavender, which use facial recognition, geolocation, and cloud computing to generate targets, including journalists, human rights defenders, academics, diplomats, union leaders, politicians, and heads of state.[52] Lavender relies on a surveillance network and assigns each inputted Gazan a score from 1-100, estimating how likely they are to be a Hamas militant.[53] The tool is responsible for generating a kill list of as many as 37,000, and Israeli intelligence officials report the tool has a 10% error rate (the error rate could be greater, depending on the IDF’s classification of Hamas militants).[54] The Lavender score is fed into “Where’s Daddy,” which uses AI to determine when the individual has returned home, marking them for assassination.[55] As of April 2024, the Israeli military hopes to sell its targeting tools to foreign entities.[54] |
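The biased-data feedback loop described in the predictive policing entry (arrests driven by reported-crime data, which in turn reflects where police already patrol) can be sketched with a minimal, hypothetical simulation. The neighborhoods, rates, and allocation rule below are illustrative assumptions, not any department's actual system:

```python
def simulate(rounds, init_patrols, true_rate=0.10, total_patrols=10):
    """Deterministic toy model: recorded crime grows in proportion to
    patrol presence, and patrols are reallocated from recorded data."""
    patrols = dict(init_patrols)
    recorded = {hood: 0.0 for hood in patrols}
    for _ in range(rounds):
        for hood in patrols:
            # Police only record crime where they patrol, so expected
            # recorded crime is patrols * rate -- even though the true
            # rate is identical in every neighborhood.
            recorded[hood] += patrols[hood] * true_rate
        # Next round's patrols follow the skewed recorded data.
        total = sum(recorded.values())
        patrols = {hood: total_patrols * recorded[hood] / total
                   for hood in recorded}
    return recorded

# Two neighborhoods with the SAME true crime rate, but "A" starts
# with four times the patrol presence (historical over-policing).
counts = simulate(rounds=20, init_patrols={"A": 8, "B": 2})
print(counts)  # "A" ends with four times the recorded crime of "B"
```

The point of the sketch is that the disparity never corrects itself: because the algorithm only ever sees patrol-dependent data, the initial imbalance is preserved indefinitely, matching the criticism that predictive policing accentuates past prejudice rather than measuring crime.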
Notable software
Pegasus (spyware)
The tool is downloaded onto the target's phone and gives the user full access to and control of the device.[39] State governments have used it to spy on dissidents and journalists.[2] Examples of state use:
Perpetrating State | Use |
---|---|
Saudi Arabia | Pegasus was found on the phones of journalist Jamal Khashoggi and his family after the Assassination of Jamal Khashoggi.[40] |
United Arab Emirates | To monitor and detain journalists and dissidents including Ahmed Mansoor.[2] Mansoor, an Emirati human rights defender, openly criticized the government and was arrested in 2017. He would serve a 10-year prison sentence, be kept in solitary confinement, and be denied books, a bed, and basic hygiene.[56] |
Mexico | To monitor over 25 journalists looking into corruption, including Cecilio Pineda, whose device was marked for surveillance just weeks before his killing in 2017.[40] |
Morocco | To surveil and capture journalist Omar Radi after he criticized the government.[2] |
Spain | To spy on Catalan separatists.[2] |
Israel | In the AI-assisted targeting in the Gaza Strip.[2] |
Germany | Purchased the spyware for Federal Criminal Police Office (Germany) use.[57] |
Hungary | Surveilling journalists, including investigative reporter Szabolcs Panyi.[39] |
Belgium | Surveilling journalists.[2] |
Poland | Surveilling journalists.[2] |
Azerbaijan | Surveilling over 40 journalists including Sevinj Vagifgizi.[40] |
India | Surveilling over 40 journalists from almost every major media outlet including Siddharth Varadarajan, co-founder of independent news outlet The Wire (India).[40] |
Lavender
A risk assessment tool that relies on a surveillance network.[58]
Perpetrating State | Use |
---|---|
Israel | AI-assisted targeting in the Gaza Strip.[59] |
Where’s Daddy
Target tracking software.[60]
Perpetrating State | Use |
---|---|
Israel | AI-assisted targeting in the Gaza Strip.[61] |
Palantir
A surveillance and tracking tool.[62]
Perpetrating State | Use |
---|---|
United States, Department of Health and Human Services | Track and surveil migrants.[63] |
United States, Chicago Police Department | Predictive Policing.[64] |
German law enforcement | The police of Hamburg and Hesse used Palantir to surveil and enact predictive policing until ruled unconstitutional in February 2023.[65] |
Clearview AI
The technology, employed by law enforcement agencies and private companies, scoured the internet for over 3 billion images, including those from social media sites, often violating platform rules.[66]
Perpetrating State | Use |
---|---|
U.S. Immigration and Customs Enforcement | Surveilling immigrants.[29] |
Oculus
An AI-based detection system that scans online images and videos for banned content.[47]
Perpetrating State | Use |
---|---|
Russia | Monitoring and surveilling LGBTQ citizens.[2] |
See also
- Timeline of AI policy
- Timeline of AI safety
- Timeline of machine learning
- Timeline of ChatGPT
- Timeline of Google Gemini
- Timeline of OpenAI
- Timeline of large language models
References
- ↑ F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. Retrieved 13 November 2024.
- ↑ 2.00 2.01 2.02 2.03 2.04 2.05 2.06 2.07 2.08 2.09 2.10 2.11 2.12 2.13 2.14 "Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights" (PDF). europarl.europa.eu. May 2024. Retrieved 6 November 2024.
- ↑ Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
- ↑ Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
- ↑ F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. Retrieved 13 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
- ↑ 9.0 9.1 9.2 9.3 "China's Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App". hrw.org. 1 May 2019. Retrieved 23 October 2024.
- ↑ Wang, Maya (8 April 2021). "China's Techno-Authoritarianism Has Gone Global". hrw.org. Retrieved 23 October 2024.
- ↑ 11.0 11.1 11.2 11.3 "Myanmar: Facebook's systems promoted violence against Rohingya; Meta owes reparations". WEB.WEB. 29 September 2022. Retrieved 22 November 2024.
- ↑ 12.0 12.1 12.2 12.3 "The Social Atrocity: Meta and The Right to Remedy for The Rohingya". amnesty.org. 29 September 2022. Retrieved 22 November 2024.
- ↑ Zaleznik, Daniel (July 2021). "Facebook and Genocide: How Facebook contributed to genocide in Myanmar and why it will not be held accountable". systemicjustice.org. Retrieved 22 November 2024.
- ↑ Sinha, Amber (13 March 2024). "The Landscape of Facial Recognition Technologies in India". techpolicy.press. Retrieved 22 November 2024.
- ↑ Sudhir, K.; Sunder, Shyam (27 March 2020). "What Happens When a Billion Identities Are Digitized?". insights.som.yale.edu. Retrieved 22 November 2024.
- ↑ Panigrahi, Subhashish (April 2022). "TITLE". interactions.acm.org. Retrieved 22 November 2024.
- ↑ Sinha, Amber (13 March 2024). "The Landscape of Facial Recognition Technologies in India". techpolicy.press. Retrieved 22 November 2024.
- ↑ Banerjee, Arjun (9 April 2023). "India the surveillance state and the role of Aadhaar". countercurrents.org. Retrieved 22 November 2024.
- ↑ Jain, Anushka (3 December 2021). "Crores of pensioners to be verified using UIDAI-linked facial recognition app". medianama.com. Retrieved 22 November 2024.
- ↑ Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
- ↑ Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
- ↑ Cagle, Matt; Ozer, Nicole (22 May 2018). "Amazon Teams Up With Government to Deploy Dangerous New Facial Recognition Technology". aclu.org. Retrieved 21 November 2024.
- ↑ Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
- ↑ O’Brien, Matt (3 April 2019). "Face recognition researcher fights Amazon over biased AI". apnews.com. Retrieved 21 November 2024.
- ↑ George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
- ↑ George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
- ↑ George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
- ↑ Tkeshelashvili, Mariami; Saade, Tiffany (26 September 2024). "Decrypting Iran's AI-Enhanced Operations in Cyberspace". securityandtechnology.org. Retrieved 21 November 2024.
- ↑ 29.0 29.1 29.2 "Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software" (PDF). immigrantdefenseproject.org. 19 October 2020. Retrieved 8 November 2024.
- ↑ Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
- ↑ Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
- ↑ Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
- ↑ Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
- ↑ Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
- ↑ Hernandez, Joe (1 June 2021). "A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says". npr.org. Retrieved 21 November 2024.
- ↑ Hernandez, Joe (1 June 2021). "A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says". npr.org. Retrieved 21 November 2024.
- ↑ Kallenborn, Zachary (20 May 2021). "Was a flying killer robot used in Libya? Quite Possibly". thebulletin.org. Retrieved 21 November 2024.
- ↑ Kallenborn, Zachary (20 May 2021). "Was a flying killer robot used in Libya? Quite Possibly". thebulletin.org. Retrieved 21 November 2024.
- ↑ 39.0 39.1 39.2 39.3 39.4 "About the Pegasus Project". forbiddenstories.org. 18 July 2021. Retrieved 9 November 2024.
- ↑ 40.0 40.1 40.2 40.3 40.4 40.5 40.6 "TITLE|". amnesty.org. 19 July 2021. Retrieved 9 November 2024.
- ↑ Coates Ulrichsen, Kristian (9 June 2022). "Pegasus as a case study of evolving ties between the UAE and Israel". gulfstateanalytics.com. Retrieved 8 November 2024.
- ↑ Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. Retrieved 13 November 2024.
- ↑ Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. Retrieved 13 November 2024.
- ↑ Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. Retrieved 13 November 2024.
- ↑ Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. Retrieved 13 November 2024.
- ↑ "Russia using generative AI to ramp up disinformation, says Ukraine minister|". reuters.com. 16 October 2024. Retrieved 13 November 2024.
- ↑ Litvinova, Dasha (23 May 2023). "The cyber gulag: How Russia tracks, censors and controls its citizens". apnews.com. Retrieved 15 November 2024.
- ↑ Buziashvili, Eto (17 February 2023). "Russia takes next step in domestic internet surveillance". dfrlab.org. Retrieved 15 November 2024.
- ↑ Buziashvili, Eto (17 February 2023). "Russia takes next step in domestic internet surveillance". dfrlab.org. Retrieved 15 November 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ 54.0 54.1 "'AI-assisted genocide': Israel reportedly used database for Gaza kill lists". aljazeera.com. 4 April 2024. Retrieved 6 November 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ White, Rebecca (12 December 2023). "Ahmed Mansoor: the poet who spoke truth to power and paid a heavy price". securitylab.amnesty.org. Retrieved 8 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
- ↑ Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Killeen, Molly (16 February 2023). "German Constitutional Court strikes down predictive algorithms for policing". euractiv.com. Retrieved 13 November 2024.
- ↑ Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.