Timeline of AI ethics violations

From Timelines

Big picture

This timeline was completed in December 2024. These events are still evolving, and ongoing investigations are being conducted. Due to limited information access and transparency around AI developments, there are likely other cases of AI-related human rights abuses and ethics controversies that have not been documented yet.

Olivia Mora authored this Timeline.

Year Details
2008
2009
  • India creates its first centralized identification system, Aadhaar, providing each citizen a unique 12-digit ID number.
2011
2012
2013
2014
2015
  • Israeli firm Cellebrite begins offering AI-powered evidence-processing tools to law enforcement.
2016
  • The RAND Corporation finds the Chicago Police Department's Strategic Subject List to be ineffective.
  • Australian agency Centrelink issues inaccurate debt notices, generated by an automated decision-making system known as the Robodebt scheme, to over 500,000 Australians.
  • The Chinese government uses the AI-powered Integrated Joint Operations Platform for mass surveillance of Turkic Muslims and Uyghurs of Xinjiang, resulting in mass detention and further oppression of the surveilled population.
  • An internal Facebook study acknowledges that its content recommendation system can increase extremism.
  • The NSO Group begins contracting Pegasus (spyware) to government clients, launching a massive secret surveillance operation that would grow to serve over 50 countries.
  • Vladimir Putin orders an influence campaign targeting the 2016 United States elections.
2017
2018
  • Mark Zuckerberg promises to increase the number of Burmese-speaking content moderators in the wake of the Rohingya genocide.
  • India implements facial recognition technology using Aadhaar identification data for citizens to access services.
  • Amazon (company)'s facial recognition software, Amazon Rekognition, is shown to have a disproportionate error rate when identifying women and people of color.
  • Facebook's recommendation algorithms fuel false Islamophobic claims and calls for violence against Muslims in Sri Lanka. Mosques and Muslim-owned businesses, shops, and homes are destroyed, coupled with violence against Muslims in Ampara and Digana. The country bans the app for three days.
  • The European Union pilots "iBorderCtrl," an AI-assisted border control tool that analyzes travelers' "micro-gestures."
  • Twitter hosts misinformation and disinformation in the months leading up to the 2018 Brazilian general election.
2019
  • Predictive policing is shut down in Chicago.
  • The Robodebt scheme is ruled unlawful by an Australian court.
  • The European Union's iBorderCtrl trial ends.
  • The government of Iran integrates AI-based surveillance technologies into its legislative framework.
  • YouTube is accused of using algorithms biased against LGBTQ+ people and of demonetizing queer content.
  • Cellebrite publicly announces its Universal Forensic Extraction Device (UFED) for law enforcement to unlock and extract files from phones.
2020
2021
  • A top European Union court hears a case brought against iBorderCtrl by digital rights activists.
  • The Pegasus Project (investigation) reveals that the Israeli company NSO Group offered its surveillance services (using Pegasus (spyware)) to over 50 countries to spy on over 50,000 targets including activists, human rights defenders, academics, businesspeople, lawyers, doctors, diplomats, union leaders, politicians, heads of state, and at least 180 journalists.
  • The Department of Homeland Security (DHS) receives $780 million for border technology and surveillance (drones, sensors, and other tech to detect border crossings).
  • The U.S. Customs and Border Protection (CBP) deploys a system of autonomous surveillance towers equipped with radar, cameras, and algorithms that use AI systems trained to analyze objects and movement.
2022
2023
2024

Full timeline

Inclusion criteria

The timeline documents incidents of human rights violations in which AI was involved. Rows are included according to the following criteria:

  • AI involvement: the incident must involve the significant use of AI technologies.
  • Human rights impact: the incident must have violated human rights defined by international law and standards such as the Universal Declaration of Human Rights (UDHR) and subsequent treaties. Examples of human rights abuses include privacy violation, war and destruction, discrimination based on race, ethnicity, religion, gender, or other protected characteristics, restricting association and movement, inhibiting expression, preventing asylum, and arbitrary detention.
  • State or corporate responsibility: the incident must involve a state or corporate entity that has used AI technology to abuse human rights.
  • Verifiable evidence: only incidents with credible and verifiable evidence from sources such as news articles, human rights reports, official documents, and academic research are included.
  • Geographical range: global.
  • Relevance or significance: incidents with significant human rights violations will be prioritized.

Timeline of AI ethics violations

Onset Region Perpetrators Name AI Type Right Violated Details
2008 United States United States Law Enforcement Agencies Predictive Policing in the United States Predictive algorithmic scoring Privacy, presumption of innocence Predictive policing refers to the use of algorithms to analyze past criminal activity data, identify patterns, and predict and prevent future crimes.[1] However, police departments can only use data from reported crimes, which accentuates past prejudices in arrests and leads to over-policing of Black and Latinx communities in the United States.[2] Predictive policing also threatens the Fourth Amendment to the United States Constitution, which requires reasonable suspicion before arrest.[3] The LA Police Department starts working with federal agencies to explore predictive policing in 2008; the New York and Chicago Police Departments would start testing their systems in 2012.[4] The Chicago Police Department would create the Strategic Subject List (SSL) algorithm in 2012, which assigns individuals a score based on their likelihood of involvement in a future crime.[5] In 2016, the RAND Corporation would find that people on this list were no more or less likely to be involved in a shooting than a control group but were more likely to be arrested for one.[6] By 2018, almost 400,000 people had an SSL risk score, disproportionately men of color.[7] Predictive policing would be shut down in Chicago and LA in 2019 and 2020 due to evidence of its inefficacy.[8]
2011 Brazil Brazilian city and state governments Facial Recognition Technology in Brazil Image recognition Privacy, presumption of innocence Brazil implements Facial Recognition Technology (FRT) in public spaces in 2011, developed in the context of “smart city” programs and operating with no legal framework for the use of FRT.[9] FRT provides real-time analytics to expedite the identification of criminals, stolen cars, missing persons, and lost objects.[10] The country would have three state and city FRT projects in 2018 and nearly 300 in 2024.[11] FRT disproportionately misidentifies black Brazilians, who make up 56% of the population.[12] In 2019, 90% of people arrested for nonviolent crimes using FRT would be black,[13] indicating severe disproportionality in FRT’s influence on arrest and detention.
2011 England London's Metropolitan Police Service Gangs Matrix Predictive algorithmic scoring Privacy, presumption of innocence In response to the 2011 England riots, the Metropolitan Police Service creates and implements the Gangs Matrix, an algorithmic database of individuals thought to be in a gang and likely to commit violence.[14] The algorithm considers data such as social media activity, ethnicity, and known criminal activity.[14] The term ‘gang’ is not defined, and a police officer needs only two pieces of ‘verifiable intelligence’ to place a subject on the list.[15] Sharing a YouTube video containing gang signs, or any other social media interaction with gang-related symbols, could land an individual in the database, targeting certain subcultures.[15] Of the individuals in the database, 80% would be between the ages of 16 and 24, 78% would be black males, 75% would be victims of violence, 35% would never have committed a serious offense, and 15% would be minors.[14] In October 2017, the Matrix would hold 3,806 people, some as young as 12,[15] whose status would lead to struggles to find employment, government benefits, and housing.[14] The Matrix, which does not process personal data fairly, retains data of individuals at zero or low risk, and retains data longer than necessary, would be discontinued in February 2024.[14]
2013 China Government of China Social Credit System Big data analytics Privacy, freedom of association and movement, presumption of innocence The government of China begins planning a social credit system in 2012, to be rolled out by 2020, measuring the trustworthiness of individuals.[16] In 2014 the State Council of the People's Republic of China would release the system's founding document, planning to roll out pilot systems.[17] While the legal framework for the state-wide social credit system would not be released until 2022, the system inspired local and city governments to implement their own credit systems, promoting state-sanctioned moral values through incentives and punishments.[18] In 2013, Rongcheng would implement one of the better-known social credit scoring systems. The rankings consider traffic tickets, city-level rewards, donations,[19] shopping habits, and online speech,[20] and would affect eligibility for government jobs, placement of children in desired schools, and credit card applications.[21] Those deemed trustworthy can receive discounts on heating bills and favorable bank loans.[22] By 2017, the Supreme People’s Court would have blacklisted more than 7 million individuals, banning them from air travel, high-speed trains, and luxury purchases.[23]
2015 International Law enforcement use of Cellebrite Cellebrite AI-powered products used for surveillance Big data analytics, image recognition Right to privacy, presumption of innocence Israeli firm and law enforcement contractor Cellebrite begins incorporating AI-powered evidence-processing tools in 2015 with image classification and in 2018 with its service “Pathfinder.”[24] The company eventually offers other AI-enhanced digital evidence analysis tools: Guardian, Smart Search, Physical Analyzer, Autonomy, and Inspector.[24] In 2019 the company would publicly announce its Universal Forensic Extraction Device (UFED), to be used by law enforcement to unlock and extract files from any iOS device and recent Android (operating system) phones.[25] Cellebrite would be the most prominent maker of UFEDs, enabling police to access information from phones and cloud services.[26] Cellebrite’s data extraction tools would be involved in multiple privacy and surveillance controversies globally. In the United States, domestic police and U.S. Customs and Border Protection would claim the authority to search and gain full access to phones without warrants.[27] In 2024, Serbian authorities would use Cellebrite tools to unlock the phones of activists and journalists and infect them with NoviSpy spyware, obtaining their personal information.[28]
2016 Australia Centrelink Robodebt scheme Predictive algorithmic scoring Privacy The Australian agency Centrelink enacts an automated decision-making system to identify overpayments to welfare recipients.[29] The system, named Robodebt, is not tested prior to rollout and generates inaccurate debt notices.[30] Over 500,000 Australians on welfare are affected, some incorrectly told they owe thousands of dollars to the government.[31] There would be multiple suicides and reports of depression and anxiety over the payment notices.[32] The government would defend the system until a 2019 court decision ruled Robodebt unlawful.[33]
2016 Xinjiang, China Government of China Mass Surveillance in China of Ethnic Minorities Image recognition, natural language processing, voice recognition, big data analytics, automated decision-making, geospatial analytics Privacy, presumption of innocence, freedom of association and movement Chinese police and other government officials use the AI-powered application Integrated Joint Operations Platform (IJOP) for mass surveillance of the predominantly Turkic Muslim and Uyghur population of Xinjiang.[34] The IJOP collects personal information, location, identities, electricity and gas usage, personal relationships, and DNA samples (which can be used to infer ethnicity), and then flags suspicious individuals, activities, or circumstances.[34] The IJOP defines foreign contacts, donations to mosques, lack of socialization with neighbors, and frequent usage of the front door as suspicious.[34] Individuals deemed suspicious are investigated and can be sent to mass political education camps and facilities where millions of Turkic Muslims and Uyghurs are subjected to movement restriction, political indoctrination, and religious repression.[34] Techno-authoritarian surveillance occurs throughout China, contrary to the internationally guaranteed right to privacy.
August 2017 Myanmar Facebook Facebook's role in the Rohingya genocide Content recommendation algorithm, natural language processing War/destruction Myanmar security forces begin a campaign of ethnic cleansing against the Rohingya people in Rakhine State, causing 700,000 to flee from systematic murder, rape, and the burning of homes.[35] Meta Platforms (formerly Facebook) is increasingly turning towards AI to detect “hate speech.”[36] However, its detection algorithms proactively amplify content that incites violence against the Rohingya people, who already face long-standing discrimination.[36] Facebook favors inflammatory content in its AI-powered engagement-based algorithmic systems, which power news feeds, ranking, recommendation, and group features.[35] Internal Meta studies dating back to 2012 indicate the corporation’s awareness that its algorithms could lead to real-world harm. In 2014, Myanmar authorities even temporarily blocked Facebook due to an outbreak of ethnic violence in Mandalay.[35] A 2016 Meta study documented acknowledgment that the recommendation system can increase extremism.[35] Facebook facilitates peer-to-peer interaction affirming harmful narratives targeting the Rohingya, hosts massive disinformation campaigns originated by the Myanmar military, and knowingly proliferates a product that exacerbates political division and the spread of disinformation.[37] A 2022 Global Witness investigation would reveal Meta’s failure to detect blatant anti-Rohingya and anti-Muslim content, even after Mark Zuckerberg promised in 2018 to increase the number of Burmese-speaking content moderators.[36] Facebook’s content-shaping algorithm is designed to maximize user engagement and, therefore, profit, in this case contributing to the genocide of the Rohingya.[36]
October 2018 European Union border European Union iBorderCtrl Image recognition, predictive algorithmic scoring Privacy, presumption of innocence, freedom of association and movement In October 2018, the European Union announces the funding of a new automated border-control system called iBorderCtrl, to be piloted in Hungary, Greece, and Latvia.[38] The program is administered to travelers at the border by a virtual border guard and analyzes “micro-gestures” to determine if the traveler is lying.[39] “Honest” travelers can cross the border with a code, while those deemed to be lying are transferred to human guards for further questioning.[38] The development of the technology lacks transparency.[38] It also relies on the widely contested science of “affect recognition” and on facial recognition technology, shown to be inherently biased.[38] The trial would end in 2019, and a top EU court would hear a case against the technology brought by digital rights activists in February 2021.[40]
2018 Sri Lanka Facebook Facebook algorithm spreads Islamophobic content in Sri Lanka Content recommendation algorithm War/destruction, freedom of expression A viral Facebook rumor spreads across Sri Lanka, falsely claiming that police seized 23,000 sterilization pills from a Muslim pharmacist in Ampara, supposedly unveiling a Muslim plot to sterilize and overthrow the Sinhalese majority.[41] Violence ensues, including the beating of a Muslim shop owner and the destruction of his shop in Ampara.[42] More viral Facebook videos calling for violence spark the destruction of Muslim-owned shops and homes, and the death of 27-year-old Abdul Basith, trapped inside a burning storefront in Digana.[43] Facebook officials ignore repeated warnings of potential violence, and the app continues to push the inflammatory content that keeps people on the site.[44] Sri Lanka blocks Facebook (including platforms owned by Facebook, such as WhatsApp) for three days in March in response to the calls to attack Muslims.[45]
2018 India Government of India Mass surveillance of Indian citizens with facial recognition technology using Aadhaar data Image recognition Privacy, freedom of association and movement The Indian government rolls out Facial Recognition Technology (FRT), beginning with telecommunication companies using data collected by the Unique Identification Authority of India (UIDAI).[46] Before 2009, there was no centralized identification in India, sparking the creation of Aadhaar, a unique 12-digit ID number assigned to over 1 billion Indian citizens.[47] The Aadhaar database includes biometric and demographic information, which law enforcement can use for FRT.[48] FRT using Aadhaar data would be used by citizens to access public benefits and services, and FRT would infiltrate India’s telecommunications and travel, policing, public health, welfare programs, education, and elections.[49] These FRT systems are used for racial surveillance[50] and have higher inaccuracy in racially homogenous groups.[51]
2018 United States Amazon Rekognition Biased facial recognition software Image recognition Privacy It is reported in 2018 that Amazon’s facial recognition software, Amazon Rekognition, has a disproportionate error rate when identifying women and people of color.[52] The Amazon (company) service is sold to the public but heavily marketed to US law enforcement agencies.[53] Amazon lists the city of Orlando, Florida, and the Washington County Sheriff’s Office in Oregon among its customers.[54] Amazon claims the software can track people in real time through surveillance cameras, scan body camera footage, and identify up to 100 faces in a single image, a capability of particular concern in an era of unprecedented protest attendance.[55] In 2019, a Massachusetts Institute of Technology researcher would also find higher error rates in classifying darker-skinned women than lighter-skinned men.[56]
June 2019 International YouTube AI discrimination in monetizing LGBTQ+ YouTube videos Content recommendation algorithm Freedom of expression, right to livelihood After an investigation, YouTube content creators allege that YouTube’s AI monetization algorithm flags videos with LGBTQ-related words as non-advertiser-friendly, monetarily punishing videos tagged “gay,” “transgender,” or “lesbian.”[57] Due to a lack of consistently available human moderators, YouTube relies in part on AI algorithms to take down inappropriate videos.[58] YouTube denies the allegation, claiming it aims to protect users from hate speech.[59] In August 2019, a group of LGBTQ+ content creators would file a class action lawsuit alleging unlawful content regulation, distribution, and monetization practices that stigmatize, restrict, block, demonetize, and financially harm the queer community.[60]
2019 Iran Government of Iran Facial recognition software to target Iranian protesters Image recognition Privacy, freedom of association and movement, presumption of innocence The Iranian government integrates AI-based surveillance technologies into its legislative framework, enabling the identification and detention of protesters by positioning high-definition surveillance equipment to capture public activity.[2] In 2021, China would become Iran’s biggest technological investor, more than doubling the government's stock of high-definition surveillance video recorders.[61] In 2022, after the onset of the Mahsa Amini protests, the Iranian government would adopt legislation laying out the use of AI-assisted facial recognition tools to enforce morality codes and identify women not abiding by hijab mandates.[62] More than 20,000 arrests and 500 killings of protesters would follow.[63] In 2024 Iran would make an AI ethics deal with Russia to encourage technological cooperation and investment.[64] Iran has also been accused of analyzing citizens' social media engagement and creating AI-driven bots and automated social media accounts to flood platforms with regime-sanctioned content.[2]
March 2020 Libya Government of National Accord Possibly the first wartime autonomous drone kill Automated decision-making, weapon War/destruction Political unrest in Libya leads to conflict between the UN-backed Government of National Accord and forces aligned with Khalifa Haftar.[65] In a March 2020 skirmish, Haftar’s troops are hunted down and engaged by an autonomously capable drone.[66] The device is a Turkish-made STM Kargu-2 loitering drone, able to use machine learning-based object classification to select and engage targets and capable of swarming.[67] While the UN report on the March 2020 incident doesn’t specifically state that the drone was used autonomously, and only heavily implies casualties, if confirmed this would be the first incident of battlefield deaths caused by autonomous robots.[68] Autonomous weapons could rely on biased data and result in disproportionate battlefield deaths among protected demographics.
August 2020 United Kingdom Ofqual 2020 United Kingdom school exam grading controversy Predictive algorithmic scoring Right to livelihood In-person GCSE and A-level exams in the UK are disrupted by the COVID-19 pandemic.[69] These exams influence where students can work and attend university, making them central to students' immediate futures.[70] The Office of Qualifications and Examinations Regulation (Ofqual) requests that schools submit grades and rank-order predictions made by teachers.[71] Assuming the teacher rankings would be biased, Ofqual creates a scoring algorithm, and 40% of students, disproportionately working class, receive an exam score downgraded from their teacher predictions.[72] Protests break out on August 16, and Ofqual announces that students should be awarded the higher of the teacher-predicted score and the algorithm score.[73] Many students would already have lost slots at their preferred universities by the time the scores were readjusted.[74] The algorithm, intended to make the system fairer, would harm lower-income students the most.
2020 United States U.S. Immigration and Customs Enforcement (ICE) ICE contracts Clearview AI Image recognition, big data analytics Privacy, presumption of innocence, freedom of association and movement The American Civil Liberties Union (ACLU) files a Freedom of Information Act (United States) (FOIA) request after US Immigration and Customs Enforcement (ICE) purchases Clearview AI technology.[75] Clearview AI is a facial recognition software.[76] The technology, employed by law enforcement agencies and private companies, scoured the internet for over 3 billion images, including images from social media sites, often in violation of platform rules.[77] Using the controversial data-scraping tool, ICE can now deploy mass surveillance to identify and detain immigrants.[75] United States government agencies have a history of mass surveillance. In 2017, the DHS, ICE, and the Department of Health and Human Services used Palantir Technologies to tag, track, locate, and arrest 400 people in an operation that targeted family members and caregivers of unaccompanied migrant children.[78] The FBI and ICE have searched state and federal driver’s license databases to find undocumented immigrants using facial recognition.[79][2][80] Facial recognition technology is shown to be less accurate in identifying women and individuals with darker skin,[2] therefore discriminating against migrants of color and women.
July 2021 International NSO Group Pegasus Project (investigation) Natural language processing, geospatial analytics Privacy, presumption of innocence, freedom of expression Amnesty International and Forbidden Stories release the Pegasus Project (investigation). The investigation reveals that the Israeli company NSO Group contracted Pegasus (spyware) to over 50 countries to spy on over 50,000 surveillance targets from 2016 to 2021.[81] The NSO Group’s clients include Azerbaijan, Bahrain, Hungary, India, Kazakhstan, Mexico, Morocco, Rwanda, Saudi Arabia, Togo, and the United Arab Emirates (UAE).[82] The UAE is revealed to be one of the most active users of Pegasus, having targeted 10,000 people, including Ahmed Mansoor.[83] The targets across states included activists, human rights defenders, academics, businesspeople, lawyers, doctors, diplomats, union leaders, politicians, several heads of state, and at least 180 journalists.[81] The spyware, used by repressive governments to silence dissent, is surreptitiously installed on victims’ phones and allows the perpetrator complete device access (including messages, emails, cameras, microphones, calls, contacts, and media).[82] The NSO Group claims to sell its products to government clients to collect data from the mobile devices of individuals suspected of involvement in serious crimes or terrorism, and that the leaked state surveillance was misuse that would be investigated.[81] The NSO Group would not take further action to stop its tools from being used to unlawfully target and surveil citizens, would deny any wrongdoing, and would claim its company is involved in a lifesaving mission.[82]
2021 United States Department of Homeland Security, U.S. Customs and Border Protection, and U.S. Immigration and Customs Enforcement AI used at the Mexico-United States Border during the Presidency of Joe Biden Image recognition, voice recognition, natural language processing, big data analytics, automated decision-making, geospatial analytics, generative AI Privacy, right to seek asylum, freedom of association and movement, presumption of innocence In 2021 the Department of Homeland Security (DHS) receives $780 million for border technology and surveillance (drones, sensors, and other tech to detect border crossings).[84] The U.S. Customs and Border Protection (CBP) deploys a system of autonomous surveillance towers equipped with radar, cameras, and algorithms that use AI systems trained to analyze objects and movement.[85] These towers, able to distinguish between humans, animals, and objects, are part of the Biden administration's push for “smart borders.”[86] The United States also utilizes small unmanned aerial systems (remote drones developed for military operations) to identify and surveil migrants.[87] Local border police use facial recognition technology, cellphone tracking, license-plate cameras, drones, and spy planes, sparking debate about the privacy rights of anyone in the region.[88] The expansion of AI-bolstered border surveillance infrastructure is associated with an increase in deaths at the border, pushing migrants to more remote and dangerous routes.[89] The CBP requires migrants to use the mobile application CBP One upon arrival at the US-Mexico border, submitting biometric and personal data to be considered for asylum.[90] However, the app is significantly worse at recognizing the faces of black and brown people, which would lead to a reduction in the number of black asylum seekers after its rollout in 2023.[91] Asylum seekers from Haiti and African countries are often blocked from entry at the southern border, victims of a biased algorithm and its inability to recognize faces with darker skin tones.[92] The DHS is also building a $6.158 billion biometric database, the Homeland Advanced Recognition Technology (HART) system, to collect, organize, and share data on 270 million people (including children).[93] It is expected to be the largest biometric database in the US, aggregating the biometric data of individuals (without their consent) from government agencies to create digital profiles used to target migrants for surveillance, raids, arrests, detention, and deportation.[93]
May 2022 United States and allied countries Russia Doppelganger (disinformation campaign) Natural language processing Freedom of expression, privacy The Kremlin launches a disinformation campaign, promoting pro-Russian narratives and disseminating disinformation through cloned websites, fake articles, and social media.[94] The Kremlin utilizes AI to create disinformation content, buys domains similar to legitimate sites, and spreads fearmongering across the West.[94] The project would include a total of 7,983 disinformation campaigns, mimicking German, American, Italian, British, and French media outlets and websites, including The Guardian, Fox News, Ministry for Europe and Foreign Affairs (France), and NATO, resulting in a total of 828,842 clicks.[95] The Kremlin coordinates and finances the campaign, undermining Ukrainian objectives, promoting pro-Russia policies, and stoking internal Western tension.[96]
2022 Ukraine Russia Russia’s use of AI in wartime in the context of the Russian invasion of Ukraine Natural language processing, big data analytics, automated decision-making, content recommendation algorithm, weapons, geospatial analytics War/destruction The February 2022 Russian invasion of Ukraine brings a new age of AI in wartime. While cyberattacks against Ukraine predated the invasion, Russia deploys AI-driven cyberattacks on Ukrainian infrastructure, communications, and allies at an increased rate.[97] There are reports of the Ministry of Defense (Russia) using AI for data analysis and decision-making in the battlespace and prioritizing autonomous weapons research.[98] Russia is suspected of utilizing unmanned aerial vehicles (UAVs) equipped with AI-powered cameras and sensors for reconnaissance missions and of using neural networks to identify strike targets.[99][100] OpenAI would report in May 2024 two covert influence operations from Russia using AI to spread content on social media defending the invasion.[101]
February 2023 Russia Roskomnadzor Russian surveillance of civilians (especially LGBTQ people) in the context of the Russian invasion of Ukraine Image recognition, natural language processing, big data analytics Privacy, freedom of expression The Russian Internet was isolated from the world after the 2019 Sovereign Internet Law, ramping up AI tools for domestic repression and surveillance, content-blocking mechanisms, and the policing of dissent.[2] The isolation gives Russia enhanced censorship and monitoring of the Russian public and information landscape with regard to the invasion. In February 2023, Russian federal agency Roskomnadzor, responsible for monitoring, controlling, and censoring Russian media, launches the AI-based detection system Oculus.[2] The program analyzes over 200,000 photos a day (as opposed to roughly 200 a day by a human) and looks for banned content in online images and videos.[102] The program scans for suicide-related, pro-drug, and extremist content, as well as calls for protests. It also searches for pro-LGBTQ content, cracking down on the community as part of a framing tactic in the Ukrainian war that claims to defend traditional Russian values.[103] The Kremlin claims the program can identify people behind a beard or a mask and determine age, implying the ability to easily identify and target LGBTQ content creators.[104]
February 2023 International Team Jorge Team Jorge Generative AI, natural language processing Privacy, presumption of innocence, freedom of expression An investigation by journalists from over 30 outlets reveals that an Israel-based team of contractors, led by former Israeli special forces operative Tal Hanan, has been working for profit under the radar on disinformation campaigns and election interference for decades.[105] Hanan leads the team under the pseudonym "Jorge."[105] The investigation reveals that the team worked on 33 presidential-level disinformation campaigns, as well as several other voting campaigns on almost every continent, including the 2016 election of Donald Trump to the US presidency and Brexit.[106] Team Jorge offers various technological services to those willing to pay, including active intelligence (hijacking email and encrypted messaging accounts, including Gmail and Telegram), deceit and defamation (leaking documents and fabricating scandals to harm rivals), vote suppression (disrupting democratic processes and creating stolen-election campaigns), perception hacking (creating conspiracies and fake blogs), and influence ops (using an army of avatars to spread disinformation).[107] The team uses the following tools to achieve its objectives: Profiler (a tool that creates an intel profile by data-scraping), Global Bank Scan (financial intel reports on targets), Hacks (messaging accounts), AIMS (a platform that creates avatars and deploys them in disinformation campaigns), Blogger (a system for mass-creating fake blogs to spread content), and Residential Proxy (a tool that separates Jorge and his clients from the disinformation campaigns).[108] The software package Advanced Impact Media Solutions (AIMS) enables the team’s army of 30,000 avatars, which have unique digital backgrounds and authentic-looking profile pictures, to create and spread propaganda and disinformation at the client’s behest.[109] AIMS can use keywords to create posts, articles, comments, or tweets in any language with positive, negative, or neutral tones.[110][111]
October 2023 Gaza Strip Israel Defense Forces (IDF) AI-assisted targeting in the Gaza Strip Generative AI, automated decision-making, weapons, geospatial analysis, big data analytics War/destruction The Israel Defense Forces (IDF) implements AI-assisted targeting in the Israeli bombing of the Gaza Strip.[112] The IDF itself acknowledges using AI to accelerate targeting, increasing the tempo of operations and the pool of targets for assassination.[113] The Israeli military uses the AI tool Pegasus (spyware) to locate and collect data on individuals. It feeds this data through automated targeting platforms like Where’s Daddy, Gospel, and Lavender, which use facial recognition, geolocation, and cloud computing to generate targets, including journalists, human rights defenders, academics, diplomats, union leaders, politicians, and heads of state.[114] Lavender relies on a surveillance network and assigns each Gazan fed into it a score from 1 to 100, estimating their likelihood of being a Hamas militant.[115] The tool is responsible for generating a kill list of as many as 37,000 people, and Israeli intelligence officials report the tool has a 10% error rate (the true error rate could be greater, depending on the IDF’s classification of Hamas militants).[116] The Lavender score is fed into “Where’s Daddy,” which uses AI to determine when the individual has returned home, marking them for assassination.[117] As of April 2024, the Israeli military hopes to sell its targeting tools to foreign entities.[116]

AI in elections

With the increasing sophistication of artificial intelligence (AI) technologies, the threat of AI-generated misinformation and disinformation influencing elections has become a growing concern. This list highlights well-documented instances of AI involvement in elections, including deepfakes, bots, and other forms of AI-powered manipulation. Please note that this is not an exhaustive list, as the scope of AI's impact on elections is still evolving, and new cases may emerge. The following examples illustrate how AI has been used to influence electoral outcomes, undermine democratic processes, or manipulate public opinion.

Election Year Country Name AI Type Details
2016 United States Russian interference in the 2016 United States elections Deepfakes, natural language processing Vladimir Putin orders an influence campaign targeting the 2016 United States elections.[118] Russia’s goals are to exacerbate political division, destabilize democracy,[118] and influence American voting behavior in favor of Donald Trump.[119] Russia implements cyber-attacks and disinformation campaigns and dispatches an army of bots on social media sites.[118]
2018 Brazil Twitter bots ahead of 2018 Brazilian general election Natural language processing In the months leading up to the 2018 Brazilian general election, Twitter is rife with bots spreading misinformation and disinformation.[120] Between June 22 and 23, over 20% of retweets related to the two leading candidates, Luiz Inácio Lula da Silva and Jair Bolsonaro, are performed by bots.[121]
2022 Kenya Disinformation in the 2022 Kenyan general election Generative AI, deepfakes From May 2022 to July 2022, the 11.8 million Kenyans on social media are exposed to disinformation including fake polls, news, and videos.[122] A deepfake featuring a candidate covered in blood, implying the candidate is a murderer, garners over 505,000 views on TikTok.[123] Other popular deepfakes include Barack Obama endorsing a candidate and doctored videos of candidates, ethnic groups, and the son of Kenya’s former president.[124] The hundreds of false and misleading claims about the election are not specific to one party or campaign.[125]
2023 Nigeria AI-generated disinformation in the 2023 Nigerian presidential election Generative AI, deepfakes The 2023 Nigerian election cycle is preceded by AI-generated deepfakes flooding social media.[126] In the months leading up to the election, AI-generated disinformation includes deepfakes of Nollywood actors and American celebrities, inorganic hashtags, and misinformation spread by social media influencers and bot accounts endorsing multiple nominees.[127]
2023 Burkina Faso Synthesia (company) used for creating deepfakes Generative AI, deepfakes Synthesia is a London-based company that explicitly offers its services for creating marketing material and internal presentations.[128] However, the technology can be used to create deepfakes, and videos surface in Burkina Faso in 2023 claiming international support for the West African junta.[129] Synthesia bans the accounts that created the 2023 election propaganda videos and strengthens its content review processes.[130] Synthesia would enact policy changes in 2024, allowing the creation and distribution of political content only for users with an enterprise account and a custom AI avatar, in hopes of protecting global democratic processes.[131]
2024 United States Deepfakes in the 2024 United States presidential election Generative AI, deepfakes Deepfakes are not restricted by the United States Federal Election Commission (FEC), and they circulate in the leadup to the 2024 US election.[132] The Ron DeSantis campaign releases an AI-generated video of Donald Trump hugging Anthony Fauci,[133] the Trump campaign spreads a deepfake of Ron DeSantis in a women’s suit, and the Republican National Committee releases a video of AI-generated images, including China invading Taiwan, claiming to depict a future in which Joe Biden is re-elected.[134] Nearly 5,000 people in New Hampshire receive a robocall with an AI-generated voice resembling Joe Biden’s, telling them not to vote.[135]
2024 India Misinformation and disinformation in the 2024 elections in India Generative AI, deepfakes Political parties use AI to generate content leading up to the 2024 elections in India.[136] The Indian National Congress (INC) releases two videos featuring deepfakes of Bollywood stars with large followings criticizing Narendra Modi (who belongs to the Bharatiya Janata Party (BJP)); the videos garner millions of views.[137] They claim Modi fails to keep campaign promises and is harming the economy, and the actors whose likenesses are used lodge a police complaint against the social media handles.[138] The Dravida Munnetra Kazhagam (DMK) party, which rules the southern state of Tamil Nadu, plays AI-generated videos of its dead former leader at political events to garner support.[139] Over half of India’s population of more than 1 billion people are active internet users.[140] The 2024 Global Risks Report cites mis- and disinformation as the most severe global risk, and India as the country facing the highest risk in this regard.[141]
2024 Venezuela Synthesia (company) used for creating deepfakes to influence 2024 Venezuelan presidential election Generative AI, deepfakes Synthesia is a London-based company that explicitly offers its services for creating marketing material and internal presentations.[142] However, the technology can be used to create deepfakes, and it is used by the Venezuelan state ahead of the 2024 election to spread disinformation on behalf of the dictator Nicolás Maduro.[143] Two pro-government videos surface in 2023, featuring AI deepfakes based on real actors’ likenesses and boasting of a healthy Venezuelan economy.[144] Synthesia would enact policy changes in 2024, allowing the creation and distribution of political content only for users with an enterprise account and a custom AI avatar, in hopes of protecting global democratic processes.[131]
2024 Pakistan Deepfakes in the 2024 Pakistani general election Generative AI, deepfakes Deepfakes are at the center of the digital debate in the Pakistani elections. As of January, 110 million of Pakistan’s 240 million people are active internet users, leaving them exposed to the deepfakes that circulate before the election.[145] Imran Khan, the ex-prime minister, campaigns from prison, his team using an AI tool to generate speeches and videos in his likeness.[145]

Notable software

Pegasus

The tool is downloaded onto the target's phone and gives the user full access to and control of the device.[81] State governments have used it to spy on dissidents and journalists.[2] Examples of state use:

Perpetrating State Use
Saudi Arabia Pegasus was found on the phones of journalist Jamal Khashoggi and his family after the Assassination of Jamal Khashoggi.[82]
United Arab Emirates To monitor and detain journalists and dissidents including Ahmed Mansoor.[2] Mansoor, an Emirati human rights defender, openly criticized the government and was arrested in 2017. He would serve a 10-year prison sentence, be kept in solitary confinement, and be denied books, a bed, and basic hygiene.[146]
Mexico To monitor over 25 journalists looking into corruption, including Cecilio Pineda, whose device was marked for surveillance just weeks before his killing in 2017.[82]
Morocco To surveil and capture journalist Omar Radi after he openly criticized the government.[2]
Spain To spy on Catalan separatists.[2]
Israel In the AI-assisted targeting in the Gaza Strip.[2]
Germany Purchased the spyware for Federal Criminal Police Office (Germany) use.[147]
Hungary Surveilling journalists, including investigative reporter Szabolcs Panyi.[81]
Belgium Surveilling journalists.[2]
Poland Surveilling journalists.[2]
Azerbaijan Surveilling over 40 journalists including Sevinj Vagifgizi.[82]
India Surveilling over 40 journalists from almost every major media outlet including Siddharth Varadarajan, co-founder of independent news outlet The Wire (India).[82]

Lavender

A risk assessment tool that relies on a surveillance network.[148]

Perpetrating State Use
Israel AI-assisted targeting in the Gaza Strip.[149]

Gospel

The Gospel is an AI-driven target creation platform that provides automated recommendations for attack targets.[150]

Perpetrating State Use
Israel AI-assisted targeting in the Gaza Strip.[151] The application accelerates the rate at which targets are generated for the Israel Defense Forces to locate and assassinate.[152]

Where’s Daddy

Target tracking software.[153]

Perpetrating State Use
Israel AI-assisted targeting in the Gaza Strip.[154]

Palantir

A surveillance and tracking tool.[155]

Perpetrating Entity Use
United States, Department of Health and Human Services Tracking and surveilling migrants.[156]
United States, Chicago Police Department Predictive Policing until shut down in 2019.[157]
German law enforcement The police of Hamburg and Hesse use Palantir to surveil and enact predictive policing until ruled unconstitutional in February 2023.[158]

Clearview AI

Clearview AI is a facial recognition software employed by law enforcement agencies and private companies.[159] While compiling its database, the company scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules.[160]

Perpetrating Entity Use
U.S. Immigration and Customs Enforcement Surveilling immigrants.[75]

Oculus

A surveillance technology that can analyze over 200,000 photos a day (as opposed to 200 a day by a human) and looks for banned content in online images and videos.[161]

Perpetrating State Use
Russia Monitoring and surveilling LGBTQ citizens.[2]

Synthesia

Synthesia is a London-based company whose stated services are creating marketing material and internal presentations, but its technology has been used to create politically charged deepfakes in breach of its terms.[162]

Perpetrating State Use
China Chinese state-aligned actors used Synthesia's AI-generated broadcasters to disseminate Chinese Communist Party propaganda.[163]
Burkina Faso Deepfakes surface around the 2023 elections in Burkina Faso, claiming international support for the West African junta.[164]

See also


References

  1. F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. Retrieved 13 November 2024.
  2. 2.00 2.01 2.02 2.03 2.04 2.05 2.06 2.07 2.08 2.09 2.10 2.11 2.12 2.13 2.14 "Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights|" (PDF). europarl.europa.eu. May 2024. Retrieved 6 November 2024.
  3. Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
  4. Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
  5. F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. Retrieved 13 November 2024.
  6. Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
  7. Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
  8. Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
  9. Belli, Luca (28 March 2024). "Regulating Facial Recognition in Brazil". cambridge.org. Retrieved 6 December 2024.
  10. Mari, Angelica (13 July 2023). "Facial recognition surveillance in Sao Paulo could worsen racism". aljazeera.com. Retrieved 6 December 2024.
  11. Arcoverde, Leticia (14 October 2024). "Brazilian law enforcement cagey about facial recognition use". brazilian.report. Retrieved 6 December 2024.
  12. Mari, Angelica (13 July 2023). "Facial recognition surveillance in Sao Paulo could worsen racism". aljazeera.com. Retrieved 6 December 2024.
  13. Liang, Lu-Hai (16 October 2024). "Brazilian groups call for ban on facial recognition". biometricupdate.com. Retrieved 6 December 2024.
  14. 14.0 14.1 14.2 14.3 14.4 "The gangs matrix*|". stop-watch.org. 2024. Retrieved 6 December 2024.
  15. 15.0 15.1 15.2 "What is the Gangs Matrix|". amnesty.org.uk. 18 May 2020. Retrieved 6 December 2024.
  16. Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
  17. Mistreanu, Simina (3 April 2018). "Life Inside China's Social Credit Laboratory". foreignpolicy.com. Retrieved 27 December 2024.
  18. Yang, Zeyi (22 November 2022). "China just announced a new social credit law. Here's what it means". technologyreview.com. Retrieved 27 December 2024.
  19. Mistreanu, Simina (3 April 2018). "Life Inside China's Social Credit Laboratory". foreignpolicy.com. Retrieved 27 December 2024.
  20. Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
  21. Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
  22. Mistreanu, Simina (3 April 2018). "Life Inside China's Social Credit Laboratory". foreignpolicy.com. Retrieved 27 December 2024.
  23. Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
  24. 24.0 24.1 "Revolutionizing Investigations: The Future of Generative AI in Assisting Law Enforcement to Solve Crimes Faster|". cellebrite.com. 2024. Retrieved 12 December 2024.
  25. Greenberg, Andy (14 June 2019). "TITLE". wired.com. Retrieved 19 December 2024.
  26. Stanley, Jay (16 May 2017). "Mobile-Phone Cloning Tools Need to Be Subject to Oversight - and the Constitution". aclu.org. Retrieved 19 December 2024.
  27. Stanley, Jay (16 May 2017). "Mobile-Phone Cloning Tools Need to Be Subject to Oversight - and the Constitution". aclu.org. Retrieved 19 December 2024.
  28. "Serbia: Authorities using spyware and Cellebrite forensic extraction tools to hack journalists and activists|". amnesty.org. 16 December 2024. Retrieved 12 December 2024.
  29. Rinta-Kahila, Tapani; Someh, Ida (2023). "Managing unintended consequences of algorithmic decision-making: The case of Robodebt". sagepub.com. Retrieved 26 November 2024.
  30. Rinta-Kahila, Tapani; Someh, Ida (2023). "Managing unintended consequences of algorithmic decision-making: The case of Robodebt". sagepub.com. Retrieved 26 November 2024.
  31. Mao, Frances (7 July 2023). "Robodebt: Illegal Australian welfare hunt drove people to despair". bbc.com. Retrieved 26 November 2024.
  32. Mao, Frances (7 July 2023). "Robodebt: Illegal Australian welfare hunt drove people to despair". bbc.com. Retrieved 26 November 2024.
  33. Rinta-Kahila, Tapani; Someh, Ida (2023). "Managing unintended consequences of algorithmic decision-making: The case of Robodebt". sagepub.com. Retrieved 26 November 2024.
  34. 34.0 34.1 34.2 34.3 "China's Algorithms of Repression Reverse Engineering a Xinjiang Police Mass Surveillance App|". hrw.org. 1 May 2019. Retrieved 23 October 2024.
  35. 35.0 35.1 35.2 35.3 "Myanmar: Facebook's systems promoted violence against Rohingya; Meta owes reparations|". WEB.WEB. 29 September 2022. Retrieved 22 November 2024.
  36. 36.0 36.1 36.2 36.3 "The Social Atrocity: Meta and The Right to Remedy for The Rohingya|". amnesty.org. 29 September 2022. Retrieved 22 November 2024.
  37. Zaleznik, Daniel (July 2021). "Facebook and Genocide: How Facebook contributed to genocide in Myanmar and why it will not be held accountable". systemicjustice.org. Retrieved 22 November 2024.
  38. 38.0 38.1 38.2 38.3 "Automated technologies and the future of Fortress Europe|". amnesty.org. 28 March 2019. Retrieved 4 December 2024.
  39. Bacchi, Umberto (5 February 2021). "EU's lie-detecting virtual border guards face court scrutiny". reuters.com. Retrieved 4 December 2024.
  40. Bacchi, Umberto (5 February 2021). "EU's lie-detecting virtual border guards face court scrutiny". reuters.com. Retrieved 4 December 2024.
  41. Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
  42. Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
  43. Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
  44. Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
  45. Rajagopalan, Megha (7 March 2018). "Sri Lanka Is Blocking Facebook For Three Days In Response To Violence Against Minorities". buzzfeednews.com. Retrieved 16 December 2024.
  46. Sinha, Amber (13 March 2024). "The Landscape of Facial Recognition Technologies in India". techpolicy.press. Retrieved 22 November 2024.
  47. Sudhir, K.; Sunder, Shyam (27 March 2020). "What Happens When a Billion Identities Are Digitized?". insights.som.yale.edu. Retrieved 22 November 2024.
  48. Panigrahi, Subhashish (April 2022). "TITLE". interactions.acm.org. Retrieved 22 November 2024.
  49. Sinha, Amber (13 March 2024). "The Landscape of Facial Recognition Technologies in India". techpolicy.press. Retrieved 22 November 2024.
  50. Banerjee, Arjun (9 April 2023). "India the surveillance state and the role of Aadhaar". countercurrents.org. Retrieved 22 November 2024.
  51. Jain, Anushka (3 December 2021). "Crores of pensioners to be verified using UIDAI-linked facial recognition app". medianama.com. Retrieved 22 November 2024.
  52. Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
  53. Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
  54. Cagle, Matt; Ozer, Nicole (22 May 2018). "Amazon Teams Up With Government to Deploy Dangerous New Facial Recognition Technology". aclu.org. Retrieved 21 November 2024.
  55. Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
  56. O’Brien, Matt (3 April 2019). "Face recognition researcher fights Amazon over biased AI". apnews.com. Retrieved 21 November 2024.
  57. Alexander, Julia (30 September 2019). "YouTube moderation bots punish videos tagged as 'gay' or 'lesbian,' study finds". theverge.com. Retrieved 26 November 2024.
  58. Sams, Brandon (29 September 2020). "YouTube's New Age Restriction AI Worries LGBTQ+ Community". lifewire.com. Retrieved 26 November 2024.
  59. Bensinger, Greg (14 August 2019). "YouTube discriminates against LGBT content by unfairly culling it, suit alleges". washingtonpost.com. Retrieved 26 November 2024.
  60. Sams, Brandon (29 September 2020). "YouTube's New Age Restriction AI Worries LGBTQ+ Community". lifewire.com. Retrieved 26 November 2024.
  61. George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
  62. George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
  63. George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
  64. Tkeshelashvili, Mariami; Saade, Tiffany (26 September 2024). "Decrypting Iran's AI-Enhanced Operations in Cyberspace". securityandtechnology.org. Retrieved 21 November 2024.
  65. Hernandez, Joe (1 June 2021). "A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says". npr.org. Retrieved 21 November 2024.
  66. Hernandez, Joe (1 June 2021). "A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says". npr.org. Retrieved 21 November 2024.
  67. Kallenborn, Zachary (20 May 2021). "Was a flying killer robot used in Libya? Quite Possibly". thebulletin.org. Retrieved 21 November 2024.
  68. Kallenborn, Zachary (20 May 2021). "Was a flying killer robot used in Libya? Quite Possibly". thebulletin.org. Retrieved 21 November 2024.
  69. Hao, Karen (20 August 2020). "The UK exam debacle reminds us that algorithms can't fix broken systems". technologyreview.com. Retrieved 4 December 2024.
  70. Leckie, George (30 September 2023). "The 2020 GCSE and A-level 'exam grades fiasco': A secondary data analysis of students' grades and Ofqual's algorithm". bristol.ac.uk. Retrieved 4 December 2024.
  71. Leckie, George (30 September 2023). "The 2020 GCSE and A-level 'exam grades fiasco': A secondary data analysis of students' grades and Ofqual's algorithm". bristol.ac.uk. Retrieved 4 December 2024.
  72. Hao, Karen (20 August 2020). "The UK exam debacle reminds us that algorithms can't fix broken systems". technologyreview.com. Retrieved 4 December 2024.
  73. Hao, Karen (20 August 2020). "The UK exam debacle reminds us that algorithms can't fix broken systems". technologyreview.com. Retrieved 4 December 2024.
  74. Satariano, Adam (20 August 2020). "British Grading Debacle Shows Pitfalls of Automating Government". nytimes.com. Retrieved 4 December 2024.
  75. 75.0 75.1 75.2 "Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software|" (PDF). immigrantdefenseproject.org. 19 October 2020. Retrieved 8 November 2024.
  76. Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
  77. Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
  78. Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
  79. Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
  80. Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
  81. 81.0 81.1 81.2 81.3 81.4 "About the Pegasus Project|". forbiddenstories.org. 18 July 2021. Retrieved 9 November 2024.
  82. 82.0 82.1 82.2 82.3 82.4 82.5 82.6 "TITLE|". amnesty.org. 19 July 2021. Retrieved 9 November 2024.
  83. Coates Ulrichsen, Kristian (9 June 2022). "Pegasus as a case study of evolving ties between the UAE and Israel". gulfstateanalytics.com. Retrieved 8 November 2024.
  84. Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
  85. Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
  86. Morley, Priya (28 June 2024). "AI at the Border: Racialized Impacts and Implications". justsecurity.org. Retrieved 19 December 2024.
  87. Morley, Priya (28 June 2024). "AI at the Border: Racialized Impacts and Implications". justsecurity.org. Retrieved 19 December 2024.
  88. Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
  89. Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
  90. Morley, Priya (28 June 2024). "AI at the Border: Racialized Impacts and Implications". justsecurity.org. Retrieved 19 December 2024.
  91. Del Bosque, Melissa (8 February 2023). "Facial recognition bias frustrates Black asylum applicants to US, advocates say". theguardian.com. Retrieved 19 December 2024.
  92. Del Bosque, Melissa (8 February 2023). "Facial recognition bias frustrates Black asylum applicants to US, advocates say". theguardian.com. Retrieved 19 December 2024.
  93. 93.0 93.1 "HART Attack|" (PDF). immigrantdefenseproject.org. May 2022. Retrieved 12 December 2024.
  94. 94.0 94.1 "Russian Disinformation Campaign "DoppelGanger" Unmasked: A Web of Deception|". cybercom.mil. 3 September 2024. Retrieved 6 December 2024.
  95. "What is the doppelganger operation? List of Resources|". disinfo.eu. 30 October 2024. Retrieved 6 December 2024.
  96. Chawrylo, Katarzyna (13 September 2024). "'Doppelganger': the pattern of Russia's anti-Western influence operation". osw.waw.pl. Retrieved 6 December 2024.
  97. Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. Retrieved 13 November 2024.
  98. Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. Retrieved 13 November 2024.
  99. Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. Retrieved 13 November 2024.
  100. Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. Retrieved 13 November 2024.
  101. "Russia using generative AI to ramp up disinformation, says Ukraine minister|". reuters.com. 16 October 2024. Retrieved 13 November 2024.
  102. Litvinova, Dasha (23 May 2023). "The cyber gulag: How Russia tracks, censors and controls its citizens". apnews.com. Retrieved 15 November 2024.
  103. Buziashvili, Eto (17 February 2023). "Russia takes next step in domestic internet surveillance". dfrlab.org. Retrieved 15 November 2024.
  104. Buziashvili, Eto (17 February 2023). "Russia takes next step in domestic internet surveillance". dfrlab.org. Retrieved 15 November 2024.
  105. Kirchgaessner, Stephanie; Ganguly, Manisha; Pegg, David (14 February 2023). "Revealed: the hacking and disinformation team meddling in elections". WEB. Retrieved 5 December 2024.
  106. Andrzejewski, Cecile (15 February 2023). ""Team Jorge": In the heart of a global disinformation machine". forbiddenstories.org. Retrieved 5 December 2024.
  107. Benjakob, Omer (15 February 2023). "Hacking, Extortion, Election Interference: These Are the Tools Used by Israel's Agents of Chaos and Manipulation". haaretz.com. Retrieved 27 March 2023.
  108. Benjakob, Omer (15 February 2023). "Hacking, Extortion, Election Interference: These Are the Tools Used by Israel's Agents of Chaos and Manipulation". haaretz.com. Retrieved 27 March 2023.
  109. Kirchgaessner, Stephanie; Ganguly, Manisha; Pegg, David (14 February 2023). "Revealed: the hacking and disinformation team meddling in elections". WEB. Retrieved 5 December 2024.
  110. Andrzejewski, Cecile (15 February 2023). ""Team Jorge": In the heart of a global disinformation machine". forbiddenstories.org. Retrieved 5 December 2024.
  111. Funk, Allie (21 November 2023). "The Repressive Power of Artificial Intelligence". freedomhouse.org. Retrieved 5 December 2024.
  112. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
  113. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
  114. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
  115. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
  116. 116.0 116.1 "'AI-assisted genocide': Israel reportedly used database for Gaza kill lists|". aljazeera.com. 4 April 2024. Retrieved 6 November 2024.
  117. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
  118. 118.0 118.1 118.2 "Fact Sheet: What We Know about Russia's Interference Operations|". gmfus.org. 2019. Retrieved 12 December 2024.
  119. "Exposure to Russian Twitter Campaigns in 2016 Presidential Race Highly Concentrated, Largely Limited to Strongly Partisan Republicans|". nyu.edu. 9 January 2023. Retrieved 12 December 2024.
  120. Allen, Andrew (17 August 2018). "Bots in Brazil: The Activity of Social Media Bots in Brazilian Elections". wilsoncenter.org. Retrieved 13 December 2024.
  121. Allen, Andrew (17 August 2018). "Bots in Brazil: The Activity of Social Media Bots in Brazilian Elections". wilsoncenter.org. Retrieved 13 December 2024.
  122. Olivia, Lilian (5 January 2023). "Disinformation was rife in Kenya's 2022 elections". blogs.lse.ac.uk. Retrieved 12 December 2024.
  123. Olivia, Lilian (5 January 2023). "Disinformation was rife in Kenya's 2022 elections". blogs.lse.ac.uk. Retrieved 12 December 2024.
  124. Mwai, Peter (29 May 2022). "Kenya Elections 2022: Misinformation circulating online". bbc.com. Retrieved 12 December 2024.
  125. Kulundu, James (8 August 2022). "Election campaigning ends in Kenya but disinformation battle drags on". factcheck.afp.com. Retrieved 12 December 2024.
  126. Davis, Eric (18 March 2024). "Q&A: Hannah Ajakaiye on manipulated media in the 2023 Nigerian presidential elections, generative AI, and possible interventions". securityandtechnology.org. Retrieved 12 December 2024.
  127. Orakwe, Emmanuel (3 July 2024). "The challenges of AI-driven political disinformation in Nigeria". africainfact.com. Retrieved 12 December 2024.
  128. Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
  129. Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
  130. Ganguly, Manisha (16 October 2024). "'It's not me, it's just my face': the models who found their likenesses had been used in AI propaganda". theguardian.com. Retrieved 11 December 2024.
  131. "Introducing new guidelines for political content on Synthesia". synthesia.io. 2 December 2024. Retrieved 11 December 2024.
  132. Beaumont, Hilary (19 June 2024). "'A lack of trust': How deepfakes and AI could rattle the US elections". aljazeera.com. Retrieved 13 December 2024.
  133. Beaumont, Hilary (19 June 2024). "'A lack of trust': How deepfakes and AI could rattle the US elections". aljazeera.com. Retrieved 13 December 2024.
  134. Nehamas, Nicholas (8 June 2023). "DeSantis Campaign Uses Apparently Fake Images to Attack Trump on Twitter". nytimes.com. Retrieved 13 December 2024.
  135. Beaumont, Hilary (19 June 2024). "'A lack of trust': How deepfakes and AI could rattle the US elections". aljazeera.com. Retrieved 13 December 2024.
  136. Majumdar, Anushree (30 April 2024). "Artificial Intelligence has a starring role in India's 18th General Elections". upgradedemocracy.de. Retrieved 13 December 2024.
  137. Majumdar, Anushree (30 April 2024). "Artificial Intelligence has a starring role in India's 18th General Elections". upgradedemocracy.de. Retrieved 13 December 2024.
  138. Majumdar, Anushree (30 April 2024). "Artificial Intelligence has a starring role in India's 18th General Elections". upgradedemocracy.de. Retrieved 13 December 2024.
  139. Sharma, Yashraj (20 February 2024). "Deepfake democracy: Behind the AI trickery shaping India's 2024 election". aljazeera.com. Retrieved 16 December 2024.
  140. Mukherjee, Mitali (19 December 2023). "AI deepfakes, bad laws - and a big fat Indian election". reutersinstitute.politics.ox.ac.uk. Retrieved 16 December 2024.
  141. Mukherjee, Mitali (19 December 2023). "AI deepfakes, bad laws - and a big fat Indian election". reutersinstitute.politics.ox.ac.uk. Retrieved 16 December 2024.
  142. Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
  143. Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
  144. Ganguly, Manisha (16 October 2024). "'It's not me, it's just my face': the models who found their likenesses had been used in AI propaganda". theguardian.com. Retrieved 11 December 2024.
  145. "Deepfakes weaponized to target Pakistan's women leaders". france24.com. 12 March 2024. Retrieved 16 December 2024.
  146. White, Rebecca (12 December 2023). "Ahmed Mansoor: the poet who spoke truth to power and paid a heavy price". securitylab.amnesty.org. Retrieved 8 November 2024.
  147. Peteranderl, Sonja (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). Der Spiegel, via acgusa.org. Retrieved 13 November 2024.
  148. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
  149. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
  150. Davies, Harry (1 December 2023). "'The Gospel': how Israel uses AI to select bombing targets in Gaza". theguardian.com. Retrieved 4 December 2024.
  151. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
  152. Davies, Harry (1 December 2023). "'The Gospel': how Israel uses AI to select bombing targets in Gaza". theguardian.com. Retrieved 4 December 2024.
  153. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
  154. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
  155. Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
  156. Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
  157. Peteranderl, Sonja (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). Der Spiegel, via acgusa.org. Retrieved 13 November 2024.
  158. Killeen, Molly (16 February 2023). "German Constitutional Court strikes down predictive algorithms for policing". euractiv.com. Retrieved 13 November 2024.
  159. Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
  160. Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
  161. Litvinova, Dasha (23 May 2023). "The cyber gulag: How Russia tracks, censors and controls its citizens". apnews.com. Retrieved 15 November 2024.
  162. Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
  163. Antoniuk, Daryna (9 February 2023). "Deepfake news anchors spread Chinese propaganda on social media". therecord.media. Retrieved 11 December 2024.
  164. Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.