Timeline of AI ethics violations
==Big picture==
This timeline was completed in December 2024. Many of these events are still evolving, and investigations are ongoing. Given limited access to information and limited transparency around AI developments, there are likely other cases of AI-related human rights abuses and ethics controversies that have not yet been documented.
{| class="wikitable"
! Year !! Details
|-
| 2008 ||
* The {{w|Los Angeles Police Department}} starts working with United States federal agencies to explore {{w|Predictive policing}} (see the illustrative sketch below this table).
|-
| 2009 ||
* {{w|India}} creates its first centralized identification system, {{w|Aadhaar}}, providing each citizen a unique 12-digit ID number.
|-
| 2011 ||
* After the {{w|2011 England riots}}, the {{w|Metropolitan Police Service}} deploys the {{w|Predictive policing}} algorithm known as the {{w|Gangs Matrix}}.
* {{w|Brazil}} implements facial recognition technology in public spaces.
|-
| 2012 ||
* The {{w|Chicago Police Department}} creates the {{w|Predictive policing}} algorithm known as the Strategic Subject List.
* The {{w|New York City Police Department}} starts testing a {{w|Predictive policing}} system.
* Internal {{w|Facebook}} studies document awareness that its recommendation algorithms could lead to real-world harm.
* The {{w|Government of China}} begins planning the {{w|Social Credit System}}.
|-
| 2013 ||
* The city of {{w|Rongcheng}} implements a local {{w|Social Credit System}}.
|-
| 2014 ||
* {{w|Myanmar}} authorities temporarily block {{w|Facebook}} due to an outbreak of ethnic violence in {{w|Mandalay}}.
* The {{w|Government of China}} releases the founding {{w|Social Credit System}} document.
|-
| 2015 ||
* Israeli firm {{w|Cellebrite}} begins offering AI-powered evidence-processing tools to law enforcement.
|-
| 2016 ||
* The {{w|RAND Corporation}} finds the Chicago Police Department's Strategic Subject List to be ineffective.
* Australian agency {{w|Centrelink}} issues inaccurate debt notices, generated by an automated decision-making system, to over 500,000 Australians in what becomes known as the {{w|Robodebt scheme}}.
* The Chinese government uses the AI-powered {{w|Integrated Joint Operations Platform}} for mass surveillance of Turkic Muslims and Uyghurs in {{w|Xinjiang}}, resulting in mass detention and further oppression of the surveilled population.
* An internal {{w|Facebook}} study acknowledges that its content recommendation system can increase extremism.
* The {{w|NSO Group}} begins licensing its {{w|Pegasus (spyware)}}, supporting a massive secret surveillance operation that would span over 50 countries.
* {{w|Vladimir Putin}} orders an influence campaign targeting the {{w|2016 United States elections}}.
|-
| 2017 ||
* The {{w|Myanmar}} security forces carry out the {{w|Rohingya Genocide}} after {{w|Facebook}} allowed inflammatory and extremist content against the {{w|Rohingya People}} to fester for years.
* The {{w|Department of Homeland Security}} and {{w|U.S. Immigration and Customs Enforcement}} use {{w|Palantir Technologies}} to tag, track, locate, and arrest 400 people in an operation targeting family members and caregivers of unaccompanied migrant children.
|-
| 2018 ||
* {{w|Mark Zuckerberg}} promises to increase the number of Burmese-speaking content moderators in the wake of the {{w|Rohingya Genocide}}.
* {{w|India}} implements facial recognition technology using {{w|Aadhaar}} identification data for citizens to access services.
* Facial recognition software from {{w|Amazon (company)}}, {{w|Amazon Rekognition}}, is shown to have a disproportionate error rate when identifying women and people of color.
* {{w|Facebook}}'s recommendation algorithms fuel false Islamophobic claims and calls for violence against Muslims in {{w|Sri Lanka}}. Mosques and Muslim-owned businesses, shops, and homes are destroyed, and Muslims are attacked in {{w|Ampara}} and {{w|Digana}}. The country blocks the platform for three days.
* The {{w|European Union}} pilots iBorderCtrl, an AI-assisted border control tool that analyzes "micro-gestures."
* {{w|Twitter}} hosts misinformation and disinformation in the months leading up to the {{w|2018 Brazilian general election}}.
|-
| 2019 ||
* {{w|Predictive policing}} is shut down in Chicago.
* The {{w|Robodebt scheme}} is ruled unlawful by an Australian court.
* The {{w|European Union}}'s iBorderCtrl trial ends.
* The government of {{w|Iran}} integrates AI-based surveillance technologies into its legislative framework.
* {{w|YouTube}} is accused of using algorithms biased against LGBTQ+ people and demonetizing queer content.
* {{w|Cellebrite}} publicly announces its Universal Forensic Extraction Device (UFED), which lets law enforcement unlock and extract files from phones.
|-
| 2020 ||
* {{w|Predictive policing}} is shut down in Los Angeles.
* {{w|U.S. Immigration and Customs Enforcement}} contracts {{w|Clearview AI}}, whose controversial software scraped the internet against platform rules, for mass surveillance of immigrants.
* Reports emerge of a skirmish in {{w|Libya}} involving an {{w|STM Kargu}}-2 loitering drone in what may be the first instance of a wartime autonomous drone kill.
* The {{w|2020 United Kingdom school exam grading controversy}} occurs: with {{w|COVID-19}} preventing in-person exams, an algorithm generates predicted test scores and disproportionately predicts lower scores for working-class students.
|-
| 2021 ||
* A top {{w|European Union}} court hears a case brought against iBorderCtrl by digital rights activists.
* The {{w|Pegasus Project (investigation)}} reveals that the Israeli company {{w|NSO Group}} offered its surveillance services (using {{w|Pegasus (spyware)}}) to over 50 countries to spy on over 50,000 targets, including activists, human rights defenders, academics, businesspeople, lawyers, doctors, diplomats, union leaders, politicians, heads of state, and at least 180 journalists.
* The {{w|Department of Homeland Security}} (DHS) receives $780 million for border technology and surveillance (drones, sensors, and other tools to detect border crossings).
* {{w|U.S. Customs and Border Protection}} (CBP) deploys a system of autonomous surveillance towers equipped with radar, cameras, and AI systems trained to analyze objects and movement.
|-
| 2022 ||
* A {{w|Global Witness}} investigation reveals {{w|Facebook}}'s failure to detect blatant anti-Rohingya content.
* {{w|Iran}} uses AI-assisted facial recognition tools to identify women not following hijab mandates in the midst of the {{w|Mahsa Amini protests}}.
* {{w|Russia}} employs AI in the {{w|Russian invasion of Ukraine}} through cyber attacks, data analysis and decision-making in the battlespace, and prioritized autonomous weapons research.
* {{w|Russia}} launches the {{w|Doppelganger (misinformation campaign)}}, which uses AI to mimic Western media sources and spread pro-Russia narratives in the context of the Ukraine invasion.
* The {{w|Government of China}} releases the legal framework for the {{w|Social Credit System}}.
* Deepfakes go viral surrounding the {{w|2022 Kenyan general election}}.
|-
| 2023 ||
* {{w|Russia}}'s {{w|Roskomnadzor}} launches the AI-based system {{w|Oculus}} to analyze and identify online content, scanning for calls for protests and LGBTQ content.
* An investigation reveals that Israel-based {{w|Team Jorge}} has been working under the radar for years on disinformation and election influence campaigns on almost every continent.
* The {{w|Israel Defense Forces}} employ {{w|AI-assisted targeting in the Gaza Strip}}, leading to extensive civilian deaths and mass destruction.
* {{w|U.S. Customs and Border Protection}} rolls out the mobile application {{w|CBP One}}, requiring migrants to submit biometric and personal data to apply for asylum. The app is significantly worse at recognizing the faces of black and brown people, leading to a reduction in the number of black asylum seekers after its rollout.
* AI-generated disinformation spreads in the {{w|2023 Nigerian presidential election}}.
* {{w|Synthesia (company)}} is used to create deepfakes in favor of the junta in {{w|Burkina Faso}}.
|-
| 2024 ||
* In February, the {{w|Gangs Matrix}} is discontinued in England.
* Serbian authorities use {{w|Cellebrite}}'s spyware product NoviSpy to infect the phones of activists and journalists, obtaining their personal information.
* Deepfakes are rampant in the {{w|2024 United States presidential election}}.
* {{w|Synthesia (company)}} is used to create deepfakes that spread disinformation on behalf of the dictator {{w|Nicolás Maduro}}.
* Misinformation and disinformation campaigns surround the {{w|2024 elections in India}}.
|}
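Several entries above (predictive policing from 2008 onward, the Strategic Subject List, the Gangs Matrix) describe risk-scoring systems that are trained on recorded arrests rather than on actual offending. The sketch below is a minimal, hypothetical illustration of the feedback loop that critics of these systems describe; it is not code from any deployed system, and the two areas, the starting arrest counts, and the "send most patrols to the highest-scoring area" rule are invented solely for this example.

<syntaxhighlight lang="python">
# Hypothetical sketch of a predictive-policing feedback loop.
# NOT code from any real system: the areas, numbers, and patrol rule are
# invented to show how scoring on recorded arrests (rather than on actual
# offending) can entrench and amplify historical over-policing.

TRUE_OFFENDING_RATE = {"A": 0.05, "B": 0.05}   # identical underlying rates
recorded_arrests = {"A": 120, "B": 100}        # slightly biased history
TOTAL_PATROLS = 100

for year in range(1, 6):
    # "Predictive" step: score areas by recorded arrests and send 80% of
    # patrols to the top-scored area, 20% to the other.
    top = max(recorded_arrests, key=recorded_arrests.get)
    patrols = {a: (0.8 if a == top else 0.2) * TOTAL_PATROLS
               for a in recorded_arrests}
    # Arrests can only be recorded where officers are actually deployed.
    for a in recorded_arrests:
        recorded_arrests[a] += patrols[a] * TRUE_OFFENDING_RATE[a]
    share = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"year {year}: A's share of recorded arrests = {share:.1%}")
</syntaxhighlight>

Under these assumptions both areas offend at the same underlying rate, yet the historically over-policed area's share of recorded arrests grows every year, because arrests can only be recorded where patrols are sent and patrols are allocated according to the biased record; each year's output then further justifies the previous year's allocation.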
==Full timeline==
The information included in the timeline outlines incidents of human rights violations in which AI was involved.
Here is a list of criteria on what rows are included:
* AI involvement: the incident must involve the significant use of AI technologies.
* Human rights impact: the incident must have violated human rights defined by international law and standards such as the {{w|Universal Declaration of Human Rights}} (UDHR) and subsequent treaties. Examples of human rights abuses include privacy violations, war and destruction, discrimination based on race, ethnicity, religion, gender, or other protected characteristics, restriction of association and movement, inhibition of expression, denial of asylum, and arbitrary detention.
* State or corporate responsibility: the incident must involve a state or corporate entity that has used AI technology to abuse human rights.
* Verifiable evidence: include only incidents with credible and verifiable evidence from sources such as news articles, human rights reports, official documents, and academic research.
* The geographical range is global.
* Relevance or significance: incidents with significant human rights violations will be prioritized.
− | |||
===Timeline of AI ethics violations===
{| class="sortable wikitable" | {| class="sortable wikitable" | ||
! Onset !! Region !! Perpetrators !! Name !! AI Type !! Rights Violated !! Details
|-
| {{dts|2008}} || {{w|United States}} || United States law enforcement agencies || {{w|Predictive policing}} in the United States || Predictive algorithmic scoring || Privacy, presumption of innocence || Predictive policing refers to the use of algorithms to analyze past criminal activity data, identify patterns, and predict and prevent future crimes.<ref>{{cite web |last1=F |first1=Holly |title=Predictive Policing: Promoting Peace or Perpetuating Prejudice|url=https://d3.harvard.edu/platform-rctom/submission/predictive-policing-promoting-peace-or-perpetuating-prejudice/ |website=d3.harvard.edu |access-date=13 November 2024 |language=en |date=13 November 2018}}</ref> However, police departments are only able to use data from reported crimes, leading to the accentuation of past prejudices in arrests and over-policing of Black and Latinx communities in the United States.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> Predictive policing also threatens the {{w|Fourth Amendment to the United States Constitution}}, which requires reasonable suspicion before arrest.<ref>{{cite web |last1=Lau |first1=Tim |title=Predictive Policing Explained|url=https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained |website=brennancenter.org |access-date=13 November 2024 |language=en |date=1 April 2020}}</ref> The LA Police Department starts working with federal agencies to explore predictive policing in 2008; the New York and Chicago Police Departments would start testing their systems in 2012.<ref>{{cite web |last1=Lau |first1=Tim |title=Predictive Policing Explained|url=https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained |website=brennancenter.org |access-date=13 November 2024 |language=en |date=1 April 2020}}</ref> The {{w|Chicago Police Department}} would create the Strategic Subject List (SSL) algorithm in 2012, which assigns individuals a score based on the likelihood of involvement in a future crime.<ref>{{cite web |last1=F |first1=Holly |title=Predictive Policing: Promoting Peace or Perpetuating Prejudice|url=https://d3.harvard.edu/platform-rctom/submission/predictive-policing-promoting-peace-or-perpetuating-prejudice/ |website=d3.harvard.edu |access-date=13 November 2024 |language=en |date=13 November 2018}}</ref> In 2016, the {{w|RAND Corporation}} would find that people on this list were no more or less likely to be involved in a shooting than a control group but were more likely to be arrested for one.<ref>{{cite web |last1=Peteranderl |first1=Sonja |last2=Spiegel |first2=Der |title=Under Fire: The Rise and Fall of Predictive Policing|url=https://www.acgusa.org/wp-content/uploads/2020/03/2020_Predpol_Peteranderl_Kellen.pdf |website=acgusa.org |access-date=13 November 2024 |language=en |date=January 2020}}</ref> By 2018, almost 400,000 people had an SSL risk score, disproportionately men of color.<ref>{{cite web |last1=Peteranderl |first1=Sonja |last2=Spiegel |first2=Der |title=Under Fire: The Rise and Fall of Predictive Policing|url=https://www.acgusa.org/wp-content/uploads/2020/03/2020_Predpol_Peteranderl_Kellen.pdf |website=acgusa.org |access-date=13 November 2024 |language=en |date=January 2020}}</ref> Predictive policing would be shut down in Chicago and LA in 2019 and 2020 due to evidence of its inefficacy.<ref>{{cite web |last1=Lau |first1=Tim |title=Predictive Policing Explained|url=https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained |website=brennancenter.org |access-date=13 November 2024 |language=en |date=1 April 2020}}</ref>
|-
| {{dts|2011}} || {{w|Brazil}} || {{w|Brazil}} city and state governments || Facial Recognition Technology in Brazil || Image recognition || Privacy, presumption of innocence || Brazil implements Facial Recognition Technology (FRT) in public spaces in 2011, developed under the context of “smart city” programs and operating with no legal framework for the use of FRT.<ref>{{cite web |last1=Belli |first1=Luca |title=Regulating Facial Recognition in Brazil|url=https://www.cambridge.org/core/books/cambridge-handbook-of-facial-recognition-in-the-modern-state/regulating-facial-recognition-in-brazil/CEEA7751492156D1FFE96E029CB014AD |website=cambridge.org |access-date=6 December 2024 |language=en |date=28 March 2024}}</ref> FRT provides real-time analytics to expedite the identification of criminals, stolen cars, missing persons, and lost objects.<ref>{{cite web |last1=Mari |first1=Angelica |title=Facial recognition surveillance in Sao Paulo could worsen racism|url=https://www.aljazeera.com/economy/2023/7/13/facial-recognition-surveillance-in-sao-paulo-could-worsen-racism |website=aljazeera.com |access-date=6 December 2024 |language=en |date=13 July 2023}}</ref> The country would have three FRT state and city projects in 2018 and nearly 300 in 2024.<ref>{{cite web |last1=Arcoverde |first1=Leticia |title=Brazilian law enforcement cagey about facial recognition use|url=https://brazilian.report/tech/2024/10/14/facial-recognition-systems-lack-transparency/ |website=brazilian.report |access-date=6 December 2024 |language=en |date=14 October 2024}}</ref> FRT disproportionately misidentifies Black Brazilians, who make up 56% of the population.<ref>{{cite web |last1=Mari |first1=Angelica |title=Facial recognition surveillance in Sao Paulo could worsen racism|url=https://www.aljazeera.com/economy/2023/7/13/facial-recognition-surveillance-in-sao-paulo-could-worsen-racism |website=aljazeera.com |access-date=6 December 2024 |language=en |date=13 July 2023}}</ref> In 2019, 90% of people arrested for nonviolent crimes using FRT would be black,<ref>{{cite web |last1=Liang |first1=Lu-Hai |title=Brazilian groups call for ban on facial recognition|url=https://www.biometricupdate.com/202410/brazilian-groups-call-for-ban-on-facial-recognition |website=biometricupdate.com |access-date=6 December 2024 |language=en |date=16 October 2024}}</ref> indicating severe disproportionality in FRT’s influence on arrest and detention.
|-
| {{dts|2011}} || {{w|England}} || London's {{w|Metropolitan Police Service}} || {{w|Gangs Matrix}} || Predictive algorithmic scoring || Privacy, presumption of innocence || In response to the {{w|2011 England riots}}, the {{w|Metropolitan Police Service}} creates and implements the {{w|Gangs Matrix}}, an algorithmic database containing individuals thought to be in a gang and likely to commit violence.<ref name="Stop Watch">{{cite web |title=The gangs matrix*{{!}} |url=https://www.stop-watch.org/what-we-do/projects/the-gangs-matrix/|website=stop-watch.org |access-date=6 December 2024 |language=en |date=2024}}</ref> The algorithm considers data such as social media activity, ethnicity, and known criminal activity.<ref name="Stop Watch">{{cite web |title=The gangs matrix*{{!}} |url=https://www.stop-watch.org/what-we-do/projects/the-gangs-matrix/|website=stop-watch.org |access-date=6 December 2024 |language=en |date=2024}}</ref> The term ‘gang’ is not defined, and a police officer only needs two pieces of ‘verifiable intelligence’ to place a subject on the list.<ref name="Amnesty: Gangs Matrix">{{cite web |title=What is the Gangs Matrix{{!}} |url=https://www.amnesty.org.uk/london-trident-gangs-matrix-metropolitan-police|website=amnesty.org.uk |access-date=6 December 2024 |language=en |date=18 May 2020}}</ref> Sharing a {{w|YouTube}} video containing gang signs or any other social media interaction with gang-related symbols could land an individual on the database, targeting certain subcultures.<ref name="Amnesty: Gangs Matrix">{{cite web |title=What is the Gangs Matrix{{!}} |url=https://www.amnesty.org.uk/london-trident-gangs-matrix-metropolitan-police|website=amnesty.org.uk |access-date=6 December 2024 |language=en |date=18 May 2020}}</ref> The database would contain 80% individuals between the ages of 16 and 24, 78% black males, 75% victims of violence, 35% people who had never committed a serious offense, and 15% minors.<ref name="Stop Watch">{{cite web |title=The gangs matrix*{{!}} |url=https://www.stop-watch.org/what-we-do/projects/the-gangs-matrix/|website=stop-watch.org |access-date=6 December 2024 |language=en |date=2024}}</ref> In October 2017, the Matrix would hold 3,806 people, some as young as 12,<ref name="Amnesty: Gangs Matrix">{{cite web |title=What is the Gangs Matrix{{!}} |url=https://www.amnesty.org.uk/london-trident-gangs-matrix-metropolitan-police|website=amnesty.org.uk |access-date=6 December 2024 |language=en |date=18 May 2020}}</ref> whose status would lead to struggles to find employment opportunities, government benefits, and housing.<ref name="Stop Watch">{{cite web |title=The gangs matrix*{{!}} |url=https://www.stop-watch.org/what-we-do/projects/the-gangs-matrix/|website=stop-watch.org |access-date=6 December 2024 |language=en |date=2024}}</ref> The Matrix, which does not process personal data fairly, retains data on individuals at zero or low risk, and keeps data longer than necessary, would be discontinued in February 2024.<ref name="Stop Watch">{{cite web |title=The gangs matrix*{{!}} |url=https://www.stop-watch.org/what-we-do/projects/the-gangs-matrix/|website=stop-watch.org |access-date=6 December 2024 |language=en |date=2024}}</ref>
|-
| {{dts|2013}} || {{w|China}} || {{w|Government of China}} || {{w|Social Credit System}} || Big data analytics || Privacy, freedom of association and movement, presumption of innocence || The government of China begins planning a social credit system in 2012, to be rolled out by 2020, measuring the trustworthiness of individuals.<ref>{{cite web |last1=Wang |first1=Maya |title=China’s Chilling ‘Social Credit’ Blacklist|url=https://www.hrw.org/news/2017/12/12/chinas-chilling-social-credit-blacklist?gad_source=1&gclid=Cj0KCQiAvbm7BhC5ARIsAFjwNHuM65tjbhrxbd7dGv735QY6sTI0CUBxqwkbhEpUEEsfw36ItQaE9gsaAro_EALw_wcB |website=hrw.org |access-date=27 December 2024 |language=en |date=12 December 2017}}</ref> In 2014, the {{w|State Council of the People's Republic of China}} would release the system's founding document, planning to roll out pilot systems.<ref>{{cite web |last1=Mistreanu |first1=Simina |title=Life Inside China’s Social Credit Laboratory|url=https://foreignpolicy.com/2018/04/03/life-inside-chinas-social-credit-laboratory/ |website=foreignpolicy.com |access-date=27 December 2024 |language=en |date=3 April 2018}}</ref> While the legal framework for the state-wide social credit system would not be released until 2022, the plan inspired local and city governments to implement their own credit systems, promoting state-sanctioned moral values through incentives and punishments.<ref>{{cite web |last1=Yang |first1=Zeyi |title=China just announced a new social credit law. Here’s what it means.|url=https://www.technologyreview.com/2022/11/22/1063605/china-announced-a-new-social-credit-law-what-does-it-mean/ |website=technologyreview.com |access-date=27 December 2024 |language=en |date=22 November 2022}}</ref> In 2013, {{w|Rongcheng}} would implement one of the better-known local social credit scoring systems. The rankings include traffic tickets, city-level rewards, donations,<ref>{{cite web |last1=Mistreanu |first1=Simina |title=Life Inside China’s Social Credit Laboratory|url=https://foreignpolicy.com/2018/04/03/life-inside-chinas-social-credit-laboratory/ |website=foreignpolicy.com |access-date=27 December 2024 |language=en |date=3 April 2018}}</ref> shopping habits, and online speech,<ref>{{cite web |last1=Wang |first1=Maya |title=China’s Chilling ‘Social Credit’ Blacklist|url=https://www.hrw.org/news/2017/12/12/chinas-chilling-social-credit-blacklist?gad_source=1&gclid=Cj0KCQiAvbm7BhC5ARIsAFjwNHuM65tjbhrxbd7dGv735QY6sTI0CUBxqwkbhEpUEEsfw36ItQaE9gsaAro_EALw_wcB |website=hrw.org |access-date=27 December 2024 |language=en |date=12 December 2017}}</ref> which would affect eligibility for government jobs, placement of children in desired schools, and credit card applications.<ref>{{cite web |last1=Wang |first1=Maya |title=China’s Chilling ‘Social Credit’ Blacklist|url=https://www.hrw.org/news/2017/12/12/chinas-chilling-social-credit-blacklist?gad_source=1&gclid=Cj0KCQiAvbm7BhC5ARIsAFjwNHuM65tjbhrxbd7dGv735QY6sTI0CUBxqwkbhEpUEEsfw36ItQaE9gsaAro_EALw_wcB |website=hrw.org |access-date=27 December 2024 |language=en |date=12 December 2017}}</ref> Those deemed trustworthy can receive discounts on heating bills and favorable bank loans.<ref>{{cite web |last1=Mistreanu |first1=Simina |title=Life Inside China’s Social Credit Laboratory|url=https://foreignpolicy.com/2018/04/03/life-inside-chinas-social-credit-laboratory/ |website=foreignpolicy.com |access-date=27 December 2024 |language=en |date=3 April 2018}}</ref> By 2017, the Supreme People’s Court would have blacklisted more than 7 million individuals, banning them from air travel, high-speed trains, and luxury purchases.<ref>{{cite web |last1=Wang |first1=Maya |title=China’s Chilling ‘Social Credit’ Blacklist|url=https://www.hrw.org/news/2017/12/12/chinas-chilling-social-credit-blacklist?gad_source=1&gclid=Cj0KCQiAvbm7BhC5ARIsAFjwNHuM65tjbhrxbd7dGv735QY6sTI0CUBxqwkbhEpUEEsfw36ItQaE9gsaAro_EALw_wcB |website=hrw.org |access-date=27 December 2024 |language=en |date=12 December 2017}}</ref>
|-
| {{dts|2015}} || International || Law enforcement use of {{w|Cellebrite}} || {{w|Cellebrite}} AI-powered products used for surveillance || Big data analytics, image recognition || Right to privacy, presumption of innocence || Israeli firm and law enforcement contractor {{w|Cellebrite}} begins incorporating AI-powered evidence-processing tools in 2015 with image classification and in 2018 with its service “Pathfinder.”<ref name="Cellebrite AI Use">{{cite web |title=Revolutionizing Investigations: The Future of Generative AI in Assisting Law Enforcement to Solve Crimes Faster{{!}} |url=https://cellebrite.com/en/revolutionizing-investigations-the-future-of-generative-ai-in-assisting-law-enforcement-to-solve-crimes-faster/#:~:text=In%202018%2C%20Cellebrite%20introduced%20Pathfinder,include%20a%20multitude%20of%20devices.|website=cellebrite.com |access-date=12 December 2024 |language=en |date=2024}}</ref> The company eventually offers other AI-enhanced digital evidence analysis tools: Guardian, Smart Search, Physical Analyzer, Autonomy, and Inspector.<ref name="Cellebrite AI Use">{{cite web |title=Revolutionizing Investigations: The Future of Generative AI in Assisting Law Enforcement to Solve Crimes Faster{{!}} |url=https://cellebrite.com/en/revolutionizing-investigations-the-future-of-generative-ai-in-assisting-law-enforcement-to-solve-crimes-faster/#:~:text=In%202018%2C%20Cellebrite%20introduced%20Pathfinder,include%20a%20multitude%20of%20devices.|website=cellebrite.com |access-date=12 December 2024 |language=en |date=2024}}</ref> In 2019 the company would publicly announce its Universal Forensic Extraction Device (UFED), to be used by law enforcement to unlock and extract files from any {{w|iOS}} device and recent {{w|Android (operating system)}} phones.<ref>{{cite web |last1=Greenberg |first1=Andy |title=TITLE|url=https://www.wired.com/story/cellebrite-ufed-ios-12-iphone-hack-android/ |website=wired.com |access-date=19 December 2024 |language=en |date=14 June 2019}}</ref> Cellebrite would be the most prominent maker of UFEDs, enabling police to access information from phones and cloud services.<ref>{{cite web |last1=Stanley |first1=Jay |title=Mobile-Phone Cloning Tools Need to Be Subject to Oversight - and the Constitution|url=https://www.aclu.org/news/privacy-technology/mobile-phone-cloning-tools-need-be-subject-oversight-and |website=aclu.org |access-date=19 December 2024 |language=en |date=16 May 2017}}</ref> Cellebrite’s data extraction tools would be involved in multiple privacy and surveillance controversies globally. In the United States, domestic police and {{w|U.S. Customs and Border Protection}} would claim the authority to search and gain full access to phones without warrants.<ref>{{cite web |last1=Stanley |first1=Jay |title=Mobile-Phone Cloning Tools Need to Be Subject to Oversight - and the Constitution|url=https://www.aclu.org/news/privacy-technology/mobile-phone-cloning-tools-need-be-subject-oversight-and |website=aclu.org |access-date=19 December 2024 |language=en |date=16 May 2017}}</ref> In 2024, Serbian authorities would use Cellebrite’s spyware product NoviSpy to infect the phones of activists and journalists, obtaining their personal information.<ref name="Amnesty International: Cellebrite">{{cite web |title=Serbia: Authorities using spyware and Cellebrite forensic extraction tools to hack journalists and activists{{!}} |url=https://www.amnesty.org/en/latest/news/2024/12/serbia-authorities-using-spyware-and-cellebrite-forensic-extraction-tools-to-hack-journalists-and-activists/|website=amnesty.org |access-date=12 December 2024 |language=en |date=16 December 2024}}</ref>
|-
| {{dts|2016}} || {{w|Australia}} || {{w|Centrelink}} || {{w|Robodebt scheme}} || Predictive algorithmic scoring || Privacy || The Australian agency {{w|Centrelink}} enacts an automated decision-making system to identify overpayments to welfare recipients.<ref>{{cite web |last1=Rinta-Kahila |first1=Tapani |last2=Someh |first2=Ida |title=Managing unintended consequences of algorithmic decision-making: The case of Robodebt|url=https://uk.sagepub.com/en-gb/eur/journals-permissions |website=sagepub.com |access-date=26 November 2024 |language=en |date=2023}}</ref> The system, named Robodebt, is not tested prior to rollout and generates inaccurate debt notices.<ref>{{cite web |last1=Rinta-Kahila |first1=Tapani |last2=Someh |first2=Ida |title=Managing unintended consequences of algorithmic decision-making: The case of Robodebt|url=https://uk.sagepub.com/en-gb/eur/journals-permissions |website=sagepub.com |access-date=26 November 2024 |language=en |date=2023}}</ref> Over 500,000 Australians on welfare are affected, some incorrectly told they owe thousands of dollars to the government.<ref>{{cite web |last1=Mao |first1=Frances |title=Robodebt: Illegal Australian welfare hunt drove people to despair|url=https://www.bbc.com/news/world-australia-66130105 |website=bbc.com |access-date=26 November 2024 |language=en |date=7 July 2023}}</ref> There would be multiple suicides and reports of depression and anxiety over the payment notices.<ref>{{cite web |last1=Mao |first1=Frances |title=Robodebt: Illegal Australian welfare hunt drove people to despair|url=https://www.bbc.com/news/world-australia-66130105 |website=bbc.com |access-date=26 November 2024 |language=en |date=7 July 2023}}</ref> The government would defend the system until a 2019 court decision ruled Robodebt unlawful.<ref>{{cite web |last1=Rinta-Kahila |first1=Tapani |last2=Someh |first2=Ida |title=Managing unintended consequences of algorithmic decision-making: The case of Robodebt|url=https://uk.sagepub.com/en-gb/eur/journals-permissions |website=sagepub.com |access-date=26 November 2024 |language=en |date=2023}}</ref>
|-
| {{dts|2016}} || {{w|Xinjiang}}, {{w|China}} || {{w|Government of China}} || {{w|Mass surveillance in China}} of ethnic minorities || Image recognition, natural language processing, voice recognition, big data analytics, automated decision-making, geospatial analytics || Privacy, presumption of innocence, freedom of association and movement || Chinese police and other government officials use the AI-powered application {{w|Integrated Joint Operations Platform}} (IJOP) for mass surveillance of the predominantly Turkic Muslim and {{w|Uyghur}} population of {{w|Xinjiang}}.<ref name="China’s Surveillance">{{cite web |title=China’s Algorithms of Repression Reverse Engineering a Xinjiang Police Mass Surveillance App{{!}} |url=https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass|website=hrw.org |access-date=23 October 2024 |language=en |date=1 May 2019}}</ref> The IJOP collects personal information, location, identities, electricity and gas usage, personal relationships, and DNA samples (which can be used to infer ethnicity), then flags suspicious individuals, activities, or circumstances.<ref name="China’s Surveillance">{{cite web |title=China’s Algorithms of Repression Reverse Engineering a Xinjiang Police Mass Surveillance App{{!}} |url=https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass|website=hrw.org |access-date=23 October 2024 |language=en |date=1 May 2019}}</ref> The IJOP defines foreign contacts, donations to mosques, lack of socialization with neighbors, and frequent usage of the front door as suspicious.<ref name="China’s Surveillance">{{cite web |title=China’s Algorithms of Repression Reverse Engineering a Xinjiang Police Mass Surveillance App{{!}} |url=https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass|website=hrw.org |access-date=23 October 2024 |language=en |date=1 May 2019}}</ref> Individuals deemed suspicious are investigated and can be sent to mass political education camps and facilities where millions of Turkic Muslims and Uyghurs are subjected to movement restriction, political indoctrination, and religious repression.<ref name="China’s Surveillance">{{cite web |title=China’s Algorithms of Repression Reverse Engineering a Xinjiang Police Mass Surveillance App{{!}} |url=https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass|website=hrw.org |access-date=23 October 2024 |language=en |date=1 May 2019}}</ref> Techno-authoritarian surveillance occurs throughout China, contrary to internationally guaranteed rights to privacy.
|-
| {{dts|August 2017}} || {{w|Myanmar}} || {{w|Facebook}} || {{w|Facebook}}'s role in the {{w|Rohingya genocide}} || Content recommendation algorithm, natural language processing || War/destruction || Myanmar security forces begin a campaign of ethnic cleansing against the {{w|Rohingya People}} in {{w|Rakhine State}}, causing 700,000 to flee from the systematic murder, rape, and burning of homes.<ref name="Amnesty International: Facebook and Myanmar">{{cite web |title=Myanmar: Facebook’s systems promoted violence against Rohingya; Meta owes reparations{{!}} |url=https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/|website=WEB.WEB |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref> {{w|Meta Platforms}} (formerly {{w|Facebook}}) is increasingly turning towards AI to detect “hate speech.”<ref name="Amnesty International Report of Facebook and the Rohingya">{{cite web |title=The Social Atrocity: Meta and The Right to Remedy for The Rohingya{{!}}|url=https://www.amnesty.org/en/documents/ASA16/5933/2022/en/|website=amnesty.org |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref> However, its algorithms proactively amplify content that incites violence against the Rohingya people, who already face long-standing discrimination.<ref name="Amnesty International Report of Facebook and the Rohingya">{{cite web |title=The Social Atrocity: Meta and The Right to Remedy for The Rohingya{{!}}|url=https://www.amnesty.org/en/documents/ASA16/5933/2022/en/|website=amnesty.org |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref> Facebook favors inflammatory content in its AI-powered engagement-based algorithmic systems, which power news feeds, ranking, recommendation, and group features.<ref name="Amnesty International: Facebook and Myanmar">{{cite web |title=Myanmar: Facebook’s systems promoted violence against Rohingya; Meta owes reparations{{!}} |url=https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/|website=WEB.WEB |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref> Internal Meta studies dating back to 2012 indicate the corporation’s awareness that its algorithms could lead to real-world harm. In 2014, Myanmar authorities even temporarily blocked Facebook due to an outbreak of ethnic violence in {{w|Mandalay}}.<ref name="Amnesty International: Facebook and Myanmar">{{cite web |title=Myanmar: Facebook’s systems promoted violence against Rohingya; Meta owes reparations{{!}} |url=https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/|website=WEB.WEB |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref> A 2016 Meta study documented acknowledgment that the recommendation system can increase extremism.<ref name="Amnesty International: Facebook and Myanmar">{{cite web |title=Myanmar: Facebook’s systems promoted violence against Rohingya; Meta owes reparations{{!}} |url=https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/|website=WEB.WEB |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref> Facebook facilitates peer-to-peer interaction affirming harmful narratives targeting the Rohingya, hosts massive disinformation campaigns originated by the Myanmar military, and knowingly proliferates a product that exacerbates political division and the spread of disinformation.<ref>{{cite web |last1=Zaleznik |first1=Daniel |title=Facebook and Genocide: How Facebook contributed to genocide in Myanmar and why it will not be held accountable|url=https://systemicjustice.org/article/facebook-and-genocide-how-facebook-contributed-to-genocide-in-myanmar-and-why-it-will-not-be-held-accountable/ |website=systemicjustice.org |access-date=22 November 2024 |language=en |date=July 2021}}</ref> A 2022 {{w|Global Witness}} investigation would reveal Meta’s failure to detect blatant anti-Rohingya and anti-Muslim content, even after {{w|Mark Zuckerberg}} promised in 2018 to increase the number of Burmese-speaking content moderators.<ref name="Amnesty International Report of Facebook and the Rohingya">{{cite web |title=The Social Atrocity: Meta and The Right to Remedy for The Rohingya{{!}}|url=https://www.amnesty.org/en/documents/ASA16/5933/2022/en/|website=amnesty.org |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref> Facebook’s content-shaping algorithm is designed to maximize user engagement and, therefore, profit, in this case contributing to the genocide of the Rohingya.<ref name="Amnesty International Report of Facebook and the Rohingya">{{cite web |title=The Social Atrocity: Meta and The Right to Remedy for The Rohingya{{!}}|url=https://www.amnesty.org/en/documents/ASA16/5933/2022/en/|website=amnesty.org |access-date=22 November 2024 |language=en |date=29 September 2022}}</ref>
|-
| {{dts|October 2018}} || {{w|European Union}} border || {{w|European Union}} || iBorderCtrl || Image recognition, predictive algorithmic scoring || Privacy, presumption of innocence, freedom of association and movement || In October 2018, the {{w|European Union}} announces the funding of a new automated border control system called iBorderCtrl, to be piloted in {{w|Hungary}}, {{w|Greece}}, and {{w|Latvia}}.<ref name="iBorderCntrl: Amnesty International">{{cite web |title=Automated technologies and the future of Fortress Europe{{!}} |url=https://www.amnesty.org/en/latest/news/2019/03/automated-technologies-and-the-future-of-fortress-europe/|website=amnesty.org |access-date=4 December 2024 |language=en |date=28 March 2019}}</ref> The program is administered to travelers at the border by a virtual border guard and analyzes “micro-gestures” to determine whether the traveler is lying.<ref>{{cite web |last1=Bacchi |first1=Umberto |title=EU’s lie-detecting virtual border guards face court scrutiny|url=https://www.reuters.com/article/technology/eus-lie-detecting-virtual-border-guards-face-court-scrutiny-idUSL8N2KB2GT/ |website=reuters.com |access-date=4 December 2024 |language=en |date=5 February 2021}}</ref> “Honest” travelers can cross the border with a code, while those deemed to be lying are transferred to human guards for further questioning.<ref name="iBorderCntrl: Amnesty International">{{cite web |title=Automated technologies and the future of Fortress Europe{{!}} |url=https://www.amnesty.org/en/latest/news/2019/03/automated-technologies-and-the-future-of-fortress-europe/|website=amnesty.org |access-date=4 December 2024 |language=en |date=28 March 2019}}</ref> The development of the technology lacks transparency.<ref name="iBorderCntrl: Amnesty International">{{cite web |title=Automated technologies and the future of Fortress Europe{{!}} |url=https://www.amnesty.org/en/latest/news/2019/03/automated-technologies-and-the-future-of-fortress-europe/|website=amnesty.org |access-date=4 December 2024 |language=en |date=28 March 2019}}</ref> It also relies on the widely contested “affect recognition science” and on facial recognition technology, which has been shown to be biased.<ref name="iBorderCntrl: Amnesty International">{{cite web |title=Automated technologies and the future of Fortress Europe{{!}} |url=https://www.amnesty.org/en/latest/news/2019/03/automated-technologies-and-the-future-of-fortress-europe/|website=amnesty.org |access-date=4 December 2024 |language=en |date=28 March 2019}}</ref> The trial would end in 2019, and a top EU court would hear a case against the technology brought by digital rights activists in February 2021.<ref>{{cite web |last1=Bacchi |first1=Umberto |title=EU’s lie-detecting virtual border guards face court scrutiny|url=https://www.reuters.com/article/technology/eus-lie-detecting-virtual-border-guards-face-court-scrutiny-idUSL8N2KB2GT/ |website=reuters.com |access-date=4 December 2024 |language=en |date=5 February 2021}}</ref>
|-
| {{dts|2018}} || {{w|Sri Lanka}} || {{w|Facebook}} || Facebook algorithm spreads Islamophobic content in {{w|Sri Lanka}} || Content recommendation algorithm || War/destruction, freedom of expression || A viral Facebook rumor spreads across Sri Lanka, falsely claiming that police seized 23,000 sterilization pills from a Muslim pharmacist in {{w|Ampara}}, supposedly unveiling a Muslim plot to sterilize and overthrow the {{w|Sinhalese}} majority.<ref>{{cite web |last1=Taub |first1=Amanda |last2=Fisher |first2=Max |title=Where Countries Are Tinderboxes and Facebook Is a Match|url=https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html |website=nytimes.com |access-date=16 December 2024 |language=en |date=21 April 2018}}</ref> Violence ensues, including the beating of a Muslim shop owner and destruction of his shop in {{w|Ampara}}.<ref>{{cite web |last1=Taub |first1=Amanda |last2=Fisher |first2=Max |title=Where Countries Are Tinderboxes and Facebook Is a Match|url=https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html |website=nytimes.com |access-date=16 December 2024 |language=en |date=21 April 2018}}</ref> More viral Facebook videos calling for violence spark the destruction of Muslim-owned shops and homes, and the death of 27-year-old Abdul Basith, trapped inside a burning storefront in {{w|Digana}}.<ref>{{cite web |last1=Taub |first1=Amanda |last2=Fisher |first2=Max |title=Where Countries Are Tinderboxes and Facebook Is a Match|url=https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html |website=nytimes.com |access-date=16 December 2024 |language=en |date=21 April 2018}}</ref> Facebook officials ignore the repeated warnings of potential violence, and the app continues to push the inflammatory content that keeps people on the site.<ref>{{cite web |last1=Taub |first1=Amanda |last2=Fisher |first2=Max |title=Where Countries Are Tinderboxes and Facebook Is a Match|url=https://www.nytimes.com/2018/04/21/world/asia/facebook-sri-lanka-riots.html |website=nytimes.com |access-date=16 December 2024 |language=en |date=21 April 2018}}</ref> Sri Lanka blocks Facebook (including platforms owned by Facebook like {{w|Whatsapp}}) for three days in March in response to the calls to attack Muslims.<ref>{{cite web |last1=Rajagopalan |first1=Megha |title=Sri Lanka Is Blocking Facebook For Three Days In Response To Violence Against Minorities|url=https://www.buzzfeednews.com/article/meghara/sri-lanka-is-blocking-facebook-for-three-days-in-response |website=buzzfeednews.com |access-date=16 December 2024 |language=en |date=7 March 2018}}</ref>
|-
| {{dts|2018}} || {{w|India}} || {{w|Government of India}} || Mass surveillance of Indian citizens with facial recognition technology using {{w|Aadhaar}} data || Image recognition || Privacy, freedom of association and movement || The Indian government rolls out Facial Recognition Technology (FRT) beginning with telecommunication companies using data collected by the Unique Identification Authority of India (UIDAI).<ref>{{cite web |last1=Sinha |first1=Amber |title=The Landscape of Facial Recognition Technologies in India|url=https://www.techpolicy.press/the-landscape-of-facial-recognition-technologies-in-india/ |website=techpolicy.press |access-date=22 November 2024 |language=en |date=13 March 2024}}</ref> Before 2009, there was no centralized identification in India, sparking the creation of {{w|Aadhaar}}, a unique 12-digit ID number assigned to over 1 billion Indian citizens.<ref>{{cite web |last1=Sudhir |first1=K. |last2=Sunder |first2=Shyam |title=What Happens When a Billion Identities Are Digitized?|url=https://insights.som.yale.edu/insights/what-happens-when-billion-identities-are-digitized |website=insights.som.yale.edu |access-date=22 November 2024 |language=en |date=27 March 2020}}</ref> The Aadhaar database includes biometric and demographic information, which law enforcement can use for FRT.<ref>{{cite web |last1=Panigrahi |first1=Subhashish |title=TITLE|url=https://interactions.acm.org/enter/view/marginalizedaadhaar |website=interactions.acm.org |access-date=22 November 2024 |language=en |date=April 2022}}</ref> FRT using Aadhaar data would be used for citizens to access public benefits and services, and FRT would infiltrate India’s telecommunications and travel, policing, public health, welfare programs, education, and elections.<ref>{{cite web |last1=Sinha |first1=Amber |title=The Landscape of Facial Recognition Technologies in India|url=https://www.techpolicy.press/the-landscape-of-facial-recognition-technologies-in-india/ |website=techpolicy.press |access-date=22 November 2024 |language=en |date=13 March 2024}}</ref> These FRT systems are used for racial surveillance<ref>{{cite web |last1=Banerjee |first1=Arjun |title=India the surveillance state and the role of Aadhaar|url=https://countercurrents.org/2023/09/india-the-surveillance-state-and-the-role-of-aadhaar/ |website=countercurrents.org |access-date=22 November 2024 |language=en |date=9 April 2023}}</ref> and have higher inaccuracy rates in racially homogeneous groups.<ref>{{cite web |last1=Jain |first1=Anushka |title=Crores of pensioners to be verified using UIDAI-linked facial recognition app|url=https://www.medianama.com/2021/12/223-facial-recognition-app-pensioners-uidai/ |website=medianama.com |access-date=22 November 2024 |language=en |date=3 December 2021}}</ref>
|-
| {{dts|2018}} || {{w|United States}} || {{w|Amazon (company)}} || Biased facial recognition software ({{w|Amazon Rekognition}}) || Image recognition || Privacy || It is reported in 2018 that Amazon’s facial recognition software, {{w|Amazon Rekognition}}, has a disproportionate error rate when identifying women and people of color.<ref>{{cite web |last1=Snow |first1=Jacob |title=Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots|url=https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28 |website=aclu.org |access-date=21 November 2024 |language=en |date=26 July 2018}}</ref> The {{w|Amazon (company)}} service is offered at a price to the public but heavily marketed towards US law enforcement agencies.<ref>{{cite web |last1=Snow |first1=Jacob |title=Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots|url=https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28 |website=aclu.org |access-date=21 November 2024 |language=en |date=26 July 2018}}</ref> Amazon lists the city of Orlando, Florida, and the Washington County Sheriff’s Office in Oregon among its customers.<ref>{{cite web |last1=Cagle |first1=Matt |last2=Ozer |first2=Nicole |title=Amazon Teams Up With Government to Deploy Dangerous New Facial Recognition Technology|url=https://www.aclu.org/news/privacy-technology/amazon-teams-government-deploy-dangerous-new |website=aclu.org |access-date=21 November 2024 |language=en |date=22 May 2018}}</ref> Amazon claims the software can track people in real time through surveillance cameras, scan body camera footage, and identify up to 100 faces in a single image, which is pertinent in an era of unprecedented protest attendance.<ref>{{cite web |last1=Snow |first1=Jacob |title=Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots|url=https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28 |website=aclu.org |access-date=21 November 2024 |language=en |date=26 July 2018}}</ref> In 2019, a {{w|Massachusetts Institute of Technology}} researcher would also find higher error rates in classifying darker-skinned women than lighter-skinned men.<ref>{{cite web |last1=O’Brien |first1=Matt |title=Face recognition researcher fights Amazon over biased AI|url=https://apnews.com/article/24fd8e9bc6bf485c8aff1e46ebde9ec1 |website=apnews.com |access-date=21 November 2024 |language=en |date=3 April 2019}}</ref>
|-
| {{dts|June 2019}} || International || {{w|YouTube}} || AI discrimination in monetizing LGBTQ+ {{w|YouTube}} videos || Content recommendation algorithm || Freedom of expression, right to livelihood || After an investigation, {{w|YouTube}} content creators allege that YouTube’s AI monetization algorithm flags videos with LGBTQ-related words as non-advertiser friendly, monetarily punishing videos tagged as “gay,” “transgender,” and “lesbian.”<ref>{{cite web |last1=Alexander |first1=Julia |title=YouTube moderation bots punish videos tagged as ‘gay’ or ‘lesbian,’ study finds|url=https://www.theverge.com/2019/9/30/20887614/youtube-moderation-lgbtq-demonetization-terms-words-nerd-city-investigation|website=theverge.com |access-date=26 November 2024 |language=en |date=30 September 2019}}</ref> Due to a lack of consistently available human moderators, YouTube relies in part on AI algorithms to take down inappropriate videos.<ref>{{cite web |last1=Sams |first1=Brandon |title=YouTube’s New Age Restriction AI Worries LGBTQ+ Community|url=https://www.lifewire.com/youtube-new-age-restriction-ai-worries-lgbtq-community-5079928 |website=lifewire.com |access-date=26 November 2024 |language=en |date=29 September 2020}}</ref> YouTube denies the allegation, claiming it aims to protect users from hate speech.<ref>{{cite web |last1=Bensinger |first1=Greg |title=YouTube discriminates against LGBT content by unfairly culling it, suit alleges|url=https://www.washingtonpost.com/technology/2019/08/14/youtube-discriminates-against-lgbt-content-by-unfairly-culling-it-suit-alleges/ |website=washingtonpost.com |access-date=26 November 2024 |language=en |date=14 August 2019}}</ref> In August 2019, a group of LGBTQ+ content creators would file a class action lawsuit alleging unlawful content regulation, distribution, and monetization practices that stigmatize, restrict, block, demonetize, and financially harm the queer community.<ref>{{cite web |last1=Sams |first1=Brandon |title=YouTube’s New Age Restriction AI Worries LGBTQ+ Community|url=https://www.lifewire.com/youtube-new-age-restriction-ai-worries-lgbtq-community-5079928 |website=lifewire.com |access-date=26 November 2024 |language=en |date=29 September 2020}}</ref>
|-
| {{dts|2019}} || {{w|Iran}} || {{w|Government of Iran}} || Facial recognition software to target Iranian protesters || Image recognition || Privacy, freedom of association and movement, presumption of innocence || The Iranian government integrates AI-based surveillance technologies into its legislative framework, enabling the identification and detention of protesters by positioning high-definition surveillance equipment to capture public activity.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> In 2021, China would become Iran’s biggest technological investor, more than doubling the government’s possession of high-definition surveillance video recorders.<ref>{{cite web |last1=George |first1=Rachel |title=The AI Assault on Women: What Iran’s Tech Enabled Morality Laws Indicate for Women’s Rights Movements|url=https://www.cfr.org/blog/ai-assault-women-what-irans-tech-enabled-morality-laws-indicate-womens-rights-movements |website=cfr.org |access-date=21 November 2024 |language=en |date=7 December 2023}}</ref> In 2022, after the onset of the {{w|Mahsa Amini protests}}, the Iranian government would adopt legislation laying out the use of AI-assisted facial recognition tools to enforce morality codes and identify women not abiding by hijab mandates.<ref>{{cite web |last1=George |first1=Rachel |title=The AI Assault on Women: What Iran’s Tech Enabled Morality Laws Indicate for Women’s Rights Movements|url=https://www.cfr.org/blog/ai-assault-women-what-irans-tech-enabled-morality-laws-indicate-womens-rights-movements |website=cfr.org |access-date=21 November 2024 |language=en |date=7 December 2023}}</ref> More than 20,000 arrests and 500 killings of protesters would follow.<ref>{{cite web |last1=George |first1=Rachel |title=The AI Assault on Women: What Iran’s Tech Enabled Morality Laws Indicate for Women’s Rights Movements|url=https://www.cfr.org/blog/ai-assault-women-what-irans-tech-enabled-morality-laws-indicate-womens-rights-movements |website=cfr.org |access-date=21 November 2024 |language=en |date=7 December 2023}}</ref> In 2024, Iran would make an AI ethics deal with Russia to encourage technological cooperation and investment.<ref>{{cite web |last1=Tkeshelashvili |first1=Mariami |last2=Saade |first2=Tiffany |title=Decrypting Iran’s AI-Enhanced Operations in Cyberspace|url=https://securityandtechnology.org/blog/decrypting-irans-ai-enhanced-operations-in-cyberspace/ |website=securityandtechnology.org |access-date=21 November 2024 |language=en |date=26 September 2024}}</ref> Iran has also been accused of analyzing citizens’ social media engagement and creating AI-driven bots and automated social media accounts to flood platforms with regime-sanctioned content.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref>
|-
| {{dts|March 2020}} || {{w|Libya}} || {{w|Government of National Accord}} || Possibly the first wartime autonomous drone kill || Automated decision-making, weapon || War/destruction || Political unrest in {{w|Libya}} leads to conflict between the UN-backed {{w|Government of National Accord}} and forces aligned with {{w|Khalifa Haftar}}.<ref>{{cite web |last1=Hernandez |first1=Joe |title=A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says|url=https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d |website=npr.org |access-date=21 November 2024 |language=en |date=1 June 2021}}</ref> In the March 2020 skirmish, Haftar’s troops are hunted down and engaged by an autonomously capable drone.<ref>{{cite web |last1=Hernandez |first1=Joe |title=A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says|url=https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d |website=npr.org |access-date=21 November 2024 |language=en |date=1 June 2021}}</ref> The device is a Turkish-made {{w|STM Kargu}}-2 loitering drone, able to use machine learning-based object classification to select and engage targets and capable of swarming.<ref>{{cite web |last1=Kallenborn |first1=Zachary |title=Was a flying killer robot used in Libya? Quite Possibly|url=https://thebulletin.org/2021/05/was-a-flying-killer-robot-used-in-libya-quite-possibly/ |website=thebulletin.org |access-date=21 November 2024 |language=en |date=20 May 2021}}</ref> While the UN report on the skirmish doesn’t specifically state that the drone was used autonomously and only heavily implies casualties, if confirmed, this would be the first incident of battlefield deaths due to autonomous robots.<ref>{{cite web |last1=Kallenborn |first1=Zachary |title=Was a flying killer robot used in Libya? Quite Possibly|url=https://thebulletin.org/2021/05/was-a-flying-killer-robot-used-in-libya-quite-possibly/ |website=thebulletin.org |access-date=21 November 2024 |language=en |date=20 May 2021}}</ref> Autonomous weapons could rely on biased data and result in disproportionate battlefield deaths of protected demographics.
|- | |- | ||
− | | {{dts| | + | | {{dts|August 2020}} || {{w|United Kingdom}} || {{w|Ofqual}} || {{w|2020 United Kingdom school exam grading controversy}} || Predictive algorithmic scoring || Right to livelihood || In-person {{w|GCSE}} and {{w|A-level}} exams in the UK are disrupted by the {{w|COVID-19 pandemic}}.<ref>{{cite web |last1=Hao |first1=Karen |title=The UK exam debacle reminds us that algorithms can’t fix broken systems|url=https://www.technologyreview.com/2020/08/20/1007502/uk-exam-algorithm-cant-fix-broken-system/ |website=technologyreview.com |access-date=4 December 2024 |language=en |date=20 August 2020}}</ref> These exams influence where students can work and attend university, weighing heavily on their immediate futures.<ref>{{cite web |last1=Leckie |first1=George |title=The 2020 GCSE and A-level 'exam grades fiasco': A secondary data analysis of students' grades and Ofqual's algorithm |url=https://www.bristol.ac.uk/cmm/research/grade/ |website=bristol.ac.uk |access-date=4 December 2024 |language=en |date=30 September 2023}}</ref> The Office of Qualifications and Examinations Regulation ({{w|Ofqual}}) requests schools submit grades and rank order predictions made by teachers.<ref>{{cite web |last1=Leckie |first1=George |title=The 2020 GCSE and A-level 'exam grades fiasco': A secondary data analysis of students' grades and Ofqual's algorithm |url=https://www.bristol.ac.uk/cmm/research/grade/ |website=bristol.ac.uk |access-date=4 December 2024 |language=en |date=30 September 2023}}</ref> Assuming teacher predictions would be biased, Ofqual creates a scoring algorithm, and 40% of students, disproportionately working class, receive exam scores downgraded from their teachers’ predictions.<ref>{{cite web |last1=Hao |first1=Karen |title=The UK exam debacle reminds us that algorithms can’t fix broken systems|url=https://www.technologyreview.com/2020/08/20/1007502/uk-exam-algorithm-cant-fix-broken-system/ |website=technologyreview.com |access-date=4 December 2024 |language=en |date=20 August 2020}}</ref> Protests break out on August 16, and Ofqual announces that students should be awarded whichever is higher out of the teacher-predicted score and the algorithm score.<ref>{{cite web |last1=Hao |first1=Karen |title=The UK exam debacle reminds us that algorithms can’t fix broken systems|url=https://www.technologyreview.com/2020/08/20/1007502/uk-exam-algorithm-cant-fix-broken-system/ |website=technologyreview.com |access-date=4 December 2024 |language=en |date=20 August 2020}}</ref> Many students would already have lost slots at their preferred universities by the time the scores were readjusted.<ref>{{cite web |last1=Satariano |first1=Adam |title=British Grading Debacle Shows Pitfalls of Automating Government|url=https://www.nytimes.com/2020/08/20/world/europe/uk-england-grading-algorithm.html |website=nytimes.com |access-date=4 December 2024 |language=en |date=20 August 2020}}</ref> The algorithm, intended to make the system fairer, would harm lower-income students the most. |
|- | |- | ||
− | | {{dts| | + | | {{dts|2020}} || {{w|United States}} || {{w|U.S. Immigration and Customs Enforcement}} (ICE) || ICE contracts {{w|Clearview AI}} || Image recognition, big data analytics || Privacy, presumption of innocence, freedom of association and movement || The {{w|American Civil Liberties Union}} (ACLU) files a {{w|Freedom of Information Act (United States)}} (FOIA) request after {{w|US Immigration and Customs Enforcement}} (ICE) purchases {{w|Clearview AI}} technology.<ref name="ACLU FOIA">{{cite web |title=Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software{{!}} |url=https://www.immigrantdefenseproject.org/wp-content/uploads/2020/10/2020.10.19-ACLU-NC-JFL-IDP-Mijente-FOIA-re-Clearview-AI_.pdf|website=immigrantdefenseproject.org |access-date=8 November 2024 |language=en |date=19 October 2020}}</ref> Clearview AI is a facial recognition tool.<ref>{{cite web |last1=Scott |first1=Jeramie |title=Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?|url=https://epic.org/is-ice-using-facial-recognition-to-track-people-who-allegedly-threaten-their-agents/ |website=epic.org |access-date=8 November 2024 |language=en |date=17 March 2022}}</ref> The technology, employed by law enforcement agencies and private companies, scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules.<ref>{{cite web |last1=Lyons |first1=Kim |title=ICE just signed a contract with facial recognition company Clearview AI|url=https://www.theverge.com/2020/8/14/21368930/clearview-ai-ice-contract-privacy-immigration |website=theverge.com |access-date=9 November 2024 |language=en |date=14 August 2020}}</ref> Using the controversial data scraping tool, ICE can now deploy mass surveillance to identify and detain immigrants.<ref name="ACLU FOIA">{{cite web |title=Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software{{!}} |url=https://www.immigrantdefenseproject.org/wp-content/uploads/2020/10/2020.10.19-ACLU-NC-JFL-IDP-Mijente-FOIA-re-Clearview-AI_.pdf|website=immigrantdefenseproject.org |access-date=8 November 2024 |language=en |date=19 October 2020}}</ref> United States government agencies have a history of mass surveillance. 
In 2017, the DHS, ICE, and the Department of Health and Human Services used {{w|Palantir Technologies}} to tag, track, locate, and arrest 400 people in an operation that targeted family members and caregivers of unaccompanied migrant children.<ref>{{cite web |last1=Del Villar |first1=Ashley |last2=Hayes |first2=Myaisha |title=How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now|url=https://www.aclu.org/news/privacy-technology/how-face-recognition-fuels-racist-systems-of-policing-and-immigration-and-why-congress-must-act-now |website=aclu.org |access-date=8 November 2024 |language=en |date=22 July 2021}}</ref> The FBI and ICE searched state and federal driver’s license databases to find undocumented immigrants using facial recognition.<ref>{{cite web |last1=Scott |first1=Jeramie |title=Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?|url=https://epic.org/is-ice-using-facial-recognition-to-track-people-who-allegedly-threaten-their-agents/ |website=epic.org |access-date=8 November 2024 |language=en |date=17 March 2022}}</ref><ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref><ref>{{cite web |last1=Lyons |first1=Kim |title=ICE just signed a contract with facial recognition company Clearview AI|url=https://www.theverge.com/2020/8/14/21368930/clearview-ai-ice-contract-privacy-immigration |website=theverge.com |access-date=9 November 2024 |language=en |date=14 August 2020}}</ref> Facial recognition technology is proven to be less accurate in identifying women and individuals with darker skin,<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> therefore discriminating against migrants of color and women. |
|- | |- | ||
− | | {{dts| | + | | {{dts|July 2021}} || International || {{w|NSO Group}} || {{w|Pegasus Project (investigation)}} || Natural language processing, geospatial analytics || Privacy, presumption of innocence, freedom of expression || {{w|Amnesty International}} and {{w|Forbidden Stories}} release their {{w|Pegasus Project (investigation)}}. The investigation reveals that the Israeli company {{w|NSO Group}} contracted {{w|Pegasus (spyware)}} to over 50 countries to spy on over 50,000 surveillance targets from 2016 to 2021.<ref name="Pegasus Project: Forbidden Stories">{{cite web |title=About the Pegasus Project{{!}} |url=https://forbiddenstories.org/about-the-pegasus-project/|website=forbiddenstories.org |access-date=9 November 2024 |language=en |date=18 July 2021}}</ref> The NSO Group’s clients include Azerbaijan, Bahrain, Hungary, India, Kazakhstan, Mexico, Morocco, Rwanda, Saudi Arabia, Togo, and the United Arab Emirates (UAE).<ref name="Pegasus Project: Amnesty International">{{cite web |title=TITLE{{!}} |url=https://www.amnesty.org/en/latest/press-release/2021/07/the-pegasus-project/|website=amnesty.org |access-date=9 November 2024 |language=en |date=19 July 2021}}</ref> The UAE is revealed to be one of the most active users of Pegasus, having targeted 10,000 people, including {{w|Ahmed Mansoor}}.<ref>{{cite web |last1=Coates Ulrichsen |first1=Kristian |title=Pegasus as a case study of evolving ties between the UAE and Israel|url=https://gulfstateanalytics.com/pegasus-as-a-case-study-of-evolving-ties-between-the-united-arab-emirates-and-israel/ |website=gulfstateanalytics.com |access-date=8 November 2024 |language=en |date=9 June 2022}}</ref> The targets across states included activists, human rights defenders, academics, businesspeople, lawyers, doctors, diplomats, union leaders, politicians, several heads of state, and at least 180 journalists.<ref name="Pegasus Project: Forbidden Stories">{{cite web |title=About the Pegasus Project{{!}} |url=https://forbiddenstories.org/about-the-pegasus-project/|website=forbiddenstories.org |access-date=9 November 2024 |language=en |date=18 July 2021}}</ref> The spyware, used by repressive governments to silence dissent, is surreptitiously installed on victims’ phones and allows complete device access to the perpetrator (including messages, emails, cameras, microphones, calls, contacts, and media).<ref name="Pegasus Project: Amnesty International">{{cite web |title=TITLE{{!}} |url=https://www.amnesty.org/en/latest/press-release/2021/07/the-pegasus-project/|website=amnesty.org |access-date=9 November 2024 |language=en |date=19 July 2021}}</ref> The NSO Group claims to sell its products to government clients to collect data from the mobile devices of individuals suspected of involvement in serious crimes or terrorism, and that the leaked state surveillance was misuse that would be investigated.<ref name="Pegasus Project: Forbidden Stories">{{cite web |title=About the Pegasus Project{{!}} |url=https://forbiddenstories.org/about-the-pegasus-project/|website=forbiddenstories.org |access-date=9 November 2024 |language=en |date=18 July 2021}}</ref> The NSO Group would not take further action to stop its tools from being used to unlawfully target and surveil citizens, would deny any wrongdoing, and would claim its company is involved in a lifesaving mission.<ref name="Pegasus Project: Amnesty International">{{cite web |title=TITLE{{!}} |url=https://www.amnesty.org/en/latest/press-release/2021/07/the-pegasus-project/|website=amnesty.org |access-date=9 November 
2024 |language=en |date=19 July 2021}}</ref> |
|- | |- | ||
− | | {{dts| | + | | {{dts|2021}} || {{w|United States}} || {{w|Department of Homeland Security}}, {{w|U.S. Customs and Border Protection}}, and {{w|U.S. Immigration and Customs Enforcement}} || AI used at the {{w|Mexico-United States Border}} during the {{w|Presidency of Joe Biden}} || Image recognition, voice recognition, natural language processing, big data analytics, automated decision-making, geospatial analytics, generative AI || Privacy, right to seek asylum, freedom of association and movement, presumption of innocence || In 2021, the {{w|Department of Homeland Security}} (DHS) receives $780 million for border technology and surveillance (drones, sensors, and other tech to detect border crossings).<ref>{{cite web |last1=Tyler |first1=Hannah |title=The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions|url=https://www.migrationpolicy.org/article/artificial-intelligence-border-zones-privacy |website=migrationpolicy.org |access-date=19 December 2024 |language=en |date=2 February 2022}}</ref> The {{w|U.S. Customs and Border Protection}} (CBP) deploys a system of autonomous surveillance towers equipped with radar, cameras, and algorithms that use AI systems trained to analyze objects and movement.<ref>{{cite web |last1=Tyler |first1=Hannah |title=The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions|url=https://www.migrationpolicy.org/article/artificial-intelligence-border-zones-privacy |website=migrationpolicy.org |access-date=19 December 2024 |language=en |date=2 February 2022}}</ref> These towers, able to recognize the difference between humans, animals, and objects, are part of the Biden administration’s push for “smart borders.”<ref>{{cite web |last1=Morley |first1=Priya |title=AI at the Border: Racialized Impacts and Implications|url=https://www.justsecurity.org/97172/ai-at-the-border/|website=justsecurity.org |access-date=19 December 2024 |language=en |date=28 June 2024}}</ref> The United States also utilizes small unmanned aerial systems, remotely operated drones developed for military operations, to identify and surveil migrants.<ref>{{cite web |last1=Morley |first1=Priya |title=AI at the Border: Racialized Impacts and Implications|url=https://www.justsecurity.org/97172/ai-at-the-border/|website=justsecurity.org |access-date=19 December 2024 |language=en |date=28 June 2024}}</ref> Local border police use facial recognition technology, cellphone tracking, license-plate cameras, drones, and spy planes, sparking debate over the privacy rights of anyone in the region.<ref>{{cite web |last1=Tyler |first1=Hannah |title=The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions|url=https://www.migrationpolicy.org/article/artificial-intelligence-border-zones-privacy |website=migrationpolicy.org |access-date=19 December 2024 |language=en |date=2 February 2022}}</ref> The expansion of AI-bolstered border surveillance infrastructure is associated with an increase in deaths at the border, pushing migrants to more remote and dangerous routes.<ref>{{cite web |last1=Tyler |first1=Hannah |title=The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions|url=https://www.migrationpolicy.org/article/artificial-intelligence-border-zones-privacy |website=migrationpolicy.org |access-date=19 December 2024 |language=en |date=2 February 2022}}</ref> The CBP requires migrants to use the mobile application {{w|CBP One}} upon arrival at the US-Mexico border, submitting biometric and personal data to be 
considered for asylum.<ref>{{cite web |last1=Morley |first1=Priya |title=AI at the Border: Racialized Impacts and Implications|url=https://www.justsecurity.org/97172/ai-at-the-border/|website=justsecurity.org |access-date=19 December 2024 |language=en |date=28 June 2024}}</ref> However, the app is significantly worse at recognizing the faces of black and brown people, which would lead to a reduction in the number of black asylum seekers after its rollout in 2023.<ref>{{cite web |last1=Del Bosque |first1=Melissa |title=Facial recognition bias frustrates Black asylum applicants to US, advocates say|url=https://www.theguardian.com/us-news/2023/feb/08/us-immigration-cbp-one-app-facial-recognition-bias |website=theguardian.com |access-date=19 December 2024 |language=en |date=8 February 2023}}</ref> Asylum seekers from Haiti and African countries are often blocked from entry at the southern border, victim to a biased algorithm and its inability to recognize faces with darker skin tones.<ref>{{cite web |last1=Del Bosque |first1=Melissa |title=Facial recognition bias frustrates Black asylum applicants to US, advocates say|url=https://www.theguardian.com/us-news/2023/feb/08/us-immigration-cbp-one-app-facial-recognition-bias |website=theguardian.com |access-date=19 December 2024 |language=en |date=8 February 2023}}</ref> The DHS is also building a $6.158 billion biometric database, the Homeland Advanced Recognition Technology (HART) system, to collect, organize, and share data on 270 million people (including children).<ref name="HART-Attack">{{cite web |title=HART Attack{{!}} |url=https://www.immigrantdefenseproject.org/wp-content/uploads/HART-Attack.pdf|website=immigrantdefenseproject.org |access-date=12 December 2024 |language=en |date=May 2022}}</ref> It is expected to be the largest biometric database in the US, aggregating the biometric data of individuals (without their consent) from government agencies to create digital profiles to target migrants for surveillance, raids, arrests, detention, and deportation.<ref name="HART-Attack">{{cite web |title=HART Attack{{!}} |url=https://www.immigrantdefenseproject.org/wp-content/uploads/HART-Attack.pdf|website=immigrantdefenseproject.org |access-date=12 December 2024 |language=en |date=May 2022}}</ref> |
|- | |- | ||
− | | {{dts| | + | | {{dts|May 2022}} || {{w|United States}} and allied countries || {{w|Russia}} || {{w|Doppelganger (disinformation campaign)}} || Natural language processing || Freedom of expression, privacy || The {{w|Kremlin}} launches a disinformation campaign, promoting pro-Russian narratives and disseminating disinformation through cloned websites, fake articles, and social media.<ref name="US Cyber Command">{{cite web |title=Russian Disinformation Campaign “DoppelGanger” Unmasked: A Web of Deception{{!}} |url=https://www.cybercom.mil/Media/News/Article/3895345/russian-disinformation-campaign-doppelgnger-unmasked-a-web-of-deception/|website=cybercom.mil |access-date=6 December 2024 |language=en |date=3 September 2024}}</ref> The Kremlin utilizes AI to create disinformation content, buys domains similar to legitimate sites, and spreads fearmongering across the West.<ref name="US Cyber Command">{{cite web |title=Russian Disinformation Campaign “DoppelGanger” Unmasked: A Web of Deception{{!}} |url=https://www.cybercom.mil/Media/News/Article/3895345/russian-disinformation-campaign-doppelgnger-unmasked-a-web-of-deception/|website=cybercom.mil |access-date=6 December 2024 |language=en |date=3 September 2024}}</ref> The project would include a total of 7,983 disinformation campaigns, mimicking German, American, Italian, British, and French media outlets and websites, including {{w|The Guardian}}, {{w|Fox News}}, {{w|Ministry for Europe and Foreign Affairs (France)}}, and {{w|NATO}}, resulting in a total of 828,842 clicks.<ref name="EU Disinformation">{{cite web |title=What is the doppelganger operation? List of Resources{{!}} |url=https://www.disinfo.eu/doppelganger-operation/|website=disinfo.eu |access-date=6 December 2024 |language=en |date=30 October 2024}}</ref> The Kremlin coordinates and finances the campaign, undermining Ukrainian objectives, promoting pro-Russia policies, and stoking internal Western tension.<ref>{{cite web |last1=Chawrylo |first1=Katarzyna |title=’Doppelganger’: the pattern of Russia’s anti-Western influence operation|url=https://www.osw.waw.pl/en/publikacje/analyses/2024-09-13/doppelganger-pattern-russias-anti-western-influence-operation |website=osw.waw.pl |access-date=6 December 2024 |language=en |date=13 September 2024}}</ref> |
|- | |- | ||
− | | {{dts| | + | | {{dts|2022}} || {{w|Ukraine}} || {{w|Russia}} || Russia’s use of AI in wartime in the context of the {{w|Russian invasion of Ukraine}} || Natural language processing, big data analytics, automated decision-making, content recommendation algorithm, weapons, geospatial analytics || War/destruction || The February 2022 {{w|Russian invasion of Ukraine}} brings a new age of AI in wartime. While the cyber-attacks against Ukraine predated the invasion, Russia deploys AI-driven cyber attacks on Ukrainian infrastructure, communications, and allies at an increased rate.<ref>{{cite web |last1=Ashby |first1=Heather |title=From Gaza to Ukraine, AI is Transforming War|url=https://inkstickmedia.com/from-gaza-to-ukraine-ai-is-transforming-war/ |website=instickmedia.com |access-date=13 November 2024 |language=en |date=6 March 2024}}</ref> There are reports of the {{w|Ministry of Defense (Russia)}} using AI to provide data analysis and decision-making in the battlespace and prioritizing autonomous weapons research.<ref>{{cite web |last1=Bendett |first1=Sam |title=Roles and Implications of AI in the Russian-Ukrainian Conflict|url=https://www.russiamatters.org/analysis/roles-and-implications-ai-russian-ukrainian-conflict |website=russiamatters.org |access-date=13 November 2024 |language=en |date=20 July 2023}}</ref> Russia is suspected of utilizing unmanned aerial vehicles (UAVs) equipped with AI-powered cameras and sensors for reconnaissance missions and using neural networks to identify strike targets.<ref>{{cite web |last1=Bendett |first1=Sam |title=Roles and Implications of AI in the Russian-Ukrainian Conflict|url=https://www.russiamatters.org/analysis/roles-and-implications-ai-russian-ukrainian-conflict |website=russiamatters.org |access-date=13 November 2024 |language=en |date=20 July 2023}}</ref><ref>{{cite web |last1=Ashby |first1=Heather |title=From Gaza to Ukraine, AI is Transforming War|url=https://inkstickmedia.com/from-gaza-to-ukraine-ai-is-transforming-war/ |website=instickmedia.com |access-date=13 November 2024 |language=en |date=6 March 2024}}</ref> {{w|OpenAI}} would report in May 2024 two covert influence operations from Russia using AI to spread information on social media, defending the invasion.<ref name="Russia using gen AI to ramp up disinformation">{{cite web |title=Russia using generative AI to ramp up disinformation, says Ukraine minister{{!}} |url=https://www.reuters.com/technology/artificial-intelligence/russia-using-generative-ai-ramp-up-disinformation-says-ukraine-minister-2024-10-16/|website=reuters.com |access-date=13 November 2024 |language=en |date=16 October 2024}}</ref> |
|- | |- | ||
− | | {{dts| | + | | {{dts|February 2023}} || {{w|Russia}} || {{w|Roskomnadzor}} || Russian surveillance of civilians (especially LGBTQ people) in the context of the {{w|Russian invasion of Ukraine}} || Image recognition, natural language processing, big data analytics || Privacy, freedom of expression || The 2019 {{w|Sovereign Internet Law}} isolated the Russian Internet from the rest of the world, expanding AI tools for domestic repression and surveillance, content-blocking mechanisms, and the monitoring of dissent.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> The isolation gives Russia enhanced censorship and monitoring of the Russian public and information landscape with regard to the invasion. In February 2023, the Russian federal agency {{w|Roskomnadzor}}, responsible for monitoring, controlling, and censoring Russian media, launches the AI-based detection system Oculus.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> The program analyzes over 200,000 photos a day (as opposed to 200 a day by a human) and looks for banned content in online images and videos.<ref>{{cite web |last1=Litvinova |first1=Dasha |title=The cyber gulag: How Russia tracks, censors and controls its citizens|url=https://apnews.com/article/russia-crackdown-surveillance-censorship-war-ukraine-internet-dab3663774feb666d6d0025bcd082fba |website=apnews.com |access-date=15 November 2024 |language=en |date=23 May 2023}}</ref> The program scans for suicide, pro-drug, and extremist content, as well as calls for protests. It also searches for pro-LGBTQ content, cracking down on the community as part of a wartime framing tactic that claims to defend traditional Russian values.<ref>{{cite web |last1=Buziashvili |first1=Eto |title=Russia takes next step in domestic internet surveillance|url=https://dfrlab.org/2023/02/17/russia-takes-next-step-in-domestic-internet-surveillance/|website=dfrlab.org |access-date=15 November 2024 |language=en |date=17 February 2023}}</ref> The Kremlin claims the program can identify people behind a beard or a mask and determine age, insinuating the ability to easily identify and target LGBTQ content creators.<ref>{{cite web |last1=Buziashvili |first1=Eto |title=Russia takes next step in domestic internet surveillance|url=https://dfrlab.org/2023/02/17/russia-takes-next-step-in-domestic-internet-surveillance/|website=dfrlab.org |access-date=15 November 2024 |language=en |date=17 February 2023}}</ref> |
|- | |- | ||
− | | {{dts|February 2023}} || {{w| | + | | {{dts|February 2023}} || International || {{w|Team Jorge}} || {{w|Team Jorge}} || Generative AI, natural language processing || Privacy, presumption of innocence, freedom of expression || An investigation by a consortium of journalists from over 30 outlets reveals that an Israel-based disinformation team, led by former Israeli special forces operative {{w|Tal Hanan}} under the pseudonym Jorge, has been working for profit under the radar on various disinformation campaigns and elections for decades.<ref>{{cite web |last1=Kirchgaessner |first1=Stephanie |last2=Ganguly |first2=Manisha |last3=Pegg |first3=David |title=Revealed: the hacking and disinformation team meddling in elections|url=https://www.theguardian.com/world/2023/feb/15/revealed-disinformation-team-jorge-claim-meddling-elections-tal-hanan |website=WEB |access-date=5 December 2024 |language=en |date=14 February 2023}}</ref> The investigation reveals that the team worked on 33 presidential disinformation campaigns and several other voting campaigns on almost every continent, including the 2016 Donald Trump win of the US presidency and {{w|Brexit}}.<ref>{{cite web |last1=Andrzejewski |first1=Cecile |title=”Team Jorge”: In the heart of a global disinformation machine|url=https://forbiddenstories.org/team-jorge-disinformation/ |website=forbiddenstories.org |access-date=5 December 2024 |language=en |date=15 February 2023}}</ref> Team Jorge offers various technological services to those willing to pay, including active intelligence (hijacking email and encrypted messaging, including Gmail and Telegram), deceit and defamation (leaking documents and fabricating scandals to harm rivals), vote suppression (disrupting democratic processes and creating stolen election campaigns), perception hacking (creating conspiracies and fake blogs), and influence ops (using an army of avatars to spread disinformation).<ref>{{cite web |last1=Benjakob |first1=Omer |title=Hacking, Extortion, Election Interference: These Are the Tools Used by Israel’s Agents of Chaos and Manipulation|url=https://www.haaretz.com/israel-news/security-aviation/2023-02-15/ty-article-magazine/.premium/hacking-extortion-election-interference-the-toolkit-of-israels-agents-of-chaos/00000186-4aa6-d933-af9e-cbe7aa9c0000 |website=haaretz.com |access-date=27 March 2023 |language=en |date=15 February 2023}}</ref> The team uses the following tools to achieve its objectives: Profiler (a tool that creates an intel profile by data-scraping), Global Bank Scan (financial intel reports on targets), Hacks (messaging accounts), AIMS (a platform that creates avatars and deploys them on disinformation campaigns), Blogger (a system for mass-creating fake blogs to spread content), and Residential Proxy (a tool to separate Jorge and clients from the disinformation campaigns).<ref>{{cite web |last1=Benjakob |first1=Omer |title=Hacking, Extortion, Election Interference: These Are the Tools Used by Israel’s Agents of Chaos and Manipulation|url=https://www.haaretz.com/israel-news/security-aviation/2023-02-15/ty-article-magazine/.premium/hacking-extortion-election-interference-the-toolkit-of-israels-agents-of-chaos/00000186-4aa6-d933-af9e-cbe7aa9c0000 |website=haaretz.com |access-date=27 March 2023 |language=en |date=15 February 2023}}</ref> The software package Advanced Impact Media Solutions (AIMS) enables the Team’s army of 30,000 avatars, which have unique digital backgrounds and authentic-looking profile pictures, to create and spread propaganda and 
disinformation at the client’s behest.<ref>{{cite web |last1=Kirchgaessner |first1=Stephanie |last2=Ganguly |first2=Manisha |last3=Pegg |first3=David |title=Revealed: the hacking and disinformation team meddling in elections|url=https://www.theguardian.com/world/2023/feb/15/revealed-disinformation-team-jorge-claim-meddling-elections-tal-hanan |website=WEB |access-date=5 December 2024 |language=en |date=14 February 2023}}</ref> AIMS can use key words to create posts, articles, comments, or tweets in any language with positive, negative, or neutral tones.<ref>{{cite web |last1=Andrzejewski |first1=Cecile |title=”Team Jorge”: In the heart of a global disinformation machine|url=https://forbiddenstories.org/team-jorge-disinformation/ |website=forbiddenstories.org |access-date=5 December 2024 |language=en |date=15 February 2023}}</ref><ref>{{cite web |last1=Funk |first1=Allie |title=The Repressive Power of Artificial Intelligence|url=https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence |website=freedomhouse.org |access-date=5 December 2024 |language=en |date=21 November 2023}}</ref> |
|- | |- | ||
− | | {{dts|October 2023}} || {{w|Gaza Strip}} || {{w|Israeli Defense Forces}} || | + | | {{dts|October 2023}} || {{w|Gaza Strip}} || {{w|Israeli Defense Forces}} (IDF) || {{w|AI-assisted targeting in the Gaza Strip}} || Generative AI, automated decision-making, weapons, geospatial analysis, big data analytics || War/destruction || The {{w|Israeli Defense Forces}} (IDF) implements {{w|AI-assisted targeting in the Gaza Strip}} in the {{w|Israeli bombing of the Gaza Strip}}.<ref>{{cite web |last1=Katibah |first1=Leila |title=The Genocide Will Be Automated—Israel, AI and the Future of War|url=https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/ |website=merip.org |access-date=18 October 2024 |language=en |date=October 2024}}</ref> The IDF itself acknowledges the use of AI to accelerate targeting, increasing the tempo of operations and the pool of targets for assassination.<ref>{{cite web |last1=Echols |first1=Connor |title=Israel using secret AI tech to target Palestinians|url=https://responsiblestatecraft.org/israel-ai-targeting/ |website=responsiblestatecraft.org |access-date=13 November 2024 |language=en |date=3 April 2024}}</ref> The Israeli military uses the AI tool {{w|Pegasus (spyware)}} to locate and collect data on individuals. It feeds this data through automated targeting platforms like Where’s Daddy, Gospel, and Lavender, which use facial recognition, geolocation, and cloud computing to generate targets, including journalists, human rights defenders, academics, diplomats, union leaders, politicians, and heads of state.<ref>{{cite web |last1=Katibah |first1=Leila |title=The Genocide Will Be Automated—Israel, AI and the Future of War|url=https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/ |website=merip.org |access-date=18 October 2024 |language=en |date=October 2024}}</ref> Lavender relies on a surveillance network and assigns each inputted Gazan a score from 1 to 100, estimating their likelihood of being a {{w|Hamas}} militant.<ref>{{cite web |last1=Echols |first1=Connor |title=Israel using secret AI tech to target Palestinians|url=https://responsiblestatecraft.org/israel-ai-targeting/ |website=responsiblestatecraft.org |access-date=13 November 2024 |language=en |date=3 April 2024}}</ref> The tool is responsible for generating a kill list of as many as 37,000 people, and Israeli intelligence officials report the tool has a 10% error rate (this error rate could be greater, depending on the IDF’s classification of Hamas militants).<ref name="Al Jazeera">{{cite web |title=‘AI-assisted genocide’: Israel reportedly used database for Gaza kill lists {{!}} |url= https://www.aljazeera.com/news/2024/4/4/ai-assisted-genocide-israel-reportedly-used-database-for-gaza-kill-lists |website= Aljazeera.com|access-date=6 November 2024 |language=en |date=4 April 2024}}</ref> The Lavender score is fed into “Where’s Daddy”, which uses AI to determine when the individual has returned home, marking them for assassination.<ref>{{cite web |last1=Echols |first1=Connor |title=Israel using secret AI tech to target Palestinians|url=https://responsiblestatecraft.org/israel-ai-targeting/ |website=responsiblestatecraft.org |access-date=13 November 2024 |language=en |date=3 April 2024}}</ref> As of April 2024, the Israeli military is hoping to sell its targeting tools to foreign entities.<ref name="Al Jazeera">{{cite web |title=‘AI-assisted genocide’: Israel reportedly used database for Gaza kill lists {{!}} |url= 
https://www.aljazeera.com/news/2024/4/4/ai-assisted-genocide-israel-reportedly-used-database-for-gaza-kill-lists |website= Aljazeera.com|access-date=6 November 2024 |language=en |date=4 April 2024}}</ref> |
+ | |} | ||
+ | |||
+ | ==AI in elections== | ||
+ | |||
+ | With the increasing sophistication of artificial intelligence (AI) technologies, the threat of AI-generated misinformation and disinformation influencing elections has become a growing concern. This list highlights well-documented instances of AI involvement in elections, including deepfakes, bots, and other forms of AI-powered manipulation. Please note that this is not an exhaustive list, as the scope of AI's impact on elections is still evolving, and new cases may emerge. The following examples illustrate how AI has been used to influence electoral outcomes, undermine democratic processes, or manipulate public opinion. | ||
+ | |||
+ | {| class="sortable wikitable" | ||
+ | ! Election Year !! Country !! Name !! AI Type !! Details | ||
+ | |- | ||
+ | | {{dts|2016}} || {{w|United States}} || {{w|Russian interference in the 2016 United States elections}} || Deepfakes, natural language processing || {{w|Vladimir Putin}} orders an influence campaign of the {{w|2016 United States elections}}.<ref name="Fact Sheet">{{cite web |title=Fact Sheet: What We Know about Russia’s Interference Operations{{!}} |url=https://www.gmfus.org/news/fact-sheet-what-we-know-about-russias-interference-operations|website=gmfus.org |access-date=12 December 2024 |language=en |date=2019}}</ref> Russia’s goal is to exacerbate political division, destabilize democracy,<ref name="Fact Sheet">{{cite web |title=Fact Sheet: What We Know about Russia’s Interference Operations{{!}} |url=https://www.gmfus.org/news/fact-sheet-what-we-know-about-russias-interference-operations|website=gmfus.org |access-date=12 December 2024 |language=en |date=2019}}</ref> and influence American voting behavior in favor of {{w|Donald Trump}}.<ref name="NYU Study">{{cite web |title= Exposure to Russian Twitter Campaigns in 2016 Presidential Race Highly Concentrated, Largely Limited to Strongly Partisan Republicans{{!}} |url=https://www.nyu.edu/about/news-publications/news/2023/january/exposure-to-russian-twitter-campaigns-in-2016-presidential-race-.html|website=nyu.edu |access-date=12 December 2024 |language=en |date=9 January 2023}}</ref> Russia launches cyber-attacks and disinformation campaigns and dispatches an army of bots on social media sites.<ref name="Fact Sheet">{{cite web |title=Fact Sheet: What We Know about Russia’s Interference Operations{{!}} |url=https://www.gmfus.org/news/fact-sheet-what-we-know-about-russias-interference-operations|website=gmfus.org |access-date=12 December 2024 |language=en |date=2019}}</ref> | ||
+ | |- | ||
+ | | {{dts|2018}} || {{w|Brazil}} || Twitter bots ahead of {{w|2018 Brazilian general election}} || Natural language processing || In the months leading up to the {{w|2018 Brazilian general election}}, {{w|Twitter}} was rife with bots spreading misinformation and disinformation.<ref>{{cite web |last1=Allen |first1=Andrew |title=Bots in Brazil: The Activity of Social Media Bots in Brazilian Elections|url=https://www.wilsoncenter.org/blog-post/bots-brazil-the-activity-social-media-bots-brazilian-elections |website=wilsoncenter.org |access-date=13 December 2024 |language=en |date=17 August 2018}}</ref> Between June 22 and 23, over 20% of retweets related to the two leading candidates {{w|Luiz Inácio Lula da Silva}} and {{w|Jair Bolsonaro}} were performed by bots.<ref>{{cite web |last1=Allen |first1=Andrew |title=Bots in Brazil: The Activity of Social Media Bots in Brazilian Elections|url=https://www.wilsoncenter.org/blog-post/bots-brazil-the-activity-social-media-bots-brazilian-elections |website=wilsoncenter.org |access-date=13 December 2024 |language=en |date=17 August 2018}}</ref> | ||
+ | |- | ||
+ | | {{dts|2022}} || {{w|Kenya}} || Disinformation in the {{w|2022 Kenyan general election}} || Generative AI, deepfakes || From May 2022 to July 2022, the 11.8 million Kenyans on social media are exposed to disinformation, including fake polls, news, and videos.<ref>{{cite web |last1=Olivia |first1=Lilian |title=Disinformation was rife in Kenya’s 2022 elections|url=https://blogs.lse.ac.uk/africaatlse/2023/01/05/disinformation-was-rife-in-kenyas-2022-election/ |website=blogs.lse.ac.uk |access-date=12 December 2024 |language=en |date=5 January 2023}}</ref> A deepfake featuring a candidate covered in blood, implying they were a murderer, garners over 505,000 views on TikTok.<ref>{{cite web |last1=Olivia |first1=Lilian |title=Disinformation was rife in Kenya’s 2022 elections|url=https://blogs.lse.ac.uk/africaatlse/2023/01/05/disinformation-was-rife-in-kenyas-2022-election/ |website=blogs.lse.ac.uk |access-date=12 December 2024 |language=en |date=5 January 2023}}</ref> Other popular deepfakes include {{w|Barack Obama}} endorsing a candidate and doctored videos of candidates, ethnic groups, and the son of Kenya’s former president.<ref>{{cite web |last1=Mwai |first1=Peter |title=Kenya elections 2022: Misinformation circulating online|url=https://www.bbc.com/news/61591054 |website=bbc.com |access-date=12 December 2024 |language=en |date=29 May 2022}}</ref> The hundreds of false and misleading claims about the election are not specific to one party or campaign.<ref>{{cite web |last1=Kulundu |first1=James |title=Election campaigning ends in Kenya but disinformation battle drags on|url=https://factcheck.afp.com/doc.afp.com.32G83TD|website=factcheck.afp.com |access-date=12 December 2024 |language=en |date=8 August 2022}}</ref> | ||
+ | |- | ||
+ | | {{dts|2023}} || {{w|Nigeria}} || AI-generated disinformation in the {{w|2023 Nigerian presidential election}} || Generative AI, deepfakes|| The 2023 Nigerian elections cycle is preceded by AI-generated deepfakes flooding social media.<ref>{{cite web |last1=Davis |first1=Eric |title=Q&A: Hannah Ajakaiye on manipulated media in the 2023 Nigerian presidential elections, generative AI, and possible interventions|url=https://securityandtechnology.org/blog/qa-hannah-ajakaiye/ |website=securityandtechnology.org |access-date=12 December 2024 |language=en |date=18 March 2024}}</ref> In the months leading up to the election, AI-generated disinformation includes deepfakes of {{w|Nollywood}} actors and American celebrities, inorganic hashtags, and misinformation spread by social media influencers and bot accounts endorsing multiple nominees.<ref>{{cite web |last1=Orakwe |first1=Emmanuel |title=The challenges of AI-driven political disinformation in Nigeria|url=https://africainfact.com/the-challenges-of-ai-driven-political-disinformation-in-nigeria/ |website=africainfact.com |access-date=12 December 2024 |language=en |date=3 July 2024}}</ref> | ||
+ | |- | ||
+ | | {{dts|2023}} || {{w|Burkina Faso}} || {{w|Synthesia (company)}} used for creating deepfakes || Generative AI, deepfakes || Synthesia is a London based company, explicitly offering services for creating marketing material and internal presentations.<ref>{{cite web |last1=Prada |first1=Luis |title=AI Avatars Are Making Fake News in Venezuela, Say Real People|url=https://www.vice.com/en/article/fake-news-ai-avatars-synthesia/ |website=vice.com |access-date=11 December 2024 |language=en |date=17 October 2023}}</ref> However, the technology can be used to create deepfakes, and videos surface in 2023 in {{w|Burkina Faso}}, claiming international support of the West African Junta.<ref>{{cite web |last1=Prada |first1=Luis |title=AI Avatars Are Making Fake News in Venezuela, Say Real People|url=https://www.vice.com/en/article/fake-news-ai-avatars-synthesia/ |website=vice.com |access-date=11 December 2024 |language=en |date=17 October 2023}}</ref> Synthesia bans the accounts that created the 2023 election propaganda videos and strengthens their content review processes.<ref>{{cite web |last1=Ganguly |first1=Manisha |title=‘It’s not me, it’s just my face’: the models who found their likenesses had been used in AI propaganda|url=https://www.theguardian.com/technology/2024/oct/16/its-not-me-its-just-my-face-the-models-who-found-their-likenesses-had-been-used-in-ai-propaganda |website=theguardian.com |access-date=11 December 2024 |language=en |date=16 October 2024}}</ref> Synthesia would enact policy changes in 2024, allowing the creation and distribution of political content only for users with an enterprise account and a custom AI avatar in hopes of protecting global democratic processes.<ref name="Synthesia Policy">{{cite web |title=Introducing new guidelines for political content on Synthesia{{!}} |url=https://www.synthesia.io/post/introducing-new-guidelines-for-political-content-on-synthesia|website=synthesia.io |access-date=11 December 2024 |language=en |date=2 December 2024}}</ref> | ||
+ | |- | ||
+ | | {{dts|2024}} || {{w|United States}} || Deepfakes in the {{w|2024 United States presidential election}} || Generative AI, deepfakes || Deepfakes are not restricted by the United States {{w|Federal Election Commission}} (FEC), and they are circulated during the lead-up to the 2024 US election.<ref>{{cite web |last1=Beaumont |first1=Hilary |title=’A lack of trust’: How deepfakes and AI could rattle the US elections|url=https://www.aljazeera.com/news/2024/6/19/a-lack-of-trust-how-deepfakes-and-ai-could-rattle-the-us-elections |website=aljazeera.com |access-date=13 December 2024 |language=en |date=19 June 2024}}</ref> The {{w|Ron DeSantis}} campaign releases an AI-generated video of {{w|Donald Trump}} hugging {{w|Anthony Fauci}},<ref>{{cite web |last1=Beaumont |first1=Hilary |title=’A lack of trust’: How deepfakes and AI could rattle the US elections|url=https://www.aljazeera.com/news/2024/6/19/a-lack-of-trust-how-deepfakes-and-ai-could-rattle-the-us-elections |website=aljazeera.com |access-date=13 December 2024 |language=en |date=19 June 2024}}</ref> the Trump campaign spreads a deepfake of {{w|Ron DeSantis}} in a women’s suit, and the {{w|Republican National Committee}} releases a video of AI-generated images, including China invading Taiwan, claiming to depict a future in which {{w|Joe Biden}} is re-elected.<ref>{{cite web |last1=Nehamas |first1=Nicholas |title=DeSantis Campaign Uses Apparently Fake Images to Attack Trump on Twitter|url=https://www.nytimes.com/2023/06/08/us/politics/desantis-deepfakes-trump-fauci.html |website=nytimes.com |access-date=13 December 2024 |language=en |date=8 June 2023}}</ref> Nearly 5,000 people in {{w|New Hampshire}} receive a call from an AI-generated voice sounding like Joe Biden telling them not to vote.<ref>{{cite web |last1=Beaumont |first1=Hilary |title=’A lack of trust’: How deepfakes and AI could rattle the US elections|url=https://www.aljazeera.com/news/2024/6/19/a-lack-of-trust-how-deepfakes-and-ai-could-rattle-the-us-elections |website=aljazeera.com |access-date=13 December 2024 |language=en |date=19 June 2024}}</ref> | ||
+ | |- | ||
+ | | {{dts|2024}} || {{w|India}} || Misinformation and disinformation in the {{w|2024 elections in India}} || Generative AI, deepfakes || Political parties use AI to generate content leading up to the {{w|2024 elections in India}}.<ref>{{cite web |last1= Majumdar|first1= Anushree |title= Artificial Intelligence has a starring role in India’s 18th General Elections |url=https://upgradedemocracy.de/en/artificial-intelligence-has-a-starring-role-in-indias-18th-general-elections/ |website=upgradedemocracy.de |access-date=13 December 2024 |language=en |date=30 April 2024}}</ref> The {{w|Indian National Congress}} (INC) releases two videos featuring deepfakes of Bollywood stars with large followings criticizing {{w|Narendra Modi}} (who belongs to the {{w|Bharatiya Janata Party}} (BJP)), which garner millions of views.<ref>{{cite web |last1= Majumdar|first1= Anushree |title= Artificial Intelligence has a starring role in India’s 18th General Elections |url=https://upgradedemocracy.de/en/artificial-intelligence-has-a-starring-role-in-indias-18th-general-elections/ |website=upgradedemocracy.de |access-date=13 December 2024 |language=en |date=30 April 2024}}</ref> The videos claim Modi fails to keep campaign promises and is harming the economy; the actors whose likenesses were used lodge police complaints against the social media handles.<ref>{{cite web |last1= Majumdar|first1= Anushree |title= Artificial Intelligence has a starring role in India’s 18th General Elections |url=https://upgradedemocracy.de/en/artificial-intelligence-has-a-starring-role-in-indias-18th-general-elections/ |website=upgradedemocracy.de |access-date=13 December 2024 |language=en |date=30 April 2024}}</ref> The {{w|Dravida Munnetra Kazhagam}} (DMK) party, which rules the southern state of {{w|Tamil Nadu}}, plays AI-generated videos of its late former leader at political events to garner support.<ref>{{cite web |last1=Sharma |first1=Yashraj |title=Deepfake democracy: Behind the AI trickery shaping India’s 2024 election|url=https://www.aljazeera.com/news/2024/2/20/deepfake-democracy-behind-the-ai-trickery-shaping-indias-2024-elections|website=aljazeera.com |access-date=16 December 2024 |language=en |date=20 February 2024}}</ref> Over half of India’s population of over 1 billion people are active internet users.<ref>{{cite web |last1=Mukherjee |first1=Mitali |title=AI deepfakes, bad laws - and a big fat Indian election|url=https://reutersinstitute.politics.ox.ac.uk/news/ai-deepfakes-bad-laws-and-big-fat-indian-election |website=reutersinstitute.politics.ox.ac.uk |access-date=16 December 2024 |language=en |date=19 December 2023}}</ref> The 2024 {{w|Global Risks Report}} cites mis- and disinformation as the most severe global risk and India as the country facing the highest risk in this regard.<ref>{{cite web |last1=Mukherjee |first1=Mitali |title=AI deepfakes, bad laws - and a big fat Indian election|url=https://reutersinstitute.politics.ox.ac.uk/news/ai-deepfakes-bad-laws-and-big-fat-indian-election |website=reutersinstitute.politics.ox.ac.uk |access-date=16 December 2024 |language=en |date=19 December 2023}}</ref> | ||
+ | |- | ||
+ | | {{dts|2024}} || {{w|Venezuela}} || {{w|Synthesia (company)}} used for creating deepfakes to influence {{w|2024 Venezuelan presidential election}} || Generative AI, deepfakes || Synthesia is a London based company, explicitly offering services for creating marketing material and internal presentations.<ref>{{cite web |last1=Prada |first1=Luis |title=AI Avatars Are Making Fake News in Venezuela, Say Real People|url=https://www.vice.com/en/article/fake-news-ai-avatars-synthesia/ |website=vice.com |access-date=11 December 2024 |language=en |date=17 October 2023}}</ref> However, the technology can be used to create deepfakes and is used by the Venezuelan state prior to the 2024 elections to spread disinformation on behalf of the dictator {{w|Nicolás Maduro}}.<ref>{{cite web |last1=Prada |first1=Luis |title=AI Avatars Are Making Fake News in Venezuela, Say Real People|url=https://www.vice.com/en/article/fake-news-ai-avatars-synthesia/ |website=vice.com |access-date=11 December 2024 |language=en |date=17 October 2023}}</ref> Two pro-Venezuelan videos surface in 2023 ahead of the 2024 election, featuring AI deepfakes with real actors’ likenesses, boasting a healthy Venezuelan economy.<ref>{{cite web |last1=Ganguly |first1=Manisha |title=‘It’s not me, it’s just my face’: the models who found their likenesses had been used in AI propaganda|url=https://www.theguardian.com/technology/2024/oct/16/its-not-me-its-just-my-face-the-models-who-found-their-likenesses-had-been-used-in-ai-propaganda |website=theguardian.com |access-date=11 December 2024 |language=en |date=16 October 2024}}</ref> Synthesia would enact policy changes in 2024, allowing the creation and distribution of political content only for users with an enterprise account and a custom AI avatar in hopes of protecting global democratic processes.<ref name="Synthesia Policy">{{cite web |title=Introducing new guidelines for political content on Synthesia{{!}} |url=https://www.synthesia.io/post/introducing-new-guidelines-for-political-content-on-synthesia|website=synthesia.io |access-date=11 December 2024 |language=en |date=2 December 2024}}</ref> | ||
+ | |- | ||
+ | | {{dts|2024}} || {{w|Pakistan}} || Deepfakes in the {{w|2024 Pakistani general election}} || Generative AI, deepfakes || Deepfakes are at the center of the digital debate in the Pakistani elections. In January, 110 million of the 240 million Pakistani people are active internet users and vulnerable to the deepfakes that arise before the election.<ref name="France 24">{{cite web |title=Deepfakes weaponized to target Pakistan’s women leaders{{!}} |url=https://www.france24.com/en/live-news/20241203-deepfakes-weaponised-to-target-pakistan-s-women-leaders|website=france24.com |access-date=16 December 2024 |language=en |date=12 March 2024}}</ref> {{w|Imran Khan}}, the ex-prime minister, campaigns from prison, his team using an AI tool to generate speeches and videos in his likeness.<ref name="France 24">{{cite web |title=Deepfakes weaponized to target Pakistan’s women leaders{{!}} |url=https://www.france24.com/en/live-news/20241203-deepfakes-weaponised-to-target-pakistan-s-women-leaders|website=france24.com |access-date=16 December 2024 |language=en |date=12 March 2024}}</ref> | ||
|} | |} | ||
Line 55: | Line 192: | ||
==={{w|Pegasus (spyware)}}=== | ==={{w|Pegasus (spyware)}}=== | ||
− | The tool is downloaded onto the target's phone and gives the user full access and control of the device.<ref name="Pegasus Project: Forbidden Stories">{{cite web |title=About the Pegasus Project{{!}} |url=https://forbiddenstories.org/about-the-pegasus-project/|website=forbiddenstories.org |access-date=9 November 2024 |language=en |date=18 July 2021}}</ref> State governments have used its to spy on dissidents and journalists<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> | + | The tool is downloaded onto the target's phone and gives the user full access and control of the device.<ref name="Pegasus Project: Forbidden Stories">{{cite web |title=About the Pegasus Project{{!}} |url=https://forbiddenstories.org/about-the-pegasus-project/|website=forbiddenstories.org |access-date=9 November 2024 |language=en |date=18 July 2021}}</ref> State governments have used it to spy on dissidents and journalists.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> Examples of state use: |
{| class="sortable wikitable" | {| class="sortable wikitable" | ||
! Perpetrating State !! Use | ! Perpetrating State !! Use | ||
Line 65: | Line 202: | ||
| {{w|Mexico}} || To monitor over 25 journalists looking into corruption, including Celio Pineda, whose device was marked for surveillance just weeks before his killing in 2017.<ref name="Pegasus Project: Amnesty International">{{cite web |title=TITLE{{!}} |url=https://www.amnesty.org/en/latest/press-release/2021/07/the-pegasus-project/|website=amnesty.org |access-date=9 November 2024 |language=en |date=19 July 2021}}</ref> | | {{w|Mexico}} || To monitor over 25 journalists looking into corruption, including Celio Pineda, whose device was marked for surveillance just weeks before his killing in 2017.<ref name="Pegasus Project: Amnesty International">{{cite web |title=TITLE{{!}} |url=https://www.amnesty.org/en/latest/press-release/2021/07/the-pegasus-project/|website=amnesty.org |access-date=9 November 2024 |language=en |date=19 July 2021}}</ref> | ||
|- | |- | ||
− | | {{w|Morocco}} || To surveil and capture journalist {{w|Omar Radi}} after he criticized the government.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> | + | | {{w|Morocco}} || To surveil and capture journalist {{w|Omar Radi}} after he openly criticized the government.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> |
|- | |- | ||
| {{w|Spain}} || To spy on Catalan separatists.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> | | {{w|Spain}} || To spy on Catalan separatists.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> | ||
Line 90: | Line 227: | ||
|- | |- | ||
| {{w|Israel}} || {{w|AI-assisted targeting in the Gaza Strip}}.<ref>{{cite web |last1=Katibah |first1=Leila |title=The Genocide Will Be Automated—Israel, AI and the Future of War|url=https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/ |website=merip.org |access-date=18 October 2024 |language=en |date=October 2024}}</ref> | | {{w|Israel}} || {{w|AI-assisted targeting in the Gaza Strip}}.<ref>{{cite web |last1=Katibah |first1=Leila |title=The Genocide Will Be Automated—Israel, AI and the Future of War|url=https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/ |website=merip.org |access-date=18 October 2024 |language=en |date=October 2024}}</ref> | ||
+ | |} | ||
+ | |||
+ | ===Gospel=== | ||
+ | The Gospel is an AI-driven target creation platform that provides autorecommendations for attacking individuals.<ref>{{cite web |last1=Davies |first1=Harry |title=‘The Gospel’: how Israel uses AI to select bombing targets in Gaza|url=https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets |website=theguardian.com |access-date=4 December 2024 |language=en |date=1 December 2023}}</ref> | ||
+ | {| class="sortable wikitable" | ||
+ | ! Perpetrating State !! Use | ||
+ | |- | ||
+ | | {{w|Israel}} || {{w|AI-assisted targeting in the Gaza Strip}}.<ref>{{cite web |last1=Katibah |first1=Leila |title=The Genocide Will Be Automated—Israel, AI and the Future of War|url=https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/ |website=merip.org |access-date=18 October 2024 |language=en |date=October 2024}}</ref> The application accelerates the amount of targets created for the {{w|Israeli Defense Forces}} to locate and assassinate.<ref>{{cite web |last1=Davies |first1=Harry |title=‘The Gospel’: how Israel uses AI to select bombing targets in Gaza|url=https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets |website=theguardian.com |access-date=4 December 2024 |language=en |date=1 December 2023}}</ref> | ||
|} | |} | ||
Line 100: | Line 245: | ||
|} | |} | ||
− | ===Palantir=== | + | ==={{w|Palantir Technologies}}=== |
A surveillance and tracking tool.<ref>{{cite web |last1=Del Villar |first1=Ashley |last2=Hayes |first2=Myaisha |title=How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now|url=https://www.aclu.org/news/privacy-technology/how-face-recognition-fuels-racist-systems-of-policing-and-immigration-and-why-congress-must-act-now |website=aclu.org |access-date=8 November 2024 |language=en |date=22 July 2021}}</ref> | A surveillance and tracking tool.<ref>{{cite web |last1=Del Villar |first1=Ashley |last2=Hayes |first2=Myaisha |title=How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now|url=https://www.aclu.org/news/privacy-technology/how-face-recognition-fuels-racist-systems-of-policing-and-immigration-and-why-congress-must-act-now |website=aclu.org |access-date=8 November 2024 |language=en |date=22 July 2021}}</ref> | ||
{| class="sortable wikitable" | {| class="sortable wikitable" | ||
− | ! Perpetrating | + | ! Perpetrating Entity !! Use |
|- | |- | ||
− | | {{w|United States}}, {{w|Department of Health and Human Services}} || | + | | {{w|United States}}, {{w|Department of Health and Human Services}} || Tracking and surveilling migrants.<ref>{{cite web |last1=Del Villar |first1=Ashley |last2=Hayes |first2=Myaisha |title=How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now|url=https://www.aclu.org/news/privacy-technology/how-face-recognition-fuels-racist-systems-of-policing-and-immigration-and-why-congress-must-act-now |website=aclu.org |access-date=8 November 2024 |language=en |date=22 July 2021}}</ref> |
|- | |- | ||
− | | {{w|United States}}, {{w|Chicago Police Department}} || {{w|Predictive Policing}}.<ref>{{cite web |last1=Peteranderl |first1=Sonja |last2=Spiegel |first2=Der |title=Under Fire: The Rise and Fall of Predictive Policing|url=https://www.acgusa.org/wp-content/uploads/2020/03/2020_Predpol_Peteranderl_Kellen.pdf |website=acgusa.org |access-date=13 November 2024 |language=en |date=January 2020}}</ref> | + | | {{w|United States}}, {{w|Chicago Police Department}} || {{w|Predictive Policing}} until shut down in 2019.<ref>{{cite web |last1=Peteranderl |first1=Sonja |last2=Spiegel |first2=Der |title=Under Fire: The Rise and Fall of Predictive Policing|url=https://www.acgusa.org/wp-content/uploads/2020/03/2020_Predpol_Peteranderl_Kellen.pdf |website=acgusa.org |access-date=13 November 2024 |language=en |date=January 2020}}</ref> |
|- | |- | ||
− | | {{w|German law enforcement}} || The police of Hamburg and Hesse | + | | {{w|German law enforcement}} || The police of Hamburg and Hesse use Palantir to surveil and enact predictive policing until ruled unconstitutional in February 2023.<ref>{{cite web |last1=Killeen |first1=Molly |title=German Constitutional Court strikes down predictive algorithms for policing|url=https://www.euractiv.com/section/artificial-intelligence/news/german-constitutional-court-strikes-down-predictive-algorithms-for-policing/ |website=euractiv.com |access-date=13 November 2024 |language=en |date=16 February 2023}}</ref> |
|} | |} | ||
==={{w|Clearview AI}}=== | ==={{w|Clearview AI}}=== | ||
− | + | Clearview AI is a facial recognition software employed by law enforcement agencies and private companies.<ref>{{cite web |last1=Scott |first1=Jeramie |title=Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?|url=https://epic.org/is-ice-using-facial-recognition-to-track-people-who-allegedly-threaten-their-agents/ |website=epic.org |access-date=8 November 2024 |language=en |date=17 March 2022}}</ref> While compiling data, the company scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules.<ref>{{cite web |last1=Lyons |first1=Kim |title=ICE just signed a contract with facial recognition company Clearview AI|url=https://www.theverge.com/2020/8/14/21368930/clearview-ai-ice-contract-privacy-immigration |website=theverge.com |access-date=9 November 2024 |language=en |date=14 August 2020}}</ref> | |
{| class="sortable wikitable" | {| class="sortable wikitable" | ||
− | ! Perpetrating | + | ! Perpetrating Entity !! Use |
|- | |- | ||
| {{w|U.S. Immigration and Customs Enforcement}} || Surveilling immigrants.<ref name="ACLU FOIA">{{cite web |title=Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software{{!}} |url=https://www.immigrantdefenseproject.org/wp-content/uploads/2020/10/2020.10.19-ACLU-NC-JFL-IDP-Mijente-FOIA-re-Clearview-AI_.pdf|website=immigrantdefenseproject.org |access-date=8 November 2024 |language=en |date=19 October 2020}}</ref> | | {{w|U.S. Immigration and Customs Enforcement}} || Surveilling immigrants.<ref name="ACLU FOIA">{{cite web |title=Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software{{!}} |url=https://www.immigrantdefenseproject.org/wp-content/uploads/2020/10/2020.10.19-ACLU-NC-JFL-IDP-Mijente-FOIA-re-Clearview-AI_.pdf|website=immigrantdefenseproject.org |access-date=8 November 2024 |language=en |date=19 October 2020}}</ref> | ||
Line 121: | Line 266: | ||
===Oculus=== | ===Oculus=== | ||
+ | A surveillance technology that can analyze over 200,000 photos a day (as opposed to 200 a day by a human) and looks for banned content in online images and videos.<ref>{{cite web |last1=Litvinova |first1=Dasha |title=The cyber gulag: How Russia tracks, censors and controls its citizens|url=https://apnews.com/article/russia-crackdown-surveillance-censorship-war-ukraine-internet-dab3663774feb666d6d0025bcd082fba |website=apnews.com |access-date=15 November 2024 |language=en |date=23 May 2023}}</ref> | ||
{| class="sortable wikitable" | {| class="sortable wikitable" | ||
! Perpetrating State !! Use | ! Perpetrating State !! Use | ||
|- | |- | ||
| {{w|Russia}} || Monitoring and surveilling LGBTQ citizens.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> | | {{w|Russia}} || Monitoring and surveilling LGBTQ citizens.<ref name="European Parliament In-Depth Analysis">{{cite web |title=Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights{{!}} |url=https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf|website=europarl.europe.eu |access-date=6 November 2024 |language=en |date=May 2024}}</ref> | ||
+ | |} | ||
+ | ==={{w|Synthesia (company)}}=== | ||
+ | The London based company's stated services are creating marketing material and internal presentations but have been used to create politically charged deep fakes in breach of its terms.<ref>{{cite web |last1=Prada |first1=Luis |title=AI Avatars Are Making Fake News in Venezuela, Say Real People|url=https://www.vice.com/en/article/fake-news-ai-avatars-synthesia/ |website=vice.com |access-date=11 December 2024 |language=en |date=17 October 2023}}</ref> | ||
+ | {| class="sortable wikitable" | ||
+ | ! Perpetrating State !! Use | ||
+ | |- | ||
+ | | {{w|China}} || Chinese state-aligned actors used Synthesia's AI-generated broadcasters to disseminate Chinese Communist Party propaganda.<ref>{{cite web |last1=Antoniuk |first1=Daryna |title=Deepfake news anchors spread Chinese propaganda on social media|url=https://therecord.media/deepfake-news-anchors-spread-chinese-propaganda-on-social-media |website=therecord.media |access-date=11 December 2024 |language=en |date=9 February 2023}}</ref> | ||
+ | |- | ||
+ | | {{w|Burkina Faso}} || Deepfakes surface around the 2023 elections in {{w|Burkina Faso}}, claiming international support of the West African Junta.<ref>{{cite web |last1=Prada |first1=Luis |title=AI Avatars Are Making Fake News in Venezuela, Say Real People|url=https://www.vice.com/en/article/fake-news-ai-avatars-synthesia/ |website=vice.com |access-date=11 December 2024 |language=en |date=17 October 2023}}</ref> | ||
|} | |} | ||
Latest revision as of 16:13, 30 December 2024
Full timeline
Inclusion criteria
The timeline documents incidents of human rights violations in which AI played a significant role. A row is included only if it meets the following criteria; a minimal illustrative filter sketch follows the list.
- AI involvement: the incident must involve the significant use of AI technologies.
- Human rights impact: the incident must have violated human rights as defined by international law and standards such as the Universal Declaration of Human Rights (UDHR) and subsequent treaties. Examples of human rights abuses include privacy violations, war and destruction, discrimination based on race, ethnicity, religion, gender, or other protected characteristics, restrictions on association and movement, inhibition of expression, denial of asylum, and arbitrary detention.
- State or corporate responsibility: the incident must involve a state or corporate entity that used AI technology to abuse human rights.
- Verifiable evidence: the incident must be supported by credible and verifiable evidence from sources such as news articles, human rights reports, official documents, and academic research.
- Geographical range: global; incidents from any country are eligible.
- Relevance or significance: incidents involving significant human rights violations are prioritized.
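For readers who prefer a concrete statement of the screening logic, the sketch below expresses the criteria as a simple filter. It is illustrative only: the Incident structure and its field names are hypothetical and are not drawn from any dataset actually used to build this timeline.

```python
# Illustrative sketch only: the Incident structure and field names are
# hypothetical, not part of any real dataset behind this timeline.
from dataclasses import dataclass, field


@dataclass
class Incident:
    uses_ai: bool                                          # significant AI involvement
    rights_violated: list = field(default_factory=list)    # e.g. ["privacy"]
    responsible_entity: str = ""                           # state or corporate actor
    sources: list = field(default_factory=list)            # verifiable evidence
    significance: str = "low"                              # "low", "medium", or "high"


def meets_inclusion_criteria(incident: Incident) -> bool:
    """Return True only if every inclusion criterion listed above is satisfied."""
    return (
        incident.uses_ai
        and len(incident.rights_violated) > 0
        and bool(incident.responsible_entity)
        and len(incident.sources) > 0
        and incident.significance in ("medium", "high")
    )


# Example: a hypothetical, well-sourced state surveillance incident passes the filter.
example = Incident(
    uses_ai=True,
    rights_violated=["privacy", "presumption of innocence"],
    responsible_entity="(hypothetical) state agency",
    sources=["news article", "human rights report"],
    significance="high",
)
print(meets_inclusion_criteria(example))  # True
```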
Timeline of AI ethics violations
Onset | Region | Perpetrators | Name | AI Type | Right Violated | Details |
---|---|---|---|---|---|---|
2008 | United States | United States Law Enforcement Agencies | Predictive Policing in the United States | Predictive algorithmic scoring | Privacy, presumption of innocence | Predictive policing refers to the use of algorithms to analyze past criminal activity data, identify patterns, and predict and prevent future crimes.[1] However, police departments are only able to use data from reported crimes, leading to the accentuation of past prejudices in arrests and over-policing of Black and Latinx communities in the United States.[2] Predictive policing also threatens the Fourth Amendment to the United States Constitution, requiring reasonable suspicion before arrest.[3] The LA Police Department starts working with Federal Agencies to explore predictive policing in 2008; the New York and Chicago Police Departments would start testing their systems in 2012.[4] The Chicago Police Department would create the Strategic Subject List (SSL) algorithm in 2012 that assigns individuals a score based on the likelihood of involvement in a future crime.[5] In 2016, the RAND Corporation would find that people on this list were no more or less likely to be involved in a shooting than a control group but were more likely to be arrested for one.[6] By 2018, almost 400,000 people had an SSL risk score, disproportionately men of color.[7] Predictive policing would be shut down in Chicago and LA in 2019 and 2020 due to evidence of its inefficacy.[8] |
2011 | Brazil | Brazil city and state governments | Facial Recognition Technology in Brazil | Image recognition | Privacy, presumption of innocence | Brazil implements Facial Recognition Technology (FRT) in public spaces in 2011, developed in the context of "smart city" programs and operating with no legal framework for the use of FRT.[9] FRT provides real-time analytics to expedite the identification of criminals, stolen cars, missing persons, and lost objects.[10] The country would have three FRT state and city projects in 2018 and nearly 300 in 2024.[11] FRT disproportionately misidentifies black Brazilians, who make up 56% of the population.[12] In 2019, 90% of people arrested for nonviolent crimes on the basis of FRT would be black,[13] indicating severe disproportionality in FRT's influence on arrest and detention. |
2011 | England | London's Metropolitan Police Service | Gangs Matrix | Predictive algorithmic scoring | Privacy, presumption of innocence | In response to the 2011 England riots, the Metropolitan Police Service creates and implements the Gangs Matrix, an algorithmic database containing individuals thought to be in a gang and likely to commit violence.[14] The algorithm considers data such as social media activity, ethnicity, and known criminal activity.[14] The term 'gang' is not defined, and a police officer only needs two pieces of 'verifiable intelligence' to place a subject on the list.[15] Sharing a YouTube video containing gang signs or any other social media interaction with gang-related symbols could land an individual on the database, targeting certain subcultures.[15] The database would be 80% individuals between the ages of 16 and 24, 78% black males, 75% victims of violence, 35% people who never committed a serious offense, and 15% minors.[14] In October 2017, the Matrix would hold 3,806 people, some as young as 12,[15] whose status would lead to struggles to find employment opportunities, government benefits, and housing.[14] The Matrix, which does not process personal data fairly, retains data on individuals at zero or low risk, and retains data longer than necessary, would be discontinued in February 2024.[14] |
2013 | China | Government of China | Social Credit System | Big data analytics | Privacy, freedom of association and movement, presumption of innocence | The government of China begins planning a social credit system in 2012 to be rolled out by 2020, measuring the trustworthiness of individuals.[16] In 2014 the State Council of the People's Republic of China would release the system's founding document, planning to roll out pilot systems.[17] While the legal framework for the state-wide social credit system would not be released until 2022, the system inspired local and city governments to implement their own credit systems, promoting state-sanctioned moral values through incentives and punishments.[18] In 2013, Rongcheng would implement one of the better-known social credit scoring systems. The rankings include traffic tickets, city-level rewards, donations,[19] shopping habits, and online speech,[20] which would affect eligibility for government jobs, placement of children in desired schools, and credit card applications.[21] Those deemed trustworthy can receive discounts on heating bills and favorable bank loans.[22] By 2017 the Supreme People's Court would have blacklisted more than 7 million individuals, banning them from air travel, high-speed trains, and luxury purchases.[23] |
2015 | International | Law enforcement use of Cellebrite | Cellebrite AI-powered products used for surveillance | Big data analytics, image recognition | Right to privacy, presumption of innocence | Israeli firm and law enforcement contractor Cellebrite begins incorporating AI-powered evidence-processing tools in 2015 with image classification and in 2018 with its service "Pathfinder."[24] The company eventually offers other AI-enhanced digital evidence analysis tools, including Guardian, Smart Search, Physical Analyzer, Autonomy, and Inspector.[24] In 2019 the company would publicly announce its Universal Forensic Extraction Device (UFED), to be implemented by law enforcement to unlock and extract files from any iOS device and recent Android (operating system) phones.[25] Cellebrite would be the most prominent maker of UFEDs, enabling police to access information from phones and cloud services.[26] Cellebrite's data extraction tools would be involved in multiple privacy and surveillance controversies globally. In the United States, domestic police and U.S. Customs and Border Protection would claim the authority to search and gain full access to phones without warrants.[27] In 2024, Serbian authorities would use Cellebrite tools to unlock the phones of activists and journalists and infect them with NoviSpy spyware, obtaining their personal information.[28] |
2016 | Australia | Centrelink | Robodebt scheme | Predictive algorithmic scoring | Privacy | The Australian agency Centrelink enacts an automated decision-making system to identify overpayments to welfare recipients.[29] The system, named Robodebt, is not tested prior to rollout and generates inaccurate debt notices.[30] Over 500,000 Australians on welfare are affected, some incorrectly told they owe thousands of dollars to the government.[31] There would be multiple suicides and reports of depression and anxiety over the payment notices.[32] The government would defend the system until a 2019 court decision ruled Robodebt unlawful.[33] |
2016 | Xinjiang, China | Government of China | Mass Surveillance in China of Ethnic Minorities | Image recognition, natural language processing, voice recognition, big data analytics, automated decision-making, geospatial analytics | Privacy, presumption of innocence, freedom of association and movement | Chinese police and other government officials use the AI-powered application Integrated Joint Operations Platform (IJOP) for mass surveillance of the predominantly Turkic Muslim and Uyghur population of Xinjiang.[34] The IJOP collects personal information, location, identities, electricity and gas usage, personal relationships, and DNA samples (which can be used to gather ethnicity) then flags suspicious individuals, activities, or circumstances.[34] The IJOP defines foreign contacts, donations to mosques, lack of socialization with neighbors, and frequent usage of the front door as suspicious.[34] Individuals deemed suspicious are investigated and can be sent to mass political education camps and facilities where millions of Turkic Muslims and Uyghurs are subjected to movement restriction, political indoctrination, and religious repression.[34] Techno-authoritarian surveillance occurs throughout China, contrary to the internationally guaranteed rights to privacy. |
August 2017 | Myanmar | Facebook (Meta Platforms) | Facebook's role in the Rohingya genocide | Content recommendation algorithm, natural language processing | War/destruction | Myanmar security forces begin a campaign of ethnic cleansing of the Rohingya People in Rakhine State, causing 700,000 to flee from the systematic murder, rape, and burning of homes.[35] Meta Platforms (formerly Facebook) is increasingly turning towards AI to detect "hate speech."[36] However, its detection algorithms proactively amplify content that incites violence against the Rohingya people, who already face long-standing discrimination.[36] Facebook favors inflammatory content in its AI-powered engagement-based algorithmic systems, which power news feeds, ranking, recommendation, and group features.[35] Internal Meta studies dating back to 2012 indicate the corporation's awareness that its algorithms could lead to real-world harm. In 2014, Myanmar authorities even temporarily blocked Facebook due to an outbreak of ethnic violence in Mandalay.[35] A 2016 Meta study documented acknowledgment that the recommendation system can increase extremism.[35] Facebook facilitates peer-to-peer interaction affirming harmful narratives targeting the Rohingya, hosts massive disinformation campaigns originated by the Myanmar military, and knowingly proliferates a product that exacerbates political division and the spread of disinformation.[37] A 2022 Global Witness investigation would reveal Meta's failure to detect blatant anti-Rohingya and anti-Muslim content, even after Mark Zuckerberg promises in 2018 to increase the number of Burmese-speaking content moderators.[36] Facebook's content-shaping algorithm is designed to maximize user engagement and, therefore, profit, in this case contributing to the genocide of the Rohingya.[36] |
October 2018 | European Union border | European Union | iBorderCtrl | Image recognition, predictive algorithmic scoring | Privacy, presumption of innocence, freedom of association and movement | In October 2018, the European Union announces the funding of a new automated border control system called iBorderCtrl, to be piloted in Hungary, Greece, and Latvia.[38] The program is administered to travelers at the border by a virtual border guard and analyzes "micro-gestures" to determine if the traveler is lying.[39] "Honest" travelers can cross the border with a code, and those deemed to be lying are transferred to human guards for further questioning.[38] The development of the technology lacks transparency.[38] It also relies on the widely contested science of "affect recognition" and on facial recognition technology, which has been shown to be inherently biased.[38] The trial would end in 2019, and a top EU court would hear a case against the technology brought by digital rights activists in February 2021.[40] |
2018 | Sri Lanka | Facebook | Facebook algorithm spreads Islamophobic content in Sri Lanka | Content recommendation algorithm | War/destruction, freedom of expression | A viral Facebook rumor spreads across Sri Lanka, falsely claiming that police seized 23,000 sterilization pills from a Muslim pharmacist in Ampara, supposedly unveiling a Muslim plot to sterilize and overthrow the Sinhalese majority.[41] Violence ensues, including the beating of a Muslim shop owner and the destruction of his shop in Ampara.[42] More viral Facebook videos calling for violence spark the destruction of Muslim-owned shops and homes, and the death of 27-year-old Abdul Basith, trapped inside a burning storefront in Digana.[43] Facebook officials ignore repeated warnings of potential violence, and the app continues to push the inflammatory content that keeps people on the site.[44] Sri Lanka blocks Facebook (including platforms owned by Facebook, such as WhatsApp) for three days in March in response to the calls to attack Muslims.[45] |
2018 | India | Government of India | Mass surveillance of Indian citizens with facial recognition technology using Aadhaar data | Image recognition | Privacy, freedom of association and movement | The Indian government rolls out Facial Recognition Technology (FRT) beginning with telecommunication companies using data collected by the Unique Identification Authority of India (UIDAI).[46] Before 2009, there was no centralized identification in India, sparking the creation of Aadhaar, a unique 12-digit ID number assigned to over 1 billion Indian citizens.[47] The Aadhaar database includes biometric and demographic information, which law enforcement can use for FRT.[48] FRT using Aadhaar data would be used for citizens to access public benefits and services, and FRT would infiltrate India's telecommunications and travel, policing, public health, welfare programs, education, and elections.[49] These FRT systems are used for racial surveillance[50] and have higher inaccuracy in racially homogenous groups.[51] |
2018 | United States | Amazon Rekognition | Biased facial recognition software | Image recognition | Privacy | It is reported in 2018 that Amazon’s facial recognition software, Amazon Rekognition, has a disproportionate error rate when identifying women and people of color.[52] The Amazon (company) service is offered at a price to the public but heavily marketed towards US law enforcement agencies.[53] Amazon lists the city of Orlando, Florida, and the Washington County Sheriff’s Office in Oregon among its customers.[54] Amazon claims the software can track people in real-time through surveillance cameras, scan body camera footage, and identify up to 100 faces in a single image, which is pertinent in an era of unprecedented protest attendance.[55] In 2019, a Massachusetts Institute of Technology researcher would also find higher error rates in classifying darker-skinned women than lighter-skinned men.[56] |
June 2019 | International | YouTube | AI discrimination in monetizing LGBTQ+ YouTube videos | Content recommendation algorithm | Freedom of expression, right to livelihood | After an investigation, YouTube content creators allege that YouTube's AI monetization algorithm flags videos with LGBTQ-related words as non-advertiser friendly, monetarily punishing videos tagged as "gay," "transgender," and "lesbian."[57] Due to a lack of consistently available human moderators, YouTube relies in part on AI algorithms to take down inappropriate videos.[58] YouTube denies the allegation, claiming it aims to protect users from hate speech.[59] In August 2019, a group of LGBTQ+ content creators would file a class action lawsuit alleging unlawful content regulation, distribution, and monetization practices that stigmatize, restrict, block, demonetize, and financially harm the queer community.[60] |
2019 | Iran | Government of Iran | Facial recognition software to target Iranian protesters | Image recognition | Privacy, freedom of association and movement, presumption of innocence | The Iranian government integrates AI-based surveillance technologies into its legislative framework, enabling the identification and detention of protesters by positioning high-definition surveillance equipment to capture public activity.[2] In 2021, China would become Iran's biggest technological investor, more than doubling the government's possession of high-definition surveillance video recorders.[61] In 2022, after the onset of the Mahsa Amini protests, the Iranian government would adopt legislation laying out the use of AI-assisted facial recognition tools to enforce morality codes and identify women not abiding by hijab mandates.[62] More than 20,000 arrests and 500 killings of protesters would follow.[63] In 2024 Iran would make an AI ethics deal with Russia to encourage technological cooperation and investment.[64] Iran has also been accused of analyzing citizens' social media engagement and creating AI-driven bots and automated social media accounts to flood platforms with regime-sanctioned content.[2] |
March 2020 | Libya | Government of National Accord | Possibly the first wartime autonomous drone kill | Automated decision-making, weapon | War/destruction | Political unrest in Libya leads to conflict between the UN-backed Government of National Accord and forces aligned with Khalifa Haftar.[65] In the March 2020 skirmish, Haftar's troops are hunted down and engaged by an autonomously capable drone.[66] The device is a Turkish-made STM Kargu-2 loitering drone, possessing the ability to use machine learning-based object classification to select and engage targets, and capable of swarming.[67] While the UN report on the incident doesn't specifically state that the drone was used autonomously and only heavily implies casualties, if confirmed, this would be the first incident of battlefield deaths due to autonomous robots.[68] Autonomous weapons could rely on biased data and result in disproportionate battlefield deaths of protected demographics. |
August 2020 | United Kingdom | Ofqual | 2020 United Kingdom school exam grading controversy | Predictive algorithmic scoring | Right to livelihood | In-person GCSE and A-level exams in the UK are disrupted by the COVID-19 pandemic.[69] These exams influence where students can work and attend university, shaping their immediate future.[70] The Office of Qualifications and Examinations Regulation (Ofqual) requests that schools submit grades and rank-order predictions made by teachers.[71] Anticipating inflated or biased teacher predictions, Ofqual creates a standardization algorithm, and 40% of students, disproportionately working class, receive exam scores downgraded from their teachers' predictions.[72] Protests break out on August 16, and Ofqual announces that students should be awarded whichever is higher out of the teacher-predicted score and the algorithm score.[73] Many students would already have lost slots at their preferred universities by the time the scores were readjusted.[74] The algorithm, intended to make the system fairer, would harm lower-income students the most. |
2020 | United States | U.S. Immigration and Customs Enforcement (ICE) | ICE contracts Clearview AI | Image recognition, big data analytics | Privacy, presumption of innocence, freedom of association and movement | The American Civil Liberties Union (ACLU) files a Freedom of Information Act (United States) (FOIA) request after US Immigration and Customs Enforcement (ICE) purchases Clearview AI technology.[75] Clearview AI is a facial recognition software.[76] The technology, employed by law enforcement agencies and private companies, scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules.[77] Using the controversial data scraping tool, ICE can now deploy mass surveillance to identify and detain immigrants.[75] United States government agencies have a history of mass surveillance. In 2017, the DHS, ICE, and the Department of Health and Human Services used Palantir Technologies to tag, track, locate, and arrest 400 people in an operation that targeted family members and caregivers of unaccompanied migrant children.[78] The FBI and ICE searched state and federal driver's license databases to find undocumented immigrants using facial recognition.[79][2][80] Facial recognition technology has been shown to be less accurate in identifying women and individuals with darker skin,[2] therefore discriminating against migrants of color and women. |
July 2021 | International | NSO Group | Pegasus Project (investigation) | Natural language processing, geospatial analytics | Privacy, presumption of innocence, freedom of expression | Amnesty International and Forbidden Stories release their Pegasus Project (investigation). The investigation reveals that the Israeli company NSO Group contracted Pegasus (spyware) to over 50 countries to spy on over 50,000 surveillance targets from 2016 to 2021.[81] The NSO Group's clients include Azerbaijan, Bahrain, Hungary, India, Kazakhstan, Mexico, Morocco, Rwanda, Saudi Arabia, Togo, and the United Arab Emirates (UAE).[82] The UAE is revealed to be one of the most active users of Pegasus, having targeted 10,000 people, including Ahmed Mansoor.[83] The targets across states included activists, human rights defenders, academics, businesspeople, lawyers, doctors, diplomats, union leaders, politicians, several heads of state, and at least 180 journalists.[81] The spyware, used by repressive governments to silence dissent, is surreptitiously installed on victims' phones and allows the perpetrator complete device access (including messages, emails, cameras, microphones, calls, contacts, and media).[82] The NSO Group claims to sell its products to government clients to collect data from the mobile devices of individuals suspected of being involved in serious crimes or terrorism, and claims that the leaked state surveillance was misuse and would be investigated.[81] The NSO Group would not take further action to stop its tools from being used to unlawfully target and surveil citizens, would deny any wrongdoing, and would claim its company is involved in a lifesaving mission.[82] |
2021 | United States | Department of Homeland Security, U.S. Customs and Border Protection, and U.S. Immigration and Customs Enforcement | AI used at the Mexico-United States Border during the Presidency of Joe Biden | Image recognition, voice recognition, natural language processing, big data analytics, automated decision-making, geospatial analytics, generative AI | Privacy, right to seek asylum, freedom of association and movement, presumption of innocence | In 2021 the Department of Homeland Security (DHS) receives $780 million for border technology and surveillance (drones, sensors, and other tech to detect border crossings).[84] The U.S. Customs and Border Protection (CBP) deploys a system of autonomous surveillance towers equipped with radar, cameras, and algorithms that use AI systems trained to analyze objects and movement.[85] These towers, able to recognize the difference between humans, animals, and objects, are part of the Biden administration's push for "smart borders."[86] The United States also utilizes small unmanned aerial systems, remote drones used in military operations, to identify and surveil migrants.[87] Local border police use facial recognition technology, cellphone tracking, license-plate cameras, drones, and spy planes, sparking debate on the privacy rights of anyone in the region.[88] The expansion of AI-bolstered border surveillance infrastructure is associated with an increase in deaths at the border, pushing migrants to more remote and dangerous routes.[89] The CBP requires migrants to utilize the mobile application CBP One upon arrival at the US-Mexico border, submitting biometric and personal data to be considered for asylum.[90] However, the app is significantly worse at recognizing the faces of black and brown people, which would lead to a reduction in the number of black asylum seekers after its rollout in 2023.[91] Asylum seekers from Haiti and African countries are often blocked from entry at the southern border, victims of a biased algorithm and its inability to recognize faces with darker skin tones.[92] The DHS is also building a $6.158 billion biometric database, the Homeland Advanced Recognition Technology (HART) system, to collect, organize, and share data on 270 million people (including children).[93] It is expected to be the largest biometric database in the US, aggregating the biometric data of individuals (without their consent) from government agencies to create digital profiles used to target migrants for surveillance, raids, arrests, detention, and deportation.[93] |
May 2022 | United States and allied countries | Russia | Doppelganger (disinformation campaign) | Natural language processing | Freedom of expression, privacy | The Kremlin launches a disinformation campaign, promoting pro-Russian narratives and disseminating disinformation through cloned websites, fake articles, and social media.[94] The Kremlin utilizes AI to create disinformation content, buys domains similar to legitimate sites, and spreads fearmongering across the West.[94] The project would include a total of 7,983 disinformation campaigns, mimicking German, American, Italian, British, and French media outlets and websites, including The Guardian, Fox News, Ministry for Europe and Foreign Affairs (France), and NATO, resulting in a total of 828,842 clicks.[95] The Kremlin coordinates and finances the campaign, undermining Ukrainian objectives, promoting pro-Russia policies, and stoking internal Western tension.[96] |
2022 | Ukraine | Russia | Russia’s use of AI in wartime in the context of the Russian invasion of Ukraine | Natural language processing, big data analytics, automated decision-making, content recommendation algorithm, weapons, geospatial analytics | War/destruction | The February 2022 Russian invasion of Ukraine brings a new age of AI in wartime. While the cyber-attacks against Ukraine predated the invasion, Russia deploys AI-driven cyber attacks on Ukrainian infrastructure, communications, and allies at an increased rate.[97] There are reports of the Ministry of Defense (Russia) using AI to provide data analysis and decision-making in the battlespace and prioritizing autonomous weapons research.[98] Russia is suspected of utilizing unmanned aerial vehicles (UAVs) equipped with AI-powered cameras and sensors for reconnaissance missions and using neural networks to identify strike targets.[99][100] OpenAI would report in May 2024 two covert influence operations from Russia using AI to spread information on social media, defending the invasion.[101] |
February 2023 | Russia | Roskomnadzor | Russian surveillance of civilians (especially LGBTQ people) in the context of the Russian invasion of Ukraine | Image recognition, natural language processing, big data analytics | Privacy, freedom of expression | The Russian internet has been increasingly isolated from the rest of the world since the 2019 Sovereign Internet Law, expanding the state's AI tools for domestic repression and surveillance, content blocking, and sifting through dissent.[2] The isolation gives Russia enhanced censorship and monitoring of the Russian public and information landscape with regard to the invasion. In February 2023, the Russian federal agency Roskomnadzor, responsible for monitoring, controlling, and censoring Russian media, launches the AI-based detection system Oculus.[2] The program analyzes over 200,000 photos a day (as opposed to 200 a day by a human) and looks for banned content in online images and videos.[102] The program scans for suicide, pro-drug, and extremist content, as well as calls for protests. It also searches for pro-LGBTQ content, cracking down on the community as part of a wartime framing tactic that claims to defend traditional Russian values.[103] The Kremlin claims the program can identify people under a beard or a mask and determine age, insinuating the ability to easily identify and target LGBTQ content creators.[104] |
February 2023 | International | Team Jorge | Team Jorge | Generative AI, natural language processing | Privacy, presumption of innocence, freedom of expression | An investigation by journalists from over 30 outlets reveals that an Israel-based disinformation-for-hire team, led by former Israeli special forces operative Tal Hanan under the pseudonym Jorge, has been working for profit under the radar on various disinformation campaigns and elections for decades.[105] The investigation reveals that the team worked on 33 presidential-level disinformation campaigns and several other election campaigns on almost every continent, including efforts around the 2016 Donald Trump win of the US presidency and Brexit.[106] Team Jorge offers various technological services to those willing to pay, including active intelligence (hijacking email and encrypted messaging, including Gmail and Telegram), deceit and defamation (leaking documents and fabricating scandals to harm rivals), vote suppression (disrupting democratic processes and creating stolen-election campaigns), perception hacking (creating conspiracies and fake blogs), and influence ops (using an army of avatars to spread disinformation).[107] The team uses the following tools to achieve its objectives: Profiler (a tool that creates an intel profile by data-scraping), Global Bank Scan (financial intel reports on targets), Hacks (messaging accounts), AIMS (a platform that creates avatars and deploys them on disinformation campaigns), Blogger (a system for mass-creating fake blogs to spread content), and Residential Proxy (a tool to separate Jorge and clients from the disinformation campaigns).[108] The software package Advanced Impact Media Solutions (AIMS) enables the team's army of 30,000 avatars, which have unique digital backgrounds and authentic profile pictures, to create and spread propaganda and disinformation at the client's behest.[109] AIMS can use key words to create posts, articles, comments, or tweets in any language with positive, negative, or neutral tones.[110][111] |
October 2023 | Gaza Strip | Israeli Defense Forces (IDF) | AI-assisted targeting in the Gaza Strip | Generative AI, automated decision-making, weapons, geospatial analysis, big data analytics | War/destruction | The Israeli Defense Forces (IDF) implement AI-assisted targeting in the Israeli bombing of the Gaza Strip.[112] The IDF itself acknowledges the use of AI to accelerate targeting, increasing the tempo of operations and the pool of targets for assassination.[113] The Israeli military uses the AI tool Pegasus (spyware) to locate and collect data on individuals. It feeds this data through automated targeting platforms like Where's Daddy, Gospel, and Lavender, which use facial recognition, geolocation, and cloud computing to generate targets, including journalists, human rights defenders, academics, diplomats, union leaders, politicians, and heads of state.[114] Lavender relies on a surveillance network and assigns each Gazan entered into the system a score from 1 to 100, estimating their likelihood of being a Hamas militant.[115] The tool is responsible for generating a kill list of as many as 37,000 people, and Israeli intelligence officials report the tool has a 10% error rate (the error rate could be greater, depending on the IDF's classification of Hamas militants); a back-of-the-envelope reading of this figure follows the table.[116] The Lavender score is fed into Where's Daddy, which uses AI to determine when the individual has returned home, marking them for assassination.[117] As of April 2024 the Israeli military is hoping to sell its targeting tools to foreign entities.[116] |
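The scale implied by the reported Lavender figures can be made concrete with simple arithmetic. The sketch below is a back-of-the-envelope illustration only: it takes the reported list size of roughly 37,000 and applies the 10% error rate cited by Israeli intelligence officials uniformly, which is a simplifying assumption (the true rate could be higher, as noted above).

```python
# Back-of-the-envelope sketch using only the figures reported above.
# Assumption: the cited 10% misidentification rate applies uniformly
# across the reported list of ~37,000 generated targets.
generated_targets = 37_000
reported_error_rate = 0.10  # "10% error rate" per the reporting cited above

misidentified = generated_targets * reported_error_rate
print(f"People misidentified at a 10% error rate: {misidentified:,.0f}")
# -> roughly 3,700 people wrongly marked before any strike is carried out;
#    tools like "Where's Daddy" that mark targets at home extend the harm
#    to family members and neighbors of each person on the list.
```

Even under this conservative reading, an automated pipeline converts a statistical error rate into thousands of misidentified people.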
AI in elections
With the increasing sophistication of artificial intelligence (AI) technologies, the threat of AI-generated misinformation and disinformation influencing elections has become a growing concern. This list highlights well-documented instances of AI involvement in elections, including deepfakes, bots, and other forms of AI-powered manipulation. Please note that this is not an exhaustive list, as the scope of AI's impact on elections is still evolving, and new cases may emerge. The following examples illustrate how AI has been used to influence electoral outcomes, undermine democratic processes, or manipulate public opinion.
Election Year | Country | Name | AI Type | Details |
---|---|---|---|---|
2016 | United States | Russian interference in the 2016 United States elections | Deepfakes, natural language processing | Vladimir Putin orders an influence campaign of the 2016 United States elections.[118] Russia’s goal is to exacerbate political division, destabilize democracy,[118] and to influence American voting behavior in favor of Donald Trump.[119] Russia implements cyber-attacks, disinformation campaigns, and dispatches an army of bots on social media sites.[118] |
2018 | Brazil | Twitter bots ahead of 2018 Brazilian general election | Natural language processing | In the months leading up to the 2018 Brazilian general election, Twitter was rife with bots spreading misinformation and disinformation.[120] Between June 22 and 23, over 20% of retweets related to the two leading candidates Luiz Inácio Lula da Silva and Jair Bolsonaro were performed by bots.[121] |
2022 | Kenya | Disinformation in the 2022 Kenyan general election | Generative AI, deepfakes | From May 2022 to July 2022, the 11.8 million Kenyans on social media are exposed to disinformation including fake polls, news, and videos.[122] A deepfake featuring a candidate covered in blood, implying they were a murderer, garners over 505,000 views on TikTok.[123] Other popular deepfakes include Barack Obama endorsing a candidate, and doctored videos of candidates, ethnic groups, and the son of Kenya’s former president.[124] The hundreds of false and misleading claims about the election are not specific to one party or campaign.[125] |
2023 | Nigeria | AI-generated disinformation in the 2023 Nigerian presidential election | Generative AI, deepfakes | The 2023 Nigerian elections cycle is preceded by AI-generated deepfakes flooding social media.[126] In the months leading up to the election, AI-generated disinformation includes deepfakes of Nollywood actors and American celebrities, inorganic hashtags, and misinformation spread by social media influencers and bot accounts endorsing multiple nominees.[127] |
2023 | Burkina Faso | Synthesia (company) used for creating deepfakes | Generative AI, deepfakes | Synthesia is a London-based company, explicitly offering services for creating marketing material and internal presentations.[128] However, the technology can be used to create deepfakes, and videos surface in 2023 in Burkina Faso claiming international support for the West African junta.[129] Synthesia bans the accounts that created the 2023 election propaganda videos and strengthens its content review processes.[130] Synthesia would enact policy changes in 2024, allowing the creation and distribution of political content only for users with an enterprise account and a custom AI avatar, in hopes of protecting global democratic processes.[131] |
2024 | United States | Deepfakes in the 2024 United States presidential election | Generative AI, deepfakes | Deepfakes are not restricted by the United States Federal Election Commission (FEC), and they are circulated during the leadup to the 2024 US election.[132] The Ron DeSantis campaign releases an AI-generated video of Donald Trump hugging Anthony Fauci,[133] the Trump campaign spreads a deepfake of Ron DeSantis in a women's suit, and the Republican National Committee releases a video of AI-generated images, including China invading Taiwan, claiming to depict a future in which Joe Biden is re-elected.[134] Nearly 5,000 people in New Hampshire receive a call from an AI-generated voice sounding like Joe Biden telling them not to vote.[135] |
2024 | India | Misinformation and disinformation in the 2024 elections in India | Generative AI, deepfakes | Political parties use AI to generate content leading up to the 2024 elections in India.[136] The Indian National Congress (INC) releases two videos featuring deepfakes of Bollywood stars with large followings criticizing Narendra Modi (who belongs to the Bharatiya Janata Party (BJP)), which garner millions of views.[137] The videos claim Modi fails to keep campaign promises and is harming the economy, and the actors whose likenesses are used lodge police complaints against the social media handles.[138] The Dravida Munnetra Kazhagam (DMK) party, which rules the southern state of Tamil Nadu, plays AI-generated videos of its dead former leader at political events to garner support.[139] Over half of India's population of over 1 billion people are active internet users.[140] The 2024 Global Risks Report cites misinformation and disinformation as the most severe global risk, and India as the country facing the highest risk in this regard.[141] |
2024 | Venezuela | Synthesia (company) used for creating deepfakes to influence 2024 Venezuelan presidential election | Generative AI, deepfakes | Synthesia is a London-based company, explicitly offering services for creating marketing material and internal presentations.[142] However, the technology can be used to create deepfakes and is used by the Venezuelan state prior to the 2024 elections to spread disinformation on behalf of the dictator Nicolás Maduro.[143] Two pro-government videos surface in 2023 ahead of the 2024 election, featuring AI deepfakes with real actors' likenesses boasting of a healthy Venezuelan economy.[144] Synthesia would enact policy changes in 2024, allowing the creation and distribution of political content only for users with an enterprise account and a custom AI avatar, in hopes of protecting global democratic processes.[131] |
2024 | Pakistan | Deepfakes in the 2024 Pakistani general election | Generative AI, deepfakes | Deepfakes are at the center of the digital debate in the Pakistani elections. In January, 110 million of the 240 million Pakistani people are active internet users and vulnerable to the deepfakes that arise before the election.[145] Imran Khan, the ex-prime minister, campaigns from prison, his team using an AI tool to generate speeches and videos in his likeness.[145] |
Notable software
Pegasus (spyware)
The tool is downloaded onto the target's phone and gives the user full access and control of the device.[81] State governments have used it to spy on dissidents and journalists.[2] Examples of state use:
Perpetrating State | Use |
---|---|
Saudi Arabia | Pegasus was found on the phones of journalist Jamal Khashoggi and his family after the Assassination of Jamal Khashoggi.[82] |
United Arab Emirates | To monitor and detain journalists and dissidents including Ahmed Mansoor.[2] Mansoor, an Emirati human rights defender, openly criticized the government and was arrested in 2017. He would serve a 10-year prison sentence, be kept in solitary confinement, and be denied books, a bed, and basic hygiene.[146] |
Mexico | To monitor over 25 journalists looking into corruption, including Cecilio Pineda, whose device was marked for surveillance just weeks before his killing in 2017.[82] |
Morocco | To surveil and capture journalist Omar Radi after he openly criticized the government.[2] |
Spain | To spy on Catalan separatists.[2] |
Israel | In the AI-assisted targeting in the Gaza Strip.[2] |
Germany | Purchased the spyware for Federal Criminal Police Office (Germany) use.[147] |
Hungary | Surveilling journalists, including investigative reporter Szabolcs Panyi.[81] |
Belgium | Surveilling journalists.[2] |
Poland | Surveilling journalists.[2] |
Azerbaijan | Surveilling over 40 journalists including Sevinj Vagifgizi.[82] |
India | Surveilling over 40 journalists from almost every major media outlet including Siddharth Varadarajan, co-founder of independent news outlet The Wire (India).[82] |
Lavender
A risk assessment tool that relies on a surveillance network.[148]
Perpetrating State | Use |
---|---|
Israel | AI-assisted targeting in the Gaza Strip.[149] |
Gospel
The Gospel is an AI-driven target creation platform that provides autorecommendations for attacking individuals.[150]
Perpetrating State | Use |
---|---|
Israel | AI-assisted targeting in the Gaza Strip.[151] The application accelerates the amount of targets created for the Israeli Defense Forces to locate and assassinate.[152] |
Where’s Daddy
Target tracking software.[153]
Perpetrating State | Use |
---|---|
Israel | AI-assisted targeting in the Gaza Strip.[154] |
Palantir Technologies
A data analytics platform used for surveillance and tracking.[155]
Perpetrating Entity | Use |
---|---|
United States, Department of Health and Human Services | Tracking and surveilling migrants.[156] |
United States, Chicago Police Department | Predictive Policing until shut down in 2019.[157] |
German law enforcement | The police of Hamburg and Hesse use Palantir to surveil and enact predictive policing until ruled unconstitutional in February 2023.[158] |
Clearview AI
Clearview AI is a facial recognition software employed by law enforcement agencies and private companies.[159] While compiling data, the company scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules.[160]
Perpetrating Entity | Use |
---|---|
U.S. Immigration and Customs Enforcement | Surveilling immigrants.[75] |
Oculus
A surveillance technology that can analyze over 200,000 photos a day (as opposed to 200 a day by a human) and looks for banned content in online images and videos.[161]
Perpetrating State | Use |
---|---|
Russia | Monitoring and surveilling LGBTQ citizens.[2] |
Synthesia (company)
Synthesia is a London-based company whose stated services are creating marketing material and internal presentations, but its technology has been used to create politically charged deepfakes in breach of its terms.[162]
Perpetrating State | Use |
---|---|
China | Chinese state-aligned actors used Synthesia's AI-generated broadcasters to disseminate Chinese Communist Party propaganda.[163] |
Burkina Faso | Deepfakes surface around the 2023 elections in Burkina Faso, claiming international support of the West African Junta.[164] |
See also
- Timeline of AI policy
- Timeline of AI safety
- Timeline of machine learning
- Timeline of ChatGPT
- Timeline of Google Gemini
- Timeline of OpenAI
- Timeline of large language models
References
- ↑ F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. Retrieved 13 November 2024.
- ↑ 2.00 2.01 2.02 2.03 2.04 2.05 2.06 2.07 2.08 2.09 2.10 2.11 2.12 2.13 2.14 "Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights" (PDF). europarl.europa.eu. May 2024. Retrieved 6 November 2024.
- ↑ Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
- ↑ Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
- ↑ F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. Retrieved 13 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. Retrieved 13 November 2024.
- ↑ Belli, Luca (28 March 2024). "Regulating Facial Recognition in Brazil". cambridge.org. Retrieved 6 December 2024.
- ↑ Mari, Angelica (13 July 2023). "Facial recognition surveillance in Sao Paulo could worsen racism". aljazeera.com. Retrieved 6 December 2024.
- ↑ Arcoverde, Leticia (14 October 2024). "Brazilian law enforcement cagey about facial recognition use". brazilian.report. Retrieved 6 December 2024.
- ↑ Mari, Angelica (13 July 2023). "Facial recognition surveillance in Sao Paulo could worsen racism". aljazeera.com. Retrieved 6 December 2024.
- ↑ Liang, Lu-Hai (16 October 2024). "Brazilian groups call for ban on facial recognition". biometricupdate.com. Retrieved 6 December 2024.
- ↑ 14.0 14.1 14.2 14.3 14.4 "The gangs matrix". stop-watch.org. 2024. Retrieved 6 December 2024.
- ↑ 15.0 15.1 15.2 "What is the Gangs Matrix". amnesty.org.uk. 18 May 2020. Retrieved 6 December 2024.
- ↑ Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
- ↑ Mistreanu, Simina (3 April 2018). "Life Inside China's Social Credit Laboratory". foreignpolicy.com. Retrieved 27 December 2024.
- ↑ Yang, Zeyi (22 November 2022). "China just announced a new social credit law. Here's what it means.". technologyreview.com. Retrieved 27 December 2024.
- ↑ Mistreanu, Simina (3 April 2018). "Life Inside China's Social Credit Laboratory". foreignpolicy.com. Retrieved 27 December 2024.
- ↑ Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
- ↑ Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
- ↑ Mistreanu, Simina (3 April 2018). "Life Inside China's Social Credit Laboratory". foreignpolicy.com. Retrieved 27 December 2024.
- ↑ Wang, Maya (12 December 2017). "China's Chilling 'Social Credit' Blacklist". hrw.org. Retrieved 27 December 2024.
- ↑ 24.0 24.1 "Revolutionizing Investigations: The Future of Generative AI in Assisting Law Enforcement to Solve Crimes Faster". cellebrite.com. 2024. Retrieved 12 December 2024.
- ↑ Greenberg, Andy (14 June 2019). "TITLE". wired.com. Retrieved 19 December 2024.
- ↑ Stanley, Jay (16 May 2017). "Mobile-Phone Cloning Tools Need to Be Subject to Oversight - and the Constitution". aclu.org. Retrieved 19 December 2024.
- ↑ Stanley, Jay (16 May 2017). "Mobile-Phone Cloning Tools Need to Be Subject to Oversight - and the Constitution". aclu.org. Retrieved 19 December 2024.
- ↑ "Serbia: Authorities using spyware and Cellebrite forensic extraction tools to hack journalists and activists|". amnesty.org. 16 December 2024. Retrieved 12 December 2024.
- ↑ Rinta-Kahila, Tapani; Someh, Ida (2023). "Managing unintended consequences of algorithmic decision-making: The case of Robodebt". sagepub.com. Retrieved 26 November 2024.
- ↑ Rinta-Kahila, Tapani; Someh, Ida (2023). "Managing unintended consequences of algorithmic decision-making: The case of Robodebt". sagepub.com. Retrieved 26 November 2024.
- ↑ Mao, Frances (7 July 2023). "Robodebt: Illegal Australian welfare hunt drove people to despair". bbc.com. Retrieved 26 November 2024.
- ↑ Mao, Frances (7 July 2023). "Robodebt: Illegal Australian welfare hunt drove people to despair". bbc.com. Retrieved 26 November 2024.
- ↑ Rinta-Kahila, Tapani; Someh, Ida (2023). "Managing unintended consequences of algorithmic decision-making: The case of Robodebt". sagepub.com. Retrieved 26 November 2024.
- ↑ 34.0 34.1 34.2 34.3 "China's Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App". hrw.org. 1 May 2019. Retrieved 23 October 2024.
- ↑ 35.0 35.1 35.2 35.3 "Myanmar: Facebook's systems promoted violence against Rohingya; Meta owes reparations". WEB.WEB. 29 September 2022. Retrieved 22 November 2024.
- ↑ 36.0 36.1 36.2 36.3 "The Social Atrocity: Meta and The Right to Remedy for The Rohingya". amnesty.org. 29 September 2022. Retrieved 22 November 2024.
- ↑ Zaleznik, Daniel (July 2021). "Facebook and Genocide: How Facebook contributed to genocide in Myanmar and why it will not be held accountable". systemicjustice.org. Retrieved 22 November 2024.
- ↑ 38.0 38.1 38.2 38.3 "Automated technologies and the future of Fortress Europe|". amnesty.org. 28 March 2019. Retrieved 4 December 2024.
- ↑ Bacchi, Umberto (5 February 2021). "EU's lie-detecting virtual border guards face court scrutiny". reuters.com. Retrieved 4 December 2024.
- ↑ Bacchi, Umberto (5 February 2021). "EU's lie-detecting virtual border guards face court scrutiny". reuters.com. Retrieved 4 December 2024.
- ↑ Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
- ↑ Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
- ↑ Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
- ↑ Taub, Amanda; Fisher, Max (21 April 2018). "Where Countries Are Tinderboxes and Facebook Is a Match". nytimes.com. Retrieved 16 December 2024.
- ↑ Rajagopalan, Megha (7 March 2018). "Sri Lanka Is Blocking Facebook For Three Days In Response To Violence Against Minorities". buzzfeednews.com. Retrieved 16 December 2024.
- ↑ Sinha, Amber (13 March 2024). "The Landscape of Facial Recognition Technologies in India". techpolicy.press. Retrieved 22 November 2024.
- ↑ Sudhir, K.; Sunder, Shyam (27 March 2020). "What Happens When a Billion Identities Are Digitized?". insights.som.yale.edu. Retrieved 22 November 2024.
- ↑ Panigrahi, Subhashish (April 2022). "TITLE". interactions.acm.org. Retrieved 22 November 2024.
- ↑ Sinha, Amber (13 March 2024). "The Landscape of Facial Recognition Technologies in India". techpolicy.press. Retrieved 22 November 2024.
- ↑ Banerjee, Arjun (9 April 2023). "India the surveillance state and the role of Aadhaar". countercurrents.org. Retrieved 22 November 2024.
- ↑ Jain, Anushka (3 December 2021). "Crores of pensioners to be verified using UIDAI-linked facial recognition app". medianama.com. Retrieved 22 November 2024.
- ↑ Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
- ↑ Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
- ↑ Cagle, Matt; Ozer, Nicole (22 May 2018). "Amazon Teams Up With Government to Deploy Dangerous New Facial Recognition Technology". aclu.org. Retrieved 21 November 2024.
- ↑ Snow, Jacob (26 July 2018). "Amazon's Face Recognition Falsely Matched 28 Members of Congress With Mugshots". aclu.org. Retrieved 21 November 2024.
- ↑ O’Brien, Matt (3 April 2019). "Face recognition researcher fights Amazon over biased AI". apnews.com. Retrieved 21 November 2024.
- ↑ Alexander, Julia (30 September 2019). "YouTube moderation bots punish videos tagged as 'gay' or 'lesbian,' study finds". theverge.com. Retrieved 26 November 2024.
- ↑ Sams, Brandon (29 September 2020). "YouTube's New Age Restriction AI Worries LGBTQ+ Community". lifewire.com. Retrieved 26 November 2024.
- ↑ Bensinger, Greg (14 August 2019). "YouTube discriminates against LGBT content by unfairly culling it, suit alleges". washingtonpost.com. Retrieved 26 November 2024.
- ↑ Sams, Brandon (29 September 2020). "YouTube's New Age Restriction AI Worries LGBTQ+ Community". lifewire.com. Retrieved 26 November 2024.
- ↑ George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
- ↑ George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
- ↑ George, Rachel (7 December 2023). "The AI Assault on Women: What Iran's Tech Enabled Morality Laws Indicate for Women's Rights Movements". cfr.org. Retrieved 21 November 2024.
- ↑ Tkeshelashvili, Mariami; Saade, Tiffany (26 September 2024). "Decrypting Iran's AI-Enhanced Operations in Cyberspace". securityandtechnology.org. Retrieved 21 November 2024.
- ↑ Hernandez, Joe (1 June 2021). "A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says". npr.org. Retrieved 21 November 2024.
- ↑ Hernandez, Joe (1 June 2021). "A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says". npr.org. Retrieved 21 November 2024.
- ↑ Kallenborn, Zachary (20 May 2021). "Was a flying killer robot used in Libya? Quite possibly". thebulletin.org. Retrieved 21 November 2024.
- ↑ Kallenborn, Zachary (20 May 2021). "Was a flying killer robot used in Libya? Quite possibly". thebulletin.org. Retrieved 21 November 2024.
- ↑ Hao, Karen (20 August 2020). "The UK exam debacle reminds us that algorithms can't fix broken systems". technologyreview.com. Retrieved 4 December 2024.
- ↑ Leckie, George (30 September 2023). "The 2020 GCSE and A-level 'exam grades fiasco': A secondary data analysis of students' grades and Ofqual's algorithm". bristol.ac.uk. Retrieved 4 December 2024.
- ↑ Leckie, George (30 September 2023). "The 2020 GCSE and A-level 'exam grades fiasco': A secondary data analysis of students' grades and Ofqual's algorithm". bristol.ac.uk. Retrieved 4 December 2024.
- ↑ Hao, Karen (20 August 2020). "The UK exam debacle reminds us that algorithms can't fix broken systems". technologyreview.com. Retrieved 4 December 2024.
- ↑ Hao, Karen (20 August 2020). "The UK exam debacle reminds us that algorithms can't fix broken systems". technologyreview.com. Retrieved 4 December 2024.
- ↑ Satariano, Adam (20 August 2020). "British Grading Debacle Shows Pitfalls of Automating Government". nytimes.com. Retrieved 4 December 2024.
- ↑ 75.0 75.1 75.2 "Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software|" (PDF). immigrantdefenseproject.org. 19 October 2020. Retrieved 8 November 2024.
- ↑ Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
- ↑ Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
- ↑ Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
- ↑ Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
- ↑ Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
- ↑ 81.0 81.1 81.2 81.3 81.4 "About the Pegasus Project|". forbiddenstories.org. 18 July 2021. Retrieved 9 November 2024.
- ↑ 82.0 82.1 82.2 82.3 82.4 82.5 82.6 "TITLE|". amnesty.org. 19 July 2021. Retrieved 9 November 2024.
- ↑ Coates Ulrichsen, Kristian (9 June 2022). "Pegasus as a case study of evolving ties between the UAE and Israel". gulfstateanalytics.com. Retrieved 8 November 2024.
- ↑ Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
- ↑ Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
- ↑ Morley, Priya (28 June 2024). "AI at the Border: Racialized Impacts and Implications". justsecurity.org. Retrieved 19 December 2024.
- ↑ Morley, Priya (28 June 2024). "AI at the Border: Racialized Impacts and Implications". justsecurity.org. Retrieved 19 December 2024.
- ↑ Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
- ↑ Tyler, Hannah (2 February 2022). "The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions". migrationpolicy.org. Retrieved 19 December 2024.
- ↑ Morley, Priya (28 June 2024). "AI at the Border: Racialized Impacts and Implications". justsecurity.org. Retrieved 19 December 2024.
- ↑ Del Bosque, Melissa (8 February 2023). "Facial recognition bias frustrates Black asylum applicants to US, advocates say". theguardian.com. Retrieved 19 December 2024.
- ↑ Del Bosque, Melissa (8 February 2023). "Facial recognition bias frustrates Black asylum applicants to US, advocates say". theguardian.com. Retrieved 19 December 2024.
- ↑ 93.0 93.1 "HART Attack|" (PDF). immigrantdefenseproject.org. May 2022. Retrieved 12 December 2024.
- ↑ 94.0 94.1 "Russian Disinformation Campaign "DoppelGanger" Unmasked: A Web of Deception|". cybercom.mil. 3 September 2024. Retrieved 6 December 2024.
- ↑ "What is the doppelganger operation? List of Resources|". disinfo.eu. 30 October 2024. Retrieved 6 December 2024.
- ↑ Chawrylo, Katarzyna (13 September 2024). "'Doppelganger': the pattern of Russia's anti-Western influence operation". osw.waw.pl. Retrieved 6 December 2024.
- ↑ Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. Retrieved 13 November 2024.
- ↑ Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. Retrieved 13 November 2024.
- ↑ Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. Retrieved 13 November 2024.
- ↑ Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. Retrieved 13 November 2024.
- ↑ "Russia using generative AI to ramp up disinformation, says Ukraine minister". reuters.com. 16 October 2024. Retrieved 13 November 2024.
- ↑ Litvinova, Dasha (23 May 2023). "The cyber gulag: How Russia tracks, censors and controls its citizens". apnews.com. Retrieved 15 November 2024.
- ↑ Buziashvili, Eto (17 February 2023). "Russia takes next step in domestic internet surveillance". dfrlab.org. Retrieved 15 November 2024.
- ↑ Buziashvili, Eto (17 February 2023). "Russia takes next step in domestic internet surveillance". dfrlab.org. Retrieved 15 November 2024.
- ↑ Kirchgaessner, Stephanie; Ganguly, Manisha; Pegg, David (14 February 2023). "Revealed: the hacking and disinformation team meddling in elections". WEB. Retrieved 5 December 2024.
- ↑ Andrzejewski, Cecile (15 February 2023). ""Team Jorge": In the heart of a global disinformation machine". forbiddenstories.org. Retrieved 5 December 2024.
- ↑ Benjakob, Omer (15 February 2023). "Hacking, Extortion, Election Interference: These Are the Tools Used by Israel's Agents of Chaos and Manipulation". haaretz.com. Retrieved 27 March 2023.
- ↑ Benjakob, Omer (15 February 2023). "Hacking, Extortion, Election Interference: These Are the Tools Used by Israel's Agents of Chaos and Manipulation". haaretz.com. Retrieved 27 March 2023.
- ↑ Kirchgaessner, Stephanie; Ganguly, Manisha; Pegg, David (14 February 2023). "Revealed: the hacking and disinformation team meddling in elections". WEB. Retrieved 5 December 2024.
- ↑ Andrzejewski, Cecile (15 February 2023). ""Team Jorge": In the heart of a global disinformation machine". forbiddenstories.org. Retrieved 5 December 2024.
- ↑ Funk, Allie (21 November 2023). "The Repressive Power of Artificial Intelligence". freedomhouse.org. Retrieved 5 December 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ 116.0 116.1 "='AI-assisted genocide': Israel reportedly used database for Gaza kill lists |". Aljazeera.com. 4 April 2024. Retrieved 6 November 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ 118.0 118.1 118.2 "Fact Sheet: What We Know about Russia's Interference Operations|". gmfus.org. 2019. Retrieved 12 December 2024.
- ↑ "Exposure to Russian Twitter Campaigns in 2016 Presidential Race Highly Concentrated, Largely Limited to Strongly Partisan Republicans|". nyu.edu. 9 January 2023. Retrieved 12 December 2024.
- ↑ Allen, Andrew (17 August 2018). "Bots in Brazil: The Activity of Social Media Bots in Brazilian Elections". wilsoncenter.org. Retrieved 13 December 2024.
- ↑ Allen, Andrew (17 August 2018). "Bots in Brazil: The Activity of Social Media Bots in Brazilian Elections". wilsoncenter.org. Retrieved 13 December 2024.
- ↑ Olivia, Lilian (5 January 2023). "Disinformation was rife in Kenya's 2022 elections". blogs.lse.ac.uk. Retrieved 12 December 2024.
- ↑ Olivia, Lilian (5 January 2023). "Disinformation was rife in Kenya's 2022 elections". blogs.lse.ac.uk. Retrieved 12 December 2024.
- ↑ Mwai, Peter (29 May 2022). "Kenya Elections 2022: Misinformation circulating online". bbc.com. Retrieved 12 December 2024.
- ↑ Kulundu, James (8 August 2022). "Election campaigning ends in Kenya but disinformation battle drags on". factcheck.afp.com. Retrieved 12 December 2024.
- ↑ Davis, Eric (18 March 2024). "Q&A: Hannah Ajakaiye on manipulated media in the 2023 Nigerian presidential elections, generative AI, and possible interventions". securityandtechnology.org. Retrieved 12 December 2024.
- ↑ Orakwe, Emmanuel (3 July 2024). "The challenges of AI-driven political disinformation in Nigeria". africainfact.com. Retrieved 12 December 2024.
- ↑ Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
- ↑ Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
- ↑ Ganguly, Manisha (16 October 2024). "'It's not me, it's just my face': the models who found their likenesses had been used in AI propaganda". theguardian.com. Retrieved 11 December 2024.
- ↑ 131.0 131.1 "Introducing new guidelines for political content on Synthesia|". synthesia.io. 2 December 2024. Retrieved 11 December 2024.
- ↑ Beaumont, Hilary (19 June 2024). "'A lack of trust': How deepfakes and AI could rattle the US elections". aljazeera.com. Retrieved 13 December 2024.
- ↑ Beaumont, Hilary (19 June 2024). "'A lack of trust': How deepfakes and AI could rattle the US elections". aljazeera.com. Retrieved 13 December 2024.
- ↑ Nehamas, Nicholas (8 June 2023). "DeSantis Campaign Uses Apparently Fake Images to Attack Trump on Twitter". nytimes.com. Retrieved 13 December 2024.
- ↑ Beaumont, Hilary (19 June 2024). "'A lack of trust': How deepfakes and AI could rattle the US elections". aljazeera.com. Retrieved 13 December 2024.
- ↑ Majumdar, Anushree (30 April 2024). "Artificial Intelligence has a starring role in India's 18th General Elections". upgradedemocracy.de. Retrieved 13 December 2024.
- ↑ Majumdar, Anushree (30 April 2024). "Artificial Intelligence has a starring role in India's 18th General Elections". upgradedemocracy.de. Retrieved 13 December 2024.
- ↑ Majumdar, Anushree (30 April 2024). "Artificial Intelligence has a starring role in India's 18th General Elections". upgradedemocracy.de. Retrieved 13 December 2024.
- ↑ Sharma, Yashraj (20 February 2024). "Deepfake democracy: Behind the AI trickery shaping India's 2024 election". aljazeera.com. Retrieved 16 December 2024.
- ↑ Mukherjee, Mitali (19 December 2023). "AI deepfakes, bad laws - and a big fat Indian election". reutersinstitute.politics.ox.ac.uk. Retrieved 16 December 2024.
- ↑ Mukherjee, Mitali (19 December 2023). "AI deepfakes, bad laws - and a big fat Indian election". reutersinstitute.politics.ox.ac.uk. Retrieved 16 December 2024.
- ↑ Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
- ↑ Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
- ↑ Ganguly, Manisha (16 October 2024). "'It's not me, it's just my face': the models who found their likenesses had been used in AI propaganda". theguardian.com. Retrieved 11 December 2024.
- ↑ 145.0 145.1 "Deepfakes weaponized to target Pakistan's women leaders|". france24.com. 12 March 2024. Retrieved 16 December 2024.
- ↑ White, Rebecca (12 December 2023). "Ahmed Mansoor: the poet who spoke truth to power and paid a heavy price". securitylab.amnesty.org. Retrieved 8 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Davies, Harry (1 December 2023). "'The Gospel': how Israel uses AI to select bombing targets in Gaza". theguardian.com. Retrieved 4 December 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Davies, Harry (1 December 2023). "'The Gospel': how Israel uses AI to select bombing targets in Gaza". theguardian.com. Retrieved 4 December 2024.
- ↑ Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. Retrieved 13 November 2024.
- ↑ Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. Retrieved 18 October 2024.
- ↑ Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
- ↑ Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. Retrieved 8 November 2024.
- ↑ Peteranderl, Sonja; Spiegel, Der (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. Retrieved 13 November 2024.
- ↑ Killeen, Molly (16 February 2023). "German Constitutional Court strikes down predictive algorithms for policing". euractiv.com. Retrieved 13 November 2024.
- ↑ Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
- ↑ Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. Retrieved 9 November 2024.
- ↑ Litvinova, Dasha (23 May 2023). "The cyber gulag: How Russia tracks, censors and controls its citizens". apnews.com. Retrieved 15 November 2024.
- ↑ Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.
- ↑ Antoniuk, Daryna (9 February 2023). "Deepfake news anchors spread Chinese propaganda on social media". therecord.media. Retrieved 11 December 2024.
- ↑ Prada, Luis (17 October 2023). "AI Avatars Are Making Fake News in Venezuela, Say Real People". vice.com. Retrieved 11 December 2024.