Timeline of AI ethics violations

Big picture

STATUS: Unfinished


Full timeline

Inclusion criteria

The timeline documents incidents of human rights violations in which AI was involved. Rows were included according to the following criteria:

  • AI involvement: The incident must involve the significant use of AI technologies.
  • Human rights impact: The incident must have violated human rights as defined by international law and standards such as the Universal Declaration of Human Rights (UDHR) and subsequent treaties. Examples of abuses include unlawful killings or injuries; arbitrary detention or torture; discrimination based on race, ethnicity, religion, gender, or other protected characteristics; restrictions on freedom of expression or assembly; and privacy violations.
  • State or corporate responsibility: The incident must involve a state or corporate entity that has used AI technology to abuse human rights.
  • Verifiable evidence: Only incidents with credible and verifiable evidence from sources such as news articles, human rights reports, official documents, and academic research are included.
  • Geographical range: Global.
  • Relevance and significance: Incidents with significant human rights violations are prioritized.


Timeline of AI ethics violations

Onset | Region | Perpetrators | Name | Details
2008 | United States | United States Law Enforcement Agencies | Predictive Policing | Predictive policing refers to the use of algorithms to analyze past criminal activity data, identify patterns, and predict and prevent future crimes.[1] However, police departments can only use data from reported crimes, which accentuates past prejudices in arrests and leads to over-policing of Black and Latinx communities (a stylized sketch of this feedback loop appears after the table).[2] Predictive policing also threatens the Fourth Amendment to the United States Constitution, which requires reasonable suspicion before arrest.[3] The LA Police Department starts working with federal agencies to explore predictive policing in 2008; the New York and Chicago Police Departments would start testing their own systems in 2012.[4] The Chicago Police Department would create the Strategic Subject List (SSL) algorithm in 2012, which assigns individuals a score based on their likelihood of involvement in a future crime.[5] In 2016, the RAND Corporation would find that people on this list were no more or less likely to be involved in a shooting than a control group, but were more likely to be arrested for one.[6] By 2018, almost 400,000 people had an SSL risk score, disproportionately men of color.[7] Predictive policing would be shut down in Chicago and LA in 2019 and 2020 due to evidence of its inefficacy.[8]
2016 | Xinjiang, China | Chinese Government | Mass Surveillance in China of Ethnic Minorities | Chinese police and other officials use the AI-powered application Integrated Joint Operations Platform (IJOP) for mass surveillance of the predominantly Turkic Muslim and Uyghur population of Xinjiang.[9] The IJOP collects personal information, locations, identities, electricity and gas usage, personal relationships, and DNA samples (which can be used to infer ethnicity), then flags suspicious individuals, activities, or circumstances.[9] The IJOP defines foreign contacts, donations to mosques, lack of socialization with neighbors, and frequent use of the front door as suspicious.[9] Individuals deemed suspicious are investigated and can be sent to mass political education camps and facilities where millions of Turkic Muslims and Uyghurs are subjected to movement restrictions, political indoctrination, and religious repression.[9] Techno-authoritarian surveillance occurs throughout China, contrary to internationally guaranteed rights to privacy. China's central bank has adopted a digital currency that allows Beijing to exclude blocklisted individuals from social services and control financial transactions.[10]
2020 | United States | U.S. Immigration and Customs Enforcement | ICE uses Clearview AI | The American Civil Liberties Union (ACLU) files a Freedom of Information Act (FOIA) request after U.S. Immigration and Customs Enforcement (ICE) purchases Clearview AI technology.[11] Clearview AI is facial recognition software.[12] The technology, employed by law enforcement agencies and private companies, scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules.[13] Using the controversial data-scraping tool, ICE can now deploy mass surveillance to identify and detain immigrants.[11] United States government agencies have a history of mass surveillance. In 2017, the DHS, ICE, and the Department of Health and Human Services used Palantir technology to tag, track, locate, and arrest 400 people in an operation that targeted family members and caregivers of unaccompanied migrant children.[14] The FBI and ICE have searched state and federal driver's license databases to find undocumented immigrants using facial recognition.[15][2][16] Facial recognition technology has been shown to be less accurate in identifying women and individuals with darker skin,[2] and therefore discriminates against women and minorities.
2022 | Ukraine | Russia | Russia's Use of AI in the Ukraine Invasion | The February 2022 Russian invasion of Ukraine brings a new age of AI in wartime. While cyber-attacks against Ukraine predated the invasion, Russia deploys AI-driven cyber-attacks on Ukrainian infrastructure, communications, and allies at an increased rate.[17] The Russian Internet was isolated from the world after the 2019 Sovereign Internet Law, ramping up AI tools for domestic repression and surveillance, content-blocking mechanisms, and sifting through dissent.[2] The isolation gives Russia enhanced censorship and monitoring of the Russian public and information landscape regarding the invasion. There are reports of the Russian Ministry of Defense using AI for data analysis and decision-making in the battlespace and prioritizing autonomous weapons research.[18] Russia is suspected of using unmanned aerial vehicles (UAVs) equipped with AI-powered cameras and sensors for reconnaissance missions and of using neural networks to identify strike targets.[19][20] In May 2024, OpenAI would report two covert influence operations from Russia using AI to spread content on social media defending the invasion.[21]
October 2023 | Gaza Strip | Israel Defense Forces | AI-assisted targeting in the Gaza Strip | Israel implements AI-assisted targeting in its bombing of the Gaza Strip.[22] The IDF itself has acknowledged using AI to accelerate targeting, increasing the tempo of operations and the pool of targets for assassination.[23] The Israeli military uses the AI tool Pegasus to locate and collect data on individuals. It feeds this data through automated targeting platforms like Where's Daddy, Gospel, and Lavender, which use facial recognition, geolocation, and cloud computing to generate targets, including journalists, human rights defenders, academics, diplomats, union leaders, politicians, and heads of state.[24] Lavender relies on a surveillance network and assigns each Gazan entered into it a score from 1 to 100, estimating how likely they are to be a Hamas militant.[25] The tool is responsible for generating a kill list of as many as 37,000 people, and Israeli intelligence officials report that it has a 10% error rate (the true error rate could be greater, depending on the IDF's classification of Hamas militants).[26] The Lavender score is fed into Where's Daddy, which uses AI to determine when the individual has returned home, marking them for assassination.[27] As of April 2024, the Israeli military hopes to sell its targeting tools to foreign entities.[26]
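
The over-policing dynamic described in the 2008 entry can be made concrete with a small simulation. The sketch below is purely illustrative and rests on deliberately simplified assumptions (two districts with identical true offense rates, patrols allocated in proportion to previously recorded crime, detections that scale with patrol presence); it is not any police department's or vendor's actual algorithm, and all numbers are hypothetical.

```python
import random

random.seed(0)

# Illustrative assumptions (hypothetical numbers, not a real department's model):
TRUE_RATE = 0.05        # identical true offense probability in both districts
POPULATION = 10_000     # residents per district
TOTAL_PATROLS = 1_000   # patrol units allocated each year

# District A starts with more *recorded* crime (e.g., past over-policing),
# even though the underlying offense rates are the same.
recorded = {"A": 60, "B": 40}

for year in range(10):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols proportional to past recorded crime.
    patrols = {d: TOTAL_PATROLS * recorded[d] / total for d in recorded}
    for d in recorded:
        # Offenses occur at the same true rate everywhere...
        offenses = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
        # ...but only offenses that patrols encounter enter the data.
        detection_prob = patrols[d] / TOTAL_PATROLS
        recorded[d] += sum(random.random() < detection_prob for _ in range(offenses))
    share_a = recorded["A"] / sum(recorded.values())
    print(f"year {year}: district A's share of recorded crime = {share_a:.2f}")

# Output: district A's share stays near 0.60 every year instead of correcting
# toward 0.50 -- the inherited disparity is locked in, and the data keeps
# "justifying" heavier patrols of district A.
```

The same mechanism echoes the RAND finding quoted above: appearing in the data changed arrests (an enforcement artifact) without any corresponding difference in actual involvement in crime.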

Notable software

Pegasus (spyware)

Pegasus is downloaded onto the target's phone and gives the operator full access to and control of the device.[28] It has been used by state governments to spy on dissidents and journalists.[2] Examples of state use:

Perpetrating State | Use
Saudi Arabia | The assassination of Jamal Khashoggi
United Arab Emirates | To monitor and detain Ahmed Mansoor
Mexico | To monitor journalists investigating corruption
Morocco | To surveil and capture journalist Omar Radi after he criticized the government
Spain | To spy on Catalan separatists
Israel | In the AI-assisted targeting in the Gaza Strip
Germany | Purchased the spyware for use by the Federal Criminal Police Office (Germany)
Hungary | Surveilling journalists
Belgium | Surveilling journalists
Poland | Surveilling journalists

Lavender

A risk-assessment tool that relies on a surveillance network and assigns individuals scores estimating their likelihood of militant affiliation (a back-of-the-envelope look at its reported error rate follows the table).[29]

Perpetrating State | Use
Israel | AI-assisted targeting in the Gaza Strip
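
The figures reported for Lavender in the timeline above (a list of as many as 37,000 people and a roughly 10% error rate) imply a large absolute number of misidentified people. The arithmetic below is a back-of-the-envelope sketch: it assumes "error rate" means the share of flagged individuals who do not meet even the IDF's own militant criteria, which is one plausible reading of the reporting, not a confirmed definition.

```python
# Figures as reported in the timeline entry above (Al Jazeera): up to 37,000
# people flagged and a ~10% error rate, per Israeli intelligence officials.
flagged = 37_000
for error_rate in (0.10, 0.20, 0.30):  # 10% as reported; higher, as the entry cautions
    print(f"error rate {error_rate:.0%} -> ~{flagged * error_rate:,.0f} people misidentified")
# Even on the reported figure, 10% of 37,000 is ~3,700 people wrongly flagged.
```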

Where’s Daddy

Target-tracking software used to determine when a targeted individual has returned home.[30]

Perpetrating State | Use
Israel | AI-assisted targeting in the Gaza Strip

Palantir

A surveillance and tracking tool.[31]

Perpetrating entity | Use
Department of Health and Human Services | Tracking and surveilling migrants
Chicago Police Department | Predictive Policing[32]

Clearview AI

The technology, employed by law enforcement agencies and private companies, scoured the internet for over 3 billion images, including those from social media sites, often in violation of platform rules (a schematic sketch of gallery-based face matching follows the table).[33]

Perpetrating entity | Use
U.S. Immigration and Customs Enforcement | Surveilling immigrants
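
As a schematic of how scraped-gallery face identification works in general (embed faces as vectors, search for the nearest gallery vector, accept a match above a similarity threshold), here is a minimal sketch. It is not Clearview AI's actual pipeline; the embedding function, dimensions, and threshold are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: in a real system, embed() would be a face-embedding
# network (a CNN mapping a face crop to a unit vector); here we fake it.
EMBED_DIM = 128

def embed(face_id: int) -> np.ndarray:
    """Deterministic fake embedding, for illustration only."""
    g = np.random.default_rng(face_id)
    v = g.normal(size=EMBED_DIM)
    return v / np.linalg.norm(v)

# A scraped "gallery" of known identities (Clearview reportedly built such a
# gallery from billions of scraped web images).
gallery_ids = list(range(1000))
gallery = np.stack([embed(i) for i in gallery_ids])

def identify(probe: np.ndarray, threshold: float = 0.6):
    """Nearest-neighbor search by cosine similarity, with an accept threshold."""
    sims = gallery @ probe  # cosine similarity, since all vectors are unit-norm
    best = int(np.argmax(sims))
    return (gallery_ids[best], float(sims[best])) if sims[best] >= threshold else None

# A probe photo of identity 42: same fake embedding plus a little noise.
probe = embed(42) + rng.normal(scale=0.05, size=EMBED_DIM)
probe /= np.linalg.norm(probe)
print(identify(probe))  # -> (42, ~0.99): accepted as a match

# The accept threshold trades false matches against misses; a system whose
# embeddings are less accurate for some demographic groups produces more
# false matches for those groups at any fixed threshold.
```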

Oculus

Used in Russia to monitor LGBTQ people.

See also

References

  1. F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. https://d3.harvard.edu/platform-rctom/submission/predictive-policing-promoting-peace-or-perpetuating-prejudice/. Retrieved 13 November 2024.
  2. "Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights" (PDF). europarl.europa.eu. May 2024. https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf. Retrieved 6 November 2024.
  3. Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained. Retrieved 13 November 2024.
  4. Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained. Retrieved 13 November 2024.
  5. F, Holly (13 November 2018). "Predictive Policing: Promoting Peace or Perpetuating Prejudice". d3.harvard.edu. https://d3.harvard.edu/platform-rctom/submission/predictive-policing-promoting-peace-or-perpetuating-prejudice/. Retrieved 13 November 2024.
  6. Peteranderl, Sonja (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. https://www.acgusa.org/wp-content/uploads/2020/03/2020_Predpol_Peteranderl_Kellen.pdf. Retrieved 13 November 2024.
  7. Peteranderl, Sonja (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. https://www.acgusa.org/wp-content/uploads/2020/03/2020_Predpol_Peteranderl_Kellen.pdf. Retrieved 13 November 2024.
  8. Lau, Tim (1 April 2020). "Predictive Policing Explained". brennancenter.org. https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained. Retrieved 13 November 2024.
  9. "China's Algorithms of Repression: Reverse Engineering a Xinjiang Police Mass Surveillance App". hrw.org. 1 May 2019. https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass. Retrieved 23 October 2024.
  10. Wang, Maya (8 April 2021). "China's Techno-Authoritarianism Has Gone Global". hrw.org. https://www.hrw.org/news/2021/04/08/chinas-techno-authoritarianism-has-gone-global. Retrieved 23 October 2024.
  11. "Freedom of Information Act request regarding use of Clearview AI Facial Recognition Software" (PDF). immigrantdefenseproject.org. 19 October 2020. Retrieved 8 November 2024.
  12. Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
  13. Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. https://www.theverge.com/2020/8/14/21368930/clearview-ai-ice-contract-privacy-immigration. Retrieved 9 November 2024.
  14. Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. https://www.aclu.org/news/privacy-technology/how-face-recognition-fuels-racist-systems-of-policing-and-immigration-and-why-congress-must-act-now. Retrieved 8 November 2024.
  15. Scott, Jeramie (17 March 2022). "Is ICE Using Facial Recognition to Track People Who Allegedly Threaten Their Agents?". epic.org. Retrieved 8 November 2024.
  16. Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. https://www.theverge.com/2020/8/14/21368930/clearview-ai-ice-contract-privacy-immigration. Retrieved 9 November 2024.
  17. Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. https://inkstickmedia.com/from-gaza-to-ukraine-ai-is-transforming-war/. Retrieved 13 November 2024.
  18. Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. https://www.russiamatters.org/analysis/roles-and-implications-ai-russian-ukrainian-conflict. Retrieved 13 November 2024.
  19. Bendett, Sam (20 July 2023). "Roles and Implications of AI in the Russian-Ukrainian Conflict". russiamatters.org. https://www.russiamatters.org/analysis/roles-and-implications-ai-russian-ukrainian-conflict. Retrieved 13 November 2024.
  20. Ashby, Heather (6 March 2024). "From Gaza to Ukraine, AI is Transforming War". inkstickmedia.com. https://inkstickmedia.com/from-gaza-to-ukraine-ai-is-transforming-war/. Retrieved 13 November 2024.
  21. "Russia using generative AI to ramp up disinformation, says Ukraine minister". reuters.com. 16 October 2024. https://www.reuters.com/technology/artificial-intelligence/russia-using-generative-ai-ramp-up-disinformation-says-ukraine-minister-2024-10-16/. Retrieved 13 November 2024.
  22. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/. Retrieved 18 October 2024.
  23. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. https://responsiblestatecraft.org/israel-ai-targeting/. Retrieved 13 November 2024.
  24. Katibah, Leila (October 2024). "The Genocide Will Be Automated—Israel, AI and the Future of War". merip.org. https://merip.org/2024/10/the-genocide-will-be-automated-israel-ai-and-the-future-of-war/. Retrieved 18 October 2024.
  25. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. https://responsiblestatecraft.org/israel-ai-targeting/. Retrieved 13 November 2024.
  26. "'AI-assisted genocide': Israel reportedly used database for Gaza kill lists". aljazeera.com. 4 April 2024. https://www.aljazeera.com/news/2024/4/4/ai-assisted-genocide-israel-reportedly-used-database-for-gaza-kill-lists. Retrieved 6 November 2024.
  27. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. https://responsiblestatecraft.org/israel-ai-targeting/. Retrieved 13 November 2024.
  28. "About the Pegasus Project". forbiddenstories.org. 18 July 2021. https://forbiddenstories.org/about-the-pegasus-project/. Retrieved 9 November 2024.
  29. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. https://responsiblestatecraft.org/israel-ai-targeting/. Retrieved 13 November 2024.
  30. Echols, Connor (3 April 2024). "Israel using secret AI tech to target Palestinians". responsiblestatecraft.org. https://responsiblestatecraft.org/israel-ai-targeting/. Retrieved 13 November 2024.
  31. Del Villar, Ashley; Hayes, Myaisha (22 July 2021). "How Face Recognition Fuels Racist Systems of Policing and Immigration — And Why Congress Must Act Now". aclu.org. https://www.aclu.org/news/privacy-technology/how-face-recognition-fuels-racist-systems-of-policing-and-immigration-and-why-congress-must-act-now. Retrieved 8 November 2024.
  32. Peteranderl, Sonja (January 2020). "Under Fire: The Rise and Fall of Predictive Policing" (PDF). acgusa.org. https://www.acgusa.org/wp-content/uploads/2020/03/2020_Predpol_Peteranderl_Kellen.pdf. Retrieved 13 November 2024.
  33. Lyons, Kim (14 August 2020). "ICE just signed a contract with facial recognition company Clearview AI". theverge.com. https://www.theverge.com/2020/8/14/21368930/clearview-ai-ice-contract-privacy-immigration. Retrieved 9 November 2024.