

Title:
Bias and discrimination in ML-based systems of administrative decision-making and support.
Authors:
MAC, Trang Anh (mactranganh@gmail.com)
Source:
Computer Law & Security Review. Nov2024, Vol. 55, pN.PAG-N.PAG. 1p.
Database:
Business Source Premier

Further Information

In 2020, four social workers were heavily criticised for alleged wilful and gross negligence: back in 2013 they had failed to notice and report the risks to an eight-year-old boy's life from violent abuse by his mother and her boyfriend, which ultimately led to his death [*: Trang Anh MAC, LL.M. Digital Law, University of Paris XII Est-Créteil, reporter at AstraIA Gear. This paper is the English version of her master's thesis, under the supervision of Dr. Laurie MARGUET and Prof. Florent MADELAINE] [1: A. Reyes-Velarde, Charges dismissed against social workers linked to Gabriel Fernandez's killing, Los Angeles Times, 16 Jul 2020, available online at https://www.latimes.com/california/story/2020-07-15/charges-against-the-social-workers-linked-to-gabriel-fernandez-killing-will-be-dropped]. The 2020 documentary The Trials of Gabriel Fernandez [2: https://www.imdb.com/title/tt11822998/] discussed the Allegheny Family Screening Tool (AFST) [3: Allegheny County, Allegheny Family Screening Tool, available online at https://www.alleghenycounty.us/Services/Human-Services-DHS/DHS-News-and-Events/Accomplishments-and-Innovations/Allegheny-Family-Screening-Tool], implemented by Allegheny County, US, since 2016 to foresee involvement with the social services system. Rhema Vaithianathan [4: Bio of Prof. Rhema Vaithianathan, available online at https://academics.aut.ac.nz/rhema.vaithianathan], co-director of the Centre for Social Data Analytics, and members of the Children's Data Network [5: Our team, Children's Data Network, available online at https://www.datanetwork.org/people/], together with Emily Putnam-Hornstein [6: Bio of Dr. Emily Putnam-Hornstein, available online at https://www.datanetwork.org/people/#emily-putnam-hornstein], built this exemplary screening tool, which integrates and analyses enormous amounts of data about persons allegedly associated with harm to children, housed in the DHS Data Warehouse [7: Allegheny County, DHS Data Warehouse, available online at https://www.alleghenycounty.us/Services/Human-Services-DHS/DHS-News-and-Events/Accomplishments-and-Innovations/DHS-Data-Warehouse]. It was considered a possible solution to the failure of overwhelmed manual administrative systems. However, like other applications of AI in our modern world, algorithmic decision-making and support systems in the public sector have also been denounced for data and algorithmic bias [8: N. LaGrone, Can AI Reduce Harm to Children?: Gabriel Fernandez and the Case for Machine Learning, 9 April 2020, available online at https://www.azavea.com/blog/2020/04/09/can-ai-reduce-harm-to-children/]. This topic has been debated for the last few years but has not yet been settled. Therefore, this research offers an overview of the problem: the bias and discrimination of AI-based administrative decision-making and support systems. First, I define bias and discrimination and the blurred boundary between the two concepts from a legal perspective, then go into the causes of bias at each stage of AI system development, mainly the result of biased data sources and past human decisions, social and political contexts, and the developers' ethics. In the same chapter, I present the non-discrimination legal framework, including its application to and convergence with administrative law as regards automated decision-making and support systems, as well as the role of ethics and of regulations on personal data protection.
In the next chapter, I outline new proposals for potential solutions from both legal and technical perspectives. With respect to the former, my focus is on fairness definitions and other options currently available to developers, for example toolkits, benchmark datasets, debiased data, etc. For the latter, I report the strategies and new proposals for governing datasets and the development and implementation of AI systems in the near future.

(1) Over the last decade, we have witnessed considerable growth of Artificial Intelligence (AI) [9: The term Artificial Intelligence (AI) was coined at a workshop organised by John McCarthy in 1955 at the Dartmouth Summer Research Project on Artificial Intelligence. An official definition has never been agreed and universally accepted; such lack of a precise definition of AI has probably helped the field to grow, blossom, and advance at an ever-accelerating pace (One Hundred Year Study on Artificial Intelligence (AI100), 2016 Report, Stanford University, https://ai100.stanford.edu/2016-report). Here are some useful and comprehensible definitions. According to the Oxford English Dictionary, AI is "the capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this. In later use also: software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data. Also as a count noun: an instance of this type of software; a (notional) entity exhibiting such intelligence". ISO/IEC 22989:2022 determines that AI is "a technical and scientific field devoted to the engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives". Alternatively, "Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment" (Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements, Cambridge, UK: Cambridge University Press, 2010). Finally, the EU's AI Act defines an 'AI system' as "the software that is developed with one or more of the techniques and approaches listed in Annex I [which are machine learning, logic- and knowledge-based approaches, statistical approaches, Bayesian estimation, search and optimisation methods] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with" (Article 3(1) AI Act)], especially its main branch, Machine Learning (ML) [10: Machine Learning is defined by the Oxford Dictionary of Computing (6th ed.) as "a branch of AI concerned with the construction of programs that learn from experience. Learning may take many forms, ranging from learning from examples and learning by analogy to autonomous learning of concepts and learning by discovery. Incremental learning involves continuous improvement as new data arrives while one-shot or batch learning distinguishes a training phase from the application phase. Supervised learning occurs when the training input has been explicitly labelled with the classes to be learned. Most learning methods aim to demonstrate generalisation whereby the system develops efficient and effective representations that encompass large chunks of closely related data". Also, ISO/IEC 23053:2022, Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML), describes ML as "a branch of AI that employs computational techniques to enable systems to learn from data or experiences. In other words, ML systems are developed through the optimisation of algorithms to fit (to) training data, or improve their performance based through maximising a reward. ML methods include deep learning, which is also addressed in this document"]. The 2021 survey by McKinsey & Company [11: McKinsey, Global survey: The state of AI in 2021, 8 Dec 2021, available online at: Global survey: The state of AI in 2021 | McKinsey] showed a solid ascent: 56 % of respondents reported adopting AI in at least one function, up from 50 % in 2020. Adoption is concentrated in service-operations optimisation and product and/or service development, followed by marketing and sales, with significant cost decreases reported. In 2021, the private sector invested around $93.5 billion in AI, more than double the amount invested in 2020 [12: Artificial Intelligence Index Report 2022, Stanford University, available online at: 2022-AI-Index-Report_Master.pdf (stanford.edu)]. In the public sector, the OECD initiated a mapping identifying 50 countries (counting the European Union) that have launched, or are running projects to launch, nationwide AI approaches, allowing governments to incorporate AI into the whole process of policy-making and public-service design [13: Berryhill, J., et al. (2019), Hello, World: Artificial intelligence and its use in the public sector, OECD Working Papers on Public Governance, No. 36, OECD Publishing, Paris, available online at https://doi.org/10.1787/726fd39d-en]. In the 2020 report of AI Watch [14: "European Commission knowledge service to monitor the development, uptake and impact of Artificial Intelligence for Europe"; M. Manzoni, R. Medaglia, L. Tangi, C. Van Noordt, L. Vaccari, D. Gattwinkel, AI Watch, road to the adoption of Artificial Intelligence by the public sector, Joint Research Centre (European Commission), 25 May 2022, available online at: AI Watch, road to the adoption of Artificial Intelligence by the public sector - Publications Office of the EU (europa.eu)], which maps the use of artificial intelligence in the EU Member States' public services, "Algorithmic Decision Making", which can be based on ML [15: D. R. Amariles, Algorithm Decision Systems: Automation and Machine Learning in the Public Administration, The Cambridge Handbook of the Law of Algorithms, 2020, available online at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3974564], "Machine Learning, Deep Learning" and ML-powered "Security Analytics and Threat Intelligence" are among the ten most common AI typologies appearing in the census [16: G. Misuraca, C. Van Noordt, AI Watch - Artificial Intelligence in public services, EUR 30255 EN, Publications Office of the European Union, Luxembourg, 2020, ISBN 978-92-76-19540-5, doi:10.2760/039619, JRC120399, available online at: JRC Publications Repository - AI Watch - Artificial Intelligence in public services (europa.eu)].

(2) Beside the multiple uses of AI in our everyday life [17: What is artificial intelligence and how is it used?, available online at: What is artificial intelligence and how is it used? | News | European Parliament (europa.eu)], with well-known examples such as face ID, social media, email/message sending and Google search [18: B. Marr, The 10 Best Examples Of How AI Is Already Used In Our Everyday Life, 16 Dec 2019, available online at: The 10 Best Examples Of How AI Is Already Used In Our Everyday Life (forbes.com)], many substantive issues and questions are raised, including policy, legal, governance, and ethical considerations.
Policy makers have addressed the concerns and challenges of the AI typology "Decision Making" [19: U. Gasser, V. A. F. Almeida, A layered model for AI governance, IEEE Internet Computing 21(6), Nov 2017, pp. 58-62, doi:10.1109/mic.2017.4180835]:

• Justice and equality
  ○ Legal perspective: Intellectual Property Rights Protection, Liability (for physical, punitive damages...), etc.
    ■ Public services: Good Administration (Governance) Principle, etc. [20: T. Timan, A. F. Van Veenstra, G. Bodea, Artificial Intelligence and public services, Policy Department for Economic, Scientific and Quality of Life Policies, PE 662.936, July 2021, available online at: Artificial Intelligence and public services (europa.eu)]
  ○ Ethical perspective: Transparency, Accountability, Explainability, Fairness, etc.
    ■ Public services: Human Rights, i.e. the Non-Discrimination (Direct and Indirect) Principle, the Right to Explanation (black box), Protection of Personal Data, etc. [21: Idem]
• Use of force (e.g. autonomous weapons)
• Safety and certification
• Privacy (in relation to the protection of personal data mentioned above)
• Displacement of labour and taxation.

(3) These risks and challenges are rooted in the algorithmic bias and discrimination of AI-based systems, mainly in automated decision support and decision-making processes. The reality of the use of Machine Learning has demonstrated the existence of bias and discrimination in its applications in both the private and public sectors, including the executive (penal and administrative) and judicial branches. Created in 2010 and renamed Geolitica last year, PredPol, a predictive policing software based on COMPSTAT data [22: https://www.predpol.com/about/] that attempts to predict property crimes using predictive analytics, was one of the most common predictive policing tools in the US [23: W. D. Heaven, Predictive policing algorithms are racist. They need to be dismantled., 17 July 2020, available online at: Predictive policing algorithms are racist. They need to be dismantled. | MIT Technology Review], alongside Palantir and HunchLab. It has received enormous criticism for bias in its predictions: according to an investigation of predictions provided to 38 agencies, Black and Latino neighbourhoods would have been targeted by its algorithm at rates likely four times higher than Indianapolis' white inhabitants [24: G. Mohler, R. Raje, J. Carter, M. Valasik and J. Brantingham, A Penalized Likelihood Method for Balancing Accuracy and Fairness in Predictive Policing, 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2018, pp. 2454-2459, doi: 10.1109/SMC.2018.00421, available online at: A Penalized Likelihood Method for Balancing Accuracy and Fairness in Predictive Policing | IEEE Conference Publication | IEEE Xplore], and the areas receiving the most forecasts had the fewest white locals [25: A. Sankin, D. Mehrotra, S. Mattu, D. Cameron, A. Gilbertson, D. Lempres, and J. Lash, Crime Prediction Software Promised to Be Free of Biases. New Data Shows It Perpetuates Them, 12 Feb 2021, available online at https://gizmodo.com/crime-prediction-software-promised-to-be-free-of-biases-1848138977]. In the EU, two Dutch algorithmic predictive policing tools [26: G. V. TIL, Automating Society 2019, Netherlands, AlgorithmWatch, available at: NETHERLANDS - AlgorithmWatch] have raised concerns about personal data protection, the right to non-discrimination [27: K. La Fors, Legal Remedies For a Forgiving Society: Children's rights, data protection rights and the value of forgiveness in AI-mediated risk profiling of children by Dutch authorities, Computer Law & Security Review, Volume 38, September 2020, 105430, available online at https://doi.org/10.1016/j.clsr.2020.105430], the presumption of innocence, etc. [28: European Digital Rights, Use cases: Impermissible AI and fundamental rights breaches, August 2020, available online at: Case-studies-Impermissible-AI-biometrics-September-2020.pdf (edri.org)]. The first, called Prokid 12-SI, is based on (semi-)automated risk profiling (predictive identification) to flag child endangerment and anti-social behaviour by children; the second, named the "Crime Anticipation System", attempts to predict place-based specific crimes (predictive mapping). In Belgium, a predictive policing project has been launched by the Zennevallei police zone and Ghent University. The answer to a written question in the Belgian Senate stressed an important point: the data and the limits of police statistics can produce prejudice and misinterpretation of the facts, since they primarily reflect police activity, not the reality of the given territory [29: Written question No. 7-591 from Peter Van Rompuy to the Minister of Security and the Interior, responsible for Foreign Trade, available online at: SÉNAT Question écrite n° 7-591 - SENAAT Schriftelijke vraag nr. 7-591 (senate.be)].

(4) In addition to ML applications in the criminal executive branch, automated decision support and/or making is also used to facilitate or directly make administrative decisions in public services. One related scandal of the Dutch government is the failure of SyRI, a risk profiling system. The fiscal watchdog used it to "identify" allegations of tax and contribution fraud in a way that was absolutely unfair in connection with citizens' ethnic origin and dual nationality. About 26,000 parents were unjustifiably accused of fraudulent applications for childcare benefits, causing financial hardship, unemployment, personal bankruptcies, divorce, etc.; one of them committed suicide [30: N. J. REVENTLOW, Automated racism: How tech can entrench bias, 2 March 2021, available online at: Automated racism: How tech can entrench bias – POLITICO]. In its decision of 5 February 2020, the Hague District Court confirmed SyRI's violation of the European Convention on Human Rights [31: Full text of the court's decision available at: ECLI:NL:RBDHA:2020:1878, Rechtbank Den Haag, C-09-550982-HA ZA 18-388 (English) (rechtspraak.nl)], constituting a disproportionate invasion of citizens' private lives. It also pointed to an actual risk of discrimination and stigmatisation of citizens, particularly a risk of prejudice that cannot be controlled [32: European Digital Rights, Use cases: Impermissible AI and fundamental rights breaches, August 2020, available online at: Case-studies-Impermissible-AI-biometrics-September-2020.pdf (edri.org)].
In Austria, the NGO Epicenter Works has investigated an algorithm deployed by a state-owned enterprise of the Austrian employment agency (AMS) to determine potential job opportunities for the unemployed. According to the public analysis, one model discriminated between men with children and women with children, and judged women as negatively as disabled and over-30 persons. The lack of transparency and justification was also criticised [33: Nicolas Kayser-Bril, Austria's employment agency rolls out discriminatory algorithm, sees no problem, 6 October 2019, available online at: Austria's employment agency rolls out discriminatory algorithm, sees no problem - AlgorithmWatch]. With a huge impact on citizens' lives, the SCHUFA score provided by Germany's leading credit bureau can play a significant role as a criterion for renting an apartment, applying for a credit card or concluding a new network services contract. As a black box, and without data donation, no one is actually able to access the database and assess the bias of this AI-based system [34: OpenSCHUFA – shedding light on Germany's opaque credit scoring - AlgorithmWatch]. It is too early to draw firm conclusions, but the example has raised great public concern. Consequently, the Federal Minister of Justice and Consumer Protection took up the issue and asked for more transparent scoring and more trustworthy creditworthiness appraisals [35: OpenSCHUFA].

(5) In Poland, since the beginning of 2018, 374 courts across the country have used a system called System Losowego Przydziału Spraw (Random Allocation of Judges System, or SLPS), provided by the Ministry of Justice, whose stated intention is to guarantee judicial impartiality. "Assigning cases to individual judges must be completely transparent and free from manual control," the Ministry announced in October. However, the Ministry did not provide transparency on the source code: it rejected a request to publish these details, filed in October by the ePanstwo Foundation, on the ground, under Polish jurisprudence, that the algorithm is part of a source code that cannot be accessed or reused [36: G. Hillenius, Polish justice ministry refuses to show code for assigning judges, 16 Feb 2018, available online at: Not fully transparent | Joinup (europa.eu)].

(6) Algorithmic bias and discrimination are not easy to perceive in the public sector, where their impact may not occur on a scale massive enough to be observed; usually they must be detected under the microscope of the press and activists. The administration has not been transparent about the way it applies these technologies and does not provide any public access that would allow anyone to study and assess how well they work [37: W. D. Heaven, Predictive policing algorithms are racist. They need to be dismantled., 17 July 2020, available online at https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/]. In the private sector, it is much more common to learn about such incidents, not because bias is more easily perceived but because customers and Big Tech companies' employees can notice and report bias in the output of AI-based services they regularly use or access. For example, annotators' insensitivity to differences in dialect can eventually mean that Twitter's hate-speech moderation algorithms show ethnic preference, potentially harming Black Americans [38: M. Sap, D. Card, S. Gabriel, Y. Choi, N. A. Smith, The Risk of Racial Bias in Hate Speech Detection, 2019, ACL, available online at https://homes.cs.washington.edu/~msap/pdfs/sap2019risk.pdf]. An experiment by AlgorithmWatch showed that Google Cloud Vision classified a photo of a thermometer held by a hand with a dark skin tone as a "gun", yet classified the same object, in a photo modified to show a lighter skin tone, as an "electronic device" [39: N. Kayser-Bril, Google apologizes after its Vision AI produced racist results, 7 Apr 2020, available online at https://algorithmwatch.org/en/google-vision-racism/]. Amazon's engineers detected bias against women in the company's AI recruitment tool, an unsurprising consequence of historical data: the program was trained on ten years of data dominated by male profiles [40: Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, 11 October 2018, available online at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G. Also see I. A. Hamilton, Why it's totally unsurprising that Amazon's recruitment AI was biased against women, 13 Oct 2018, available online at https://www.businessinsider.com/amazon-ai-biased-against-women-no-surprise-sandra-wachter-2018-10?r=US&IR=T]. In 2018, a proposed class-action lawsuit was filed over job advertisements on Facebook that targeted certain demographics, including age and gender [41: N. Scheiber, Facebook Accused of Allowing Bias Against Women in Job Ads, 18 Sep 2018, available online at https://www.nytimes.com/2018/09/18/business/economy/facebook-job-ads.html]. The platform also allegedly ignored racial bias research conducted by its own employees, who were attempting to reduce discriminatory moderation practices [42: O. Solon, Facebook ignored racial bias research, employees say, 23 July 2020, available online at https://www.nbcnews.com/tech/tech-news/facebook-management-ignored-internal-research-showing-racial-bias-current-former-n1234746]. The bias of AI systems harms minority ethnic communities in modern society and causes imbalances in many aspects of life, but it also harms the entities using the AI-based technologies. A survey conducted by DataRobot in collaboration with the World Economic Forum [43: DataRobot, Report State of AI Bias, 2021, available online at https://www.datarobot.com/wp-content/uploads/2022/01/DataRobot-Report-State-of-AI-Bias_V5.pdf] found that 36 % of organisations have suffered due to algorithmic bias: for instance, 62 % of them lost revenue, 61 % lost customers, 43 % had staff members quit, and over a third incurred litigation costs.

(7) Thanks to the involvement of many NGOs and activists, for instance EFF, noyb, AlgorithmWatch, Amnesty, etc., awareness of the bias of AI-based systems amongst the population has increased over the years. However, the absence of reaction by governments and the ignorance of companies may inflict severe and widespread damage on democratic society. An overwhelming majority (81 %) of IT leaders [44: See DataRobot, Report State of AI Bias, 2021], as well as Big Tech companies like Microsoft and Google, are calling for more AI regulation [45: A. Kharpal, Big Tech's calls for more regulation offers a chance for them to increase their power, 28 Jan 2020, available online at https://www.cnbc.com/2020/01/28/big-techs-calls-for-ai-regulation-could-lead-to-more-power.html]. We are witnessing the emergence of a package of regulations in the EU [46: F. Candelon, R. Charme di Carlo, M. De Bondt, T. Evgeniou, AI Regulation Is Coming, Harvard Business Review Magazine, Sep-Oct 2021, available online at https://hbr.org/2021/09/ai-regulation-is-coming] and the United States [47: Legislation Related to Artificial Intelligence, 5 Jan 2022, available online at: Legislation Related to Artificial Intelligence (ncsl.org)] in relation to AI-based systems, especially regarding data (big data, personal data) and algorithm governance, requirements of transparency, accountability, explainability, fairness and global governance. Furthermore, for the first time ethics has captured the attention of lawmakers and practitioners. It is often considered a non-binding tool supporting the law, which leaves room for misconduct and wrongdoing and significantly restrains its effects on individuals and society in general [48: A. Rességuier, R. Rodrigues, AI ethics should not remain toothless! A call to bring back the teeth of ethics, 22 July 2020, available online at https://doi.org/10.1177/2053951720942541].

(8) A technical and regulatory framework is required to build up the "infrastructure" for the continual evolution of AI-based systems, especially in automated administrative decision support and making. This paper analyses the algorithmic bias of ML applications in automated administrative decision-making/support systems in the EU and French jurisdictions from two distinct perspectives: technology and regulation. (Chapter I) Bias and discrimination are defined from the legal and technical perspectives across the life cycle of an ML model, followed by the legal and ethical framework governing automated administrative decision support and making. (Chapter II) Subsequently, the most commonly used methods to mitigate bias and discrimination in ML are examined. The negative effects of AI bias remain explicitly linked to statistical and computational aspects, such as representativeness in databases and the equity of the algorithm. Furthermore, personal, intrinsic institutional and social elements are significant causes of algorithmic bias and discrimination. This research provides some current technical and legal solutions suggested by scientists, activists and governments to prevent and confront the bias and discrimination of AI systems. [ABSTRACT FROM AUTHOR]
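As an illustration of the "fairness definitions" and developer toolkits the abstract mentions (a minimal sketch with invented toy data and hypothetical names, not code from the article), two common group-fairness measures, the demographic parity difference and the disparate impact ratio, can be computed directly from a model's decisions grouped by a protected attribute:

    # Minimal sketch (illustrative only, not from the article): two common
    # group-fairness measures computed on hypothetical binary decisions.
    from collections import defaultdict

    def positive_rates(decisions, groups):
        """Share of favourable (1) decisions per protected group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += int(decision)
        return {g: positives[g] / totals[g] for g in totals}

    def demographic_parity_difference(rates):
        """Largest gap in favourable-decision rates between any two groups."""
        return max(rates.values()) - min(rates.values())

    def disparate_impact_ratio(rates):
        """Lowest group rate divided by the highest; the informal '80 % rule'
        treats values below 0.8 as a warning sign."""
        return min(rates.values()) / max(rates.values())

    # Invented toy data: 1 = favourable decision (e.g. application approved).
    decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    rates = positive_rates(decisions, groups)
    print(rates)                                   # {'a': 0.6, 'b': 0.2}
    print(demographic_parity_difference(rates))    # ~0.4
    print(disparate_impact_ratio(rates))           # ~0.33

Open-source toolkits such as AI Fairness 360 and Fairlearn bundle metrics of this kind with mitigation techniques (e.g. reweighing, threshold adjustment), which is presumably the sort of developer-facing option the author surveys.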

Copyright of Computer Law & Security Review is the property of Elsevier B.V. and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)