Algorithmic Discrimination: A Legal Analysis

Document Type: Original Article

Author

Associate Professor, Department of Human Rights, Faculty of Law, Shahid Beheshti University, Tehran, Iran, and Lead Researcher, Phanous Research & Innovation Center

Abstract

The use of algorithms, and artificial intelligence in particular, to make a wide range of decisions has raised many legal issues and questions. One of these issues is the discriminatory character of algorithmic decisions. Such discrimination may target an individual, a group, or a class of people in society. States have a general obligation to eliminate all forms of discrimination, including algorithmic discrimination. Identifying the instances and causes of algorithmic discrimination, and devising solutions to address them, has therefore become a pressing legal question. This article examines three questions: what is algorithmic discrimination, what causes it, and how should it be addressed? To this end, after defining and categorizing algorithms and algorithmic decisions, the author identifies two categories of causes of algorithmic discrimination (those related to input data and those related to design and performance), reviews the legal initiatives of leading countries concerning such discrimination, and classifies the various solutions. Finally, given the absence of specific rules protecting individuals against such decisions in Iran, the article suggests drawing on the strengths of all the solutions presented here to fill this legal gap.

Keywords


References

  a) English

    1. Cofone, Ignacio N. (2019), “Algorithmic Discrimination Is an Information Problem”, Hastings Law Journal, Vol. 70, Issue 6, Article 1, pp. 1404-06, at: https://repository.uchastings.edu/hastings_law_journal/vol70/iss6/1.
    2. Committee of experts on internet intermediaries (2018), Study on the Human Rights Dimensions of Automated Data Processing Techniques (in particular algorithms) and possible regulatory implications, https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html.
    3. European Commission (2021), Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, Brussels, 21.4.2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
    4. Executive Office of the President (2016), Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights, https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf.
    5. Galajdová, Dominika (2019), “Artificial Intelligence as a New Challenge for Software Law”, European Journal of Law and Technology, Vol. 10, Issue 1.
    6. Gerards, Janneke and Raphaële Xenidis (2021), Algorithmic Discrimination in Europe: Challenges and Opportunities for Gender Equality and Non-Discrimination Law, A Special Report, European Union, https://op.europa.eu/en/publication-detail/-/publication/082f1dbc-821d-11eb-9ac9-01aa75ed71a1.
    7. Guidelines on addressing the human rights impacts of algorithmic systems (2020), Appendix to Recommendation CM/Rec(2020)1, https://rm.coe.int/09000016809e1154.
    8. International Working Group on Data Protection in Telecommunications, Working Paper on Privacy and Artificial Intelligence, 4th Meeting, 29-30 November 2018, Queenstown (New Zealand), at: https://epic.org/IWG/WP-AI.pdf.
    9. ITU, Security, Infrastructure and Trust Working Group, Report of Trust Workstream, Big data, machine learning, consumer protection and privacy, 2018, https://www.itu.int/en/ITU-T/extcoop/figisymposium/Documents/FIGI_SIT_Techinical%20report_Big%20data,%20Machine%20learning,%20Consumer%20protection%20and%20Privacy_f.pdf.
    10. Jernigan, Carter, Mistree, Behram F.T. (2009), “Gaydar: Facebook friendships expose sexual orientation”, First Monday, Vol. 14, No. 10 - 5 October 2009, https://firstmonday.org/ojs/index.php/fm/article/download/2611/2302.
    11. Kaye, David, Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (2018), Report on Artificial Intelligence Technologies and Implications for Freedom of Expression and the Information Environment, 29 August 2018, https://www.undocs.org/A/73/348; https://www.ohchr.org/EN/Issues/FreedomOpinion/Pages/ReportGA73.aspx.
    12. Kleinberg, Jon et al. (2018), “Discrimination in the Age of Algorithms”, Journal of Legal Analysis, Vol. 10.
    13. Law Library of Congress (2019), Regulation of Artificial Intelligence in Selected Jurisdictions, January 2019, pp. 16-132, https://www.loc.gov/law/help/artificial-intelligence/regulation-artificial-intelligence.pdf.
    14. Köchling, Alina and Marius Claus Wehner (2020), “Discriminated by an Algorithm: A Systematic Review of Discrimination and Fairness by Algorithmic Decision-Making in the Context of HR Recruitment and HR Development”, https://link.springer.com/article/10.1007/s40685-020-00134-w.
    15. Malgieri, Gianclaudio (2019), “Automated Decision-Making in the EU Member States: The Right to Explanation and Other ‘Suitable Safeguards’ in the National Legislations”, Computer Law & Security Review, Vol. 35, Issue 5.
    16. Orwat, Carsten (2020), Risks of Discrimination through the Use of Algorithms, https://publikationen.bibliothek.kit.edu/1000123477.
    17. Panel for the Future of Science and Technology (2019), Understanding Algorithmic Decision-Making: Opportunities and Challenges, European Parliamentary Research Service, March 2019, https://www.statewatch.org/media/documents/news/2019/mar/ep-study-Understanding-algorithmic-decision-making.pdf.
    18. Paterson, Moira and Maeve McDonagh (2019), “Data Protection in an Era of Big Data: The Challenges Posed by Big Personal Data”, Monash University Law Review, Vol. 44, No. 1, https://www.monash.edu/__data/assets/pdf_file/0009/1593630/Paterson-and-McDonagh.pdf.
    19. Revell, T. (2018), Face-Recognition Software Is Perfect – If You’re a White Man, https://www.newscientist.com/article/2161028-face-recognition-software-is-perfect-if-youre-a-white-man.
    20. Rovatsos, Michael, Brent Mittelstadt and Ansgar Koene (2019), Landscape Summary: Bias in Algorithmic Decision-Making, The Centre for Data Ethics and Innovation, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/819055/Landscape_Summary_-_Bias_in_Algorithmic_Decision-Making.pdf.
    21. Tene, Omer and Jules Polonetsky (2017), “Taming the Golem: Challenges of Ethical Algorithmic Decision-Making”, North Carolina Journal of Law and Technology, Vol. 19, Issue 1.
    22. “The 80/20 Data Science Dilemma”, https://www.infoworld.com/article/3228245/the-80-20-data-science-dilemma.html.
    23. Xenidis, Raphaële and Linda Senden (2020), “EU Non-Discrimination Law in the Era of Artificial Intelligence: Mapping the Challenges of Algorithmic Discrimination”, in Ulf Bernitz et al. (eds), General Principles of EU Law and the EU Digital Order (Kluwer Law International), https://cadmus.eui.eu/handle/1814/65845.
    24. Zuiderveen Borgesius, Frederik (2018), Discrimination, Artificial Intelligence, and Algorithmic Decision-Making, Council of Europe, https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73.

    b) French

    1. Bertail, Patrice, David Bounie, Stephan Clémençon and Patrick Waelbroeck (2019), Algorithmes: biais, discrimination et équité, 14 février 2019, p. 10, https://www.telecom-paris.fr/wp-content-EvDsK19/uploads/2019/02/Algorithmes-Biais-discrimination-equite.pdf.
    2. Institut Montaigne (2020), Algorithmes: contrôle des biais S.V.P., Rapport Mars 2020, pp. 17-18. https://www.institutmontaigne.org/ressources/pdfs/publications/algorithmes-controle-des-biais-svp.pdf.
    3. Lucchesi, Laure and Simon Chignard (2019), Rapport – Éthique et responsabilité des algorithmes publics, Juin 2019, https://www.etalab.gouv.fr/wp-content/uploads/2020/01/Rapport-ENA-Ethique-et-responsabilit%C3%A9-des-algorithmes-publics.pdf.

    c) Laws and Draft Legislation

    1. Algorithmic Accountability Act of 2019, 116th Congress (2019-2020), https://www.congress.gov/bill/116th-congress/house-bill/2231.
    2. Council of Europe, Declaration on the Manipulative Capabilities of Algorithmic Processes, Adopted by the Committee of Ministers on 13 February 2019, https://search.coe.int/cm/pages/result_details.aspx?objectid=090000168092dd4b.
    3. Data Protection Act 2018, http://www.legislation.gov.uk/ukpga/2018/12/pdfs/ukpga_20180012_en.pdf.
    4. Directive on Automated Decision-Making (Canada), effective April 1, 2019, with compliance required no later than April 1, 2020, https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592&section=html.
    5. Federal Data Protection Act of 30 June 2017 (Germany) (Federal Law Gazette I p. 2097), as last amended by Article 12 of the Act of 20 November 2019, https://www.gesetze-im-internet.de/englisch_bdsg/englisch_bdsg.html#p0310.
    6. Loi n° 78-17 du 6 janvier 1978 relative à l'informatique, aux fichiers et aux libertés, https://www.legifrance.gouv.fr/loda/id/JORFTEXT000000886460/.