Ethical issues in the implementation of artificial intelligence in public administration

Authors

  • Pedro Juan Baquero Pérez, Associate Professor at the Universidad de La Laguna and Head of the IT and Communications Service of the Gobierno de Canarias

DOI:

https://doi.org/10.36151/RCAP.2023.8

Keywords:

Artificial intelligence (AI), public administrations, ethics, responsibility, privacy, security, explainability, decision-making

Abstract

This article explores the ethical ramifications and moral responsibility involved in implementing artificial intelligence in the public sector. It addresses key concepts such as artificial intelligence, ethics, and responsibility, analyzing the ethical theories that apply, moral decision-making, and whether a distinct ethics exists for public administration. The article asks whether we are conveying an adequate message about AI to society and examines the moral considerations for its implementation, such as privacy, security, explainability, fairness, the impact on workers, and other social effects. It also discusses whether ethics can be programmed, how to deal with the dangers involved, and how to approach moral decisions in AI. Finally, it reflects on the attribution and distribution of moral responsibility in AI and on the challenges public administrations face in deciding what to do, how and when to act, and who should be involved in the process.




Published

10-07-2023

How to cite

Baquero Pérez, P. J. (2023). Cuestiones éticas sobre la implantación de la inteligencia artificial en la administración pública. Revista Canaria de Administración Pública, (1), 243–282. https://doi.org/10.36151/RCAP.2023.8

Issue

Section

Public innovation and digital administration