Artificial Intelligence applied to language and its potential application to the AA.PP domain

Authors

  • Víctor Fresno Fernández, Associate Professor (Profesor Titular de Universidad), Department of Computer Languages and Systems (Departamento de Lenguajes y Sistemas Informáticos), Universidad Nacional de Educación a Distancia (UNED)

DOI:

https://doi.org/10.36151/RCAP.ext.4

Keywords:

Language Technologies, Artificial Intelligence, Neural Networks, Language Models, Public Administrations, Applications

Abstract

This article reflects on how Artificial Intelligence and Natural Language Processing can be applied to the domain of Public Administrations. It opens with a historical review of computational linguistics from its origins to the present day, a field now widely discussed following the emergence of impressive conversational applications such as ChatGPT, and emphasizes the profound leap that neural networks and self-learning mechanisms have brought to it. The concept of language modeling is introduced, and it is shown how access to massive volumes of data and the exponential growth of computing capacity have acted as necessary accelerators of this development. The article then proposes specific applications of these technologies in the context of Public Administration, in areas ranging from project-management optimization and improved citizen services to administrative digital transformation, covering crucial aspects such as cybersecurity, the drafting and oversight of contractual specifications, advanced data management, and the simplification of legal language. It concludes with a reflection on the imminent transformation and the added value these technologies can bring to the reality of Public Administrations.

Published

2024-01-09

How to Cite

Fresno Fernández, V. (2024). Artificial Intelligence applied to language and its potential application to the AA.PP domain. Revista Canaria De Administración Pública, (Extraordinario), 91–116. https://doi.org/10.36151/RCAP.ext.4

Section

Open government, data and new technologies in Public Administration (Gobierno abierto, datos y nuevas tecnologías en la Administración Pública)