In its quest for truth, science combines processes of discovery with research methods that produce theoretical knowledge in various forms. The application of this knowledge as a transformative tool has the potential to improve human life in many areas, but also to generate collateral damage, environmental impact and complex social dilemmas. Reflecting on the ethics of a particular science means questioning not only the context and means by which this knowledge is obtained, but also its limits and the responsibilities that emerge from its development and application. In this regard, the ethics of science serves as a guiding framework that seeks to ensure that research and its applications are aligned with fundamental values such as integrity, equity and the common good.
In young disciplines such as artificial intelligence, where the technical challenges are profound and the frontiers of knowledge have yet to be demarcated, there is a tendency to identify, at least in part, theoretical and even epistemological discussions with the discussion of the field's ethical implications. Contrary to this tendency, the aim should be to delve into the quid est of AI as the first and necessary question, one that precedes any axiological assessment. This “flight to ethics” often sets aside the underlying scientific problems and shifts the debate towards “more accessible” practical issues. But focusing on selected ethical dilemmas without first establishing a solid theoretical framework, one capable of making explicit the technical challenges that persist as unresolved obstacles, results in an exchange of “discourses” that is of little benefit to real ethics.
In line with this, Nick Bostrom argues that the development of artificial general intelligence (AGI), that is, of systems that truly emulate human intelligence, remains an outstanding challenge, and that many “ethical” debates are in fact pre-empting technical issues that have yet to be resolved. Despite advances in deep learning (DL), artificial neural networks (ANN) and natural language processing (NLP), the problems of intelligence, intentionality and common sense remain slow-moving and still opaque areas of research. Faced with this difficulty, some theorists tend to move towards the ethical side of AI, where discussions can be more speculative and do not require rigorous empirical validation.
Stuart Russell points out that while the study of the ethics of AI is valuable, it can sometimes overshadow more pressing issues related to the technical control of systems. Rather than addressing concrete challenges, such as optimizing algorithms or solving alignment problems, ethical reflection allows theorists to explore abstract dilemmas without any commitment to tangible results. In this regard, debates about “the rights of robots” or “artificial consciousness” represent an intellectual escape route from the inherent complexity of improving systems that still struggle with basic reasoning and planning tasks.

Luciano Floridi, for his part, argues that much of the ethics of AI rests on anthropomorphic analogies that, however appealing, do not address the real technical problems of the field. AI popularizers, influenced by popular culture narratives, tend to project human qualities onto AI, attributing intentions and values to it. This tendency to anthropomorphize results in ethical debates about hypothetical scenarios, such as a machine takeover, that seem more appealing than the analysis of the inherent limitations of current models of representation or reinforcement learning.
In addition, many of these ethical debates lack the necessary technical rigor and focus on futuristic scenarios instead of addressing immediate challenges. Influenced by popular culture and science fiction narratives, the discourse on AI often leans toward the speculative, pushing real structural problems into the background. Media attention, moreover, is more easily captured by the discussion of ethical issues than by strictly technical debates.
The development of an ethics of AI with its own field of application is fundamental to guaranteeing a humanistic vision of AI and to fostering responsible innovation. It should stem from a primary consideration: that all available efforts and technical means be used to ensure that AIs meet criteria of governability, explainability, traceability and transparency. In this context, it is equally necessary to consider that the full power of AIs, if turned to harmful or criminal activities such as cybercrime, can result in enormous moral and material damage. For this reason, some regulation of AIs, that is, sets of applicable and enforceable rules and laws governing operations that involve artificial intelligences, is a necessity that cannot be postponed much longer. At the same time, this is an issue with broad implications, ranging from geopolitics and the sovereignty of nations to data protection, entity ownership and the privacy of individuals and their personal data. The EU has taken firm steps in this direction, although these are perhaps somewhat restrictive and based solely on risk-assessment protocols. Other countries and communities, such as the USA, have moved in a different direction, although definitions at the state level are still pending; in general, the issue remains under debate.