This article was written and edited by Joshua (Isidore) Foakes and Jen Stirrup.
Given the persistent drive for human dignity at the forefront of all human action, computer ethics has been around almost as long as computers themselves. Its beginnings can be traced to Norbert Wiener’s Cybernetics (1948) and The Human Use of Human Beings (1950): Wiener saw in the emerging technology of cybernetics an opportunity, or a destiny, to affect every major aspect of life. “We are already in a position to construct artificial machines of almost any degree of elaborateness of performance. Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil.” (Wiener 1948, pp. 27–28)
The 2020 “Call for AI Ethics” was signed by the Catholic Church’s Pontifical Academy for Life and the Italian Ministry of Innovation, as well as Microsoft, IBM and FAO. An agreement between religious and secular organisations alike, the goal of the “Call” is to promote an ethical approach to artificial intelligence, establishing shared responsibility among international organisations, governments, institutions and companies, to mould a future of digital innovation at the service of mankind, individually and as a whole. Similar statements of principles have been promulgated by the “Ethics & Religious Liberty Commission” of the Southern Baptist Convention; by the OECD; and by the United States Department of Defence.
This has recently been revisited at the 41st session of the General Conference of UNESCO in November 2021. According to the intervention of Cardinal Secretary of State Pietro Parolin, “For the Holy See, ‘the principle that not everything that is technically possible or viable is thereby ethically acceptable remains ever valid’. In order to be able to speak correctly of an ethics of artificial intelligence, it will therefore be necessary that the development of every algorithm always draws on an ethical vision, ‘algor-ethics’, aimed, in the final analysis, at helping us ‘understand better what intelligence, conscience, emotionality, affective intentionality and autonomy of moral action mean in this context.’”
The dignity of the human person is at the heart of the question of AI. “All human beings are born free and equal in dignity and rights”, such that no form of technology “should ever be used to usurp or subvert the dominion and stewardship which has been entrusted solely to humanity by God; nor should technology be assigned a level of human identity, worth, dignity, or moral agency.” But it is not enough that AI is placed only at the service of man: rather, “AI systems must be conceived, designed and implemented to serve and protect human beings and the environment in which they live.”
Transforming the world through the innovation of AI means undertaking to build a future for and with younger generations; this is particularly urgent today, given the prevailing zeitgeist of indifference towards pressing ethical questions and a dismissal of real moral weight in an objective sense. Since morality is based on the objective facts of human dignity and natural law, it follows that there is an objectivity, a universality, to our moral responsibility, going beyond ourselves into the world around us for now and for generations to come.
“As we design and plan for the society of tomorrow, the use of AI must follow forms of action that are socially oriented, creative, connective, productive, responsible, and capable of having a positive impact on the personal and social life of younger generations. The social and ethical impact of AI must be also at the core of educational activities of AI.”
Principles of AI and Human Dignity
Principles following from the use of AI arise from consideration of its ends or purposes. As was said above, AI systems must be conceived, designed and implemented to serve and protect human beings and the environment in which they live. In other words, AI is a tool whose development must include the flourishing of humans and human dignity at every stage.
AI is a rapidly developing field, and no organization that currently develops AI systems or espouses AI ethics principles can claim to have solved all the challenges embedded in the following principles. Nevertheless, the Call for AI Ethics outlines these principles for an overarching common ground; indeed, “these principles are fundamental elements of good innovation.” Within these boundaries there is much room for greater definition and development, guided by the principles themselves. It is necessary for the principles of ‘algorethics’ to be sufficiently broad for the whole discipline, and six such principles provide this outline.
1. Transparency
“AI systems must be explainable.” Without transparency, it is unclear what information a machine learning method is actually providing, which the person using it needs in order to decide how to apply it. Enabling people affected by the outcome of an AI system to understand how it was arrived at ensures that the outcome can be challenged in cases of biased datasets or faulty logic. Transparency also prevents a kind of “superstition” about AI, so often portrayed in popular media as a magical or mysterious consciousness with its own moral agency: a dangerous idea, because it takes responsibility away from the designer and forgets that technology is a human creation.
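The demand for explainability can be made concrete even in a toy setting. The sketch below is plain Python with entirely invented feature names, weights, and threshold; it shows a linear scoring model that reports per-feature contributions alongside its decision, so an affected person can see which factors drove the outcome and challenge it:

```python
# A minimal, hypothetical linear decision model: every name and weight
# here is invented for illustration, not drawn from any real system.
WEIGHTS = {"income": 0.4, "years_employed": 0.35, "debt_ratio": -0.6}
THRESHOLD = 0.5

def decide_with_explanation(applicant):
    # Contribution of each feature = weight * feature value.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "decline"
    # Returning the contributions makes the outcome inspectable:
    # the applicant can see exactly which factors mattered, and how much.
    return decision, score, contributions

decision, score, why = decide_with_explanation(
    {"income": 1.2, "years_employed": 0.8, "debt_ratio": 0.5}
)
# Here the applicant can see that the negative debt_ratio contribution
# pushed the score below the threshold.
```

A black-box model offers no such breakdown; the point of the principle is that whatever technique is used, some equivalent account of “why” must be available to those affected.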
2. Inclusion of Human Dignity
“The needs of all human beings must be taken into consideration so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop.” AI offers enormous potential for improving social coexistence and personal well-being, augmenting human capabilities and facilitating tasks to be carried out more efficiently and effectively. But since these technologies have such capabilities, they affect the way these tasks are carried out and the way we perceive reality and human nature itself; these developments must always ensure, then, that they truly serve the entire “human family”. The aim is not only to ensure that no one is excluded, but also to expand those areas of freedom that could be threatened by algorithmic conditioning.
3. Responsibility
“Those who design and deploy the use of AI must proceed with responsibility and transparency.” This responsibility takes into account the whole process of technological development, from design to distribution and use. Ultimately, it is a burden placed upon the developer to ensure that this technology meets its ends. Likewise, no technology in itself is a moral agent. Artificial Intelligence will not and cannot make us more or less human. As such, human agency cannot be delegated to AI, nor can AI be licitly used to perpetrate criminal or immoral actions.
4. Impartiality for Human Dignity
Technologies must act with impartiality, so as not to “create or act according to bias, thus safeguarding fairness and human dignity.” While AI excels at data-based computation, technology lacks the capacity for moral agency or responsibility. And although algorithms rest on mathematical equations whose answers are objectively true, an AI system is limited by the amount and variety of the data it processes. In the words of Jen Stirrup, “This data is generated by humans, who have bias, and this AI bias is passed on—either consciously or unconsciously—into the algorithms that produce the results.”
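This point can be demonstrated in a few lines of code. The dataset below is entirely invented for illustration: a “model” that simply learns historical approval rates per group faithfully reproduces whatever human bias was baked into its training data.

```python
from collections import defaultdict

# Invented historical decisions: group "A" was approved far more often
# than group "B" by past human reviewers, for reasons unrelated to merit.
history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

def learn_rates(records):
    # "Training" here is just the per-group approval frequency; a real
    # model is more elaborate, but the mechanism of inherited bias is the same.
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        approvals[group] += outcome
    return {g: approvals[g] / totals[g] for g in totals}

rates = learn_rates(history)
# The learned rates mirror the skew in the records: bias in, bias out.
```

The equations themselves are objectively correct; the unfairness enters entirely through the data they are applied to, which is exactly why impartiality is a design responsibility rather than a mathematical given.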
5. Reliability
“AI systems must be able to work reliably.” This is true of all technologies, but especially of Artificial Intelligence, given the higher stakes of the systems it forms part of. Owing to its great potential, AI is bringing about profound changes in the lives of human beings, and it will continue to do so. Yet whereas shoddy workmanship can bring down a house, shoddy AI development can bring down a cancer centre through recommendations of unsafe and incorrect cancer treatments, to take just one of many examples where unreliable AI systems have put lives and livelihoods at risk. With this in mind, it is imperative that AI systems are developed and tested so as to be reliable when they are relied upon.
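The imperative that AI systems be tested before they are relied upon can be illustrated with ordinary software practice. The sketch below is a hypothetical dosage-recommendation stub, with an invented safety limit; it wraps a model output behind a guard-rail check that refuses clearly unsafe values and escalates them to a human instead of acting on them silently:

```python
MAX_SAFE_DOSE_MG = 100.0  # invented safety limit, for illustration only

def recommend_dose(model_output_mg):
    # Guard-rail: an AI recommendation outside the validated safe range
    # is never passed on; it is flagged for human review instead.
    if not (0.0 < model_output_mg <= MAX_SAFE_DOSE_MG):
        return {"status": "escalate_to_clinician", "dose_mg": None}
    return {"status": "ok", "dose_mg": model_output_mg}

# Reliability tests run before deployment, not after harm occurs.
assert recommend_dose(50.0) == {"status": "ok", "dose_mg": 50.0}
assert recommend_dose(500.0)["status"] == "escalate_to_clinician"
assert recommend_dose(-1.0)["dose_mg"] is None
```

The guard-rail does not make the underlying model reliable by itself, but it embodies the principle: the system is engineered and tested so that its failure modes are safe ones.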
6. Security and Privacy
“AI systems must work securely and respect the privacy of users.” For AI systems to benefit humanity and human dignity, the development of AI must go hand in hand with robust digital security measures. Ensuring the safe transfer of information is an extension of AI’s reliability, but in itself is an explicit expectation and principle of development for ensuring the freedom and safety of its users, given that all humans have a right to privacy.
Planning and designing AI systems we can trust involves seeking consensus among political decision-makers, organisations, and academia on the ethical principles that should be built into these technologies. With representatives of all these actors agreeing on such principles, a common ground can be forged for working together on AI development that truly enriches, rather than undermines, the growth of human expression and creativity. Artificial intelligence may be a great boon to the world for decades or centuries to come, but only if its development follows a common plan that places the service of our human family and its dignity at the centre.