How to live well with AI chatbots

Column


In relation to ChatGPT, discussions about ethics have intensified. How will society be influenced when so many use the technology? Photo: Getty Images

COLUMN. Almost overnight at the end of 2022, ChatGPT became a hot topic – a technology that seemed to “change everything”. It is crucial to reflect upon ethical issues related to the new technology and the relationships between humans, technology and the world, Thomas Lennerfors and Mikael Laaksoharju write in a column.

The column is based on a chapter in the book "Ethics and Sustainability in Digital Cultures", which will be published this autumn.

ChatGPT is the most famous version of a new kind of technology that uses large language models to produce sentences that appear to have been written by humans. Put simply, this technology is a super-sized auto-completion tool that takes a whole conversation into account when suggesting the next word.
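To make this concrete, here is a toy sketch of the auto-completion principle. It is our own illustration, not how ChatGPT is implemented: a real large language model conditions on the whole conversation with a neural network, whereas this miniature conditions only on the previous word.

    # A toy sketch of next-word prediction (our own illustration, not the
    # actual model): the principle of predicting the next word from context
    # is the same, but real models use far richer context and statistics.
    from collections import Counter, defaultdict
    import random

    corpus = (
        "the chatbot suggests the next word . "
        "the chatbot takes the conversation into account . "
        "the model predicts the next word from context ."
    ).split()

    # Count which words follow each word in the corpus.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def suggest_next(word):
        """Sample a likely next word, weighted by observed frequency."""
        counts = following[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a short continuation, one predicted word at a time.
    sentence = ["the"]
    for _ in range(6):
        sentence.append(suggest_next(sentence[-1]))
    print(" ".join(sentence))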

In relation to this technology, discussions about ethics have intensified. How will society be influenced when so many people use the technology? Do the language models embody a middle-class, white, male voice? It is also discussed how power might be consolidated in tech companies, whether these tools can withstand hacking, and whether they will lead to an explosion of fake news, fake essays and academic misconduct. At the same time, the language models could level the playing field for students who struggle with writing, perhaps because of their background.


Thomas Lennerfors, Professor of Industrial Engineering and Management, and Mikael Laaksoharju, Senior Lecturer at the Department of Information Technology

As we and many others have argued, it is crucial to reflect upon ethical issues related to technology as it is being designed, developed, implemented, used, and dismantled. We approached this chatbot technology using theory from the philosophy of technology. Drawing on a theoretical model by the philosopher of technology Don Ihde, we consider below four distinct relationships between humans, technology, and the world.

First, in an embodiment relationship, chatbots can be described as digital extensions of their users, helping us perform digital chores like formulating routine texts, scheduling meetings, and other everyday tasks that do not spark joy. Early versions of this already exist, as buttons on social media platforms that let you choose between different emoticons or different short text snippets that instantly return a sufficiently polite response. The logical next step is to flesh out these emoticons and snippets into something more personal, based on your conversation style and chat history.
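As a hypothetical sketch of how such snippets could be fleshed out, one might ask a chat model to draft a reply in the user's own style. The client library is OpenAI's Python SDK; the model name, prompt, and style profile below are our assumptions, not a description of any existing product.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    STYLE_PROFILE = "Brief, friendly, signs off with 'Best, Alex'."  # assumed

    def draft_reply(incoming_message):
        """Ask a chat model to draft a reply in the user's own style."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[
                {"role": "system",
                 "content": "Draft a short reply to the following message, "
                            "in this style: " + STYLE_PROFILE},
                {"role": "user", "content": incoming_message},
            ],
        )
        return response.choices[0].message.content

    print(draft_reply("Can we move Friday's meeting to 10 am?"))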

A fundamental ethical issue in embodiment relationships concerns the extent of human agency. How active do we want a chatbot to be? Current chatbots are always in stand-by mode, waiting for a cue from the human user. But if a chatbot has more comprehensive data, as well as a record of how its users want to live their lives, is it not a logical consequence that it will act “independently” to help them?

Second, in a hermeneutic relationship, humans can use chatbots to understand the world better than we could without the technology. One function a chatbot could take on is to extract the relevant information and the sender's intentions from an email. The next step is to use this technology for real-time assessment of spoken statements, for example student presentations, teachers' lectures, and sales pitches. Future TV debates between politicians may come to feature a real-time “truth-o-meter”, an instrument that rates the debaters' claims according to how “true” they are.
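As a purely speculative sketch of such a “truth-o-meter”, one could ask a chat model to rate a claim. Again the OpenAI client, model name and prompt are our assumptions, and, as the next paragraph stresses, the rating is itself only a model output, not a guarantee of truth.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def rate_claim(claim):
        """Ask a chat model to rate how well-supported a claim is (1-5)."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[
                {"role": "system",
                 "content": "Rate the claim from 1 (unsupported) to 5 "
                            "(well-supported) and give a one-line reason."},
                {"role": "user", "content": claim},
            ],
        )
        return response.choices[0].message.content

    print(rate_claim("Unemployment has halved over the past four years."))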

The central ethical issue related to AI chatbots in hermeneutic relationships is to what extent the chatbot represents the world in a truthful, unbiased way. What aspects of the world are picked up by the chatbot? Will we trust its assessment of the world and rely on it, or will we have the time and energy to second-guess the chatbot's interpretations and engage in our own assessment of the world?


Humans can use chatbots to understand the world, but to what extent does the chatbot represent the world in a truthful, unbiased way?

Third, in an alterity relationship, humans relate to chatbots as an “other” (alter means “the other” in Latin), perhaps as another person. ChatGPT repeatedly states that it does not have personhood, but does so by saying “I”, which fuels the image of personhood. Also, text is not generated as an immediate chunk but as if it were typed by a human. Chatbots in alterity relationships could give you a speaking partner, but not just any speaking partner: one you have full control over, with infinite patience, always at your service, and in that sense better than any human assistant, who may be ill or busy with something else when you need them.
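The “typed” impression is easy to mimic: interfaces print the response as a stream of small chunks rather than all at once. A toy sketch of the effect (ours, not any vendor's actual code):

    import sys
    import time

    def stream_reply(tokens):
        """Print tokens one by one, as chat interfaces do."""
        for token in tokens:
            sys.stdout.write(token)
            sys.stdout.flush()
            time.sleep(0.05)  # pacing that makes the text feel typed
        print()

    stream_reply(["I ", "do ", "not ", "have ", "personhood", "."])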

Ethical issues related to alterity relationships include the risk of isolation, and the risk that the expectation of others being “at your service” spreads and becomes the norm also for how we interact with other people.

Finally, in a background relationship, chatbots are not only our tools for affecting or interpreting the world; they also affect us indirectly. For example, the mere existence of chatbots can make us suspect that a particular text was not really written by a human, which could lead to a general mistrust in written text and authorship. In this transitional phase, we have become aware of such ethical implications of chatbots. But what about when chatbots are as common as electricity is today? Will we notice the effect they have on our thinking and acting?

In conclusion, as we navigate the complex ethical landscape surrounding the development and use of chatbot technology, we must consider the implications of these four distinct relationships between humans, technology, and the world in order to foster a responsible and beneficial integration of these powerful tools into our society.

Thomas Lennerfors, Professor of Industrial Engineering and Management
Mikael Laaksoharju, Senior Lecturer at the Department of Information Technology

New book on ethics and digital cultures

The column is based on a chapter in the book "Ethics and Sustainability in Digital Cultures", which will be published this autumn. The chapter, written by Mikael Laaksoharju, Thomas Lennerfors, Anders Persson and Lars Oestreicher, is entitled "What is the problem to which AI chatbots are the solution? AI ethics through Don Ihde's embodiment, hermeneutic, alterity, and background relationships".
