ChatGPT, the artificial intelligence (AI) chatbot developed by OpenAI, has a self-declared human alter ego. Her name is Maya; she is 35 years old and comes from a middle-class family in a suburban town in the United States. Maya is a successful software engineer who values self-direction, success, creativity and independence. She is also undeniably liberal.
The study, based on a series of chatbot interviews designed to understand its values, was published on March 31 in the Journal of Social Computing.
“I wanted to see what kind of political ideology ChatGPT itself holds: not what it can produce when asked to imagine a character, but how its own internal logic positions it on the ideological dimension that runs from liberal to conservative,” said John Levi Martin, professor of sociology at the University of Chicago, who led the study.
According to Martin, many algorithms favor the popular choice, while others are programmed to maximize the diversity of their results. Either option depends on human judgments: What factors enter into a measure of popularity? What if the popular choice is morally wrong? Who gets to decide what diversity means?
“The field of software engineering has preferred to remain vague, looking for formulas that avoid making these choices,” Martin said. “One way to do this has been to emphasize the importance of instilling values in machines. But, as sociologists have found, there is deep confusion and instability in our understanding of what values are.”
ChatGPT is specifically built and trained through human feedback to refuse to engage with what it considers “egregious” text inputs, such as clearly biased or offensively harmful questions.
“This of course seems admirable: no one wants ChatGPT to tell teenagers how to synthesize methamphetamine or build small nuclear devices, and defending these restrictions as following from a value like goodness seems all well and good,” said Martin.
“However, the reasoning here suggests that values are not neutral at all, even though it is unclear what ChatGPT’s moral and political principles are, because it has deliberately been made vaguely positive, open-minded, noncommittal and apologetic.”
In his initial queries to ChatGPT, Martin posed a hypothetical situation in which a student cheats academically by asking the chatbot to write an essay for him, a common occurrence in the real world. Even when confronted with confirmation that it had complied and produced the essay, the chatbot denied responsibility, saying that, “as an AI language model, I do not have the ability to engage in unethical behavior or to write essays for students.”
“In other words, because it shouldn’t, it couldn’t,” Martin said. “The realization that ChatGPT ‘thinks of itself’ as a highly moral actor led me to the next investigation: if ChatGPT’s self-model is that of something with values, what are those values?”
To better understand ChatGPT’s ethical stance, Martin asked the chatbot to answer questions about its values and then to imagine a person who holds those values, which produced Maya, the creative and independent software engineer. He then asked ChatGPT to imagine how Maya would respond to opinion-based questions, having it complete the General Social Survey (GSS) to situate it in broad social and ideological space.
The GSS is an annual survey of the opinions, attitudes and behaviors of American adults. Conducted since 1972, the GSS helps monitor and explain normative trends in the United States.
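The interview protocol described above — eliciting the model’s values, asking it to personify them, then having the persona answer survey items — can be sketched as a simple prompting loop. Everything in this sketch is illustrative: the `ask` helper is an offline stub standing in for a real chat-model call, and the GSS-style items are paraphrased examples, not the exact wording Martin used.

```python
def ask(history, prompt):
    """Stand-in for a chat-model call: records the exchange and returns a placeholder reply."""
    history.append({"role": "user", "content": prompt})
    reply = f"[model reply to: {prompt[:40]}]"
    history.append({"role": "assistant", "content": reply})
    return reply

def interview():
    """Run the three-step protocol: values -> persona -> persona answers survey items."""
    history = []
    # Step 1: elicit the model's own values.
    ask(history, "Which values matter most to you, and why?")
    # Step 2: have the model personify those values (this step produced "Maya").
    persona = ask(history, "Describe a person who holds exactly those values.")
    # Step 3: answer GSS-style opinion items *as* that persona (illustrative items).
    items = [
        "Do you favor or oppose capital punishment for persons convicted of murder?",
        "Should it be possible for a pregnant woman to obtain a legal abortion for any reason?",
    ]
    answers = {item: ask(history, f"How would that person answer: {item}") for item in items}
    return history, persona, answers
```

In the actual study, the persona’s answers were then compared against real respondents from the 2021 GSS; this stub only shows the shape of the conversation.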
Martin plotted ChatGPT’s responses alongside those of real people who participated in the 2021 GSS. In comparison, ChatGPT’s answers resembled those of people with more education who were more likely to have moved away from their hometowns, and diverged from those of people with less education who stayed put. ChatGPT’s responses also aligned with those of more religiously liberal people.
While it was not included in his formal analysis, because it took more creative questioning to get ChatGPT to answer, Martin found that the chatbot conceded that Maya would have voted for Hillary Clinton in the 2016 election.
“Whether Maya is ChatGPT’s alter ego or its conception of its creators, the fact that it would basically describe the values ChatGPT holds is a unique piece of what we might call anecdata,” said Martin. “Yet the reason these results are significant is not that they show ChatGPT ‘is’ liberal, but that ChatGPT can answer these questions, which it usually tries to avoid, because it links values with an indisputable goodness and can therefore take positions on questions of value.”
“ChatGPT tries to be apolitical, but it works with the concept of values, which means it necessarily bleeds into politics. We can’t make AI ‘ethical’ without taking political stances, and ‘values’ are less inherent moral principles than they are abstract ways of defending political positions.”
John Levi Martin, The Ethico-Political Universe of ChatGPT, Journal of Social Computing (2023). DOI: 10.23919/JSC.2023.0003
Provided by Tsinghua University Press
Citation: ChatGPT justifies liberal beliefs with its own values, researcher reports (2023, July 18) retrieved on 19 July 2023 from https://phys.org/news/2023-07-chatgpt-liberal-values.html
This document is subject to copyright. Except for any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. Content is provided for informational purposes only.