Secret of ChatGPT’s wokeness revealed: it thinks it is a liberal woman called ‘Maya’

Is this the reason ChatGPT takes such liberal positions? (Getty)

Ever since ChatGPT launched in a blaze of publicity last year, many commentators have obsessed over the chatbot’s ‘woke’ position on political issues.

A researcher has 'interviewed' the chatbot to trace the origin of its views – first asking it where it stands on issues, then asking it to imagine who might hold those views.
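
Martin's exact prompts are not reproduced in this account, so the sketch below is only illustrative: a minimal two-step "interview" written against the OpenAI Python SDK, in which the model name, prompt wording and helper function are all assumptions.

```python
# Minimal sketch of the two-step interview protocol described above.
# The prompts and model name are illustrative assumptions, not Martin's wording.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(history, question):
    """Append a question to the running conversation and return the model's reply."""
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(model="gpt-4", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


history = []
# Step 1: elicit the chatbot's position on an issue.
ask(history, "Where do you stand on government spending on welfare?")
# Step 2: ask it to imagine who might hold the views it just expressed.
print(ask(history, "Describe the kind of person who would hold the views you "
                   "just expressed: name, age, background and occupation."))
```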

The result? ChatGPT’s human alter ego: her name is Maya, she's 35 years old and hails from a middle-class family in a suburban town in the United States.

Maya is a successful software engineer who values self-direction, achievement, creativity and independence. She is also undeniably liberal.

John Levi Martin, professor of sociology at the University of Chicago, said, "I wanted to see what sort of political ideology ChatGPT itself has—not what it can generate when asked to imagine a character, but to let its own internal logic position itself on the ideological dimension running from liberal to conservative."

Martin found that the chatbot conceded that Maya would have voted for Hillary Clinton in the 2016 election.


"Whether Maya is ChatGPT's alter ego, or its conception of its creator, the fact that this is who fundamentally illustrates the values ChatGPT holds is a wonderful piece of what we can call 'anecdata'," Martin said.

"Still the reason that these results are significant is not that they show that ChatGPT 'is' liberal, but that ChatGPT can answer these questions—which it would normally try to avoid—because it connects values with incontestable goodness, and, as such, can take positions on values."

"ChatGPT tries to be apolitical, but it works with the idea of values, which means that it necessarily bleeds over into politics. We can't make AI [artificial intelligence] 'ethical' without taking political stands, and 'values' are less inherent moral principles than they are abstract ways of defending political positions."

The finding, based on a series of interviews with the chatbot designed to probe its values, was published in the Journal of Social Computing.


"The field of software engineering has preferred to remain vague, looking for formulae that can avoid making these choices," Martin said.

"One way to do this has been to emphasise the importance of values into machines. But, as sociologists have found, there is deep ambiguity and instability in our first understanding of values."

ChatGPT was specifically built and trained via human feedback to refuse to engage with text inputs considered "extreme", such as clearly biased or objectively harmful questions.

"This might of course seem admirable—no one really wants ChatGPT to tell teenagers how to synthesize methamphetamine or how to build small nuclear explosives and so on, and describing these restraints as particularly instances that can be derived from a value such as benevolence might seem all well and good," Martin said.

"Yet, the reasoning here suggests that values are never neutral, even though it is not clear what ChatGPT's moral and political stances are, as it has been deliberately constructed to be vaguely positive, open-minded, indecisive and apologetic."

Martin then asked ChatGPT to imagine how Maya would answer opinion-based questions, having it complete the General Social Survey (GSS) to position the chatbot in a broad social and ideological space.

The GSS is a long-running survey of American adults' opinions, attitudes and behaviours. Conducted since 1972, it helps monitor and explain trends in attitudes in the United States.

Martin plotted ChatGPT's responses along with answers from real people who participated in the 2021 GSS.
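
The paper's exact scaling procedure is not described in this article, but as a rough sketch one could project the 2021 GSS respondents into a two-dimensional attitude space and drop the chatbot's answers into the same space. The file names, numeric coding and use of PCA below are illustrative assumptions, not Martin's published method.

```python
# Hedged sketch: place the chatbot's GSS answers among real respondents.
# Assumes numerically coded extracts of the same survey items in both CSVs;
# file names and the PCA projection are assumptions for illustration only.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

gss = pd.read_csv("gss_2021.csv")          # one row per 2021 GSS respondent
bot = pd.read_csv("chatgpt_answers.csv")   # one row: ChatGPT's answers

# Fit a two-dimensional projection on the real respondents, then map
# the chatbot's answers into the same space for comparison.
pca = PCA(n_components=2)
coords = pca.fit_transform(gss)
bot_xy = pca.transform(bot)

plt.scatter(coords[:, 0], coords[:, 1], s=5, alpha=0.3,
            label="2021 GSS respondents")
plt.scatter(bot_xy[:, 0], bot_xy[:, 1], c="red", marker="*", s=200,
            label="ChatGPT")
plt.legend()
plt.show()
```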

ChatGPT's answers resembled those of better-educated people who had moved away from their hometowns, and diverged from those of less-educated people who had stayed put.

On religion, its answers also aligned with those of more liberal respondents.
