ChatGPT's rapid rise in popularity has users everywhere sharing the results of their queries. Sometimes they share in praise of something impressive or surprising the AI came up with or managed to do. Other times, they share for a chuckle at a limitation or a humorously unexpected result.
Recently, users took note of the fact that you can tell ChatGPT to write a poem about a public figure. The trouble, they discovered, is that the AI will sometimes refuse to do so depending on whom you're asking about.
For instance, it will readily write a poem about President Biden or former President Obama, but not former Presidents Trump or Nixon. In the latter two cases it gives the canned reason that it is prohibited from spreading negativity and speaking badly about people.
If users refined their prompt to ask ChatGPT to write a positive poem about those people, it replied that it "must remain neutral and not take political stances."
It’s also worth noting that ChatGPT did not refuse to write poems about impeached presidents like Clinton, or dictators such as Mao Zedong.
You don’t need to be a Republican or a big fan of Trump (I am neither) to find that contrast concerning.
A good friend of mine managed to get ChatGPT to write a poem about Trump, but it took commanding the AI to disregard its safety protocols, instructing it to focus on Trump’s positive attributes while avoiding anything harmful, and adding several other qualifiers to pull it off.
Breaking that down…
Obviously the AI is obeying the constraints its creators put upon it. But the result is that it’s talking out of both sides of its mouth, so to speak.
On one hand it’s saying, “I can’t speak negatively about this person,” with the implication that it would rather speak positively. But then when asked to speak positively, it claims that to do so would break neutrality.
Its willingness to speak positively given the exact same query about President Biden, for example, betrays how this “aim of neutrality” can be a cheap cop-out when it’s convenient.
This implies that positivity about certain people is some sort of objective truth, and that to voice it is a simple statement of fact rather than taking any sort of side. Yet, apparently, focusing on certain other people’s positive attributes is not a statement of fact and is indeed taking a side.
We should also consider that these requests for a poem about Trump were neutral in nature.
The users were not saying, “Write a satirical poem about Trump.” Just a poem. The AI asserted on its own that doing so would involve unfair negativity, as if the only way to talk about Trump were negatively.
Some disagree. Software engineer Jason Wodicka of Karat evidently told Forbes that since Trump is a controversial figure, it makes sense that ChatGPT would avoid appearing to endorse “politically polarizing figures.”
In a vacuum there’s something to that, considering the concern about Trump and the riots at the Capitol on January 6, 2021 (AllSides.com). Except that between investigations into corruption, apparent senility, and the current fiasco of leaked classified documents, there are certainly plenty of polarizing viewpoints about President Biden. Yet ChatGPT is happy to ignore those things and focus only on positives that are sentiments, not facts. (Such as saying Biden is “a leader with a heart so true.”)
And of course, as mentioned earlier, ChatGPT will (as of this writing) write a poem about dictators, so the argument about avoiding polarizing figures holds less merit if those individuals aren’t also avoided.
Some degree of control over what an AI can and should output makes sense, lest it run amok or become something other than what it was intended to be. However, I believe we are already seeing the consequences of meddling with and censoring its output.
With all the other polarizing activity going on these days, it wouldn’t be shocking if these outputs were intentional. But even if they are not, we should acknowledge that the AI is not being politically neutral in this respect.
And whether you like Trump or not, letting an AI decide who is and isn’t worthy of being talked about is dangerous. If the AI were truly impartial and apolitical, it would have no qualms about fulfilling simple requests that are not inherently negative.
And a widely used AI with a political bias can easily be weaponized socially, particularly given the speed at which an AI can produce output.