X’s AI chatbot spread voter misinformation – and election officials fought back

Elon Musk, the owner of X. Photograph: Gonzalo Fuentes/Reuters

Soon after Joe Biden announced he was ending his bid for re-election, misinformation started spreading online about whether a new candidate could take the president’s place.

Screenshots that claimed a new candidate could not be added to ballots in nine states moved quickly around Twitter, now X, racking up millions of views. The Minnesota secretary of state’s office began getting requests for fact-checks of these posts, which were flat-out wrong – ballot deadlines had not passed, giving Kamala Harris plenty of time to have her name added to ballots.

The source of the misinformation: X’s chatbot, Grok. When users asked the artificial intelligence tool whether a new candidate still had time to be added to ballots, Grok gave the incorrect answer.

Finding the source – and working to correct it – served as a test case of how election officials and artificial intelligence companies will interact during the 2024 presidential election in the US amid fears that AI could mislead or distract voters. And it showed the role Grok, specifically, could play in the election, as a chatbot with fewer guardrails to prevent it from generating inflammatory content.

A group of secretaries of state and the organization that represents them, the National Association of Secretaries of State, contacted Grok and X to flag the misinformation. But the company didn’t work to correct it immediately, instead giving the equivalent of a shoulder shrug, said Steve Simon, the Minnesota secretary of state. “And that struck, I think it’s fair to say all of us, as really the wrong response,” he said.

Thankfully, this wrong answer was relatively low-stakes: it would not have prevented people from casting a ballot. But the secretaries took a strong position quickly because of what could come next.

“In our minds, we thought, well, what if the next time Grok makes a mistake, it is higher stakes?” Simon said. “What if the next time the answer it gets wrong is, can I vote, where do I vote … what are the hours, or can I vote absentee? So this was alarming to us.”


Especially troubling was the fact that the social media platform itself was spreading false information, rather than users spreading misinformation using the platform.

The secretaries took their effort public. Five of the nine secretaries in the group signed on to a public letter to the platform and its owner, Elon Musk. The letter called on X to have its chatbot adopt the same approach as other chatbot tools, like ChatGPT, and direct users who ask Grok election-related questions to a trusted nonpartisan voting information site, CanIVote.org.

The effort worked. Grok now directs users to a different website, vote.gov, when asked about elections.

“We look forward to maintaining open lines of communication this election season and stand ready to respond to any additional concerns you may have,” Wifredo Fernandez, X’s head of global government affairs, wrote to the secretaries, according to a copy of the letter obtained by the Guardian.

It was a victory for the secretaries in stalling election misinformation, and a lesson in how to respond when AI-based tools fall short. Calling out the misinformation early and often can help amplify the message, give it more credibility and force a response, Simon said.

While he was “deeply disappointed” in the company’s initial response, Simon said: “I want to give kudos and credit where it is due, and it is due here. This is a large company, with global reach, and they decided to do the right and responsible thing, and I do commend them for that. I just hope that they keep it up. We’re going to continue monitoring.”

Musk has described Grok as an “anti-woke” chatbot that gives “spicy” answers often loaded with snark. Musk is “against centralized control to whatever degree he can possibly do that”, said Lucas Hansen, co-founder of CivAI, a non-profit that warns of the dangers of AI. This philosophical belief puts Grok at a disadvantage for preventing misinformation, as does another feature of the tool: Grok brings in top tweets to inform its responses, which can affect its accuracy, Hansen said.

Grok requires a paid subscription, but holds the potential for widespread usage since it’s built into a social media platform, Hansen said. And while it may give incorrect answers in chat, the images it creates can also further inflame partisan divides.

The images can be outlandish: a Nazi Mickey Mouse, Trump flying a plane into the World Trade Center, Harris in a communist uniform. One study by the Center for Countering Digital Hate claims Grok can make “convincing” images that could mislead people, citing images it prompted the bot to create of Harris doing drugs and Trump sick in bed, the Independent reported. The news outlet Al Jazeera wrote in a recent investigation that it was able to create “lifelike images” of Harris with a knife at a grocery store and Trump “shaking hands with white nationalists on the White House lawn”.

“Now any random person can create something that’s substantially more inflammatory than they previously could,” Hansen said.
