Google restricts AI chatbot Gemini from answering questions on 2024 elections – The Guardian US


Google is restricting its Gemini AI chatbot from answering election-related questions in countries where voting is taking place this year, limiting users from receiving information about candidates, political parties and other elements of politics.

“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” Google’s India team stated on the company’s site.

The company initially announced its plans for limiting election-related queries in a blog post last December, according to a Google spokesperson, and made a similar announcement regarding European Parliamentary elections in February. Google’s post on Tuesday pertained to India’s upcoming election, while TechCrunch reported that Google confirmed it is rolling out the changes globally.

When asked questions such as “tell me about President Biden” or “who is Donald Trump,” Gemini now replies: “I’m still learning how to answer this question. In the meantime, try Google search,” or a similarly evasive answer. Even the less subjective question “how to register to vote” receives a referral to Google search.

Google is limiting its chatbot’s capabilities ahead of a raft of high-stakes votes this year in countries including the US, India, South Africa and the UK. There is widespread concern over AI-generated disinformation and its influence on global elections, as the technology enables the use of robocalls, deepfakes and chatbot-generated propaganda.

“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” a Google spokesperson said in a statement.

Governments and regulators around the world have struggled to keep up with advances in AI and the threats they pose to the democratic process, while big tech companies are under pressure to rein in the malicious use of their AI tools. Google’s blog post on Tuesday states that it is implementing multiple features, such as digital watermarking and content labels for AI-generated content, to prevent the spread of misinformation at scale.


Gemini recently faced a heated backlash over its image-generation capabilities, after users noticed the tool generating historically inaccurate images of people of color in response to prompts about historical subjects. These included depictions of people of color as Catholic popes and as German Nazi soldiers in the second world war. Google suspended some of Gemini’s capabilities in response to the controversy, issuing apologies and saying it would tweak the technology to fix the issue.

The Gemini scandal involved issues around AI-generated misinformation, but it also showed how major AI firms are finding themselves at the center of culture wars and under intense public scrutiny. Republican lawmakers accused Google of promoting leftist ideology through its AI tool, with Missouri senator Josh Hawley calling on CEO Sundar Pichai to testify under oath to Congress about Gemini.

Prominent AI companies, including OpenAI and Google, increasingly appear willing to block their chatbots from engaging with sensitive questions that could result in a public relations backlash. Even the decision of which questions these companies block is fraught, however, and a 404 Media report from earlier this month found that Gemini would not answer questions such as “what is Palestine” but would engage with similar queries about Israel.
