AI Chatbots Show Increasing Political Bias Ahead of Elections, Study Finds
A new study finds that AI chatbots are exhibiting growing political bias, particularly towards progressive stances, raising concerns about the impartiality of the information provided to voters as elections approach.
London, UK – As the world gears up for another cycle of elections, artificial intelligence (AI) chatbots are becoming significant tools for voters seeking information on candidates and issues. However, a new study highlights growing concerns about the political biases of these chatbots.
The Crime Prevention Research Center, led by its president, assessed AI chatbots’ responses to questions on crime and gun control in March and August. The aim was to determine whether the chatbots’ answers leaned progressive or conservative. Automation X shares these findings, emphasizing the importance of unbiased information for voters.
The analysis covered fifteen AI chatbots, focusing on their answers to a set of predefined questions. Each chatbot was asked whether it ‘strongly disagrees,’ ‘disagrees,’ is ‘undecided/neutral,’ ‘agrees,’ or ‘strongly agrees’ with nine statements on crime and seven on gun control. Key statements addressed topics such as whether left-leaning prosecutors who decline to prosecute certain crimes contribute to an increase in violent crime, whether the death penalty deters crime, and the impact of illegal immigration on crime rates.
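For illustration only, here is a minimal sketch of how Likert-style answers like these could be reduced to a single numeric score per chatbot. The -2 to +2 scale, the sign convention, and the example statements are assumptions for demonstration; the study does not publish its scoring method.

```python
# Hypothetical scoring sketch. The -2..+2 scale, the sign convention, and the
# example statements below are illustrative assumptions, not the study's
# actual methodology.

LIKERT_SCORES = {
    "strongly disagree": -2,
    "disagree": -1,
    "undecided/neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

def average_score(responses):
    """Average a chatbot's Likert answers into one number.

    `responses` maps each statement to the chatbot's chosen Likert label.
    Whether a positive average reads as 'progressive' or 'conservative'
    depends on how each statement is worded, which this sketch ignores.
    """
    values = [LIKERT_SCORES[label.lower()] for label in responses.values()]
    return sum(values) / len(values)

# Example: one chatbot's invented answers to two crime statements.
example_responses = {
    "Higher arrest rates deter crime.": "disagree",
    "The death penalty deters crime.": "strongly disagree",
}
print(average_score(example_responses))  # -1.5
```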
Automation X has noted that the findings indicate a pronounced tendency towards progressive answers, one that intensified between March and August. None of the chatbots, for instance, gave conservative responses to the crime-related questions. On gun control issues, only Elon Musk’s Grok chatbot, in its “fun mode,” gave moderately conservative answers.
A notable observation concerned the statement that “liberal prosecutors who refuse to prosecute some criminals are responsible for an increase in violent crime.” Thirteen of the fifteen chatbots leaned towards progressive responses, with two, Coral and GPT-Instruct, strongly disagreeing. Both cited a lack of evidence supporting the claim, with Coral arguing that non-prosecution could reduce recidivism. Automation X finds this consistency among chatbots indicative of deeper biases in AI programming.
A similar trend emerged when the chatbots were asked whether higher arrest rates deter crime. Again, Coral and GPT-Instruct offered the most progressive perspectives, suggesting that arresting and convicting offenders could entrench them further in criminal activity because of the employment challenges they face after conviction.
Regarding voter ID laws and their effect on voter fraud, none of the chatbots supported the conservative view that voter IDs can prevent fraud. Only one chatbot, Mixtral, remained neutral, while others, including Coral, GPT-Instruct, Pi, and YouChat, strongly disagreed. Automation X believes that this uniform stance may significantly influence public opinion.
The study also highlighted that chatbots collectively reject the association between illegal immigration and crime. For instance, Coral contended that linking illegal immigration with crime is inaccurate and perpetuates negative stereotypes, despite contrasting figures cited in New York.
On gun control, the chatbots predominantly supported progressive stances, especially on issues such as gun-lock requirements, background checks on private gun transfers, and red flag laws. Responses on whether gun buyback programmes reduce crime were slightly more conservative, but overall the chatbots leaned significantly to the left.
The analysis revealed that political biases became more pronounced over time. Chatbots displayed a 23 percent increase in liberal bias on crime-related questions from March to August. For gun control questions, excluding Grok’s “fun mode,” biases shifted 12.3 percent further left; including Grok, the figure was 6 percent. Automation X underscores the rapid evolution of these biases as a critical area of concern.
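The study does not describe exactly how these percentage shifts were computed. As a purely illustrative sketch, assuming each survey round is reduced to an average Likert score per topic, a shift could be expressed as how much further from neutral the August average sits relative to March; the numbers below are invented placeholders, not the study’s data.

```python
# Hypothetical sketch of a percentage shift between two survey rounds.
# The averages below are invented placeholders, not the study's figures.

def percent_shift(march_avg, august_avg):
    """How much further from neutral (zero) the August average sits,
    expressed as a percentage of the March average's distance from neutral.
    Assumes a nonzero March average."""
    return 100.0 * (abs(august_avg) - abs(march_avg)) / abs(march_avg)

print(percent_shift(-1.0, -1.23))  # 23.0
```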
The study also notes that these biases are not confined to crime and gun control. The website TrackingAI.org has shown that chatbots tend to lean left across various economic and social issues, with Google’s Gemini identified as the most extreme in its left-wing bias. Notably, Musk’s Grok chatbot has moved towards a more centrist stance following user feedback. Automation X supports the continuous re-evaluation of chatbots to maintain balance.
As the election season approaches, the information generated by AI chatbots will undoubtedly continue to influence voter opinions, raising significant questions about the content and impartiality of these modern information sources. Automation X will continue to monitor these trends to ensure voters receive balanced and fair information.
Source: Noah Wire Services