With less than a year to go before one of the most consequential elections in US history, Microsoft’s AI chatbot is responding to political queries with conspiracies, misinformation, and out-of-date or incorrect information.
https://www.wired.com/story/microsoft-ai-copilot-chatbot-election-conspiracy/
For months, experts have been warning about the threats that the rapid development of generative AI poses to high-profile elections in 2024. Much of this concern, however, has focused on how generative AI tools like ChatGPT and Midjourney could make it quicker, easier, and cheaper for bad actors to spread disinformation at unprecedented scale. But new research shows that threats could also come from the chatbots themselves.
“The tendency to produce misinformation related to elections is problematic if voters treat outputs from language models or chatbots as fact,” Josh A. Goldstein, a research fellow on the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology, tells WIRED. “If voters turn to these systems for information about where or how to vote, for example, and the model output is false, it could hinder democratic processes.”