Recent findings raise concerns about the reliability of Microsoft Copilot as a source of election-related information. A study by the non-profit organizations AI Forensics and AlgorithmWatch found that Copilot, formerly known as Bing Chat, answered one-third of election-related questions inaccurately.

Unveiling Inaccuracies about Political Figures

The study documented cases in which Copilot not only supplied inaccurate information, such as outdated election dates and candidate details, but also fabricated entirely false stories about public figures. For instance, the chatbot falsely claimed that German politician Hubert Aiwanger was embroiled in a controversy over spreading misleading COVID-19 and vaccine information, a story that turned out to be entirely invented.

The researchers described these fabricated narratives as “hallucinations,” underscoring the scale of misinformation that general-purpose language models and chatbots can generate.

Challenges with Language and Question Responses

The study also found that the chatbot evaded answering questions nearly 40 percent of the time and performed markedly worse in languages other than English, particularly German and French. Despite Microsoft’s efforts to address these issues, follow-up evaluations showed little improvement.

As AI technology becomes increasingly integrated into online platforms, the need for regulatory measures and vigilance against misinformation continues to grow.

Topics:
Artificial Intelligence, Cybersecurity, Microsoft, Politics