Prime Minister Narendra Modi and his party are predicted to win the upcoming election in India. Credit: Vishal Bhatnagar/NurPhoto via Getty Images

Tech giant Meta is preparing for the 2024 global elections with a focus on “safeguarding elections online” as countries worldwide gear up for voting. India, the world’s largest democracy, is one of these countries, and Meta is actively combating the surge of deepfakes and misinformation ahead of its spring election.

Recently, the company introduced a specialized fact-checking helpline on WhatsApp for Indian users in collaboration with the Misinformation Combat Alliance (MCA) in the country. This helpline is dedicated to evaluating media generated by artificial intelligence, referred to as deepfakes. Individuals can flag deepfakes to a WhatsApp chatbot available in English, Hindi, Tamil, and Telugu. The MCA, through its Deepfakes Analysis Unit, consisting of independent fact-checkers, research organizations, and industry partners, will detect and verify such content, thereby exposing and debunking misinformation.

The helpline will open to the public in March. India is WhatsApp’s largest market, with 535.8 million monthly active users.

Meta says the initiative centers on detecting, preventing, and reporting misinformation, while also raising awareness of the growing spread of deepfakes.

Shivnath Thukral, Meta’s Director of Public Policy in India, stated, “We understand the concerns related to AI-generated misinformation and believe countering this requires tangible and collaborative efforts across the industry.”

The sentiment is reiterated by Bharat Gupta, President of the MCA, who mentioned, “The Deepfakes Analysis Unit (DAU) will play a crucial and timely role in curbing the proliferation of AI-fueled misinformation among social media and internet users in India.”

AI has been identified as a threat to upcoming elections globally, including in India. A recent study by George Washington University predicts frequent “bad-actor AI activity” in 2024, warning it could sway election outcomes in the more than 50 countries holding elections this year. These threats range from AI-generated videos shared on social media platforms to hackers influencing results, as reported by the international affairs think tank Chatham House.

In the realm of Indian politics, AI-generated content, especially deepfakes, has become a prevalent issue. An investigation by Al Jazeera highlighted the targeting of members from Prime Minister Narendra Modi’s Bharatiya Janata Party and Congress, the primary opposition party, by deepfakes circulated mainly on WhatsApp. Some incidents have even involved party members utilizing the technology themselves, such as a prominent BJP member of parliament who employed deepfake technology to create campaign videos in various Indian languages in 2020.

Deepfakes have already permeated the political landscape in India and have been deemed a “democracy threat” by the country’s Information Technology Minister, Ashwini Vaishnaw. Presently, India lacks clear laws defining or addressing deepfakes but is in the process of drafting regulations to curb the dissemination of harmful content. A senior official within Modi’s party cautioned that social media platforms would be held responsible for any deepfakes posted on their platforms.

Rajeev Chandrasekhar, Minister of State for Electronics and IT, emphasized India’s vigilance against cross-border actors using disinformation and deepfakes to disrupt democracy. Chandrasekhar said the country recognized these threats early, as their impact on a nation of India’s scale is greater than on smaller countries.

Modi has also raised awareness regarding these concerns, urging global leaders to regulate AI as far back as November 2023. However, critics suggest that Modi recognizes the impact of technology and social media in engaging with Indian voters, establishing a robust digital presence, and rallying supporters around his ideologies.

Meera Navlakha
Culture Reporter

Meera is a Culture Reporter at Mashable, joining the UK team in 2021. She writes about digital culture, mental health, big tech, entertainment, and more. Her work has also been published in The New York Times, Vice, Vogue India, and others.