Shaping Robust AI Regulation: Lessons from India’s ‘Deepfake’ Election


What would you do if your senator called you, addressed you by name, and asked you which political issues you care most about? 

During election season in India this year, tens of millions of people received similar phone calls and video messages from local, state, and national candidates. These AI-enhanced videos and calls, known as deepfakes, were just one way candidates harnessed AI and emerging technology to strengthen their voter base.

Indian politicians have always been tech-forward. In 2014, Prime Minister Narendra Modi created a hologram of himself so he could speak at rallies across the country. And in 2019, India had its ‘WhatsApp election,’ in which information (and misinformation) was circulated over the popular messaging app. So it’s no surprise that politicians latched onto deepfakes during this election.

Anxiety around the potentially negative role of AI in the Indian election was running high this year, especially considering India’s high susceptibility to disinformation (and past experience with fake news in the 2019 election).

But experts say it wasn’t as bad as they expected. AI, in fact, was a net positive in the election, leading to an increase in voter turnout and allowing more voters to understand the platforms candidates were running on. 

Still, AI aided the spread of hate speech, misinformation, and explicit content during election season. People used AI to stoke violence against religious minorities and, without seeking permission, made fake videos of pop culture figures endorsing candidates. And India’s large population of voters who have less than a high school education or who live in rural areas often struggled to tell what was real and what was fake in the campaign videos and calls they received.

Examining India’s election, it is evident that democracies must construct a robust AI governance framework to corral bad actors and contain the spread of harmful narratives.

The rise of deepfakes

In India, deepfake technology has been used for both good and bad.

Some have made scandalous deepfakes of Indian public figures, swapping their faces onto adult film actors or other individuals. For example, a deepfake of Rashmika Mandanna in a low-cut black top — scandalous for India’s conservative culture — getting into an elevator went viral across the internet in November of 2023. 

And this April, the Indian National Congress (INC), the country’s major opposition party, put out deepfake videos of Bollywood actors Ranveer Singh and Aamir Khan criticizing Prime Minister Narendra Modi, which garnered half a million views. The actors lodged police cases against the creators of the deepfakes.

These videos are low-quality and easy to make; there are dozens of apps which sync a person’s lip movement with a customized script within minutes.

“We call these low-quality deepfake videos ‘cheapfakes,’” AI policy expert Sagar Vishnoi told the HPR. “They have become democratized, like the new Canva or Photoshop. Cheapfakes are not inherently bad. But the people who make them can choose to use the technology in a good or bad way.”

How a deepfake is made

Divyendra Singh Jadoun is the founder and CEO of Polymath Synthetic Media Solutions, a company which creates deepfake videos for politics and ads. Jadoun has made deepfakes and even a propaganda song for Prem Singh Tamang, the chief minister of the northeastern state of Sikkim. He brought iconic Andhra Pradesh politician YS Rajasekhara Reddy back to life to endorse his son. And he created a jingle for the chief minister of Maharashtra.

Jadoun charges far less than the market rate, which makes him a highly-desired consultant. It costs 125,000 rupees ($1,500) for a deepfake video, and 60,000 rupees ($720) for an audio clone.

He spoke to the HPR about his process of creating deepfake videos.

“We take the 180-degree face data of the candidate, and we make him sit and say something,” Jadoun said. “We record around 15 to 30 minutes of his video data and a consent video that he is okay with us using his video and audio. We clean up the data, then detect the lower part of the face and train the model to ‘talk’ by moving the mouth realistically.”

The AI model can generate videos and audio samples in which the politician addresses over 100,000 people by name, and these can be sent out within minutes. Especially in last-mile India, hearing a major politician address you by name creates a feeling of importance and enfranchisement; these calls and videos, Vishnoi said, were one factor that made so many people brave the scorching summer heat to vote.

Applications in the 2024 election

Deepfakes were used by political parties of all sizes and scopes in this election, including regional parties like the Dravida Munnetra Kazhagam (DMK) in the southern state of Tamil Nadu. 

The DMK used deepfake technology to create an eight-minute speech of its former leader M. Karunanidhi. Karunanidhi praised his party and his son, the current Chief Minister of Tamil Nadu M.K. Stalin. Karunanidhi was dressed in his trademark white shirt, golden shawl, and thick sunglasses, and spoke in his classic husky voice.

Only two things gave away that this was a deepfake. One, an odd flicker around the politician’s head. And two, Karunanidhi passed away in 2018. 

India is also known for its diehard fan culture, which makes ‘resurrecting’ past leaders a political masterstroke. Harnessing the charisma of icons like Karunanidhi and Jayalalithaa is a reliable way to win votes.

Additionally, parties like the BJP and INC used personalized calls to reach voters in areas they couldn’t visit personally. 

Politicians either used in-house AI analysts or worked with third-party consultants like Jadoun to create personalized calls or videos for people in the target area. The videos and calls were customized to include the name and city or neighborhood of each recipient.

“This was a good way to get a lot of data on what people’s problems are. With AI calls and outreach, politicians can get a lot of high-quality data and make new schemes based on that,” Jadoun said. 

Another major benefit of deepfake calls and videos was their cost-effectiveness, especially compared to in-person rallies. Helped by Jadoun or other consultants, a politician could send 10 million people personalized calls or video messages for 50 lakh rupees ($59,521), whereas an in-person rally would cost five crore rupees ($595,218).

More than 50 million AI-generated voice clone calls were made in the two months leading up to the start of the 2024 election in April. It’s a $60 million business opportunity.

Deepfakes in smear campaigns

Deepfake videos have largely been used to add humor into election season (as in this video of PM Modi dancing, which the Prime Minister praised), or, as discussed above, to reach more voters. But parties have also used deepfakes to smear opponents. A video of INC politician and Leader of Opposition Rahul Gandhi being sworn in was layered with AI-generated audio, making it appear as though Gandhi resigned from the party.

In the edited video posted by the BJP, Gandhi says: “I can no longer pretend to be Hindu for the sake of elections.”

A move like this is in step with the BJP’s public image — the party brands itself as the defender of Hindus and calls the INC the party of dynastic politics (because of the dominance of the Gandhi family) and fakery (because of a belief that the INC defends minority rights more than those of Hindus).

The INC isn’t innocent either; the party has spread the largest number of deepfake videos on social media. Besides doctored videos of Bollywood actors, they also created a deepfake of BJP politician and Home Minister Amit Shah saying he was curtailing the reservation system, India’s version of affirmative action for historically disenfranchised castes and ethnic groups. 

“This was just a simple lip-sync but it seriously impacted public sentiment about the BJP,” Vishnoi said. “What was good about it was that within one day, the perpetrator of this deepfake was caught and punished. There needs to be more urgency about finding and punishing the creators putting out this negative content, even party workers.”

Jibu Elias, Responsible Computing Fellow at Mozilla, told the HPR that the volume of deepfake videos used for smear campaigns was less than expected.

“I was glad there were no videos created as though it was footage from a CCTV camera of a politician doing something bad or hurting religious sentiments,” Elias said. “If there is a deepfake video of a politician disrespecting a church or temple, for example, that would cause a huge uproar.”

Jadoun said he received 250 requests for deepfakes during election season, of which the vast majority were intended to insult the opposition.

It’s largely up to deepfake creators to set their own codes of ethics. Jadoun, too, has created an ever-evolving set of guidelines for his company. All content put out by Jadoun’s company carries a watermark, and Jadoun ensures the scripts of the videos they create are not manipulative or defamatory.

Blueprint for AI regulation

India, much like the U.S., currently has no active legislation regulating AI. The varied uses of AI in the Indian election, the biggest democratic exercise in the world, show that the country, and other democracies like it, need strong regulatory frameworks for AI.

India has favored AI innovation for many years, and the Ministry of Electronics and Information Technology (MeitY) has so far put out only advisories to govern the use of AI.

In April 2023, MeitY informed Parliament it was not eyeing any legislation to regulate AI. But this approach has been walked back in the past few months. In November 2023, IT Minister Ashwini Vaishnaw announced plans to regulate the spread of deepfakes. And on March 1, 2024, MeitY issued an advisory mandating that “unreliable” or “under-tested” generative AI models or tools be labeled as such. The ministry has also warned big tech companies not to create tools that “threaten the integrity of the electoral process.”

“If you’re making billions of dollars off of your platform, or like Meta, you are using user data to train your AI, then you do have a responsibility to work against bad actors,” Elias said. “It’s like a bad boyfriend — I want all the good, new stuff, all the data and innovation, but I can’t do the mundane stuff of regulating anything.”

Currently, MeitY is drafting an AI-specific law that would require social media platforms to label and watermark AI-generated content.

Beyond legislation, workshops for election officials and AI professionals on how to detect and respond to deepfakes are key. 

“There is a knowledge gap. We need to teach them how to handle a harmful deepfake when it comes out, what’s going to be their reaction, how they should detect it, and what kind of criminal rulings to make,” Vishnoi said.

To fill this gap, legions of fact-checkers were working overtime during the election this year, with the most prominent coalitions being the Misinformation Combat Alliance and Project Shakti.

Vishnoi told the HPR he believes the ideal governance framework for AI would warn deepfake creators against creating content that would promote misinformation or that would tarnish someone’s reputation at three stages.

“First, at the development stage, where they’re taking source code from GitHub and creating a deepfake, there should be a warning from the big tech company and the government detailing the things that violate the constitution,” he said. “Then, when you are going to deploy or put out a video which can involve harming any person’s image, there should be a warning. And then again, at the distribution stage, when they are pushing it out on a platform, there should be a final warning.”

After that, Vishnoi said, it is the job of the Indian justice system to mete out the appropriate punishments.

Big takeaways 

The biggest lesson from the Indian election for democracies around the world is that AI’s effect on democracy depends on whose hands it falls into. The same technology that can be used to help non-Hindi speakers understand candidate speeches can be used to undermine their credibility and create smear campaigns. This is where AI legislation comes into play.

India can look abroad for examples of effective legislation with regard to AI in elections. The European Union’s AI Act is perhaps one of the most robust frameworks worldwide. 

The AI Act, which is yet to be fully enforced, classifies AI systems according to their risk and bans some applications it considers simply too high-risk. The majority of obligations fall on developers of high-risk AI systems, similar to the framework Vishnoi suggested. Chatbots, deepfakes, and general-purpose AI models are generally considered “low risk.” But if they are used to deliver political advertising, profile voters, or provide voter assistance, they fall into the “high risk” category and must adhere to strict operational and reporting guidelines: establishing a risk management system, conducting data governance, keeping detailed technical documentation and operational records, implementing human oversight, ensuring high security, and establishing a compliance mechanism.

The Ministry of Electronics and Information Technology’s new AI & Emerging Technologies Group could be modeled on the EU’s AI Office, which governs the ethical use of AI across the 27 member states and stays up to date on the most effective methods of AI governance through forums with experts.

Still, India has unique needs for its AI governance. Deepfake calls, according to experts, are received better in India than in most other geographies, so if crackdowns on spam robocalls by the Telecom Regulatory Authority of India extend to deepfake calls, they could limit the calls’ beneficial effects. Any plan for AI governance in India must strike a balance between regulation and innovation: curbing the misuse of AI while supporting its constructive applications in democratic processes.