
AI And The Indian Elections: What Does Human Connection Look Like In The Digital Age?

How artificial intelligence is transforming voter engagement and challenging democratic norms in India


“I, Rahul Gandhi, am resigning from the Congress. I am tired of being a Hindu for the sake of elections. I did the nyay yatra and released the manifesto (nyay patra), but Modi keeps sending corrupt people to jail… Under his leadership, we will keep sending corrupt people to jail. That is why I am going to my grandfather’s house in Italy.”

Rahul Gandhi spoke these words in April 2024, right before the beginning of the Lok Sabha (Indian Parliamentary) elections, the largest and longest democratic exercise in the world, which were marked by an unprecedented use of technology.

But, when the elections ended, Gandhi reclaimed his place as the leader of the Opposition in the Lok Sabha. How could this occur after his resignation?

The issue: the original video was an AI deepfake.

The AI video of Gandhi, which circulated on Twitter, took genuine footage of Gandhi filing his nomination in Wayanad, Kerala, and overlaid it with an AI voice clone. Voice cloning is a method in which datasets of real recordings are broken down into their constituent soundwaves and reconfigured using AI, speech-learning models, and machine-learning algorithms so that, when those soundwaves are reassembled by a text-to-speech model, the output sounds exactly like the original voice.
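To make the pipeline described above concrete, here is a deliberately simplified sketch in Python of the three stages a voice-cloning system chains together: decompose reference recordings into spectral frames, distill them into a fixed-size "voiceprint," and condition synthesis on that voiceprint. The functions and the per-character synthesis rule are toy stand-ins invented for illustration; real systems use trained neural encoders and vocoders, not FFT arithmetic.

```python
import numpy as np

def extract_features(recording: np.ndarray, frame: int = 256) -> np.ndarray:
    """Slice a waveform into frames and take magnitude spectra
    (a toy version of breaking recordings into 'soundwaves')."""
    n_frames = len(recording) // frame
    frames = recording[: n_frames * frame].reshape(n_frames, frame)
    return np.abs(np.fft.rfft(frames, axis=1))

def speaker_embedding(features: np.ndarray) -> np.ndarray:
    """Average the spectra into a fixed-size 'voiceprint'
    (a real system trains a neural speaker encoder instead)."""
    return features.mean(axis=0)

def synthesize(text: str, voiceprint: np.ndarray, frame: int = 256) -> np.ndarray:
    """Toy text-to-speech: emit one frame per character, shaped by the
    cloned voiceprint (crude stand-in for a neural TTS model)."""
    out = []
    for ch in text:
        spectrum = voiceprint * (1.0 + 0.01 * (ord(ch) % 7))  # per-character variation
        out.append(np.fft.irfft(spectrum, n=frame))
    return np.concatenate(out)

# "Clone" a synthetic reference voice, then speak new words with it.
rng = np.random.default_rng(0)
reference = np.sin(np.linspace(0, 400 * np.pi, 16000)) + 0.05 * rng.standard_normal(16000)
voiceprint = speaker_embedding(extract_features(reference))
fake_speech = synthesize("I am resigning", voiceprint)
```

The point of the sketch is structural: once the "voiceprint" exists, arbitrary new text can be rendered in the captured voice, which is exactly what made the fabricated resignation plausible.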

This unexpected turn of events sparked questions about the authenticity of candidate and campaign videos, as well as the greater use of AI in the Indian elections, which has set off a wave of confusion and misinformation amongst voters. India’s 2024 election, dubbed the first ‘AI Election’ (NBC News), was filled with deepfake videos, AI chatbots, and auto-generated, personalized messaging from politicians. While the technology itself is not new – artificial intelligence found its roots in the 1950s – recent developments have allowed costs to plummet and artificial intelligence to be used as a new, commonplace tool for communication.

This lowered barrier to entry for AI usage allowed for a wealth of creative campaigning strategies. Ahead of the elections, Meta approved a series of AI images that promoted the “valorisation of Modi as a Hindu leader,” in an attempt to “impart a simultaneous sage-like and warrior-like quality to Modi, both of which create the aura of a political leader who is indefatigable, undefeatable, beyond reproach and thus worthy of our unquestioned loyalty.”

While that imagery was created and disseminated by individual actors on social media, the Indian government was also deploying AI: the Ministry of Electronics and Information Technology built Bhashini, a “National Language Translation Mission” (Bhashini) that uses automatic speech recognition, text-to-speech, and machine translation to render Prime Minister Narendra Modi’s speeches from Hindi into other languages such as Tamil, Malayalam, and Kannada. (The Diplomat)
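The speech-to-speech translation described above is, structurally, three stages chained together: speech recognition, machine translation, and text-to-speech. The sketch below shows that shape of pipeline with hypothetical stand-in functions; none of these are Bhashini’s actual APIs, and the tiny lookup tables exist only so the example runs end to end.

```python
def recognize_speech(audio_id: str) -> str:
    # Stand-in for automatic speech recognition: a hypothetical lookup,
    # where a real system would run an acoustic model over the audio.
    transcripts = {"speech_001": "namaste bharat"}
    return transcripts[audio_id]

def translate(text: str, target_lang: str) -> str:
    # Stand-in for machine translation: a tiny phrase table instead of
    # a trained translation model.
    phrase_table = {("namaste bharat", "ta"): "vanakkam bharatham"}
    return phrase_table[(text, target_lang)]

def text_to_speech(text: str, lang: str) -> bytes:
    # Stand-in for text-to-speech: return a placeholder "audio" payload.
    return f"[{lang} audio] {text}".encode()

def speech_to_speech(audio_id: str, target_lang: str) -> bytes:
    # Chain the three stages, as a speech-to-speech translation service would.
    return text_to_speech(translate(recognize_speech(audio_id), target_lang), target_lang)

print(speech_to_speech("speech_001", "ta"))
```

Each stage is independently replaceable, which is why the same pipeline can serve a translated speech, a dubbed campaign video, or a cloned-voice robocall.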

Yet what I really want to focus on is the usage of AI by political parties and their respective candidates. In this Indian election cycle, AI usage reached an all-time high and revolutionized how candidates communicated and connected with their potential constituencies. These strategies serve as both an example and a warning to the future of Indian politics and democracy.

During the election season, political parties and their candidates deployed AI avatars of themselves to engage one-on-one with voters. A New York Times report detailed how “An A.I.-generated version of Prime Minister Narendra Modi that has been shared on WhatsApp shows the possibilities for hyperpersonalized outreach in a country with nearly a billion voters.” Videos of Prime Minister Modi directly addressing voters by name and asking for their votes flew across WhatsApp and other social media channels in a flurry of excitement, as Indians – many of whom did not speak Hindi – felt that Narendra Modi was speaking to them, about their concerns and the importance of their vote, in their mother tongue.

Members of Modi’s party, the BJP, also disseminated the same kinds of AI-generated messaging throughout their local constituencies. In some ways, this may have had an even stronger impact than Modi’s own messaging: officials would deliver personalized messages detailing the government benefits each voter had received and then ask for their vote. Beyond avatars, there were even personalized generative-AI chatbot phone calls, which addressed voters by name, in their own language, and in the voice of political candidates.

Prateek Waghre, the executive director of the New Delhi-based Internet Freedom Foundation, referred to these developments as “the Wild West”. (NYT) Waghre’s analogy may not be all that far off; the AI revolution – particularly in the realm of weapons of mass destruction and warfare – has brought attention to the systemic flaw that technology always develops quicker than the law. The virtual landscape of AI – and the real-world implications that come with it – lends itself to a Wild West-esque lawlessness, with no legal protections in place and everyone left to fend for themselves.

The concern shared by Waghre, myself, and many others who have studied the field of AI is that the hyper-personalized messaging deployed in the Indian elections blurs the line between reality and fantasy and further stokes the fire of misinformation.

Hyper-personalization through generative AI creates a twisted reality in which “...voters [are shown] that the candidates are attuned to each voter’s specific concerns.” This sets a highly unrealistic standard in which politicians begin to morph into companion-like, even friend-like, figures: voters can confide in candidate chatbots about their political worries and anxieties – and may forget that no one is actually listening and taking in their input on the other end of the line.

As Nilesh Christopher writes, “Feigning a personal connection with voters through AI could act as the stepping stone toward the real risk of targeted manipulation of the public. If personalized voice clones become normal, more troubling uses of the technology may no longer seem out of bounds. Similarly, a barrage of mostly innocuous AI content could still damage trust in democratic institutions and political structures by fuzzing the line between what’s real and what’s not.”

That damaged trust has already begun. The Asia Pacific Foundation of Canada reported that, “Meta, the parent company of WhatsApp and Facebook, reportedly approved 14 AI-generated electoral ads that contained Hindu supremacist language and called for the killing of Muslims and a key opposition leader while elections were underway. Some of the videos contained the false claim that the leader ‘wanted to erase Hindus from India.’” Despite these hate-inciting, right-wing, supremacist ads, Narendra Modi still secured another term in office. This leads one to wonder: can the ‘buddy-buddy’ rhetoric being pushed through AI hyperpersonalized messaging allow a candidate to build a (one-sided) relationship of trust and rapport with a voter, thus making them more open to new, and even harmful, ideas?

A core part of our nature, as human beings, is the desire to connect. Connection is what allows us to develop meaningful relationships and ideas, to produce change, to see the world we want to believe in. However, in the context of the BJP’s usage of AI, the desire for human connection is being manipulated to build trust and push a nationalist, exclusionary, and right-wing agenda.

The blatant irony is that while the BJP harnesses the human desire for connection to streamline its messaging strategies, it is also actively working to undermine the Indian Constitution – a document founded upon the very values that foster connection and growth: inclusivity, freedom of religion, and co-existence.

AI, like any tool, is dependent on who wields it. What would it look like to imagine a world in which AI is utilized to build connection – grounded in inclusivity, freedom of religion, and coexistence, rather than Hindutva and Islamophobia?

The key: harnessing the power of nostalgia

During election season, there were multiple incidents in which deepfakes were used to create videos of deceased individuals discussing current events. On one such occasion, Duwaraka, daughter of Velupillai Prabhakaran, the Tamil Tiger militant chief, spoke to Tamilians across the world, encouraging them to seize their political freedom. Duwaraka had died more than a decade earlier, aged just 23, during the Sri Lankan civil war, and her body was never recovered; yet here she appeared, via livestream, as a middle-aged woman. In another incident, Muthuvel Karunanidhi, chief minister of the southern state of Tamil Nadu for two decades, appeared via video at a youth wing conference – despite the fact that he had died in 2018.

The videos of Duwaraka and Karunanidhi harnessed nostalgia in the same way: they invoked beloved figures of the past to make an appeal for the future. These videos drew on the power of connection – using individuals whom people both missed and related to – to place agency in the viewer and encourage them to look to the past in deciding the future they wanted for India.

AI deepfakes aren’t going anywhere. So why not use them for good?

Imagine a speech delivered by Mahatma Gandhi, addressing the Indian citizenry on the need for peaceful solutions to Hindutva. Or attending a webinar with a keynote by Dr. B.R. Ambedkar, calling for a return to India’s constitutional values and the annihilation of caste. Or perhaps a live talk from Sarojini Naidu, discussing violence against women and the need for women’s representation in politics. Or even a live simulation experience, in which one could walk through a street where Holi and Eid were being celebrated side by side, where Hindus, Muslims, Christians and Sikhs walked alongside one another, laughing and smiling, and people congregated outside of the masjid or the mandir, regardless of faith. AI can meld the secular values of the Indian constitution with the contemporary Indian landscape to show a future that is bright, promising, safe, and inclusive for all.

In its current iteration, AI’s ability to blur the boundaries between objective information and persuasive messaging makes the tool vulnerable to authoritarian manipulation. As guidelines and standards begin to develop around AI usage, it is imperative that we call for laws that ensure transparency in AI usage, along with a focus on ethics that promotes democratic values and ensures trust. The legal institutionalization of tech ethics and the protection of freedom of speech are traditionally attached to democratic institutions; when we develop legal guidelines for widespread AI usage, we re-affirm our faith in democratic values even while allowing for freedom and self-expression through AI.

As we peer into the future of AI, let us remember that the law is a tool, not a barrier, to ethical AI development. It is our responsibility as the global citizenry to demand AI usage that is focused on people over products, profits, and propaganda; if our technology is people-centered, it can effectively promote and deliver the core values, traditions, and histories that brought us together in our best moments and inspire a more prosperous future. With the right combination of AI and strong, people-focused legislation, we can now look to the past better than ever before – and even bring it to life right before our very eyes.

(Views expressed are personal)

(Faria Rehman is a writer and researcher fascinated by the intersection of AI, storytelling, and human rights. She holds an MSc in Human Rights from the London School of Economics, where her dissertation analyzed Silicon Valley’s collaboration with the Department of Defense in developing AI-based weaponry and its human rights implications. She works in communications at Hindus for Human Rights, where she is committed to promoting ethical standards in the use of technology and advocating for social justice.)
