
Govt Issues Advisory To Social Media Platforms On Deepfakes, Warns Of Legal Consequences

According to the advisory, platforms like Instagram and X (formerly known as Twitter) are obligated to ensure users comply with the "prohibited content" rule of the IT Act.


NEW DELHI— On Tuesday, the Ministry of Electronics and Information Technology issued an advisory to social media platforms, instructing them to adhere to existing IT rules in addressing the rising issue of deepfakes.

The advisory obligates platforms such as Instagram and X (formerly Twitter) to ensure that their users comply with the "prohibited content" rule of the IT Act. The move aims to counter the growing problem of deepfakes targeting actors, businesspersons, and other celebrities.

The advisory, which specifically addresses the manipulation of images and videos using Artificial Intelligence (AI), stressed that social media platforms must "clearly communicate" to users, in precise language, what content is not permitted.

Intermediaries that violate the rules, including by failing to report offending users, will face legal consequences. They are also required to inform users about possible strikes against their accounts.

IT Minister Rajeev Chandrasekhar stated, "Misinformation represents a deep threat to the safety of Internet users, and deepfakes, which are AI-powered misinformation, further amplify the threat."

The advisory comes amid growing concern about AI-powered misinformation saturating the Internet. Notably, deepfakes, in the form of manipulated images and videos, have targeted several actors, including Rashmika Mandanna, Katrina Kaif, Alia Bhatt, and Priyanka Chopra Jonas.

What is a deepfake?

A deepfake is fabricated content, such as a video, image, or audio clip, created using advanced artificial intelligence techniques. Deepfakes use a form of artificial intelligence called deep learning to generate convincing depictions of events that never happened, hence the name.

In September 2019, the AI firm Deeptrace identified roughly 15,000 deepfake videos online, nearly double the number counted nine months earlier.

Notably, 96% of these videos were pornographic, and 99% of those featured the faces of female celebrities superimposed onto adult film performers.