
Deepfakes and India’s Tightened Stance

Recent events in India have catapulted the issue of deepfakes into the public consciousness, illustrating the urgent need to grapple with this emerging digital challenge. One notable incident involved a deepfake video of the popular actress Rashmika Mandanna, which went viral on social media. In this unsettling video, a woman’s face was edited using AI technology to resemble Mandanna, causing significant public uproar. This case, among others involving celebrities such as Deepika Padukone and Kajol, underscores the alarming potential of deepfakes to deceive and disrupt. The Delhi Police’s investigation into these incidents, particularly the Mandanna case, reportedly ran into challenges, with social media platforms such as Meta (formerly Facebook) not cooperating fully with the probe.

In response to these and other troubling instances, the Indian government has stepped up its efforts to address the deepfake dilemma. Recognizing the capacity for these AI-generated videos to cause mischief and mayhem, it has targeted the titans of social media like Facebook and YouTube, demanding action.

Deepfakes: A Growing Global Concern

Deepfakes, a term combining “deep learning” and “fake,” are synthetic media in which one person’s likeness is convincingly swapped with another’s. They harness powerful machine learning and artificial intelligence techniques to craft visual and audio content that can deceive more easily than ever before. Deepfakes are particularly potent because of their use of deep learning methods, which involve training generative neural network architectures such as autoencoders or generative adversarial networks (GANs). This technology has brought a new dimension to the creation of fake content, elevating the potential for deception to unprecedented levels.
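To make the adversarial training idea concrete, here is a minimal sketch of a GAN loop, assuming PyTorch and using random tensors in place of real face images; the layer sizes and step count are purely illustrative. Actual deepfake pipelines add face detection, alignment, and far larger convolutional networks, but they rest on the same generator-versus-discriminator principle described above.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # toy sizes; real face models are far larger

# Generator: maps random noise to a fake "image" with values in [-1, 1]
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# Discriminator: outputs a real/fake logit for an image
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.rand(32, img_dim) * 2 - 1  # stand-in for a batch of real images

for step in range(100):  # real training runs for many thousands of steps
    # 1) Train the discriminator: real images labelled 1, generated images 0
    fake = generator(torch.randn(32, latent_dim)).detach()
    loss_d = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator: try to make the discriminator call its fakes "real"
    loss_g = bce(discriminator(generator(torch.randn(32, latent_dim))), torch.ones(32, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```

As the generator improves, its outputs become harder for the discriminator, and eventually for human viewers, to distinguish from real footage, which is precisely what makes the technique so effective for deception.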

The implications of deepfakes extend far beyond mere trickery or entertainment. They have raised global concerns due to their potential use in creating harmful materials, such as child sexual abuse content, celebrity pornographic videos, and revenge porn, and, more insidiously, in spreading fake news, hoaxes, bullying, and financial fraud. The capacity of deepfakes to spread disinformation and hate speech poses a significant threat to the core functions and norms of democratic systems: it interferes with people’s ability to make informed decisions, ultimately affecting their participation in democratic processes and the expression of their political will.


Image Credit: Regula

Addressing this global challenge, Prime Minister Narendra Modi, in his remarks at the G20 Virtual Summit held on November 22, 2023, emphasized the need for international cooperation in regulating AI and deepfakes. He highlighted the negative impacts of deepfakes on society, calling for a joint global effort to manage these technologies.

“The world is worried about the negative effects of AI. India thinks that we have to work together on the global regulations for AI. Understanding how dangerous deepfake is for society and individuals, we need to work forward. We want AI should reach the people, it must be safe for society.” – Prime Minister Narendra Modi

The Government Strikes Back

Deputy IT Minister Rajeev Chandrasekhar, in a closed-door meeting, issued a firm warning to major social media platforms, including giants like Facebook and YouTube. The warning was straightforward: take immediate action to combat the spread of deepfakes and content that violates local laws on obscenity and misinformation. Chandrasekhar expressed dissatisfaction with the compliance level of many social media companies regarding the 2022 rules. These rules explicitly prohibit the dissemination of content deemed harmful to children, obscene, or involving impersonation.

Highlighting the alarming trend of deepfakes, which are lifelike videos generated by AI algorithms trained on online content, Chandrasekhar emphasized the urgency of addressing this growing concern. He instructed the social media firms to actively remind users of the legal prohibitions on posting deepfakes and content spreading obscenity or misinformation, through repeated notifications upon login or other methods. The deputy IT minister stressed the need for these companies to update their usage terms to reflect these regulations, warning that failure to voluntarily comply would lead to official directives mandating such compliance.

Further emphasizing the gravity of the situation, Chandrasekhar deemed the demand a “non-negotiable” requirement from the Indian government, showing its commitment to safeguarding the online space from potentially harmful content. He expressed his readiness to issue official orders if social media platforms failed to implement the necessary measures voluntarily. In response, the IT ministry released a press statement confirming that all social media platforms present at the meeting had agreed to align their content guidelines with the government’s stipulated rules. While the statement did not specify the consequences for non-compliance, it underscored the government’s determination to ensure a safer digital environment for its citizens.

“We plan to complete drafting the regulations within the next few weeks.” – IT Minister Ashwini Vaishnaw

Industry Responses and Steps Forward

As India tightens its grip on regulating deepfakes, the industry’s response and the steps moving forward are pivotal in shaping a safer digital landscape.

Responses from Social Media Companies
In response to the government’s directives, major social media companies, including Google, have expressed their commitment to this cause. Google, which owns YouTube, stated its dedication to responsible AI development and said it has robust policies and systems to identify and remove harmful content across its products and platforms. This assurance from one of the leading tech giants marks a significant step towards aligning with the government’s objectives in combating the issue of deepfakes.

The IT ministry reported that all platforms present at the meeting had agreed to align their content guidelines with the government’s rules. This alignment is a crucial step in ensuring that the digital content ecosystem remains safe and trustworthy. It shows a collective effort from both the government and the industry to create a digital environment where innovation does not compromise safety and authenticity.

New Regulatory Considerations
Going a step further, the government is considering implementing new regulations, such as watermarking all AI-generated content. This approach aims to make it easier to identify AI-generated content, thus adding a layer of transparency and accountability. The regulations will also focus on deepfake detection and establishing stringent rules for data bias and privacy, reflecting a comprehensive approach to tackling the various facets of this complex issue.
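On the watermarking idea specifically, the simplest form of labelling is to stamp provenance metadata into each generated file. The sketch below, assuming Python with the Pillow library, tags a PNG with an "ai_generated" marker and reads it back; the key names and file paths are illustrative assumptions, not any mandated format. Real proposals, such as cryptographically signed provenance records or invisible statistical watermarks, are considerably more tamper-resistant than plain metadata, which can be stripped trivially.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator_name: str) -> None:
    """Re-save an image as PNG with a provenance note in its text metadata."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")        # illustrative key, not a mandated format
    meta.add_text("generator", generator_name)
    img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Return any text metadata stored in the PNG, e.g. the tag added above."""
    return dict(Image.open(path).text)

# Hypothetical usage, assuming output.png came from some generative model:
# tag_as_ai_generated("output.png", "output_tagged.png", "example-model-v1")
# print(read_provenance("output_tagged.png"))
```

Any regulation built on this idea would need watermarks that survive re-encoding, cropping, and screenshots, which is why detection research and robust watermarking are typically discussed together.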

Final Thoughts

India’s proactive stance on deepfakes marks a crucial juncture. The journey ahead is not just about curbing digital deception but also about shaping a future where technology serves humanity without compromising its core values. With governments, industries, and communities coming together, the path forward is one of vigilant innovation, ethical AI use, and a reimagined digital ecosystem. As we embrace this new era, India’s role in steering this global conversation may well set the precedent for a safer, more trustworthy digital world for generations to come.

____________

Written By: TEChquity India
