This article has been written by Aryan Dash and Rishita Sinha, fourth-year students at National Law University, Odisha.
Introduction
The world has entered the alternate realm of deepfakes: a technology capable of producing synthetic images, videos, and audio. Its first notable appearance came when filmmakers used deepfake technology to preserve Paul Walker’s character in a film after his untimely demise. However, what began as a positive force in storytelling has, regrettably, metamorphosed into a tool with a dark underbelly.
Deepfake, a portmanteau of “deep learning” and “fake,” is a technology capable of blurring the lines of reality and of identity: faces and voices are swapped with another’s, and what emerges is a morphed illusion. It works through deep learning, using neural networks with multiple layers to analyse and learn patterns from vast datasets. Despite its potential, the evolution of deepfake technology has revealed a precarious aspect, sparking ethical concerns about its capacity to generate misleading and deceptive content.
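To make the mechanism concrete, the classic face-swap architecture pairs one shared encoder (which learns pose and expression) with a separate decoder per identity; swapping happens when a frame of person A is encoded and then decoded with person B’s decoder. The sketch below is purely illustrative, with toy dimensions and random weights standing in for a trained model; the function and variable names are our own, not from any deepfake library.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(in_dim, out_dim):
    """A random dense layer standing in for trained weights."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1

IMG, LATENT = 64, 8                    # flattened image size, latent size (toy)
encoder = layer(IMG, LATENT)           # shared: captures pose/expression
decoder_a = layer(LATENT, IMG)         # reconstructs person A's likeness
decoder_b = layer(LATENT, IMG)         # reconstructs person B's likeness

def forward(x, enc, dec):
    z = np.tanh(x @ enc)               # compress frame into a shared latent code
    return np.tanh(z @ dec)            # reconstruct in the decoder's identity

frame_of_a = rng.standard_normal(IMG)  # stand-in for a frame of person A

# Training would minimise per-identity reconstruction error; the "swap"
# is simply pairing A's latent code with B's decoder at inference time.
swapped = forward(frame_of_a, encoder, decoder_b)
print(swapped.shape)                   # (64,) — an image-sized output
```

The point of the sketch is the design choice: because the encoder is shared across identities, the latent code carries only expression and pose, so routing it through a different decoder re-renders the same performance on another person’s face.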
Amidst the whirlwind of this AI-led digital drama, a growing number of deepfake incidents is sweeping across the globe, with celebrities, public figures, and politicians as the favourite targets. The malicious use of deepfake technology to tarnish and exploit their public image, often turning them into pornographic commodities, has become a prevalent trend. Recent reports involve explicit content falsely attributed to global sensation Taylor Swift and to Rashmika Mandanna, along with misleading depictions of prominent figures such as Indian Prime Minister Narendra Modi, former U.S. President Donald Trump, and cricket legend Sachin Tendulkar. Astonishingly, creators have extended their reach to religious figures as well: around the inauguration of the renowned Ram Mandir in India, synthetic videos depicting Lord Ram were disseminated.
In response to such escalating deepfake cases, the Ministry of Electronics and Information Technology (MeitY) has issued a crucial advisory directing intermediaries to adhere to the IT Rules in relation to deepfakes. In this blog, we delve into the dark side of deepfakes, dissecting the implications of MeitY’s advisory, navigating the legal landscape for deepfakes within existing Indian laws, and providing a succinct comparison with global legislation while pinpointing the legal gaps in our jurisdiction.
The Dark Side of Deepfake
The primary consequence of deepfakes lies in their ability to reshape our perceptions. They possess the power to alter our understanding of reality, even going so far as to manipulate our memories or implant entirely new ones. A prime illustration is Paul Walker’s reappearance in his film, or the portrayal of Princess Leia’s character persisting on screen long after the actor’s demise. While audiences may not react with immediate alarm, dismissing it as mere fiction, the crux of the matter lies in a significant question: are they, in fact, subconsciously questioning the reality of events?
Secondly, deepfakes find their most nefarious application in the pornographic sphere, for example in revenge porn. In this landscape, women are increasingly the primary targets, and these tools are turned into cruel weapons against them. This should be viewed not merely as an invasion of privacy but as a severe sexual offence against women, as their faces and body images are used as digital resources without their knowledge or consent.
Lastly, a pervasive issue that significantly impacts any average household or family is the modification and alteration of audio through deepfakes. In this unsettling scenario, offenders mimic family members’ voices to deceive loved ones into believing they are in urgent need of financial help, thereby exploiting their trust to manipulate them into sending money.
However, this manipulation extends beyond personal spheres; corporations and firms can fall victim to similar tactics. By impersonating executives or managerial figures, deepfakes can influence decision-making processes, potentially resulting in financial losses, scams, or fraud.
Addressing the Menace of Deepfakes in India and Global Legislation
MeitY’s Advisory and Regulatory Framework
The Ministry of Electronics and Information Technology (MeitY) took a significant step in late 2023, signalling the urgent need for robust measures to combat the escalating threat of deepfakes in India. This took the form of a concise advisory issued under Rule 3 of the IT Rules, 2021. Rule 3 mandates digital intermediaries to exercise reasonable due diligence concerning the content uploaded on their platforms. It also emphasizes the need to inform users about the platform’s policies and the legal implications of publishing any of the 11 prohibited content categories outlined in Rule 3(1)(b) of the IT Rules, 2021. The Ministry further directed intermediaries to take swift action against violators, with complaints to be acknowledged within the first 24 hours and resolved within 15 days.
Privacy-Centric Laws
The rise of AI-generated ads or content featuring celebrities without explicit consent poses a significant threat to their commercial interests. Unauthorized use of a celebrity’s voice, face, or likeness for marketing products without proper compensation not only undermines their economic interests but also has the potential to damage their public image.
India’s existing privacy-centred legal framework draws strength from Article 21 of the Constitution, which, as interpreted by the Supreme Court, grants citizens control over their identity and the processing of their data. Section 6 of the Digital Personal Data Protection Act, 2023 reinforces this, making user consent paramount in data processing. Violations concerning sensitive personal information attract penalties under the Information Technology Act, 2000, with identity theft covered by Section 66C; such misuse may additionally attract charges of forgery under the IPC. For cases involving cyber defamation, Section 499 IPC can be invoked.
Personality Rights and IP Protection
The proliferation of deepfakes has led to concerning infringements of personality rights, particularly affecting celebrities. While Indian law currently lacks specific protection for personality rights, existing legislation such as the Copyright Act has been invoked in recent cases involving Anil Kapoor and Amitabh Bachchan. Courts have taken measures to restrict the unauthorized use of their likeness, voice, or image, thereby safeguarding the economic interests of these celebrities.
Section 2(d)(vi) of the Copyright Act defines the author of a computer-generated work as the person who causes the work to be created, but this does not permit the use of someone else’s intellectual property without consent. Authors also hold moral rights under Section 57, protecting their works from unauthorized use; this matters especially for celebrities whose distinct features carry commercial value. Using a person’s likeness, voice, or identity without permission can violate both copyright and the right to publicity flowing from Article 21, as affirmed in the Puttaswamy judgment.
The Adequacy of Existing Laws
On the surface, existing Indian legislation seems adequate in addressing the surge in deepfake incidents. A critical examination, however, reveals that these laws are reactive rather than preventive. They come into play post-offence, leading to content takedowns or penalties for creators under Sections 66 and 67 of the IT Act, 2000. The lack of comprehensive guidelines leaves a void in stopping such practices at their roots. The burden on intermediaries is compounded, raising concerns about potential overreach into the executive domain and its impact on digital rights and free speech. The MeitY advisory mandates strict adherence to the existing IT Rules, placing significant responsibility on intermediaries. With deepfakes classified as misinformation, intermediaries are obligated under Rule 3 of the Intermediary Guidelines to exercise rigorous due diligence in identifying and removing deepfake content.
Global Comparative Analysis: Legislative Gaps
Globally, legislative and regulatory gaps persist in detecting and preventing AI-generated synthetic media and in protecting the rights of creators and audiences. China and the EU focus on early-stage intervention, targeting the technology, its providers, or the content itself. In contrast, the U.S. prioritizes later-stage control, urging audiences to report problematic content. All three regions stress the responsibility of intermediaries, with variations in their approaches to content censorship.
Balancing Responsibility and Oversight: The Way Forward
The rise of deepfake technology presents a multifaceted challenge that transcends borders, impacting individuals, celebrities, and democratic processes. The European Parliament’s research underscores the need for a comprehensive legislative approach, advocating real-time tagging of manipulative deepfake content. This strategy involves a combination of human-led detection by intermediaries and advanced software for early identification. Legislation should extend beyond political and sexual content, categorizing AI deepfake systems as high-risk, necessitating constant supervision.
Striking a balance between censorship and safeguarding digital rights is crucial. Instead of relying solely on takedowns and penalties, early detection with shared responsibilities between intermediaries and lawmakers is proposed. Emphasis should be placed on slowing the release of potentially harmful content, as technology might outpace existing frameworks. Greater international cooperation and a uniform AI regulation framework are advocated to counter the deepfake menace effectively.
User awareness is essential, encouraging individuals to exercise caution and take charge of their digital safety, security, rights, and privacy until comprehensive legislation is enacted. The complex landscape of combating deepfakes demands a proactive and globally coordinated approach, balancing the prevention of misuse with the preservation of fundamental rights.
While India’s MeitY has made progress with an advisory, the existing legal framework remains reactive, necessitating a more preventive approach. Legislative gaps persist globally, highlighting the need for a comprehensive, proactive strategy that balances responsibility, oversight, and international cooperation. As the digital landscape evolves, user awareness becomes paramount in safeguarding privacy and rights. A coordinated global effort is essential to effectively counter the deepfake menace while preserving fundamental rights in this complex technological era.