This article is written by Vidhi Damani, 3rd Year Student, B.B.A. LL.B. (Hons.), National Law University Jodhpur.
- Introduction
The biggest media for the circulation of defamatory content are social media platforms such as Twitter and Facebook, and communication applications such as WhatsApp. This is because uploading or publishing defamatory content has never been easier than under the garb of supposed anonymity that internet intermediaries afford their users. The far-reaching effects of the mass dissemination of unregulated and defamatory content through such platforms have compelled governments to contemplate regulating the Internet so as to place greater responsibility on internet intermediaries for the content they host.
In India, “intermediary” is defined in Section 2(1)(w) of the Information Technology Act, 2000 as “any person who on behalf of another person receives, stores, or transmits that electronic record or provides any service with respect to that record.” This definition is broad enough to cover a diverse set of service providers, including social networking platforms, internet service providers, search engines and e-commerce platforms.
While cyber defamation has not been explicitly recognized in any statute, the tortious concept of defamation can be applied to online defamation as well. Defamation, simply understood as an injury to a person’s reputation, predominantly takes the form of libel on the internet, an essential element of which is publication. Intermediaries are often contended to be liable as publishers rather than as mere conduits or carriers of the data. Yet it is nearly impossible for online platforms, which receive and host data in huge quantities, to verify whether content is defamatory at the time a user uploads it and to prevent its publication. With increasingly complex technological frameworks and growing privacy concerns surrounding intermediaries, it has become imperative to determine whether they are truly mere conduits for their users’ content, and what extent of liability they would attract if they are found not to be so.
- Regulations on Intermediary Liability in India
The liability of internet intermediaries for communications or data hosted or transmitted by them has been codified. Section 79 of the IT Act, 2000 provides intermediaries a safe harbor from liability for third-party content, similar to the safe harbor rules of the European Union. However, it makes such immunity conditional on the intermediary merely hosting or transmitting the content and not interfering with it in any manner (by initiating the transmission, modifying the content or selecting its receiver). Further, this exemption can be availed of only if the intermediary did not have “actual knowledge” of the third-party content and complied with the due diligence requirements promulgated by the government, as listed in sub-sections 79(2) and (3).
In Shreya Singhal v. Union of India [AIR 2015 SC 1523], the Supreme Court read down the notice-and-takedown requirement of Sec. 79(3)(b), restricting “actual knowledge” to the receipt of a court order, or a notification by the appropriate government or its agency, to expeditiously remove or disable access to defamatory material, so as to bring the provision in line with Art. 19(2) of the Constitution. The blocking and takedown provisions under Sec. 69A of the Act, as well as Rule 3(4) of the Information Technology (Intermediaries Guidelines) Rules, 2011, provide that this mandated takedown of the defamatory content must be effected within 36 hours of receipt of such an order for the intermediary to be absolved of liability. However, Rule 3(8) of the draft 2018 amendment to the Rules reduces this timeframe to 24 hours. Section 67 read with Section 79 of the IT Act further grants immunity to service providers where obscene material is published or transmitted through their platforms.
The case of Avnish Bajaj v. State [150 (2008) DLT 769] highlighted the need for amendments to the Act of 2000, since the intermediary Baazee.com and its owner, the accused, were held liable for the publication of obscene content on the website for failing to employ content filters. This strict liability approach was quashed by the Supreme Court on appeal and was subsequently replaced with a safe harbor regime in the 2008 amendment to the IT Act. In Myspace Inc. v. Super Cassettes Industries Ltd. [236 (2017) DLT 478], the Delhi HC held that intermediaries are only a conduit for the exchange of information between users and cannot be obligated, under the due diligence requirements, to pre-screen and verify all the content stored on their websites. This ratio was also upheld in Kent RO Systems v. Amit Kotak & Ors. [2017 (69) PTC 551 (Del)] in furtherance of the conditional immunity. With the Shreya Singhal judgment, the conditions for claiming this immunity have been further relaxed by eliminating the need for intermediaries to themselves determine the defamatory nature of hosted content. The draft Rules of 2018, however, indicate the government’s intention to add safeguards and expand the liability of such intermediaries.
- Lacunae in the Law
Although the government intends to take proactive steps to regulate intermediary liability in accordance with international standards, some pressing issues persist in the present law. Sec. 79 of the Act patently states that an intermediary must be passive in order to avail of immunity from defamation proceedings. However, the proactive monitoring contemplated by the draft Rules would require the nature of an intermediary to change from passive to active, which would effectively dilute the safe harbour afforded to it, especially in the absence of any definition of unlawful content and its qualifiers. Such proactive filtering might also have a chilling effect on the freedom of speech and expression through arbitrary censoring or filtering of content, a risk further compounded by Rule 3 of the 2018 draft amendment to the Intermediary Guidelines, which in essence requires intermediaries to assume a Big Brother status over their users.
The question of jurisdiction also remains unsettled. In Frank Finn Management Consultants v. Subhash Motwani & Anr. [CS (OS) 367/2002], the Delhi HC found that since the impugned material was available on the Internet, a cause of action had arisen in Delhi even though publication technically took place only in Mumbai. Sec. 19 of the CPC permits this, since it allows plaintiffs to sue either where the wrong was done or within the jurisdiction where the defendant “resides, or carries on business, or personally works for gain”. However, if the wrong is considered to have been done in every jurisdiction where the online content can be accessed, defendants could be called upon to defend themselves anywhere. India also lacks any guidelines allowing its courts to exercise jurisdiction over foreign publications of defamatory content, which further magnifies the issue.
Further, the law presently treats all intermediaries as performing the same function and does not differentiate between the different roles they play. The EU’s Electronic Commerce Directive, by contrast, establishes an intermediary liability regime that differentiates between types of intermediaries on the basis of the functions they perform. Mere conduits, as recognized in Canada and the UK, are intermediaries that merely passively transmit content and in no way author or store it. Such intermediaries, for instance ISPs, enjoy blanket immunity from defamation. This distinction is not made under the IT Act, which accords the same level of immunity and liability to all intermediaries. In each case in which an intermediary invokes the immunity, courts must therefore scrutinize the manner in which it elicits user content, as well as the extent to which it exploits that information.
- Policy Suggestions
The IT Act does not provide any checks and balances to ensure that the power to censor is not abused or overused by the government. Google’s transparency reports show a sharp increase in the number of content takedown requests received from governments in recent times. There is no mechanism to review the grounds on which the government may order a takedown of allegedly unlawful or defamatory content. Principle 2 of the globally endorsed and acclaimed Manila Principles provides that intermediary restrictions on access to content should occur only pursuant to an order of a judicial authority, to ensure as much objectivity as possible. Intermediaries may further be directed to aid the courts by disclosing the identity of an anonymous user where a court deems it necessary in a defamation case.
Proactive filtering of content runs against Principle 3 of the Manila Principles, which prescribes that intermediaries should not be required to evaluate whether content is lawful; hence, where the provisions on unlawful content are undefined, an intermediary must not adjudge the defamatory nature of any content. A plausible means of content regulation, however, could be for the government to frame key regulations for search engines: for example, a list of impermissible search terms covering, inter alia, rape, child pornography and real-life violence, together with a mechanism for reviewing requests relating to such content and blocking such keywords.
Further, the government may be advised to shift from the traditional “notice and takedown” regime to a “notice and notice” regime, in furtherance of the due process prescribed in the Manila Principles. Under a “notice and notice” regime, the intermediary is required only to forward any complaint regarding content to its creator in order to absolve itself of liability, since it has no direct role in the creation of the defamatory communication. This allows the creator and the aggrieved party to resolve the dispute directly if the content creator is identifiable, and further permits the content creator to contest the legality of the content if he wishes to do so. Intermediaries would still be required to take down content where a court order so requires, and takedown would still prevail in cases where the content creator is not identifiable or is unresponsive. This would significantly increase the transparency of, and accountability for, takedown orders and minimize arbitrariness in the removal of allegedly defamatory content.
- Conclusion
While Sec. 79 of the IT Act grants internet intermediaries a conditional immunity, or safe harbor, from liability for content of which they had no knowledge and to which they made no substantive modifications, loopholes persist in the present legal regime for attaching liability to intermediaries. The present law treats all intermediaries as performing the same function, does not differentiate between the different roles they play, and consequently imposes the same responsibility and duty of care on all of them. Moreover, courts must also carefully evaluate their jurisdiction in each case.
Changed regulations must still be implemented to give effect to the Shreya Singhal judgment. Adopting a “notice and notice” regime, and refraining from proactive filtering or the addition of unwarranted due diligence obligations, would ensure procedural safeguards that protect intermediaries in cases of defamation while providing complainants with faster dispute resolution.