
The battle for online privacy

Kaushik Deka

Award-winning filmmaker Utpal Borpujari is a confused man. As a former journalist, he understands the importance of free speech in a democracy. At the same time, he acknowledges the need for a regulatory mechanism to weed out misinformation and other objectionable content from social media platforms that have admittedly given an empowering voice to millions of users. As a filmmaker, Borpujari is a strong votary of creative freedom, though he concedes that a sense of ethics and responsibility should guide such liberties.

So, on February 25, when the Centre released the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, for digital and OTT platforms, he did not question the rationale behind the guidelines (ostensibly to provide the ordinary citizen with a grievance redressal system). But a careful reading of the new rules left Borpujari, like many others, worried. “From experience, we know we can't trust Twitter or Facebook to guard our privacy, but these guidelines do little to help me as an internet user,” he says.

The new IT rules are now at the heart of a big battle between the Union government and big tech companies. Aimed at making foreign companies comply with Indian laws—which they often evade, abdicating their responsibilities and accountability to millions of users—the rules also give the executive machinery the power to censor any digital content with very little judicial supervision. “The publication of a transparency report along with clearly defined processes of grievance redressal will make the internet a safer place for users. But making the government the final arbiter has raised the hackles of many,” says Subimal Bhattacharya, a cyber security expert and former head of General Dynamics.

Why did the Centre frame the rules?

There is unanimity among all stakeholders that a transparent and accountable mechanism is needed to prevent the rampant abuse of social media platforms to spread misinformation, defame individuals, transmit sexually explicit material and violate copyrights. In the absence of a data privacy law, the Information Technology Act, 2000, lacked adequate provisions to regulate how digital platforms have since evolved, particularly social media intermediaries such as Twitter, Facebook and Instagram and instant messaging services such as WhatsApp, Telegram and Signal.

What added to the complications was the absence of a transparent grievance redressal mechanism within social media platforms. The IT Act of 2000 also provided “safe harbour” to these ‘intermediary’ companies—passive conduits of information with no active role in the content—and exempted them from criminal liability for the content they hosted. The Union government now claims the core of the IT rules is to create a transparent, readily available forum for redressal of grievances. Digital policy observers too acknowledge the need for such oversight. “If the traditional counterparts of these mediums, such as TV, print, voice call, text message, can operate within a regulatory structure, why should the new mediums not have the same oversight?” asks Jaijit Bhattacharya, president, Centre for Digital Economy Policy Research.

Several social media platforms claim to have their own in-built mechanisms to filter objectionable content, but such content curation has often faced allegations of arbitrariness, ideological bias and commercial motivation. The lack of transparency is evident in that they never inform users why a particular piece of content was flagged or taken down. In a recent controversy over an alleged “toolkit”, this opaque process saw Twitter accused of political bias.

Twitter's public policy says it labels “significantly and deceptively altered or fabricated” content as “manipulated media”. It claims the platform uses its own technology or third parties to determine whether content has been manipulated. The arbitrariness of Twitter's policy was on display in the toolkit case: it flagged some tweets with the same content but left several others untouched. Another fiasco occurred when the platform removed the ‘blue tick’ verification mark from the personal account of Vice-President M. Venkaiah Naidu and several top RSS leaders. The randomness and opaqueness of such processes have amplified calls for scrutiny of how these intermediaries function.

“We need to know what is happening in the invisible mechanics of the automated public sphere. If something is tagged as manipulated misinformation, it is important to explain why,” says Anita Gurumurthy, executive director, IT for Change. Osama Manzar, an IT expert and member of the Alliance for Affordable Internet (A4AI), believes such missteps can be avoided if platforms fact-check content locally, in the local context and language. “Wrong content should not only be removed but the reason for the removal must reach the user,” he says.

If social media platforms have shown inconsistent content curation, influenced by ideological bias and commercial considerations, the government has also often sought to play regulator. In the past six months, there have been several episodes where the Centre and Twitter have crossed swords and, on most occasions, Twitter was forced to comply with the government's demands. Earlier this month, the Union government asked Twitter to take action against cartoonist Manjul, whose tweets it believed had violated Indian law. Twitter did not comply.

Critics say the new rules will not allow social media platforms to reject such requests in future. “Are social media houses like postmen, who deliver your mail but do not read its contents? Or are they like publishers, who have a fiduciary responsibility to the public to ensure the accuracy, propriety and legality of what they publish? Facebook and Twitter used to claim the former, but once they gave in to pressure to flag, and even remove, posts, they became more like the latter. The answer that they are sometimes postmen and sometimes publishers will not resolve the regulatory challenge,” says Congress MP Shashi Tharoor, who is also chairman of the parliamentary standing committee on information technology.

Objections to the rules

The new rules prescribe a due diligence code intermediaries must heed to enjoy the ‘safe harbour’ immunity. The digital media ethics code lists 10 categories of content that social media platforms should not host. A platform, on receipt of information from a court of law or appropriate government agency about hosting prohibited content, will have to take it down within 36 hours. Civil rights activists claim that this provision disproportionately empowers the government to block any content—especially that which is critical of it—by invoking the sovereignty, national integrity and public order clauses. “Terms like defamatory and libelous can be invoked easily to characterise content seen as unfavourable,” says Gurumurthy. Manzar advocates judicial oversight to decide the nature of the content and the necessity/proportionality of intervention by the State.

The biggest bone of contention has been the traceability clause. Established social media platforms—those with at least five million registered users—will need to enable identification of the first originator of a piece of information when required by a court of competent jurisdiction or a competent authority, for offences related to the sovereignty, integrity and security of India, friendly relations with foreign countries, public order, or a sexual crime. Though the rules clarify that a traceability order may be passed only for these categories of offences, experts feel that categories such as ‘public order’ are broad in scope and can be misused by the State.

More importantly, they fear this provision will be the end of encrypted messages and, eventually, of user privacy. “Encryption helps prevent not only your messages from being accessed by third parties but also guards against a whole range of cyber security risks,” says Apar Gupta, executive director, Internet Freedom Foundation (IFF), a Delhi-based digital liberties organisation.

WhatsApp, which will be massively hit by this provision, has moved the Delhi High Court, claiming it will force the company to circumvent its end-to-end encryption policy. Ironically, WhatsApp is facing a legal challenge in the same court for forcing its users to consent to a new service agreement, which enables sharing of user data with parent company Facebook. Though the policy came into effect on May 15, the mandatory consent is now on hold till India rolls out a personal data protection law, claims WhatsApp.

The government, meanwhile, says it has merely asked the big tech companies to set up a system for tracing the origin of a message, and that this does not mean it wants them to break encryption. The jury is still out on whether traceability is possible without breaking encryption. “Traceability is absolutely possible without compromising privacy or encryption. An encrypted message is pretty much like an Amazon package. No one except the intended recipient can see what is inside the package. But the recipient can find out who sent the package since it is labelled with the sender's name,” says Jaijit Bhattacharya. Prof. V. Kamakoti of IIT Madras argues that the traceability clause can be implemented with a few simple tweaks, but his proposals were countered by Prof. Manoj Prabhakaran of IIT Bombay, who argued that these tweaks not only have limited use but also endanger the privacy of all users.
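
To make the package analogy concrete, here is a minimal, purely illustrative Python sketch. It is not WhatsApp's actual protocol (which relies on the Signal protocol, not a shared symmetric key), and the helper names (`send`, `platform_view`, `recipient_view`) are hypothetical, invented for this example. It simply shows the idea of an originator tag travelling in cleartext metadata while the message body stays encrypted.

```python
# Illustrative sketch only: Fernet (symmetric) stands in for real
# end-to-end encryption. The point is the separation between an opaque,
# encrypted body and cleartext routing metadata carrying an originator tag.
import hashlib
from cryptography.fernet import Fernet

chat_key = Fernet.generate_key()   # in a real E2E system this key is negotiated
cipher = Fernet(chat_key)          # between users and never held by the platform

def send(sender_id: str, plaintext: str) -> dict:
    """Package a message: encrypted body plus a cleartext originator tag."""
    return {
        # the 'label on the package' -- readable without the chat key
        "originator_tag": hashlib.sha256(sender_id.encode()).hexdigest(),
        # the sealed contents -- unreadable to the intermediary
        "ciphertext": cipher.encrypt(plaintext.encode()),
    }

def platform_view(envelope: dict) -> str:
    """What the intermediary sees without the key: the tag, not the content."""
    return (f"tag={envelope['originator_tag'][:12]}..., "
            f"body=<opaque {len(envelope['ciphertext'])} bytes>")

def recipient_view(envelope: dict) -> str:
    """Only a holder of the chat key can read the body."""
    return cipher.decrypt(envelope["ciphertext"]).decode()

msg = send("user@example.com", "hello")
print(platform_view(msg))    # the platform learns who sent it, not what was said
print(recipient_view(msg))   # -> 'hello'
```

Whether such a tag remains meaningful once messages are forwarded, copied or spoofed, and what it costs in privacy, is precisely where critics like Prabhakaran say the simple version of the idea falls short.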

A group of 29 academic and security experts from across the globe had written to Union IT minister Ravi Shankar Prasad last year, saying there was no way to create “exceptional access” for some without weakening the security of the system for all. “For accessing information, law enforcement agencies should explore the existing tools under the law such as Section 69 of the IT Act, 2000, and also examine the provisions of the agreements under the Clarifying Lawful Overseas Use of Data Act in the US,” says Mishi Choudhary, legal director of the New York-based Software Freedom Law Center.

The rules also call for a three-tier regulation mechanism for OTT platforms such as Netflix and Amazon Prime: self-regulation by publishers, a self-regulating body headed by a retired judge and empanelled with the government, and an inter-departmental committee. The committee, headed by a joint secretary of the I&B ministry, can hear complaints against OTT platforms and will be empowered to delete or modify content to prevent incitement of a cognisable offence relating to public order.

Most critics claim that this “oversight mechanism” run by a group of bureaucrats is nothing but a permanent strategy of the government to censor content on OTT platforms. They suggest the inter-departmental body be replaced with an independent quasi-judicial body with diverse and broad representation, supported by a clear set of benchmark standards. While Gurumurthy says the IT guidelines should have excluded online publishers and publishers of curated content, Subimal Bhattacharya believes the provisions meant for OTT platforms may not stand up to judicial scrutiny.

What happens next?

With the new rules coming into force on May 26, the government asked all digital platforms to submit a compliance report within 15 days. According to government sources, several of them have demanded six months to one year for comprehensive compliance. After a “final warning” from the Centre, Twitter has also consented to comply with the guidelines, though it has not specified a timeline. Apart from WhatsApp's appeal in the Delhi High Court, there are at least half a dozen petitions against the IT rules pending in the Supreme Court and high courts. The government has not divulged its next course of action, but Prasad has categorically stated that foreign-based tech companies doing business in India will not go “unpunished indefinitely”.

Critics have questioned the government's haste in enforcing these regulations while the tabling of the personal data protection (PDP) bill in Parliament is still in limbo. The draft of the PDP bill should have been examined by the parliamentary standing committee on IT headed by Tharoor, but the government has sent it to a newly constituted select committee headed by BJP MP Meenakshi Lekhi. Though the deliberations of the Lekhi committee ended in December 2020, the final report has not been submitted yet. Prasad says the government will try to bring the data protection bill in the next session of Parliament.

Till then, the jousting will continue. Both the government and digital intermediaries have missed no opportunity to claim that they are the bigger advocates of the citizens' right to privacy, which, incidentally, has been declared a fundamental right by the Supreme Court. But the user must not become a pawn in this game of one-upmanship about who controls the remote in this realm of ‘virtual’ discourse.
