IT Act ill-equipped to address deepfake menace

Deepfakes are fast emerging as one of the biggest menaces, with the capacity to upset normal functioning in many fields, and experts warn that existing IT legislation may simply not have sufficient teeth to tackle the growing problem.

  • Last Updated: May 17, 2024, 14:11 IST

Deepfakes are fast emerging as one of the biggest menaces, with the capacity to upset normal functioning in many fields, and experts warn that existing IT legislation may simply not have sufficient teeth to tackle the growing problem. The reason is simple: current provisions cannot prevent the creation and circulation of deepfakes, a technological development that lawmakers anywhere in the world could not have foreseen.
Experts have also said, The Economic Times reported, that the only way to deal with this problem is for policymakers to step in urgently and minimise the tremendous damage deepfakes can cause.
A deepfake is an outcome of AI put to mostly destructive use: a convincingly fabricated image or video of an individual, created to dupe, mislead or persuade others.
The extent to which deepfakes can be deviously used became starkly apparent recently when Rashmika Mandanna, known for her roles in films such as “Pushpa” and “Mission Majnu”, appeared in a viral video that the actor said was the product of deepfake technology.
Following the incident, the Delhi Police filed an FIR under Sections 465 (forgery) and 469 (harming reputation) of the Indian Penal Code, 1860, besides Sections 66C (identity theft) and 66E (privacy violation) of the Information Technology Act, 2000.
Siddharth Deb, manager – public policy, TQH Consulting, said, “Criminal provisions under the IT Act and the IPC only partially address the harms which arise from deepfakes. Policymakers must identify and reduce the psychological impact on victims.”
Shaken by the Rashmika episode, minister of state for IT Rajeev Chandrasekhar said that victims must promptly file police complaints and “avail remedies provided under the Information Technology rules.”
Not leaving the issue to reactive complaints, the government shot off an advisory to social media platforms warning them that they may lose ‘safe harbour’ immunity under the IT Act if they fail to remove reported deepfake content within 36 hours of the complaint.
Unfortunately, all these steps are reactive, kicking in only after the spread of deepfake content that can do immense harm within minutes of circulation.
“Deepfakes and AI-generated misinformation may do damage at the time they are spread, which cannot be undone,” remarked Jaspreet Bindra, founder and managing director of IT consulting firm The Tech Whisperer Ltd, UK. Bindra added that around 95% of deepfakes are pornographic in nature.
Incidentally, all this comes against the backdrop of growing concern over how to effectively monitor and regulate the use of AI. In fact, in a recent televised interaction with British Prime Minister Rishi Sunak, the world’s richest person and an AI enthusiast himself, Elon Musk, described AI as the most “destructive force” ever invented.
“There is a clear need for a law on AI to govern the complexities relating to AI and related applications. The law needs to delineate legal responsibilities in AI-related incidents and provide accountable frameworks to protect individuals in cases of harm,” said Arjun Goswami, director – public policy at law firm Cyril Amarchand Mangaldas.
Goswami also pointed out that deepfakes have immense potential to harm business interests by perpetrating fraud or even corporate espionage. He thinks they could even be used to manipulate stock markets by impersonating big, influential investors: if false information is spread about a company, it could erode its stock market value or cause business losses on the ground.
Sarayu Natarajan, founder of Aapti Institute, a research organisation focused on the intersection of technology and society, told the newspaper that policymakers might consider regulating the production of information. “Efforts should be made to monitor the coordinated behaviour of those who provide video alteration/manipulation services, through to content production and dissemination,” she said.
“India must tussle with the differentiated and exceptional impact these technologies have on individual dignity and societal harmony quickly,” she remarked.
She also said that policymakers must consider civil remedies for victims, ways to mandate watermarking of content made using deepfake technology, non-price access controls on the use of the technology, and deeper scrutiny of recommender algorithms.
A few experts also brought up the need for India to collaborate with other countries to control the menace. For example, the UK is exploring how to infuse transparency and accountability by labelling deepfake videos or images.
Bindra said that in future, sensitisation on deepfakes could become part of companies’ prevention-of-sexual-harassment and cybersecurity training sessions.
Goswami indicated that companies could adopt content verification protocols and invest in deepfake detection software.
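To make the idea of a content verification protocol concrete, here is a minimal sketch of one simple building block: checking a media file against a cryptographic digest published by its original source. (This is an illustration only, not any specific company's protocol; real provenance schemes also embed signed metadata in the content itself.)

```python
import hashlib


def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so that large video files do not have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: str, published_digest: str) -> bool:
    """True only if the local copy is bit-for-bit identical to the
    version whose digest the original source published."""
    return file_sha256(path) == published_digest.lower()
```

Any edit to the file, including a deepfake substitution, changes the digest and fails verification; the scheme's weakness is that it only proves a file matches a trusted original, so it cannot flag a deepfake that was never compared against one.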

Published: November 14, 2023, 12:12 IST