The advent of deepfake technology has raised concerns and prompted calls for countermeasures to detect and combat its misuse, as digitally manipulated videos and images can wreak havoc on individuals and societies.
Recent incidents in India, like that involving actor Rashmika Mandanna, underscore the urgency of addressing the potential consequences of deepfake proliferation through technical solutions and a comprehensive legal framework.
“The need of the hour is to focus on developing tools that swiftly identify and label deepfake content, prioritising public education to avoid immediate frenzies,” said cyber expert Rakshit Tandon.
“There are technical tools available. If a technology can be misused, then the same technology can be used to crack down on it. But the only problem is that it is not lightweight,” added Tandon, putting the onus on young entrepreneurs to come up with solutions that can easily detect deepfakes.
“For example, users can have an add-on to their browser so when they use a video content, it gives them an alert saying ‘this is 80% deepfake’.” He also said that misuse of technology is not new and narrated an incident from the past.
“Deepfake is recent, but we have seen technology being misused for impersonation, carrying out financial frauds and creating widespread misinformation. I remember a visual from Karachi, which was later circulated in India to create a frenzy against Muslims. It was later found to be fake,” he said.
“Now it is far more advanced with Artificial Intelligence (AI) where there is voice and face cloning,” he added.
Understanding deepfakes and the need of the hour
Synthetic media, also known as AI-generated media, involve the artificial production, manipulation, or modification of images, video, and audio using artificial intelligence. When used to mislead or deceive, they are often described as “deepfakes”, the “deep” stemming from deep learning software.
The technology has many positive applications, from adding effects to films and creating lifelike human avatars for customer service to generating fun social media clips. However, its use to create political misinformation, fake celebrity and revenge porn, and even to commit fraud has captured news headlines, raising concerns.
“Right now, the focus should be on developing more accessible tools that are designed to detect fraud and educate the masses about what is real and what is fake. Then, the law needs to be renovated in response to modern problems,” Tandon said.
He pointed out that the Indian Cyber Laws were created in 2000 and amended in 2008. “India is now among the leaders in digital revolution. You step out of the house and even the vegetable vendor is using the internet for payment. In this scenario, there is an urgent need for better laws. Those who are field practitioners should be included in the committee. Changes also need to be made in the Evidence Act,” he added.
He also said that though software to detect deepfakes is available, it is not easily accessible. “Companies need to purchase these at a heavy price and need licences. Such software should be as handy as converting a Word file to a PDF.

“The software community should come up with tools that are in-built so that the moment I browse an audio or video, it recognises it as a deepfake.”
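Tandon’s browser add-on idea, an alert that flags a clip as, say, “80% deepfake” while it plays, maps onto a fairly simple architecture. The sketch below is purely illustrative and assumes a hypothetical remote scoring service; the endpoint URL, response shape and alert threshold are invented for the example and do not refer to any existing product.

```typescript
// Illustrative content script for a hypothetical "deepfake alert" browser add-on.
// DETECTOR_URL, the response shape, and the 0.5 threshold are assumptions made
// for this sketch; no such public service is implied by the article.

const DETECTOR_URL = "https://example.org/deepfake-score"; // assumed scoring API

interface ScoreResponse {
  probability: number; // 0..1, assumed field: likelihood the frame is synthetic
}

// Capture the current frame of a video element and ask the service to score it.
async function scoreFrame(video: HTMLVideoElement): Promise<number> {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")?.drawImage(video, 0, 0);
  const blob: Blob = await new Promise((resolve) =>
    canvas.toBlob((b) => resolve(b as Blob), "image/jpeg")
  );
  const res = await fetch(DETECTOR_URL, { method: "POST", body: blob });
  const data: ScoreResponse = await res.json();
  return data.probability;
}

// Show the kind of alert Tandon describes, e.g. "this is 80% deepfake".
function showBanner(video: HTMLVideoElement, probability: number): void {
  const banner = document.createElement("div");
  banner.textContent = `Warning: this video scores ${Math.round(probability * 100)}% likely deepfake`;
  banner.style.cssText = "background:#c00;color:#fff;padding:4px;font-size:14px;";
  video.insertAdjacentElement("beforebegin", banner);
}

// Score each video on the page once it has loaded its first frame.
document.querySelectorAll<HTMLVideoElement>("video").forEach((video) => {
  video.addEventListener("loadeddata", async () => {
    const p = await scoreFrame(video);
    if (p > 0.5) showBanner(video, p); // assumed alert threshold
  });
});
```

In practice, the heavy lifting, running the detection model itself, would sit behind the assumed scoring service rather than in the browser, which is in line with Tandon’s point that such tools are not lightweight.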
Legal and evidential challenges
As these videos rapidly spread across platforms, locating the origin is akin to finding a needle in a haystack. The widespread dissemination amplifies the potential damage, making it imperative for investigators to navigate through the labyrinth of shares and reposts.
“Going to the original source of the deepfake video is the first challenge in the investigation because it is being circulated through social media en masse,” said Rohit Meena, Deputy Commissioner of Police (DCP), Shahdara, Delhi.
“Continuity analysis, essential in any legal investigation, takes on a heightened level of importance in the case of deepfakes. Frame-by-frame analysis in forensics becomes a meticulous process, requiring experts to check for tampering. The introduction of voice modulation adds another layer of complexity, demanding specialised software to discern the authenticity of the audio component. The technical intricacies involved make the investigation arduous,” he explained.
“The creation of these videos involves sophisticated AI for facial and voice replication. The challenge compounds with the use of VPNs from different countries, causing the IP address to toggle around. Investigating deepfakes involves obtaining information from numerous servers and hosts that carry fragments of the video. The legal process extends beyond borders, making cooperation between nations crucial.”
“These manipulative videos have the power to create rifts in society. While complaints often stem from common people, the use of sophisticated technology often targets celebrities, intensifying the need for stringent investigation,” he added.
He pointed out that they always check the videos for tampering and proceed only when that is ruled out. The DCP also said that unlike basic morphed images, which can be caught easily, a deepfake feels more real. “Here you are replicating everything from phonetics to voice to facial. More deep technology,” he concluded.
Former Bombay High Court judge Seshadri Naidu said that nothing is immune from judicial scrutiny. However, when it comes to interference in government policies, lawmakers do enjoy some liberty. “Unless they violate the constitutional mandate, courts will not interfere as a matter of preferences — one idea over another. Governments do have the power to indulge in policy experimentation,” he said.
Echoing the DCP’s concerns, he said, “This very word ‘virtual’ is Janus-faced: it has two meanings, ‘real’ and ‘unreal’. So, technologies like deepfakes make it very difficult for the courts to determine the nature of evidence.”

Tandon said proving a crime with evidence to convict the criminal is a challenge for law enforcement agencies.
“The culprit will say that he has not done it, it has been done by the AI so he will bring the code in a pendrive and say ‘you arrest the pendrive because I just created an algorithm to train the model, I never said that you go and create a deepfake of [actor] Kajol’. So, there are endless challenges. I think that is where the law machinery should start working for a strong deterrent.”
He said that there has to be a fear of the law and consequences for cyber-crimes and that those creating such AIs should act responsibly. “Tampering of evidence is also happening with the misuse of loopholes. The volume of tech fraud we are seeing is incredible. So, we are facing the heat where digital footprints of innocent people come into a crime and they become a victim and a part of the investigation just because someone has exploited their footprints.
“India has started taking measures. We have built National Cyber Forensics Law University and are giving importance to forensics. Also, so many forensic experts are being hired by the police, so there is a huge talent pool. There are young minds learning forensic science and digital and cyber forensics are part of their curriculum. But we need to open more labs to handle the chaos.”
On a brighter note, Naidu said, “Law starts off slowly but steadily. Once it starts off, the courts adopt them in myriad ways so as to catch up with the crime.”
Referring to the Anil Kapoor v. Simply Life India and Ors case, he explained that the Delhi High Court recognised the potential harm and granted protection to the actor’s persona and personal attributes. The court issued an ex parte injunction restraining sixteen entities from utilising the actor’s name, likeness, and image. It also barred them from employing AI tools for financial gain or commercial purposes.
Digital India without education
The digital landscape in India demands a shift towards cyber hygiene education. Imparting knowledge about online safety, responsible app usage, and recognising cyber threats should be a fundamental aspect of education at all levels.
“Education and awareness are key with anything new. There is a lack of education, awareness and information on cyber hygiene and security, which has become the real problem. If you look at the numbers, you see that more educated people fall victim to it,” Tandon said.
“Knowledge has to come the moment you start using the technology. You start using netbanking, but have you ever seen your banker educating you on how to use that app? You yourself download it, verify it and start pressing the buttons. You don’t even know about all 300 options available in your mobile banking. You just know how to send money, add a beneficiary, enter your password and that’s it. So that is where the problem is.”
Tandon also said that not many people take precautions like updating their Android or iOS devices to keep attackers at bay, despite the government repeatedly sending alerts.
“Back in 2000, you had 200-plus computer centres in one city teaching you MS Office, Windows, Word and Excel, but today have you seen anybody going to such centres?

“By imparting cyber hygiene and etiquette through our education systems, you are educating the whole society. You empower the whole society by educating the children, teachers, parents. So, through a school you can easily secure the cyber space. These courses should start from grade zero because today parents are giving the phone even to a three-month-old infant.”
Role of intermediaries
For digital evidence, admissibility, presentation and extraction go hand in hand in a court of law.
“What intermediaries (social media platforms and telecom partners) capture and produce to the law enforcement agencies matters. The channel also has to be transparent. So, if the intermediary is not going to provide me with the right information, how will I create the right evidence? They should capture the user, as in the original creator of a deepfake visual,” said Tandon.
Intermediaries help people create content, Tandon noted, and that role comes with obligations. “They have responsibility and are liable. If there is a notice served for any content which goes against the guidelines of the country, they have to immediately remove it and block it from everywhere. Intermediaries have to function according to the guidelines of the constitution.”
Explaining the mechanism, Tandon said that the agency sending a notice to an intermediary has to cite a valid reason.
“The first question they ask is whether you have a valid court order or a legal reason citing the sections that have been violated, and only then do they act.”
“The intermediaries have also created a mechanism so that it cannot be misused; it has to be in the national interest, and then we have Sections 67, 69 and 79 of the IT Act, where it is written how agencies can monitor or interfere and how they have the power to block and remove content.” Many intermediaries, like WhatsApp, which banned 47,15,906 Indian accounts in March alone, have also started issuing transparency reports.