DEEPFAKE AND AI CRIMES IN INDIA: THE URGENT NEED FOR A STRONG LEGAL FRAMEWORK
- Ritik Agrawal
Susmita Chatterjee
National Forensic Sciences University
Editor: Sakshi Soni

ABSTRACT
Artificial intelligence (AI) has given rise to deepfake technology, one of its most dangerous by-products. It allows people to create highly realistic fake videos, photos, and audio that can fool even experts. India, where a massive digital population uses digital platforms and AI tools, has witnessed a steep increase in deepfake-related crimes, including sexual harassment, political misinformation, financial scams, and cyberbullying.
The current legal framework, consisting mainly of the Information Technology Act, 2000, and a few provisions of the Bharatiya Nyaya Sanhita, 2023, fails to address the challenges created by AI-generated content. This article examines how deepfake crimes are increasing in India, analyses the loopholes in the existing laws, highlights major incidents, compares India's position with that of other countries, and argues for a comprehensive legal framework to handle these issues.[1][2]
INTRODUCTION
The rise of artificial intelligence has transformed digital life across the world. Among its many applications, deepfakes have become one of the most powerful and most concerning. A “deepfake” refers to an AI-created photo, video, or audio recording in which a person’s face, body, or voice is digitally altered to create a highly realistic but false version of them. Although the technology was originally developed for entertainment and research purposes, its misuse has increased rapidly, and it has become a tool for criminal activities such as misinformation, blackmail, financial fraud, and political manipulation. India, with more than 820 million internet users, a thriving social media ecosystem, and widespread access to smartphones, is particularly vulnerable to this technological threat.
In the past three years, Indian cyberspace has witnessed several disturbing cases: actresses targeted through sexually explicit deepfakes, political leaders misrepresented during elections, innocent women extorted using morphed videos, and corporate employees losing lakhs of rupees through AI-generated voice fraud.
While Indian laws punish offences such as identity theft, defamation, and obscenity, they were drafted long before deepfake technology existed. As a result, these laws fail to address the speed, scale, and anonymity of these new types of offences, leaving a serious gap in the legal system.
This article argues that India urgently needs a specific legal framework to address AI and deepfake crimes. Without proper remedies, victims will continue to suffer, while offenders continue to take advantage of these technological loopholes.
WHAT ARE DEEPFAKES AND HOW DO WE DEAL WITH THEM
Deepfakes are a type of synthetic media created using AI techniques such as deep learning and neural networks, especially Generative Adversarial Networks (GANs). These technologies create highly realistic but completely fabricated audio, video, or image content by digitally altering a person’s likeness, voice, or expressions. Unlike ordinary photo or video editing, deepfakes can mimic a person’s facial movements, voice patterns, and emotions so convincingly that they are very hard to distinguish from real content.

Deepfake technology works by training AI systems on large datasets of real images or audio recordings of a target individual. Once trained, the AI can overlay or recreate that person’s identity in fabricated media. While this technology has legitimate uses in cinema, education, accessibility tools, and virtual simulations, its misuse has grown quickly, especially in the absence of regulatory safeguards.
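The adversarial training loop behind GANs can be illustrated with a deliberately tiny sketch, using single numbers in place of images. Everything here (the affine “generator”, the logistic “discriminator”, the learning rate, the toy target data around 4.0) is an illustrative assumption, not a real deepfake pipeline; it only shows the generator-versus-discriminator dynamic described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: an affine map from noise z to a sample (stand-in for a deep network).
g_w, g_b = 0.1, 0.0
# Discriminator: logistic regression on a scalar sample (stand-in for a classifier).
d_w, d_b = 0.0, 0.0
lr = 0.05

for step in range(2000):
    x_real = rng.normal(4.0, 0.5)      # "real" data clusters around 4.0
    z = rng.normal()
    x_fake = g_w * z + g_b             # generator's current forgery

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    d_w += lr * ((1 - p_real) * x_real - p_fake * x_fake)
    d_b += lr * ((1 - p_real) - p_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(d_w * x_fake + d_b)
    grad_x = (1 - p_fake) * d_w        # gradient of log D(x) w.r.t. the sample
    g_w += lr * grad_x * z
    g_b += lr * grad_x

# After training, the generator's outputs have drifted toward the real data.
fake_mean = np.mean([g_w * rng.normal() + g_b for _ in range(200)])
print("generator now produces samples near the real data mean:", round(fake_mean, 2))
```

The same tug-of-war, scaled up to millions of parameters and trained on images or audio of a target person, is what makes deepfake output progressively harder to distinguish from genuine media.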
Tackling the problem of deepfakes requires a combined approach of technology, law, and public awareness. On the technical side, researchers are developing AI tools that can detect unusual facial expressions, eye movements, voice patterns, and digital artefacts. Companies such as Meta, Google, and OpenAI have also initiated efforts to implement watermarking and detection systems to identify AI-generated content. However, technical measures alone are not enough, because deepfake creation methods keep improving very quickly.
As a result, strong legal measures are essential. The law should clearly define criminal responsibility for deepfake crimes, platform accountability, and how victims can be compensated and protected. At the same time, the public needs better digital awareness so that people can recognize and report deepfake content. In India, however, the legal response remains disjointed and largely reactive, acting only after harm has occurred instead of being forward-looking and preventive.
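The watermarking idea mentioned above can be sketched in a toy form: hide a known bit pattern in the least-significant bits of an image’s pixels at generation time, then check for it later. All names and the 8-bit tag here are illustrative assumptions; real provenance schemes (for example, the C2PA content-credentials standard or robust statistical watermarks) are far more sophisticated and tamper-resistant than this sketch.

```python
import numpy as np

# Hypothetical 8-bit watermark tag stamped into every generated image.
WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the least-significant bits of the first 8 pixels."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | WATERMARK  # clear each LSB, then set it to the tag bit
    return out

def has_watermark(image: np.ndarray) -> bool:
    """Check whether the first 8 pixels carry the expected LSB pattern."""
    flat = image.reshape(-1)
    return bool(np.array_equal(flat[:8] & 1, WATERMARK))

# A stand-in "generated image": 16x16 grayscale, 8-bit.
generated = np.full((16, 16), 128, dtype=np.uint8)
tagged = embed_watermark(generated)

print(has_watermark(tagged))     # the tagged copy is detected
print(has_watermark(generated))  # the untagged copy is not
```

The weakness of naive schemes like this one is also why the article’s point stands: re-encoding or cropping can destroy such a mark, so detection technology alone cannot substitute for legal duties on creators and platforms.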
THE RISE OF DEEPFAKE AND AI-RELATED CRIMES IN INDIA
India has witnessed a rapid increase in AI-related cybercrimes, especially those involving deepfakes. The most common forms include non-consensual sexual deepfakes, political misinformation, financial fraud, identity theft, and online harassment. Women and public figures face particularly high risk because their photos, videos, and voices are widely available online, making their data easy for these AI systems to misuse.
In 2023, several Indian actresses became victims of viral deepfake videos that falsely portrayed them in explicit situations. These clips spread across social media before the platforms could remove them, causing serious and long-lasting harm to the victims’ reputations. Similarly, during election periods, fake video and audio recordings of political leaders have been circulated to confuse and mislead voters, threatening the fairness of the democratic process.
Another growing problem is AI voice-cloning fraud. In these cases, criminals clone the voice of a senior official or a family member to trick people into sending money. There have been many incidents in which Indian employees and business owners lost lakhs of rupees after receiving AI-generated phone calls that sounded exactly like someone they trusted. This kind of crime exploits not only technology but also people’s emotions and trust.
At the same time, investigating these crimes is becoming increasingly difficult. Cross-border data flows, the anonymity of the internet, and the use of encrypted platforms make it hard to identify offenders. Law enforcement agencies often struggle with a lack of technical skills, inadequate forensic tools, and the absence of the clear legal authority needed to deal with deepfake-related crimes effectively.
EXISTING LEGAL FRAMEWORK IN INDIA
Cyber offences in India are generally governed by the Information Technology Act, 2000 (IT Act), some of whose provisions can be applied indirectly to deepfake-related crime. For instance, Section 66D deals with cheating by impersonation using computer resources, while Section 66E relates to violations of privacy. Sections 67 and 67A criminalize the publication and transmission of obscene and sexually explicit material in electronic form.
However, the IT Act was enacted long before AI-generated synthetic media emerged. It neither defines nor explicitly recognizes deepfakes, nor does it address concerns related to AI training data, algorithmic accountability, or automated content creation. Moreover, the penalties prescribed under the Act are insufficient compared to the scale of harm and the speed with which deepfake content spreads.

The Bharatiya Nyaya Sanhita, 2023 (BNS), which has replaced the Indian Penal Code, includes provisions on identity theft, defamation, and sexual offences that may be invoked in cases involving deepfakes. Depending on the circumstances, sections dealing with impersonation, criminal intimidation, and obscenity can also be applied.
However, like its predecessor, the BNS does not specifically address AI-generated offences. The lack of clear definitions results in inconsistent application of the law and grants broad discretion to investigating authorities. As a result, victims often face delays, confusion over which authority has jurisdiction, and difficulties in presenting sufficient evidence.
MAJOR LEGAL AND PRACTICAL CHALLENGES
One of the biggest challenges in prosecuting deepfake crimes is attribution. Identifying the original creator of a deepfake is difficult because of VPN usage, encrypted platforms, and offshore servers. Secondly, evidentiary standards are unclear: courts currently lack uniform guidelines on the admissibility, authentication, and forensic examination of AI-generated media.
Platform liability poses another problem. Intermediaries enjoy safe-harbour protection under Section 79 of the IT Act, yet they frequently delay or fail to act on takedown requests. This forces victims to repeatedly approach courts or cyber cells, prolonging their trauma and suffering.
Moreover, very few victim-focused remedies are available. Existing laws mainly prioritize punishing offenders and provide little in the way of compensation, rehabilitation, or emotional support, especially for women targeted by deepfake abuse.
INTERNATIONAL LEGAL APPROACHES
Several countries have taken strict measures to control deepfake crimes. In the United States, states such as California and Texas have passed laws addressing non-consensual deepfake pornography and election-related deepfakes.
The European Union has enacted the Artificial Intelligence Act, which treats deepfake technology as a high-risk application. The law requires AI-generated content to be clearly labelled and holds creators and platforms accountable. Similarly, China requires AI-generated content to be labelled and its source to be traceable.
Compared with these countries, India’s legal framework remains weak and outdated. It depends only on general cyber laws, which are not strong enough to deal with the complexities of modern AI-driven crimes.[3][4]

THE NEED FOR A STRONG AND SEPARATE LEGAL FRAMEWORK IN INDIA
India urgently needs a separate and comprehensive law to deal with deepfakes and AI-related crimes. This law should clearly define what deepfakes are and make it illegal to create or share them in a harmful way.
The law should prescribe strict punishment for deepfake offences, require clear labels on AI-generated content, and impose clear legal duties on social media platforms to remove harmful content quickly. Trained cybercrime units and better-informed judges and lawyers are also needed. Most importantly, victims must receive compensation, emotional support, and fast legal remedies. Without these changes, deepfake crimes will continue to increase without any effective control.[5]
CONCLUSION
In India, deepfake crimes are a serious problem for individuals and for society, and the existing laws are not enough to address them. India therefore urgently needs strong AI-specific laws, along with greater awareness, better enforcement, and platform accountability, to protect people’s privacy and trust.
REFERENCES
[1] Information Technology Act, 2000
[2] Bharatiya Nyaya Sanhita, 2023