Introduction
In the digital era, identity is no longer confined to a person's physical presence: the faces and voices that once expressed individuality have become sensitive data capable of misuse. The recent controversy involving actor Manoj Bajpayee brings this unsettling reality into focus. A video circulated on social media portrayed him endorsing a political party in the Bihar Assembly elections, and within hours it had been shared widely, appearing credible to the public at large. The actor, however, clarified that the clip was an AI-generated deepfake, digitally manipulated from an old advertisement he had shot for a streaming platform: "This is fake, patched, and misleading." The incident revealed more than a personal grievance; it exposed how thin the boundary between truth and fiction has become.
The Legal and Moral Extent of Identity Misuse
This case raises a crucial question of ownership and consent: can one's face or voice, intrinsic parts of one's personality, be freely replicated and used by another without prior consent? Although Indian jurisprudence recognises the right of publicity as a facet of intellectual property rights, its acceptance within the right of privacy is still at a nascent stage in India.
Because the right of publicity is treated as a facet of the right to privacy, it matters that the constitutional debate over privacy itself was settled only in 2017. The right of publicity has therefore seen limited development, and a substantial portion of the precedent has been laid down by the High Courts. The Delhi High Court, in ICC Development (International) Ltd. v. Arvee Enterprises (2003), held that a person's identity has commercial value and that any unauthorised exploitation of it violates their rights. The Supreme Court, in the landmark case of Justice K.S. Puttaswamy v. Union of India (2017), expanded the scope of privacy to include personal autonomy and control over one's digital personality. Enforcement mechanisms, however, remain porous: algorithmically generated deepfakes can mimic not only a person's appearance but also their voice, mannerisms, and emotions. A legal system designed primarily for tangible infringements now grapples with violations occurring in the intangible digital realm.
Personality Rights and Broader Implications Beyond the Celebrity Lens
Personality rights protect individuals from the unauthorised commercial or representational use of their likeness. This protection blurs, however, when it comes to AI-generated content. When an algorithm synthesises Bajpayee's likeness and voice to deliver political messages, the injury is twofold: first, defamation of reputation, as the content falsely attributes opinions to him by suggesting affiliation with a political party; and second, infringement of publicity rights, as his image and voice are exploited without consent, thereby misleading the public.
The Information Technology Act, 2000 deals with intermediary liability and cyber offences but lacks specific provisions on AI impersonation. Section 66D of the Act criminalises cheating by personation using a computer resource, yet in the case of a deepfake both intent and source must first be established before the court. The Digital Personal Data Protection Act, 2023 offers partial protection by restricting the processing of personal data without consent. Yet when identity itself becomes "data", enforcement demands not only legal clarity but technological literacy.
It would be a mistake to view the Bajpayee incident as an isolated assault on celebrity rights; the vulnerability it exposes is shared by every individual on social media. A student's photograph can be morphed into an obscene video, a professional's voice used for fraud, an activist's face attached to false propaganda. The danger, in other words, has been democratised: everyone's likeness is up for grabs. This challenges the traditional jurisprudence of consent. Consent in the digital age is not merely about agreeing to share data; it is about authorising its transformation and reproduction.
When Identity Becomes Intellectual Property
An emerging dimension of identity protection is the use of trademark law, especially for public figures whose voice or likeness carries commercial value. Celebrities worldwide have trademarked their names, signatures, and even voices to prevent misuse: Amitabh Bachchan has asserted strong rights over his baritone voice and image, Michael Jackson's and Elvis Presley's likenesses are trademarked, and Marilyn Monroe's image continues to be legally protected.
Under India's Trade Marks Act, 1999, even sound marks such as the Yahoo yodel or the ICICI Bank jingle are registrable, implying that, in theory, an actor's face or voice could also qualify for protection if used distinctively in commerce. Ultimately, such protection is about existence, not ego. For artists, voice and face are both livelihood and legacy; when technology replicates them without consent, it is not just a legal wrong but a violation of identity itself.
Deepfakes Undermine Democracy
The circulation of Bajpayee's video ahead of state elections shows that in a society where visual content drives political narratives, a single AI-fabricated clip can distort public opinion. The Election Commission of India has issued advisories on "synthetic media", but regulation without enforcement remains ineffective. The video was therefore more than a personal violation: it was a direct threat to electoral integrity, capable of creating false endorsements or defamatory propaganda in seconds. Bajpayee's case is a warning of how easily public trust can be hijacked.
The Way Forward
From a legal standpoint, India requires a robust framework to address deepfakes: explicit penalisation of unauthorised AI-generated impersonation, mandatory labelling of synthetic media by online platforms, and fast-track mechanisms for takedown and redressal. Courts must also adapt existing doctrines, granting injunctions against the dissemination of false content and damages for reputational loss, while platform accountability is strictly reinforced. From a technological standpoint, investment in deepfake-detection tools is essential, since machine-learning systems can identify the inconsistencies such content leaves behind. And from a societal standpoint, the conversation must shift from outrage to awareness: citizens must learn to question, verify, and demand authenticity.
Conclusion
The question "Who owns your face and voice?" is legal, ethical, and deeply personal. In theory, the answer is simple: you do. In practice, without stronger laws, awareness, and vigilance, that ownership can be stolen by AI. Manoj Bajpayee's deepfake controversy poses hard questions for lawmakers, artists, and citizens alike. As technology advances, so must our commitment to protecting identity, because the face and voice are not mere features; they are the very language of personhood. To own them, protect them, and preserve them is not just a matter of law; it is an act of dignity.