India’s government has established a formal regulatory framework for AI-generated content, including deepfakes and synthetic audio, by amending the country’s IT intermediary rules. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, were notified via gazette notification G.S.R. 120(E), signed by Joint Secretary Ajit Kumar, and will take effect on February 20.
What Defines AI-Generated Content
The rules define “synthetically generated information” (SGI) as audio, visual, or audio-visual content created or altered using a computer resource that appears genuine and depicts people or events in a misleading way. However, routine edits such as color correction, noise reduction, compression, and translation are exempt, provided they do not distort the original meaning. Research papers and training materials are also excluded.
Increased Compliance for Social Media Platforms
Social media platforms, including Instagram, YouTube, and Facebook, will bear the brunt of the new regulations. Under Rule 4(1A), platforms must ask users whether content is AI-generated before it is uploaded. They must also deploy automated tools to verify the content’s format, source, and nature. Content flagged as synthetic will require a visible disclosure label, and platforms could be deemed to have failed in their due diligence if they allow violating content to remain online.
Faster Response Times
The new rules significantly shorten response times for lawful orders. Platforms now have three hours to act on certain requests, down from 36 hours previously. Timeframes for other orders have also been cut, from 15 days to seven days and from 24 hours to 12 hours.
The rules also connect synthetic content to existing criminal law. SGI involving child sexual abuse material, obscene content, false electronic records, explosives-related material, or deepfakes misrepresenting a person’s identity or voice now falls under the Bharatiya Nyaya Sanhita, the POCSO Act, and the Explosive Substances Act. Platforms are required to warn users at least every three months about the penalties for misusing AI content, but the government has assured intermediaries that compliance with these rules will not affect their safe harbor protection under Section 79 of the IT Act.
Frequently Asked Questions
What is considered “synthetically generated information”?
It covers any audio, visual, or audio-visual content created or altered using a computer resource that looks real and depicts people or events in a way that could be passed off as genuine.
What happens if a platform doesn’t comply with the new rules?
If a platform knowingly allows violating content to remain online, it is deemed to have failed its due-diligence obligations.
Are there any exceptions to the labeling requirement?
Routine edits (color correction, noise reduction, compression, translation) are exempt, as long as they do not distort the original meaning. Research papers, training materials, PDFs, presentations, and hypothetical drafts using illustrative content are also excluded.
As these regulations take effect, it remains to be seen how effectively social media platforms will be able to implement the new labeling and takedown requirements, and what impact this will have on the spread of AI-generated misinformation in India.
