New Delhi [India], February 10: The Indian government has reduced the time social media platforms get to remove flagged content. Earlier, platforms had 36 hours. Now they have only three hours. The new rule applies when authorities flag content that breaks Indian law. The change was announced by the Ministry of Electronics and Information Technology, and the rules will take effect from February 2026. Officials said the step is needed because harmful content spreads very fast. Many users asked one question online: can platforms really act this quickly every time?

Why AI-Generated Content Is Under Close Watch

The government said AI-generated and synthetic content is becoming risky. Deepfakes look real and confuse people. Some videos misuse faces and voices. The new rules clearly define what synthetic content means: audio, video, or images created or altered by computers in a way that looks real. Normal editing and school projects are not included. Officials said AI content will be judged like any other illegal content. Many experts welcomed this. Some users commented that fake videos had already caused damage. They asked why such rules did not come earlier.


Mandatory AI Labels And Hidden Identity Marks Explained

Under the new IT rules, platforms must clearly label AI-generated content. Users should know what is real and what is not. Where feasible, platforms must also embed hidden metadata to trace where the content came from. This helps identify the creator later. The ministry said tools that create or share synthetic content must follow this rule. Many creators reacted online. Some feared extra work; others supported the move, saying clear labels protect ordinary users. A question often asked was simple: if content is clearly marked, will people still believe fake stories?

History Behind India’s Digital Content Laws

India first introduced IT rules in 2021 to regulate online platforms. At that time, fake news and harmful posts were rising fast. Over the years, social media grew faster than the law. AI tools made things more complex. Deepfake cases increased during elections and public debates. The government gradually updated the rules to match the technology. Officials said this amendment is part of that journey. Users online recalled older cases of viral fake videos. Many said the rules came late but reflect lessons learned.

Public Reaction And What Platforms Must Do Next

Public reaction has been mixed. Some users praised the strict deadline, saying three hours can save reputations and lives. Others asked whether small platforms can manage such speed. Companies like X and Instagram will need better systems. Automated tools must block illegal AI content such as fake documents and abuse material. People online also worried about misuse of power: who decides what is illegal? The government said due process will be followed. The rule has sparked many debates, but one thing is clear: AI content is no longer exempt from fast action.