In a significant move toward curbing unlawful online content, the Government of India has introduced new regulations requiring social media companies to remove illegal material within three hours of notification. The directive, which takes effect on February 20, sharply reduces the previous 36-hour window and applies to the largest platforms, including Meta, YouTube, and X, as well as to AI-generated content.

The decision to shorten the takedown timeline has raised eyebrows, as the government has offered no official rationale for the expedited requirement. Critics argue the rule could enable censorship in the world's largest democracy, home to more than a billion internet users, and warn that faster removals risk eroding the right to free expression.

The Indian administration has previously leveraged existing Information Technology rules to require social media firms to remove content deemed illegal under national security and public safety laws, giving authorities wide-ranging control over social media operations.

Recent transparency reports indicate that more than 28,000 URLs were blocked on government orders in 2024 alone, illustrating the existing level of oversight. The stricter guidelines also require the labeling of AI-generated content, with additional obligations for platforms that produce or distribute such material: synthetic media must be clearly marked and traceable to prevent fraudulent use.

Digital rights organizations and technology experts question the feasibility of the new demands, arguing that the compressed timelines may push companies toward fully automated moderation systems, increasing the likelihood of legitimate content being wrongly removed. The Internet Freedom Foundation criticized the rapid-response requirement, asserting it would turn social media platforms into what it calls 'rapid fire censors'.

On AI-generated content, some analysts note that while the labeling initiative is a positive step toward transparency, the short deadlines could accelerate over-automation in moderation processes, diluting the human judgment essential to content review.

This tightening of controls has sparked a broader debate over the balance between maintaining public order and preserving the fundamental rights of free speech and expression online in one of the world's most active digital landscapes.