India tightens AI rules: Social media to label deepfakes, harmful posts to be removed within 3 hours - PratapDarpan

India tightens AI rules: Social media to label deepfakes, harmful posts to be removed within 3 hours


India is updating its AI rules to curb the creation and spread of "synthetically generated information", be it audio, visual or audio-visual, putting the onus on social media platforms to properly label such content and remove any objectionable content within three hours.

The Indian government has introduced strict rules for AI-generated and deepfake content, putting more responsibility on social media platforms.

The Indian government is tightening its grip on AI-generated and deepfake content by making it mandatory for social media platforms to remove objectionable content within three hours and to label AI-generated content. Under the amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, platforms will have to ensure that any content created using AI tools is clearly and prominently labelled, while users will also need to declare whether the content they upload has been created or altered using AI.


The new rules also require intermediaries to remove certain categories of unlawful or harmful content within three hours and ensure that AI-generated or manipulated content is clearly disclosed to users. The government is also mandating that social media platforms deploy tools and verification mechanisms to check user declarations, and hold them responsible if AI-generated content is published without proper disclosure. According to the government, the new rules aim to prevent misuse of AI and deepfakes online, while pushing platforms to take faster action on harmful or misleading content.

What are the new rules?

The new rules, notified on February 10 and scheduled to come into effect from February 20, formally bring what the government calls "synthetically generated information" (SGI) under India's digital governance framework. This covers AI-generated or AI-altered audio, video, and visual content that appears authentic and may be difficult for users to distinguish from real-world content. The move comes amid growing concerns over the misuse of deepfakes, impersonation, misinformation and synthetic media for fraud, harassment and other illegal activities.

Mandatory labeling and traceability

A central pillar of the latest amendments is mandatory labeling. The Centre has directed social media platforms and other digital intermediaries that enable the creation or dissemination of synthetic content to ensure that such content is clearly, prominently and unambiguously labeled as AI-generated. Platforms are also required to embed persistent metadata or technical provenance markers, such as unique identifiers, to help trace synthetic content back to the originating platform or system, wherever technically possible. Importantly, intermediaries are prohibited from removing these labels or metadata, or allowing them to be tampered with.

User declarations and platform accountability

To ensure compliance, social media platforms have also been directed to seek user declarations at the time of upload, asking whether the content being posted has been artificially generated or altered using AI. Platforms are expected to deploy appropriate and proportionate technical measures, including automated tools, to verify the accuracy of these declarations. The rules state that failure to conduct due diligence in labeling and verification could expose a platform to liability under the amended framework.

Timeline for content moderation

Along with disclosure requirements, the government has also reduced timelines for content moderation. In some cases, social media platforms now have to act on valid orders or user complaints within three hours, down from 36 hours previously. Other response timelines have also been reduced, from 15 days to seven days and from 24 hours to 12 hours, depending on the nature of the breach.

AI-generated content falls squarely under illegal-content rules


Notably, the latest amendments clarify that AI-generated content used for illegal activities will be treated like any other illegal content. Platforms are required to prevent the use of their services to create or disseminate synthetic content involving child sexual abuse material, pornographic or obscene material, impersonation, false electronic records, or material involving weapons, explosives or other illegal activities.

Safe harbour protection maintained for compliant platforms

Additionally, the government has assured platforms on safe harbour protection. The notification clarifies that intermediaries will not lose protection under Section 79 of the IT Act for removing or restricting access to synthetic content, including through automated tools, as long as they follow the rules.

– ends
