On October 22, the Government of India proposed sweeping amendments to the IT Rules, 2021, focusing on synthetically generated information (SGI).
The draft amendments mandate that AI-generated content on social media platforms be labelled or embedded with a permanent, unique identifier. Platforms will also be required to obtain a user declaration confirming whether the uploaded content qualifies as SGI.
Can mandatory labelling requirements keep pace with synthetic content? MeitY's answer is to define AI-generated information and mandate prominent labels covering 10% of the visual area or audio duration, a clear choice of transparency over prohibition.
The argument is sound. Yet, transparency hinges on a key assumption: users will consume labelled content rationally and make informed judgements. Anyone who has seen misinformation run rampant on WhatsApp knows how optimistic that assumption is.
The real test: will people notice these labels, or will they fade into digital wallpaper, present but ignored? From cigarette warnings to social media fact checks, labels tend to lose impact over time.
The draft's boldest provision, Rule 4(1A), obliges significant social media intermediaries to collect user declarations on SGI and verify them through 'reasonable and appropriate technical measures'. This is promising, but it can create a compliance nightmare.
The technical hurdle is steep: platforms must detect AI-generated content across formats, languages and manipulation types, yet current detection tools are unreliable.
The draft amendments limit verification to public content, sparing private messages. This means platforms must distinguish public from private content at upload, yet privately shared material can go public via screenshots or forwards.
Mandating platforms to verify user declarations puts them in a bind. Under-enforcement risks losing safe-harbour protections and incurring liability, while over-enforcement can censor legitimate content. The rule states platforms will be deemed to have failed due diligence if they 'knowingly permitted, promoted, or failed to act upon' unlabelled synthetic content. Yet, the word 'knowingly', in algorithmic content moderation contexts, is murky at best.
Link: https://economictimes.indiatimes.com/opinion/et-commentary/click-label-share-how-india-wants-to-tame-ai-generated-content-on-social-media/articleshow/124790755.cms?from=mdr