India mandates government approval for AI models; issues clarification after uproar

  • 3 min read

On March 1, the Indian Ministry of Electronics and IT issued a stern advisory to AI platforms and social media intermediaries, emphasising the need to obtain permission before launching AI products in the country. However, after the furore that followed, the Minister of State for Electronics, Rajeev Chandrasekhar, clarified that only ‘significant’ tech firms must seek permission, while startups are exempt.

The advisory, issued by India’s Ministry of Electronics and IT on Friday, also calls on tech companies to ensure that their AI services or products do not exhibit bias or discrimination, or compromise the integrity of the electoral process. With India’s national elections scheduled for the coming months, AI-generated content could play a significant role in shaping them.

Another important clause in the advisory requires AI companies to submit an action taken-cum-status report to the Ministry within 15 days of receiving the advisory. The government has also asked companies to label potentially misleading content with unique identifiers and metadata to facilitate the tracking of misinformation or deepfakes and identify the intermediary involved in their dissemination.

Failure to comply with the advisory could result in legal consequences for intermediaries or platforms, including prosecution under the IT Act and other relevant criminal statutes.

Moreover, companies hosting unreliable or under-testing AI platforms must seek government permission and label the platform as ‘under-testing’. Additionally, they must explicitly communicate the possible fallibility or unreliability of the platform’s output to the public, possibly through a ‘consent popup’ mechanism.

The new advisory drew widespread criticism, especially from the Indian startup sector, and caught many industry executives off guard.

On March 4, Rajeev Chandrasekhar tweeted that the “Advisory is aimed at the Significant platforms and permission seeking from Meity is only for large platforms and will not apply to startups.”

He added that the advisory is aimed at untested AI platforms, and that the process of seeking permission, labelling, and obtaining consent should be seen as an insurance policy for companies in case things go wrong.

However, these clarifications raise further questions: what constitutes a ‘significant’ platform? And since most of the ‘significant’ AI platforms operating in the country are foreign, how will they be regulated? The clarification is silent on both counts.

In another tweet, the minister said, “There are legal consequences under existing laws (both criminal and tech laws) for platforms that enable or directly output unlawful content.”

The minister further stated that the sole objective of the advisory is to make tech firms aware that platforms have clear existing obligations under IT and criminal law, and that the best way to protect themselves is via labelling and explicit consent.

How the Indian government plans to regulate AI remains to be seen. For now, the advisory appears to have created more confusion and fear than it has answered questions.

In the News: Microsoft launches Copilot for Finance in public preview

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
