Staff Writer

New AI models will require government approval in India



The government of India has reversed its hands-off approach to AI regulation with a new advisory that requires companies to obtain government permission before releasing a new AI model. 


MeitY (the Ministry of Electronics and IT) said in the advisory that tech companies should ensure their products do not exhibit bias or threaten the integrity of the electoral process. The advisory also requires companies to clearly label the “possible and inherent fallibility or unreliability” of their AI models' output.


However, the advisory is aimed at untested AI platforms, and the permission requirement applies only to large platforms. It will not apply to startups, Rajeev Chandrasekhar, Minister of State for MeitY, clarified in a post on X on March 4.


India is heading into its 18th Lok Sabha election in the second quarter of this year to elect a new union government. The ruling BJP, led by PM Narendra Modi, has expressed concern over critical responses generated by some AI models.

For instance, last month a user posted an interaction with Google’s AI model Gemini in which it was critical of PM Modi. While Gemini noted that Modi is a popular mass leader who wins elections, it also said that his government has shown authoritarian tendencies by cracking down on dissent, curbing freedom of the press, and undermining democratic institutions.


This triggered a sharp reaction from the BJP government which warned Google that Gemini's response was in violation of the country's IT Rules.


“Government has said this before- I repeat for the attention of Google India. Our Digital Nagriks are not to be experimented on with unreliable platforms/algos/models. Safety and trust is the platform's legal obligation,” MoS Chandrasekhar said in a post on X on February 24, reacting to the Gemini response.


The Government of India has indicated in the past that it sees AI as crucial to the country's strategic goals, and it has been pushing companies, both large and small, to develop AI for social good.


While the government initially avoided strict regulations to speed up AI adoption, it has been working on a framework to regulate AI's implementation and minimize potential harms. MoS Chandrasekhar announced last month that the government will release the first draft of its regulatory framework for AI by June-July.


Last December, the European Union also approved a new AI Act, which will restrict the misuse of AI and require tech companies to be more transparent about the development and training of their AI models. The act will come into effect in 2025.

Several industry leaders, including OpenAI CEO Sam Altman, have also urged governments to regulate AI to check its unrestrained growth and potential for harm.


Before Chandrasekhar's clarification, some industry leaders had warned that seeking the government’s permission for every AI model would benefit large companies and stifle small AI startups, which would have to navigate several layers of permissions and red tape.


“You now need approval for merely deploying a 7b open source model. If you know the Indian government, you know this will be a huge drag!  All forms will need to be completed in triplicate and there will be a dozen hoops to jump through! This is how monopolies thrive, countries decay and consumers suffer,” said Bindu Reddy, CEO of Abacus AI, in a post on X on March 4.
