For India, the goal is clear: artificial intelligence (AI) is a resource to promote economic development. Therefore, its approach favors innovation. The Indian government recognizes the importance of regulating the technology appropriately. However, it plans to do so in a way that does not hinder the pace of AI development and allows the country to benefit from it and remain internationally competitive.
This approach was set out by Narendra Modi's government cabinet before the Indian elections that took place between April and June. Even after his re-election, the Prime Minister's stance remains unchanged. While there are no specific laws dealing with AI yet, the country is awaiting the passage of the Digital India Act, which aims to comprehensively regulate online services.
To understand how India is approaching the regulation of AI, we can look at what this proposed law entails. But before we take a look into the future, let us take a step back to provide some context.
The most fundamental government document on AI in India is the National AI Strategy, which was released in June 2018. It aims to provide a solid framework for future AI regulation in India. It emphasizes concepts such as 'responsible AI', which is linked to respecting user privacy and ensuring safety guarantees.
In February 2021, the Principles for Responsible AI were published. This was another deliberate step to create an ethical environment for AI development across various sectors. Even at this early stage, the huge impact of the technology was anticipated, and a revealing conclusion was drawn: failure to manage AI systems responsibly could have negative economic consequences. The Indian government thus recognized the economic importance of ethical behavior in AI development.
Shortly afterwards, in August 2021, the Operational Principles for Responsible AI were published. They build on the earlier principles, emphasizing the need for regulatory intervention and encouraging the development of AI systems that take ethics into account at the design stage.
To date, there is no specific law, only a set of principles. However, certain legal frameworks do regulate the use of AI in particular sectors. In the financial sector, for example, requirements for AI and machine learning systems have been in place since 2019. The use of these applications is also closely monitored in other high-risk areas such as healthcare. This mirrors the European regulation, in which the use case of the tool determines its risk level.
Comprehensive regulation
However, the document that is most likely to impact the Indian AI sector today is the Digital India Act 2023. This text, whose enactment has been postponed until after the elections, is expected to define high-risk areas for AI systems and restricted areas for companies using the technology in consumer applications.
The Digital India Act 2023 aims to outline a framework for the country's online ecosystem that automatically embraces emerging technologies such as artificial intelligence. The aim is to create a safe digital space for users. To achieve this, certain measures are proposed.
One measure is the central role of online safety. Content moderation will be necessary to prevent cyberbullying, hate speech and misinformation. It is also planned to classify intermediary platforms such as social networks or e-commerce portals so that stricter rules can apply to certain types of intermediaries than to others.
In the context of artificial intelligence, this could mean that providers of different sizes, or those marketing different tools, could be subject to different restrictions. In addition, the Indian government is considering revising the concept of 'safe harbor', which allows platforms such as Instagram or Twitter to avoid liability for the content published on them. Should this scenario materialize, AI providers could bear some responsibility for the outputs generated by their models.
The Digital India Act 2023 will give users rights, such as the right to be forgotten, while providing greater data protection and privacy safeguards. All of this has profound implications for the development and use of AI systems.
The document is also expected to contain specific measures to promote the development of new technologies such as blockchain and, of course, AI, with the intent that this take place in a safe environment.