From algorithms running our social media feeds to monitoring disease outbreaks and managing the cybersecurity of critical systems, artificial intelligence (AI) applications are present in every domain of our lives. Despite such a ubiquitous presence, laws regulating AI have yet to catch up.
Earlier this month, the European Union (EU) agreed to the proposed AI Act, which would become the world’s first comprehensive AI legislation upon its enactment, slated for sometime in 2025. While some quarters have pitched for self-regulation and guidelines, the proposed EU law assigns four risk-based classifications to AI applications: unacceptable risk, high risk, low risk, and minimal risk.
Meanwhile, in India, existing laws will be stretched to cover AI applications until AI-specific laws are enacted. The proposed Digital India Act, which is understood to replace the Information Technology (IT) Act, 2000 upon enactment, is expected to cover AI. Similar laws are in various stages of preparation in most of the world’s major economies.
Simran Singh, a lawyer who advises AI-centric start-ups, says that India is embracing artificial intelligence, albeit gradually. She says public-private partnerships around AI, such as the Reserve Bank of India’s (RBI) collaboration with McKinsey & Company and Accenture to use AI and machine learning (ML) to improve its regulatory supervision, are telling signs of such an embrace.
None of the 15 major economies of the world, including the 27-nation EU bloc, had a comprehensive law governing AI as of August, according to the Global AI Regulation Tracker run by the International Association of Privacy Professionals (IAPP). While China has come up with regulations in some domains, the EU has the most far-reaching legislation in the making. Elsewhere, a bill has been tabled in Canada, a draft policy has been prepared in Israel, and a series of federal frameworks and guidelines are in place in the United States.
The approach to regulating AI is thus quite diverse across the world, says Sanhita Chauriha of the Vidhi Centre for Legal Policy.
“While the EU is implementing the comprehensive AI Act, the United States is pursuing a decentralised model, allowing states to propose individual legislation. China has opted for sector-specific guidelines, tailoring regulations for distinct areas like finance and healthcare. The United States has emphasised a balance between innovation and regulation, collaborating with industry stakeholders,” says Chauriha, Fellow of Applied Law and Technology Research at Vidhi Centre.
The EU’s horizontal risk-based approach to regulating AI is ideal because it focuses not merely on high-risk areas but also on medium- and low-risk areas, so AI-based systems and applications are not all treated alike but are regulated in proportion to the risk involved, says cyber laws expert Karnika Seth.
Seth adds that merely having guidelines in place, instead of a horizontal risk-based approach backed by an institutionalised enforcement mechanism, would render such a regulatory framework a paper tiger.
What Is The EU’s AI Law?
The EU’s proposed AI Act has a four-tier risk-based classification for AI systems. It aims to ensure that AI systems are “safe” and “respect fundamental rights and EU values” while also encouraging investment and innovation.
The four classifications, by level of risk, are as follows:
Unacceptable Risk Systems
These AI systems run counter to EU values and are considered to be a clear threat to the fundamental rights in the 27-nation bloc. They are prohibited with limited law enforcement exceptions. These systems include:
- Biometric categorisation systems using sensitive characteristics, such as political, religious, or philosophical beliefs, sexual orientation, and race
- Untargeted scraping of facial images to create facial recognition databases
- Emotion recognition in the workplace and educational institutions, or manipulation of behaviour
- Systems assigning social scores based on social conduct or personal characteristics
For a strictly defined list of crimes, however, law enforcement exceptions have been made. Biometric identification systems may be used strictly for the targeted search of a person convicted or suspected of having committed a serious crime, according to the draft of the AI Act, which lists the following cases as eligible for such exceptions:
- Targeted searches of victims of abduction, trafficking, or sexual exploitation
- Prevention of a specific and present terrorist threat
- Locating or identifying a person suspected of having committed acts of terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, or environmental crime
High-Risk Systems
AI systems that pose significant potential harm to health, safety, fundamental rights, the environment, democracy, and the rule of law are classified as high-risk, according to the AI Act’s draft. These systems include:
- Certain critical infrastructures, such as in the fields of water, gas, and electricity
- Medical devices
- Systems to determine access to educational institutions or recruitment
- Certain systems used in the fields of law enforcement, border control, administration of justice, and democratic processes
For such systems, there will be mandatory compliance requirements and assessments of how these systems affect the rights of EU residents.
Low And Minimal Risk Systems
AI systems such as chatbots, certain emotion recognition and biometric categorisation systems, and generative AI tools fall under the low-risk classification and will face minimal oversight. They would, however, be required to disclose that AI-generated content, such as deepfakes, is artificial or manipulated and not real.
AI systems like recommender systems and spam filters are classified as minimal- or no-risk, and the proposed AI Act allows free usage of such tools.
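To make the tiered scheme concrete, here is a minimal sketch, not the Act’s legal text, of how a compliance team might model the four classifications in code. The RiskTier enum, the example-to-tier mapping, and the obligations() helper are hypothetical; the example systems and obligations are drawn from the draft as summarised above.

```python
from enum import Enum

class RiskTier(Enum):
    # Broad obligations per tier, paraphrased from the draft Act
    UNACCEPTABLE = "prohibited, with limited law-enforcement exceptions"
    HIGH = "mandatory compliance and fundamental-rights assessments"
    LOW = "transparency duties, e.g. labelling AI-generated content"
    MINIMAL = "free usage"

# Illustrative mapping of example systems to tiers, following the draft
EXAMPLE_SYSTEMS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "untargeted facial-image scraper": RiskTier.UNACCEPTABLE,
    "medical device": RiskTier.HIGH,
    "recruitment screening tool": RiskTier.HIGH,
    "chatbot": RiskTier.LOW,
    "deepfake generator": RiskTier.LOW,
    "spam filter": RiskTier.MINIMAL,
    "recommender system": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return the broad obligation attached to a known example system."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"

for name in ("medical device", "chatbot", "spam filter"):
    print(obligations(name))
```

The proportionality principle is visible in the mapping: two systems that both process personal data, say a medical device and a spam filter, land in very different tiers and carry very different obligations.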
The Need To Regulate AI
Artificial intelligence (AI) applications are changing the world faster than an average person can fathom. Earlier this year, ChatGPT stormed into mainstream consciousness after it began cracking bar exams and students started using it for university assignments. Generative AI tools are even being used to produce images that pass for real photographs of ongoing wars.
These examples are just the tip of the iceberg of challenges that the advent of AI has thrown at us. While existing laws can be stretched to cover AI-related affairs, they fall short of properly addressing the technology. Consider this: While existing copyright laws may cover issues arising out of AI-generated content or data protection laws might address privacy concerns, how do you address concerns about far-reaching AI tools that can potentially influence public opinion and social behaviour, scrape tons of data to build public databases, and lead to foreign election interference?
The current legal framework falls short of addressing such concerns. In the United States, a debate has been raging about whether TikTok, the popular Chinese app, has been used as a geopolitical tool. The propensity of even disinterested users to be flooded with polarising content around wars and conflicts, paving the way for trends like the one justifying Osama bin Laden’s attacks, has fuelled such concerns. The app’s AI-driven algorithm collects troves of behavioural data about users and has the potential to be used to manipulate social behaviour, which is one of the things explicitly prohibited in the EU’s proposed AI Act. While other platforms like Facebook and Instagram also collect such data, they are not controlled by China, a totalitarian state where the ruling Communist Party’s writ runs large and the line between the public and the private is blurry.
Powerful AI tools in the hands of authoritarian regimes, which can use them for crackdowns via facial recognition and social credit scores or for geopolitical gains, and even in the hands of non-state actors, are thus a concern for democracies like India and the United States.
Experts say such issues need to be addressed with AI legislation at the national level, and that a customary international AI regime needs to be developed, much like the existing international law regime.
Cyber laws expert Seth tells Outlook that what matters with AI tools is the intention behind their usage, which is why regulation cannot be left to self-regulation by the industry alone. She further says the rise of AI and new-age technologies like the metaverse has thrown up new challenges that can only be addressed properly with a dedicated national law.
“Crimes against women have been reported in the metaverse. Our laws prescribe a ‘person’ can be booked for a crime, but what if an AI-run bot or a robotic entity has been harassing, assaulting, or defaming one in the metaverse? How do you address that? How do we prosecute an action by an AI-driven robot? These are the questions that need to be addressed with AI-centric legal frameworks,” says Seth, the founder of the law firm Seth Associates.
AI systems have also grown so complex that even the makers of powerful tools do not know exactly how their systems function. AI scientist Sam Bowman, a researcher at the AI company Anthropic, said on the ‘Unexplainable’ podcast that there is no clear explanation of how AI tools like ChatGPT work the way we understand how ‘regular’ software like MS Word or Paint works. He further said that the development of such tools has been so autonomous that humans have been facilitators of these tools rather than their builders.
“I think the important piece here is that we really didn’t build it in any deep sense. We built the computers, but then we just gave the faintest outline of a blueprint and kind of let these systems develop on their own. I think an analogy here might be that we’re trying to grow a decorative topiary, a decorative hedge that we’re trying to shape. We plant the seed and we know what shape we want and we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree. We just kind of started the process, let it go, and try to nudge it around a little bit at the end,” said Bowman on the podcast.
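Bowman’s topiary analogy can be made concrete with a toy training loop. In the minimal, purely illustrative sketch below, humans write only the ‘outline’: a tiny dataset, a one-neuron model, and an optimiser. The final values of the parameters w and b, that is, the behaviour, are grown by the loop rather than written by anyone; production systems like ChatGPT do the same with billions of parameters, which is part of why their inner workings resist explanation.

```python
import math
import random

random.seed(0)

# The "seed" and the target "shape": a tiny dataset that should be split
# near x = 1.5, and a logistic neuron whose parameters start as noise.
data = [(0.0, 0), (1.0, 0), (2.0, 1), (3.0, 1)]
w, b = random.random(), random.random()

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# The "clippers": gradient descent nudges the parameters toward the target.
# Humans wrote this loop, but not the final values of w and b.
for _ in range(5000):
    for x, y in data:
        grad = predict(x) - y       # derivative of log-loss w.r.t. the logit
        w -= 0.1 * grad * x
        b -= 0.1 * grad

print(f"learned parameters: w={w:.2f}, b={b:.2f}")   # grown, not designed
print([round(predict(x)) for x, _ in data])          # expected: [0, 0, 1, 1]
```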
Sanhita Chauriha of the Vidhi Centre tells Outlook that AI is something policymakers, lawmakers, and even developers are still trying to understand, and that no one has completely figured out how to go about regulating it. Therein lies the challenge.
“If we don’t understand something fully, how do we regulate it? So, for now, the countries are taking up a trial-and-error approach to see what works. AI is growing faster than our understanding. Fast forward to 10 years down the lane, we would be trying to regulate something that we cannot think of right now. So keeping pace with the developments along with trying to navigate the concerns through close monitoring of the systems by the respective regulators would be an ideal approach,” says Chauriha.
How Should India Regulate AI?
While there is no dedicated law governing AI in India, consultations are ongoing and the Government of India has formed seven working groups tasked with hammering out drafts. The Digital India Act (DIA), which is expected to replace the Information Technology Act, 2000, is also in the making and is expected to cover AI.
The lone presentation on the DIA released so far by the Ministry of Electronics and Information Technology (MEITY), however, does not mention AI. People who are part of the governmental consultations nonetheless tell Outlook that the final draft of the DIA, and the law it would translate into, will take AI within its ambit.
Cyber laws expert Seth, however, says that the DIA by itself might not be sufficient to regulate AI and that a dedicated AI law would be better suited to address the many issues at hand. She adds that there should ideally be one agency to oversee AI, or at least a nodal agency to coordinate the work of the multiple stakeholders currently involved.
Chauriha of the Vidhi Centre says that a nodal agency, rather than a single-point regulator like the Telecom Regulatory Authority of India (TRAI), would be better suited to the Indian set-up, as it would also address the interdisciplinary nature of AI applications.
She tells Outlook, “Establishing a dedicated AI-related nodal agency, rather than a single government regulator, is crucial for the effective governance of AI. This specialised agency would provide the necessary expertise to comprehensively address the interdisciplinary nature of AI, fostering collaboration among existing stakeholders such as TRAI, CERT, and others. By focusing solely on AI-related matters, the agency can ensure agility and adaptability in response to technological advancements, fostering the development of nuanced and dynamic regulatory frameworks. Additionally, the agency would play a pivotal role in stakeholder engagement, working closely with industry experts, academia, and civil society to gather diverse perspectives. This might address the unique challenges posed by AI and can promote ethical considerations and the establishment of uniform standards, contributing to a balanced and effective regulatory environment.”
Simran Singh, the lawyer who advises AI-centric start-ups, says that AI regulations also need to enable innovation and investment; India is projected to have a $1-trillion digital economy by 2030. She says regulations need to strike a balance between the ease of doing business on one hand and safeguarding privacy and addressing the many ethical issues of AI on the other.
Singh tells Outlook, “I strongly believe that regulations that pose challenges in compliance can significantly hinder innovation and discourage investments, particularly impacting the agility that is crucial for start-ups. At the onset, the absence of regulations might appear freeing for new businesses exploring AI. Yet this apparent freedom leaves them vulnerable due to the lack of protective legislation surrounding data privacy, security, and AI development. For one of the fastest growing economies like ours, it becomes imperative to establish a comprehensive legislation to regulate AI. This legislation should strike a balance by not only safeguarding crucial aspects like data privacy and AI ethics but also being straightforward enough to facilitate ease of doing business.”
Experts say there also needs to be an international AI regulatory regime in place, though it would come with its own challenges.
Chauriha tells Outlook, “Establishing a customary international AI regime has both advantages and challenges. On the positive side, such a framework could foster global collaboration, ensuring a unified approach to AI governance. It may lead to the development of common ethical standards and regulatory guidelines, fostering responsible AI innovation worldwide. This harmonization could streamline international trade, as companies navigate consistent rules across borders. However, there are certain challenges like aligning diverse national interests and regulatory environments. Striking a balance between a shared framework and respecting cultural and legal variations poses a significant hurdle. Additionally, coordinating the enforcement of international AI regulations may prove challenging, given the evolving nature of AI technologies and the need for agility in governance and the development of economies at different levels. Collaboration can be a way forward.”
Seth says, “We would also need international cybercrime conventions to effectively combat transnational cyber crimes. So, not only a national AI law but we would also require an international AI legal regime.”