CORPORATE GOVERNANCE IN THE AGE OF AI

This article has been written by Tejaswini Kaushal, a 2nd-year law student at Dr. Ram Manohar Lohiya National Law University, Lucknow.

AI in the Corporate Sphere: Prophetic or Preposterous?

Ever wondered whether the rise of Artificial Intelligence (“AI”) in the corporate world is prophetic foresight or simply preposterous? Unquestionably, in a world dependent on ChatGPT and DeepFace, AI has successfully invaded not only our everyday lives but also the corporate decision-making domain through tools like the recently launched Microsoft Azure AI, alongside others such as IBM Watson and Google Cloud AI. From optimizing supply chains to personalizing customer experiences, AI is rapidly transforming the business landscape, and corporations in India are no exception. However, as the use of AI in decision-making increases, so does the assortment of challenges faced by corporate governance.

The prophetic 2017 technological thriller ‘The Circle’ aptly portrayed the harrowing consequences of a powerful tech company’s excessive reliance on AI for corporate decision-making: things quickly spiralled out of control, leaving corporate compliance litigators in a fix over ethical and legal issues. In this article, we will examine the legal and ethical challenges of AI in corporate governance in India and suggest strategies for navigating them. We will also explore the potential benefits of AI for corporate governance and how a thoughtful and responsible approach can help us harness the full potential of this transformative technology.

Existing Legal Framework for AI Governance

In India, there are no laws that specifically regulate the use of AI and Machine Learning (“ML”) at present. The Ministry of Electronics and Information Technology (“MEITY”), which is responsible for developing AI-related strategies, has set up four committees to create a policy framework for AI. Furthermore, the NITI Aayog has laid down seven principles for responsible AI: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection of positive human values. These principles seek to promote innovation by building public trust and ensuring that the use of AI remains in the public interest. However, the absence of a specific regulatory framework for AI in India creates uncertainty for corporations in terms of compliance and accountability.

In the international sphere, the position is not entirely straightforward either, yet it is full of promise. To guarantee that AI systems are reliable, prevent market fragmentation, and protect the basic liberties of everyone who comes within its jurisdiction, the EU presented the proposed Artificial Intelligence Act in 2021. In 2022, the Brazilian Congress passed a bill to create a legal framework for AI, while Canada introduced the Digital Charter Implementation Act, which consists of three pieces of legislation intended to strengthen Canada’s data privacy framework and ensure that AI progresses ethically.

Ethical Considerations in AI Governance

The use of AI in corporate governance raises several ethical considerations, one of the biggest being the role of bias. The algorithms used in AI decision-making are only as unbiased as the data on which they are trained. If the training data is biased, the resulting decisions will also be biased. This is a significant challenge for corporations, as biased decisions can have adverse effects on customers, employees, and society at large.
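To make this concrete, below is a minimal sketch of the kind of pre-training check a compliance team might run on historical data; the group labels, sample records, and the four-fifths screening threshold are purely illustrative assumptions, not a prescribed standard.

```python
from collections import defaultdict

# Hypothetical training records: each carries a protected attribute ("group")
# and the historical outcome the model would learn from ("approved").
training_data = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

def approval_rates(rows):
    """Return the share of positive historical outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["approved"]
    return {group: positives[group] / totals[group] for group in totals}

rates = approval_rates(training_data)
# Illustrative screen: flag the data if any group's rate falls below 80% of
# the best-treated group's rate (a "four-fifths"-style check).
best = max(rates.values())
flagged = {group: rate for group, rate in rates.items() if rate < 0.8 * best}
print(rates)    # {'A': 0.666..., 'B': 0.333...}
print(flagged)  # groups whose historical outcomes look skewed -> review before training
```

A skewed result at this stage is a signal to re-examine how the data was collected, not a verdict; the point is simply that bias can be surfaced and documented before it propagates into automated decisions.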

Another ethical issue is posed by the lack of transparency in AI decision-making, which makes it difficult to understand how decisions are made, leading to mistrust and skepticism. In a country like India, where the public’s trust in corporations is often low, transparency in AI governance is crucial.
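As a minimal sketch of what an explainable decision record could look like, the snippet below uses a hypothetical linear scoring model whose feature names, weights, and threshold are invented for illustration; the idea is to store a per-feature breakdown alongside each decision so that it can be surfaced to the affected customer or an auditor.

```python
# Hypothetical linear scoring model: the weights, features, and threshold
# below are illustrative assumptions, not a real credit or hiring policy.
WEIGHTS = {"income": 0.6, "repayment_history": 0.9, "existing_debt": -0.7}
THRESHOLD = 1.0

def decide_with_explanation(applicant: dict) -> dict:
    """Score an applicant and retain a per-feature breakdown of the score."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    score = sum(contributions.values())
    return {
        "decision": "approve" if score >= THRESHOLD else "refer",
        "score": round(score, 2),
        # The breakdown is what makes the outcome explainable after the fact.
        "contributions": {f: round(c, 2) for f, c in contributions.items()},
    }

print(decide_with_explanation(
    {"income": 1.2, "repayment_history": 0.8, "existing_debt": 0.5}
))
# {'decision': 'approve', 'score': 1.09,
#  'contributions': {'income': 0.72, 'repayment_history': 0.72, 'existing_debt': -0.35}}
```

Real-world models are rarely this simple, but the governance principle carries over: every automated decision should leave behind an explanation that a human can read.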

Ensuring accountability for the decisions made by AI systems is a crucial ethical consideration in AI governance that corporations must address. In cases where AI decisions lead to harm, corporations need to take proactive responsibility and provide remedies to those affected.

Lastly, human oversight is a critical ethical consideration in AI governance. While AI can help automate decision-making processes, it cannot replace human judgment entirely. For instance, in a world first, the Hong Kong-based venture capital firm Deep Knowledge Ventures appointed an AI algorithm named VITAL to its board of directors. Yet, it is widely acknowledged that human oversight remains necessary to ensure that AI decisions align with ethical and moral standards and that they do not cause harm to individuals or society.
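A minimal human-in-the-loop sketch is given below; the confidence threshold and the notion of a "high-impact" decision are assumptions made for illustration, standing in for whatever escalation criteria a board actually adopts.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    action: str          # e.g. "terminate supplier contract"
    confidence: float    # the model's self-reported confidence, 0.0 to 1.0
    high_impact: bool    # does the decision materially affect people or the firm?

# Illustrative threshold; a real governance framework would set this by policy.
CONFIDENCE_THRESHOLD = 0.9

def route(rec: AIRecommendation) -> str:
    """Decide whether an AI recommendation may be executed automatically."""
    if rec.high_impact or rec.confidence < CONFIDENCE_THRESHOLD:
        return "escalate_to_human_review"
    return "execute_automatically"

print(route(AIRecommendation("reorder inventory", 0.97, high_impact=False)))
# execute_automatically
print(route(AIRecommendation("terminate supplier contract", 0.95, high_impact=True)))
# escalate_to_human_review
```

The design choice worth noting is that impact, not just confidence, triggers escalation: even a highly confident system should not act unilaterally on decisions with serious human consequences.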

Navigating Legal and Ethical Challenges in AI Governance

To navigate the legal and ethical challenges of AI governance in India, corporations can adopt the following strategies:

  • Develop a comprehensive AI governance framework:

Corporations need to develop a comprehensive AI governance framework that outlines the guidelines and standards for the use of AI in decision-making. The framework should also provide guidelines for ethical decision-making and ensure compliance with existing laws and regulations. Certain pre-existing statutes should be amended and/or judicially interpreted to bring AI within their ambit. The Information Technology Act, 2000 has the potential to serve as the fundamental legislation for regulating AI in India through an insightful interpretation and a reboot that aligns with the evolving technological landscape. Similarly, the Consumer Protection Act, 2019 should incorporate a provision imposing strict liability on corporations for defective products and services, including those that utilize AI, drawing inspiration from Finland’s model. Moreover, the forthcoming Digital Personal Data Protection Bill, 2022 is expected to play a crucial role in regulating the processing of personal data in India and in establishing guidelines for the responsible use of AI in this context.

  • Foster a culture of transparency:

Corporations need to foster a culture of transparency in AI decision-making. This can be achieved by ensuring that the AI decision-making process is explainable and that customers and employees have access to the information used to arrive at a decision.

  • Prioritize ethical decision-making:

Corporations need to prioritize ethical decision-making in the use of AI. They can achieve this by ensuring that the AI systems are designed to align with ethical and moral standards, that they do not cause harm to individuals or society at large, and that they are transparent and accountable. This can be accomplished by establishing ethical standards and guidelines that guide the development and deployment of AI systems.

  • Ensure human oversight and intervention:

While AI can help automate decision-making processes, it cannot replace human judgment entirely. Human oversight and intervention are necessary to ensure that AI decisions align with ethical and moral standards and that the decisions do not cause harm to individuals or society. Corporations need to ensure that human oversight and intervention are built into the AI governance framework. This can be achieved through protocols that require a human review of AI decisions and the ability to intervene in cases where the decisions are deemed unethical or potentially harmful.

  • Address bias in AI decision-making:

Corporations need to address bias in AI decision-making by ensuring that the training data used for AI systems is diverse and inclusive. They should also ensure that the algorithms used are designed to mitigate bias and that the systems are continuously monitored to identify and address any potential bias in decision-making; a minimal monitoring sketch follows this list.

  • Build trust with stakeholders:

Building trust with stakeholders is critical in the use of AI in decision-making. Corporations can build trust by being transparent about how AI is used, addressing ethical concerns and risks associated with AI, and ensuring that AI decisions align with ethical and moral standards. Corporations should also engage with stakeholders and seek their input and feedback on the use of AI in decision-making.
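As flagged under the point on addressing bias above, continuous monitoring is something corporations can operationalize directly. Below is a minimal sketch of such a monitor over a deployed system's decision log; the log format, the weekly grouping, and the 80% alert ratio are illustrative assumptions rather than a prescribed methodology.

```python
from collections import defaultdict

# Hypothetical production log: (week, protected group, positive outcome 0/1).
decision_log = [
    (1, "A", 1), (1, "A", 1), (1, "B", 1), (1, "B", 0),
    (2, "A", 1), (2, "A", 1), (2, "B", 0), (2, "B", 0),
]

def weekly_rates(log):
    """Positive-outcome rate per (week, group) over the system's actual decisions."""
    totals, positives = defaultdict(int), defaultdict(int)
    for week, group, outcome in log:
        totals[(week, group)] += 1
        positives[(week, group)] += outcome
    return {key: positives[key] / totals[key] for key in totals}

def drift_alerts(rates, alert_ratio=0.8):
    """Flag any week in which a group's rate falls below alert_ratio of the best group's."""
    alerts = []
    for week in sorted({week for week, _ in rates}):
        week_rates = {g: r for (w, g), r in rates.items() if w == week}
        best = max(week_rates.values())
        alerts += [(week, g, round(r, 2)) for g, r in week_rates.items()
                   if r < alert_ratio * best]
    return alerts

print(drift_alerts(weekly_rates(decision_log)))
# [(1, 'B', 0.5), (2, 'B', 0.0)] -> escalate for investigation and possible retraining
```

An alert from such a monitor would feed back into the human-oversight and transparency measures described above, rather than being silently logged.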

Conclusion

The integration of AI into the realm of corporate governance in India has brought about a paradigm shift in the way decisions are made. It has not only improved efficiency but has also provided accurate and timely information for informed decision-making. However, this new technological advancement also poses a range of legal and ethical challenges for corporations.

As we embrace this new era of AI-powered corporate governance, it is essential to address these challenges to ensure that businesses operate ethically and within the confines of the law. It is imperative that companies implement robust internal policies and controls to mitigate the risks and potential misuse of this technology.

Despite these challenges, AI’s adoption in corporate decision-making has become a sine qua non, as illustrated by the initiatives of companies such as Tata, JPMorgan, and Walmart, and its use will only continue to grow. The focus should now be on reducing the issues and challenges that may arise from its use, ensuring that we harness the full potential of this innovative technology for the benefit of society as a whole.
