AI in proceedings raises ethical, legal considerations: CJI
New Delhi: The integration of Artificial Intelligence (AI) in modern processes, including court proceedings, raises complex ethical, legal and practical considerations that demand a thorough examination, Chief Justice of India D Y Chandrachud said on Saturday.
The CJI said AI represents the "next frontier of innovation" and that its use in court adjudication presents both opportunities and challenges warranting nuanced deliberation. While AI offers unprecedented opportunities, Justice Chandrachud said, it also raises complex challenges, particularly concerning ethics, accountability and bias. Addressing these challenges, he added, requires a concerted effort from stakeholders worldwide, transcending geographical and institutional boundaries.
He was speaking at a two-day conference on technology and dialogue between the Supreme Courts of India and Singapore. Chief Justice of Singapore Sundaresh Menon and several other judges and experts were also present.

Justice Chandrachud said that in the legal sector, AI holds immense potential to transform the way legal professionals work, from enhancing legal research and case analysis to improving the efficiency of court proceedings. In the realm of legal research, he said, AI has emerged as a game-changer, empowering legal professionals with unparalleled efficiency and accuracy.

With the launch of ChatGPT, a conversation has emerged about whether to rely on AI in reaching a conclusion in a case, the CJI said. "These instances show that we cannot avoid the question of using AI in court adjudication. The integration of AI in modern processes, including court proceedings, raises complex ethical, legal, and practical considerations that demand a thorough examination," he said.
The CJI said amid the excitement surrounding AI's capabilities, there are also concerns regarding potential errors and misinterpretations. "Without robust auditing mechanisms in
place, instances of 'hallucinations' – where AI generates false or misleading responses – may occur, leading to improper advice and, in extreme cases, miscarriages of justice," he said.
Justice Chandrachud said the impact of bias in AI systems presents a complex challenge, particularly when it comes to indirect discrimination.
He said that in the realm of AI, indirect discrimination can manifest at two crucial stages: first, during the training phase, where incomplete or inaccurate data may lead to biased outcomes; and second, during data processing, often within opaque "black-box" algorithms that obscure the decision-making process from human developers.
A black box refers to algorithms or systems whose internal workings are hidden from users or developers, making it difficult to understand how decisions are made or why certain outcomes occur, he said.

The CJI said facial recognition technology serves as a prime example of high-risk AI, given its inherently intrusive nature and potential for misuse. He said the full realisation of AI's potential hinges on global collaboration and cooperation.
Capacity building and training play a crucial role in ensuring the ethical and effective use of AI technologies, he said. By investing in education and training programs, professionals can be equipped with the knowledge and skills needed to navigate the complexities of AI, identify biases and uphold ethical standards in their use of AI systems.
The CJI said there is a fear that the adoption of AI may lead to the emergence of a two-tiered system, in which access to quality legal assistance becomes stratified based on socio-economic status.
"The poor may find themselves relegated to inferior AI-driven assistance, while only affluent individuals or high-end law firms can effectively harness the capabilities of legal AI. Such a scenario risks widening the justice gap and perpetuating existing inequalities within the legal system," he said.