As artificial intelligence (AI) becomes increasingly embedded in legal workflows, its impact on arbitration continues to grow. I was honoured to moderate a recent panel discussion on AI in arbitration at The Arbitration Summit hosted by Thought Leaders 4, featuring insights from Harry Borovick of Luminance, Claire Morel de Westgaver of Ontier, and Gábor Damjanovic of Forgó, Damjanovic & Partners.
The session addressed AI’s historical context, current applications, ethical challenges, and future implications for legal professionals. A clear theme emerged: when using AI in arbitration, practitioners need to strike a balance between efficiency and ethical responsibility, distinguish between AI use by counsel and by arbitrators, and follow adaptive guidelines.
The evolution of AI in legal practice
While AI has been used for decades in rudimentary forms (e.g., document search tools), its relatively recent democratisation through technologies like large language models (LLMs) has revolutionised legal tasks. According to Borovick, concerns over hallucinations (AI-generated inaccuracies) are transitional and will diminish as the technology matures. In his view, AI’s true value lies in augmenting, not replacing, lawyers, particularly in dispute resolution, where human judgment is irreplaceable.
Practical use of AI in arbitration
As the findings of the BCLP Arbitration Survey (2023) and recently released snippets from the forthcoming White & Case and Queen Mary University of London Arbitration Survey illustrate, AI is already in active use for tasks such as legal research, document review, translation, and even the detection of AI-generated evidence submitted by the opposing party. Even as adoption grows, concerns remain about cybersecurity, bias, “black box” opacity, deepfakes, and the manipulation of evidence.
Ethical considerations and responsible use of AI
When exploring the ethical use of AI in arbitration, it is important to distinguish between counsel and arbitrator use cases. While the work of the International Bar Association’s Task Force on AI is still underway, Damjanovic indicated that the draft proposes a “traffic light” framework that categorises AI uses according to the type of user, highlighting significant differences between use by counsel and by arbitrators. For example, the framework identifies 53 distinct uses of AI in the context of a hearing, of which 29 are classified as permissible (“green”) for counsel but only four for arbitrators.
Given this context, perspectives vary on whether reliance on AI should be disclosed. There are many considerations for both parties and arbitrators to take into account, including those set out in the recently launched Ciarb Guideline on the Use of AI in Arbitration.
Additionally, certain practices considered standard today may become obsolete as technology, and the way legal practitioners use it, evolves. Guidelines should therefore evolve alongside the technology, balancing innovation with accountability and taking into account emerging “hard law” regulation such as the EU AI Act.
The session concluded with a conversation on what constitutes responsible use of AI. The panel discussed the existing mechanisms that shape that definition, including ethical foundations, responsible AI principles (e.g., “human-in-the-loop”), enforceable governance policies, and procedural adaptations such as the model clauses and procedural orders outlined in the Ciarb Guideline on AI.
Conclusion
The experts agreed that AI is no longer a futuristic concept: it is an integral part of the modern world, and the legal profession is embracing its potential with increasing confidence. While practitioners should remain vigilant to the risks, the priority should be to adopt AI ethically, transparently, and responsibly.
Arbitration professionals should therefore equip themselves with both the tools and the judgment to integrate AI effectively while maintaining procedural fairness and integrity.