Basil Jarrett | AI in court? Jamaica’s legal system has entered the chat
Last week, something quite revolutionary happened in Jamaica’s legal system. Chief Justice Bryan Sykes issued Practice Direction No. 1 of 2025, a bold, forward-looking directive on the use of generative AI in court proceedings. It was concise, practical, and brilliant. Just the kind of thing that we’ve come to expect from this chief justice, who, since taking office, has pushed the Supreme Court and judiciary into a new era of data-driven accountability, with clearance rates and backlog reduction as high priorities of his tenure.
Practice Direction No. 1 hit all the right notes. Protect the public, preserve the dignity of the court, and gently nudge legal practitioners into the digital age, all without sounding like AI is the new courtroom bogeyman.
A Practice Direction is essentially a formal instruction issued by the courts to guide lawyers, litigants, and even judges on how certain procedures should be handled. It’s not a new law, but it has the impact of one inside the courtroom. Think of it as the judge’s version of a “house rule” that standardises practice, reduces confusion, and keeps everyone on the same page. Practice Direction No. 1 tells lawyers and self-represented litigants that, if they use tools like ChatGPT to draft affidavits, submissions or pleadings, they must disclose it. It also warns them that they remain personally responsible for verifying the accuracy of whatever the tool produces, and that courts will not tolerate AI-generated errors creeping into court records. It’s Jamaica’s first official move to bring some order, transparency and accountability to the wild, wild west of AI-assisted lawyering.
This is a positive move. It feels like a lifeline handed to a legal profession that has so far refused to view AI as anything but a threat to its credibility. Just last May, a high-court judge in Trinidad condemned the use of AI by lawyers, describing it as a serious breach of professional ethics. In contrast, Sykes’ move acknowledges that the future is here, whether we like it or not, and that regulation, not resistance, is our best ally. For that, the chief justice and his team must be applauded.
BOLD FIRST STEP
But, if this Practice Direction is the first step towards a full-fledged embrace of AI, then we must also acknowledge and address its limitations. And, to my untrained, some may say unqualified, legal mind, there are a couple.
The first is the demand that anyone using AI to draft legal documents must disclose it. While this is absolutely spot on, saying you did something and explaining how you did it are two completely different things. A lawyer can simply state, “This affidavit was prepared with the assistance of generative AI,” and that the information has been independently verified, but the court has no mechanism to verify that claim for itself. The document also warns of the consequences of failing to disclose the use of AI, but, with a legal system already clogged with delays and backlogged cases, do we really want to add enforcement proceedings to the pile?
JUDGES AND AI
A second noticeable omission is the document’s exclusive focus on litigants and lawyers, with no corresponding warning to judges themselves. It’s as if they are somehow above or immune to the creeping allure of generative AI.
Let’s be honest here. Judges, despite their assumed infallibility, are humans too. They have heavy caseloads, tight deadlines, and Word documents named ‘Judgment_Final_RealFinal3.docx’. Judges overseas are already widely reported to be using AI for legal summaries, clerical assistance, and judgment writing. Why would Jamaica be any different? In some cases, litigants may actually hope that the judge is using AI, since it could strip away the perceived biases that sometimes end in curious decisions. The document’s silence on judges is conspicuous. We can’t regulate AI for one half of the courtroom and pretend the other half doesn’t own a smartphone and a $20-a-month subscription to ChatGPT.
NOT CREATED EQUAL
Finally, the Practice Direction speaks of ‘generative AI’ as though every tool were the same. They are not. Some are designed to write term papers, design buildings and churn out social media content. Others are specifically optimised for legal precision. I know quite a few, much to the chagrin of my own attorneys. To paint all AI tools with the same brush would stifle the growth and development of a technology that could one day relieve our justice system of the very ills that Justice Sykes has targeted as part of his legacy. To avoid that, we will need clearer definitions of acceptable tools, and perhaps even approved platforms, especially as courtrooms become more digitised and desperate lawyers look for shortcuts.
This is not a criticism of Practice Direction No. 1. Quite the opposite, actually. It is visionary, historic and downright necessary. But, as we regulate, we must also educate, simply because the AI tsunami is coming and there is nothing we can do to stop it. We must create a culture where AI is not feared but understood, and where legal professionals are taught how to use it safely. That means continuing legal education, developing bar association guidelines and conducting workshops and professional development seminars that perhaps teach prompt engineering alongside tort law.
Law, just like every other corner of society, will not escape AI’s tentacles and, in the end, literacy is our best defence. So, let us celebrate Justice Sykes for this bold and innovative first step. Practice Direction No. 1 of 2025 has the potential to become a landmark document, the first of many that, despite a few speed bumps, pave the road to a future where we don’t get run over by AI traffic.
Major Basil Jarrett is the director of communications at the Major Organised Crime and Anti-Corruption Agency and a crisis communications consultant. Follow him on Twitter, Instagram, Threads @IamBasilJarrett and linkedin.com/in/basiljarrett. Send feedback to columns@gleanerjm.com