
AI in Law: Risks, Benefits and Ethical Challenges

The legal profession has long been a bastion of tradition—pages upon pages of case law, meticulous research, and the ever-reliable leather-bound books housing centuries of legal wisdom. Yet, in recent years, artificial intelligence has been making its way into this time-honoured field, promising efficiency, accuracy, and a revolution in legal practice. But as we have seen in a recent family law matter where a lawyer unwittingly submitted AI-generated, fabricated case law to a court, the promise of AI in law is not without peril.[1]

The Legal Profession’s AI Evolution 

From rudimentary automation tools to advanced legal research platforms, AI has been steadily integrating into the fabric of law. Today, law firms and solo practitioners alike use AI-powered tools such as predictive analytics, automated contract review, and even virtual legal assistants to streamline their workflow. Platforms like LEAP, Casetext’s CARA, and Westlaw Edge claim to offer precise and efficient legal research by analysing vast databases of case law in seconds—something that would take human researchers days, if not weeks. 

The allure of AI in law is obvious: speed, cost-cutting, and the ability to process complex legal queries at an unprecedented scale. But as with any powerful tool, AI is only as good as its user—and its database. And when human oversight is lacking, the consequences can be severe. 

AI in the Courtroom: A Risky Proposition? 

The use of AI-generated case law in a recent family law matter before Justice Humphreys offers a cautionary tale. The lawyer, relying on AI-powered legal software, presented a list of case precedents that did not exist. The incident drew a stern rebuke from the presiding judge, who highlighted the dangers of unverified AI-generated legal research. In her ruling, Justice Humphreys emphasised that legal professionals have an unequivocal duty to verify their sources and that AI tools cannot replace due diligence. Notably, LEAP's LawY assistant offers a verification tool and carries a clear warning that output must be fact-checked before it is relied upon in submissions. In this instance, the judge's associates spent significant time and resources searching for the cited cases, which, of course, did not exist. 

AI-generated misinformation has entered courtrooms before. In Mata v. Avianca, Inc. (S.D.N.Y. 2023), a U.S. federal judge sanctioned two lawyers for submitting AI-generated case citations that turned out to be completely fabricated by ChatGPT. The risk here is clear: AI is fallible, and the consequences of its errors can undermine the very foundation of legal integrity. The rule of thumb for any lawyer should therefore be to cite only cases they have actually read and verified against an authoritative source, ensuring that the precedent they rely on is accurate and genuinely supports their case. 

Where AI Excels in Law 

Despite its pitfalls, AI undeniably has its place in modern legal practice. When used correctly, it can enhance efficiency and accuracy in several key areas: 

  • AI-powered research platforms scan vast databases to identify relevant case law, saving lawyers significant time. 
  • AI algorithms can quickly sift through contracts, flagging inconsistencies, identifying risks, and suggesting optimisations. 
  • AI can assess historical case outcomes and predict the likelihood of success in litigation, helping lawyers craft stronger strategies. 
  • Some AI tools can draft basic legal documents such as wills, NDAs, and lease agreements with minimal human input. 
  • Chatbots and AI-driven virtual assistants provide preliminary legal advice, improving accessibility for those who might not otherwise be able to afford traditional legal services. 

The Ethical Quandary: Where Do We Draw the Line? 

The ethical implications of AI in law cannot be ignored. The sanctioning of the Melbourne solicitor following the family law matter highlights one of the biggest dangers: blind trust. If lawyers begin to rely too heavily on AI without verifying its outputs, the legal system risks being undermined by misinformation. 

Additionally, AI systems are only as impartial as the data used to train them. Models trained on historically biased case law can perpetuate systemic injustices rather than rectify them. Legal scholars and policymakers are now grappling with these issues and questioning whether stricter regulation of AI in law is needed.

Moreover, the question of accountability arises: who is responsible when AI gets it wrong? Can a lawyer claim ignorance if an AI tool provides incorrect information? Or should they be held to the same professional standard as if they had conducted the research manually? Courts are beginning to take a hard stance, as seen in cases where lawyers have faced disciplinary action for AI-induced errors. 

The Future: Striking a Balance 

AI is here to stay in law. But as we integrate it further, a balanced approach is crucial. Law firms should consider AI as an assistant rather than a replacement. AI can process information, but human lawyers must apply their judgment, ethical considerations, and experience to ensure accuracy. 

Best practices for AI usage in law may include: 

  • Lawyers should cross-check AI-generated case law against official legal databases before submitting it to a court. 
  • Legal professionals must disclose when AI has been used to generate research, arguments, or draft documents. 
  • Law firms should educate lawyers on the capabilities and limitations of AI tools to ensure responsible usage. 
  • AI should complement, not replace, human decision-making. Every AI-generated recommendation must be subject to human scrutiny. 

Final Verdict: A Tool, Not a Replacement 

While AI enhances legal practice, lawyers must use it with caution. The legal profession thrives on accuracy, ethics, and precedent. AI, in its current state, cannot fully uphold these pillars without human intervention.

AI is a powerful tool, but it is not infallible. As law firms and courts adopt AI-driven technologies, the focus must remain on responsible usage, verification, and ethical practice. The legal profession depends on human knowledge and wisdom, and no AI, however advanced, can replace that.

Jake McKinley notes that this article is written for the purpose of providing generalised information and not to provide specialised legal advice. If you require qualified legal advice on anything mentioned in this article, our experienced team of solicitors at Jake McKinley are here to help. Please get in touch with us on 02 9232 8033 today to make an enquiry. 

[1] https://www.pressreader.com/australia/the-guardian-australia/20241011/281578066105075
