In the rapidly evolving landscape of legal practice, the integration of artificial intelligence (AI) tools has become increasingly prevalent.
In May 2023, I published an article, ‘Will artificial (emotional) intelligence replace family lawyers’, discussing the care that needs to be taken when using current AI products. The recent Federal Circuit and Family Court of Australia (Division 2) decision in Handa & Mallik has reiterated some of the risks associated with the use of AI in family law matters, bringing this issue to the forefront once again.
In this matter, an order was made requiring a solicitor to provide submissions of not more than five pages identifying any reasons why he ought not be referred to the Office of the Victorian Legal Services Board and Commissioner.
A personal costs order against the solicitor was also foreshadowed.
In summary:
- Solicitor’s Role: The solicitor appeared as an agent in relation to an enforcement application. The parties were asked to provide any authorities on which they relied, which the Judge would read whilst the matter was stood down.
- List of Authorities: The solicitor tendered a single-page list of authorities, which neither the Judge nor the Judge’s associates could locate. The solicitor was asked to provide copies of the authorities but did not do so.
- Use of AI: When the matter returned to Court, the solicitor was asked whether the list of authorities had been prepared using AI. The solicitor confirmed that it had been prepared by legal software incorporating AI.
- Court’s Concern: Following the solicitor’s admission, the Court informed the parties and their legal representatives that it doubted the veracity of the information provided and, consequently, held concerns regarding the solicitor’s competency and ethics.
Implications and Risks:
The decision in Handa & Mallik illustrates the risks of overreliance on current AI products for both solicitors and parties. The matter highlights several key issues:
- Accuracy and Verification: AI tools can generate misleading or false responses, or ‘hallucinations’, where the AI fabricates content (for example, by citing non-existent cases). This can have significant legal consequences, as seen in Mata v. Avianca, No. 22-cv-1461 (PKC), 2023 WL 4114965 (S.D.N.Y. June 22, 2023), in which lawyers were sanctioned for filing submissions containing fictitious cases generated by AI.
- Ethical and Competency Concerns: The use of AI in legal practice raises questions about the ethical responsibilities and competency of legal professionals. The Court’s reaction in Handa & Mallik underscores the importance of verifying AI-generated content prior to submission.
- Confidentiality Risks: AI tools can inadvertently breach confidentiality, with data supplied by users potentially becoming accessible to others through subsequent queries or system vulnerabilities.
- Regulatory Challenges: As AI technology evolves, existing legislation may not keep pace, creating a precarious situation for lawyers integrating new technologies into their practice.
Takeaway:
The decision in Handa & Mallik serves as a cautionary tale about the potential pitfalls of overreliance on AI in legal practice. It underscores the need for legal professionals to exercise due diligence and to ensure that AI-generated content is accurate and reliable. Failure to do so can result in significant prejudice, including disciplinary action and costs orders. As AI continues to evolve, it is crucial that the legal profession adapts and establishes robust verification processes to mitigate these risks.