A recent joint statement by major Australian legal bodies has highlighted the principles and responsibilities for legal practitioners using artificial intelligence (AI) in legal practice.
On 6 December 2024, a joint statement on the use of AI in Australian legal practice was released by the Law Society of New South Wales, the Legal Practice Board of Western Australia and the Victorian Legal Services Board and Commissioner. The statement identified several key principles relating to AI which serve as a timely reminder for legal practitioners integrating AI tools into their practice, while emphasising the ongoing need for careful and considered use of AI in the legal profession.
These principles included:
- Accuracy: While AI can be beneficial, lawyers have a duty to provide accurate legal information. Unlike a trained legal professional, AI cannot exercise professional judgment or provide ethical, confidential services. Clients rely on the lawyer’s expertise, not the AI’s.
- Understanding AI: It is crucial for lawyers to understand both the capabilities and limitations of AI, not only for their own use, but also so they can provide trusted guidance to their clients about that use.
Additionally, the statement stressed that lawyers should understand the large language models (LLMs) and foundation models behind the latest AI tools. This understanding is vital because clients may be using AI themselves, may seek advice on its lawful use, or may be adversely affected by a third party’s use of AI.
Ethical and professional obligations
The statement continued by reminding lawyers to maintain high ethical standards and comply with professional obligations in accordance with the governing rules of their jurisdiction. This includes consideration and maintenance of:
- Client Confidentiality: Lawyers cannot safely enter confidential, sensitive or privileged client information into public AI tools (including chatbots or copilots like ChatGPT). If lawyers use such tools with client information, they must carefully review contractual terms to ensure the information will be kept secure.
- Independent Advice: AI tools cannot reason, understand, or advise. Lawyers are responsible for exercising their own forensic judgment and should not rely on AI outputs as a substitute for their own analysis of a client’s needs and circumstances.
- Fair Costs: Lawyers should ensure that AI use does not unnecessarily increase costs for their client above traditional methods (e.g., because of additional time spent verifying or correcting its output).
- Competence and Diligence: AI tools, research assistants, and summarisers cannot replace legal knowledge, experience, or expertise. No tool based on current LLMs is free of 'hallucinations' (i.e., responses that are fluent and convincing but inaccurate), so lawyers using AI to prepare documents must be able and qualified to personally verify that their contents are accurate and not likely to mislead their client, the court, or another party.
AI hallucinations
The issue of AI hallucinations, where AI tools generate false or non-existent information, continues to be a significant concern in the legal profession. For instance, in the 2023 case of Mata v. Avianca, lawyers who were unaware that AI could hallucinate cases submitted a brief containing fabricated case citations generated by AI, leading to sanctions and fines.
Similarly, in the Australian case of Handa & Mallik, a Victorian lawyer was referred to the Victorian Legal Services Board and Commissioner after using AI to produce a list of authorities that included non-existent cases – see our September 2024 article, ‘A ghost in the machine? Risks and pitfalls of AI use within legal practice’. These incidents underscore the importance of verifying AI-generated content to avoid misleading clients, courts, or other parties.
Recommendations
It is crucial for legal practitioners to remain vigilant and ensure that any AI assistance is thoroughly cross-checked against reliable sources to maintain the integrity of their work.
The joint statement identified several measures to help lawyers ensure ethical compliance, protect client information, and maintain professional standards when using AI tools. To achieve this, lawyers should:
- implement clear, risk-based policies to minimise data and security breaches, and specify which AI tools are being used in their practice,
- limit AI use to lower-risk tasks that are easier to verify, and
- be transparent about AI use, disclosing to their clients which tools are being used and how.
AI is here to stay. It is therefore imperative that the conversation, and diligent consideration of best practice, continue as the technology evolves.
View our recent video sharing insights on the joint statement here.
Statement on the use of artificial intelligence in Australian legal practice | VLSBC