AI and the law: New guidelines and the consequences of misuse

06 October 2025

The Law Institute of Victoria and the Queensland Courts have released new guidelines to promote the ethical and responsible use of AI. These guidelines are reinforced by recent disciplinary action that highlights the professional consequences of AI misuse in legal practice.

These guidelines mark a significant step in regulating artificial intelligence (AI) use in legal practice, as its benefits and challenges become more apparent. While AI offers efficiencies in legal research, drafting, and document review, it also introduces risks – particularly the generation of false or misleading information that may appear factual but is entirely fabricated. Known as 'AI hallucinations', these errors have already led to significant professional consequences in courtrooms across Australia and abroad, as well as consequences for the administration of justice.

Victoria

In September 2025, the Law Institute of Victoria published comprehensive guidelines for the ethical use of AI in legal practice.1 These guidelines emphasise the importance of maintaining confidentiality, verifying outputs, and ensuring transparency with clients and the courts.

Under the guidelines, legal practitioners must:

  • never input confidential or privileged information into open-source or commercial AI systems without client consent,
  • verify the accuracy and relevance of all AI-generated outputs, including case citations,
  • disclose the use of AI to clients and be prepared to inform opposing counsel and the court if questioned,
  • review the AI tool’s privacy policy, terms of use, as well as its Model Card and Data Nutrition Label (if available),
  • engage with technology vendors to make informed decisions about AI adoption,
  • ensure fair billing practices for work completed with AI assistance, and
  • retain necessary records to conduct conflict checks after a retainer ends.

These principles reinforce core professional obligations such as competence, diligence, honesty, and the preservation of confidentiality and privilege. Anyone using AI in legal practice or legal matters in Victoria should review the guidelines in detail.

Lessons from Dayal

The importance of these obligations and proper use of AI in legal matters was underscored in the 2024 case of Dayal [2024] FedCFamC2F 1166 (Dayal).2 In that matter, a legal practitioner submitted a court document containing hallucinated authorities generated by AI. The practitioner was subsequently referred to the Victorian Legal Services Board.

On 19 August 2025, the Board responded by varying the practitioner’s practising certificate. The variation imposed significant restrictions whereby the practitioner:

  • is no longer entitled to practise as a principal lawyer,
  • is prohibited from handling trust money or operating their own law practice,
  • may only practise as an employee solicitor,
  • must undertake supervised legal practice for two years, and
  • is required to report quarterly to the Victorian Legal Services Board during this period of supervision.

This disciplinary action demonstrates the seriousness with which regulatory bodies are likely to treat AI misuse in legal practice and serves as a cautionary example of the professional consequences that can flow from failing to verify AI-generated content.

Queensland

On 15 September 2025, the Queensland Courts updated their generative AI guidelines for judicial officers, legal staff, and practitioners appearing before courts and tribunals.3 The guidelines aim to improve awareness of the risks associated with AI, including the potential for biased or incorrect outputs and inadvertent disclosure of sensitive information.

Key directives include:

  • treating any data entered into public AI tools as potentially published to the world,
  • carefully reviewing all AI-generated content for accuracy and ethical compliance,
  • considering issues of bias, copyright, and plagiarism, and
  • maintaining robust security protocols.

As with the Victorian guidelines, anyone using AI in legal practice or legal matters in Queensland should review the guidelines in detail.

Key takeaways

The recent updates from Victoria and Queensland, along with disciplinary actions such as those seen in Dayal, highlight how regulatory bodies are not only supervising, but are prepared to act when AI is misused. This signals a shift from guidance to enforcement, with professional obligations being redefined in the context of emerging technologies.

While AI can enhance legal workflows, these developments reinforce that its use must be grounded in ethical awareness and professional responsibility. To remain compliant, those using AI in legal practice must stay vigilant, informed, and responsive to evolving best practice standards.

View our recent video sharing insights on these guidelines here.


1 Ethical and Responsible Use of Artificial Intelligence | The Law Institute of Victoria
2 Dayal [2024] FedCFamC2F 1166
3 Using Generative AI | Queensland Courts
