Artificial intelligence is rapidly transforming healthcare across Australia, reshaping how diagnostics, treatment planning, and mental health support are delivered. As these technologies grow more autonomous and influential, they raise urgent questions about legal responsibility, regulatory oversight, and patient safety.
Current state of play
Over the past year, Australia has seen a rapid uptake of artificial intelligence (AI) across many areas of healthcare. In response to the growing integration of AI in clinical settings, the Australian Government released its 2025 Final Report on Safe and Responsible AI in Health Care,1 acknowledging that existing legislation is not fully equipped to manage the risks posed by AI in high-stakes clinical environments.
The report calls for national policy leadership, tailored regulatory frameworks, and mandatory guardrails for high-risk AI applications. It also emphasises the need for clear accountability structures, particularly in scenarios where AI systems influence or automate clinical decisions.
How AI is impacting legal liability and professional standards
As Australia begins implementing national recommendations, the implications of AI integration in healthcare are becoming increasingly urgent, particularly for clinical, legal, and insurance professionals navigating care delivery, risk management, and compliance. AI integration introduces new challenges around liability, for example where an AI system misguides a procedure, and around informed consent where AI is involved in clinical decision-making.
Legal liability
AI introduces new complexities in determining the responsible party when something goes wrong. In healthcare, doctors and hospitals owe a duty of care to their patients, pursuant to common law and civil liability legislation. This duty persists regardless of whether AI decision-support tools have been utilised.
The growing use of AI is prompting a redefinition of what constitutes ‘standard of care’. As with other medical equipment, clinicians must ensure responsible use of AI as part of patient care. Although software developers and IT system integrators might theoretically owe duties directly to patients if their products cause foreseeable harm, federal laws typically channel such claims through statutory product liability under the Australian Consumer Law, rather than through negligence. As a result, healthcare providers using AI-integrated systems will generally remain the primary accountable parties, with courts unlikely to accept sole reliance on AI recommendations as a defence against clinical responsibility.
However, highly autonomous AI scenarios, where AI acts independently, can complicate liability by requiring the identification of an accountable human or institution behind the AI system. In these scenarios, liability must rest with either the healthcare institution deploying the AI or individual clinicians supervising its use. For instance, vicarious liability principles ensure hospitals remain liable if staff are negligent, which would include decisions influenced by AI.
This means that while the law effectively assigns a duty of care within AI-supported healthcare, the main challenge lies in establishing a breach of that duty, particularly given the complex and opaque nature of AI decision-making.
Determining a breach of duty involves assessing whether a medical professional's conduct fell below the standard of care expected of a reasonably skilled practitioner. AI complicates this assessment by raising questions about whether reliance on, or rejection of, an AI recommendation meets that reasonable standard, and how peer professional opinion defences apply when AI norms are still evolving. For example, if a doctor relies on an AI diagnosis later found to be incorrect, the key issue is whether a competent doctor would have identified the error through additional scrutiny or by obtaining second opinions. If the AI’s error was so obscure that no reasonable clinician could have detected it, the clinician may not be deemed negligent.
These evaluations increasingly rely on expert input and evolving guidelines, which advocate for cautious and supplementary use of AI. There is growing recognition that new types of expert witnesses, with both clinical and AI literacy, may be needed to assess malpractice claims involving AI.
Black box paradox
The inherent ‘black box’ nature of AI systems, whereby the internal logic or reasoning behind an AI’s output is unclear or inaccessible, complicates the assessment of breaches by making it difficult to trace errors or justify decisions. Claimants must demonstrate that a clinician’s reliance on AI was unreasonable, which is difficult when the system’s decision-making processes are not transparent or fully understood, even by its developers.
This hurdle poses significant practical challenges, potentially requiring innovative legal strategies or burden of proof adjustments, as suggested internationally by the European Union. Additionally, scenarios may arise where patient harm results solely from an AI failure, without clinician negligence, shifting liability toward manufacturers under product liability.
Overall, the integration of AI into healthcare demands clearer professional guidelines and may require legal reform to address the evolving challenges of establishing breaches and assigning responsibility.
Professional standards
In August 2023, the Australian Medical Association issued a position statement supporting the use of AI in healthcare, provided it remains patient-centred and enhances the health and well-being of individuals and the broader community.2 Building on this, the Australian Health Practitioner Regulation Agency (AHPRA) and the National Boards released guiding principles to help health practitioners assess the use of AI in their practice. These principles reflect existing obligations within professional codes of conduct and urge practitioners to uphold responsibilities such as:
- accountability
- thorough understanding of the AI tool
- transparency of use
- informed consent, and
- ethical and legal compliance.
The consistent message from regulators is clear: AI is not a substitute for human expertise, but a valuable complement when applied thoughtfully and ethically.
Therapeutic Goods Administration guidance on AI scribes
In August 2025, the Therapeutic Goods Administration (TGA) clarified that digital scribes, also referred to as AI scribes or ambient scribes, are regulated as medical devices under the Therapeutic Goods Act 1989 when they perform functions beyond simple transcription.3
If a digital scribe is designed to analyse or interpret clinical conversations, such as generating diagnoses, treatment recommendations, or clinical insights not explicitly stated by the practitioner, it is considered to have a therapeutic purpose and must be included in the Australian Register of Therapeutic Goods before being supplied.
These regulated products must meet all relevant safety, privacy, and compliance obligations, including informed consent, review of software updates that may alter functionality, and mechanisms for reporting safety concerns or non-compliance. The TGA also notes that digital scribes used solely for transcription, without interpretation or clinical decision support, do not fall under the medical device definition.
As AI tools become more embedded in clinical practice, including cases where their use is essential in managing practitioner workloads, these guidelines underscore the growing need for legal clarity, patient autonomy, and robust oversight.
AI in mental health – opportunities and challenges
AI is also reshaping mental health care. A 2024 study published in JMIR Mental Health surveyed both community members and mental health professionals across Australia,4 revealing that 28% of community members and 43% of mental health professionals are already using AI tools – primarily for quick support, personal therapy, research, and clinical documentation.
Most respondents acknowledged the benefits of AI with respect to accessibility, cost reduction, and efficiency, although nearly half of the respondents reported concerns surrounding privacy, ethical risks, reduced human connection, and potential misuse.
This study highlights the dual-edged nature of AI’s introduction into mental healthcare: while AI offers scalable support for practitioners, it also presents new legal and ethical complexities that current frameworks are only beginning to address.
Preparing for the next phase of AI integration in healthcare
AI offers enormous benefits, but it also raises tough questions about safety, accountability, and patient rights. Below is a summary of key questions raised by industry stakeholders, along with practical guidance on how they are being addressed in the Australian healthcare context:
Who should be responsible when AI is involved in clinical decisions?
AI should not create a loophole in duty of care. Clinicians and hospitals must remain accountable. Every hospital should appoint a responsible officer to oversee AI use and ensure a human is always involved in key decisions.
If a patient is harmed by AI, is the law ready?
Not entirely. In Queensland, for instance, amending the Civil Liability Act to introduce a rebuttable presumption of breach (where the Court presumes a breach has occurred until evidence proves otherwise) could shift the burden to providers to prove non-negligence. Courts must also be empowered to demand transparency from AI developers to address the ‘black box’ problem.
Is healthcare AI covered under consumer protection laws?
Not explicitly. This regulatory shortfall requires attention. Until updates are made to the Australian Consumer Law, AI providers should indemnify hospitals and carry professional indemnity insurance to protect patients and institutions.
What about training – are clinicians prepared to use AI safely?
Not yet. Regulators should establish clear standards for AI-specific training and certification, especially for high-risk tools like autonomous surgical robots. Courts could then consider compliance with these standards when evaluating negligence claims.
How do we make sure laws are consistent across Australia?
National coordination is essential. Unified laws across Australia will give healthcare providers clarity and help developers operate within a fair, consistent legal framework.
Key takeaways
AI in healthcare is a present force reshaping how we diagnose, treat, and manage patient care. With AI becoming increasingly embedded in clinical workflows, professional standards will continue to evolve, and AI literacy is now essential for clinical, legal, and insurance professionals working within healthcare.
To keep pace, stakeholders should monitor updates from regulatory bodies such as AHPRA, the TGA, and state health departments, and stay engaged with emerging case law, product classifications, and training requirements. Participating in cross-disciplinary forums, subscribing to legal and health tech briefings, and conducting internal audits of AI use and risk exposure are practical steps professionals and organisations can begin implementing immediately.
1 Safe and Responsible Artificial Intelligence in Health Care – Legislation and Regulation Review
2 Artificial Intelligence in Healthcare | Australian Medical Association
3 Digital scribes | Therapeutic Goods Administration (TGA)
4 Cross S, Bell I, Nicholas J, Valentine L, Mangelsdorf S, Baker S, Titov N, Alvarez-Jimenez M, Use of AI in Mental Health Care: Community and Mental Health Professionals Survey | JMIR Mental Health (2024)