Regulating medical AI scribes
Automated scribes powered by artificial intelligence now routinely ‘listen in’ on your visit to the doctor.
These software scribes – potentially used by around 40 per cent of general practitioners in Australia (and growing) – are transforming medical practice.
They are also used beyond your GP clinic, by specialists and in hospital care to draft medical records and other documents, like discharge summaries and referral letters.
Dozens of companies are now producing AI scribe products, which are becoming ubiquitous in healthcare settings. Globally, the industry is worth hundreds of millions of dollars.
They are claimed to save doctors’ time and decrease burnout, although evidence for these claims is mixed.
Perhaps they can improve the accuracy and completeness of medical records, but they also carry risks, particularly to the privacy of highly sensitive information, patient autonomy, data security and bias.
So what if they actually worsened medical care? And who is checking? Regulators in Australia are behind the curve.
Beware the bias
AI scribes have developed very quickly and spread into medical practice like wildfire.
Many doctors love them, but it’s important to fact-check how beneficial these tools really are. Evidence of their success and accuracy is mixed.
Studies have shown that AI scribes routinely leave out important information, add incorrect information and sometimes simply make things up – something known as a ‘hallucination’.
One doctor reported that the system had included an entire neurological exam that simply hadn’t happened; another system recorded an oral contraceptive for a patient who was a man.
Of course, humans make mistakes in medical records, too. But automation bias – our tendency to accept and favour AI-generated answers – means that healthcare providers might increasingly place unwarranted trust in AI scribes.
Doctors have a duty to ensure they keep accurate records.
They can’t delegate this duty to computer software, as tempting as it may be for busy clinicians who are drowning in paperwork. Instead, they need to carefully interrogate the output of AI scribes.
And it’s not just automation bias that we need to worry about.
Where’s the humanity?
Like all large language models (LLMs), AI scribe products contain biases that can impact patients’ care in the real world.
For example, they may not accurately decode the speech of patients from diverse cultural backgrounds or those whose speech is impaired.
An aspect of AI scribes that both doctors and patients might welcome is that they support face-to-face interaction. Doctors can avoid typing on the computer throughout the consultation, relying instead on the scribe to capture important information.
We know that high-quality doctor-patient communication is valuable – but this may be the crux of the issue. It isn’t necessarily doctor-patient-computer communication that people value.
Commentators have suggested that something almost indefinable, something ‘human’, is lost when a computer becomes the ‘third party’ listening in on the doctor’s office.
And this is also true when the usual ‘chit-chat’ between doctor and patient – so valuable for building relationships and picking up matters of potential importance – is deliberately excluded from the generated record.
Consent is crucial
First and foremost, doctors should not be using AI scribes without a patient’s informed consent.
But what does this actually mean in practice? It’s the ‘informed’ bit that’s messy.
Ideally, patients should be told about the use of the scribe – and whether it is ‘listening in’ or actually recording. They should also be told how their information is used and stored.
For instance, some companies use de-identified patient data to train their AI models (for example, Microsoft’s Dragon Copilot).
Most have servers in Australia, but some, like ChartNote, Abridge and Upheal, process data in the US. Some don’t retain any patient data, while others might store transcripts and notes for weeks or months.
Patients don’t have to agree to the doctor’s use of an AI scribe. But it’s important to recognise that they might feel uncomfortable saying no.
The Australian Health Practitioner Regulation Agency (AHPRA) has also said that doctors can decline to see patients who don’t agree to using an AI scribe. This can put patients in an awkward position when seeking care, especially when they want to protect their information.
Consent is important because AI scribes collect medical information (and thus fall under health records and privacy laws).
They also act as surveillance devices, and using them without consent can incur criminal penalties in many states and territories – so the concerns are not limited to patients; they extend to clinicians too.
Regulatory oversight
It may surprise patients to learn that while the use of AI scribes in medical practice is increasing at a lightning-fast pace, these products have largely escaped the attention of our key medical device regulator.
In fact, as long as they only ‘scribe’ rather than help doctors make medical decisions, the Therapeutic Goods Administration excludes them from oversight.
This regulatory gap exposes clinicians to significant risk and leaves patients without adequate protection. Without proper regulation, scribes risk becoming a lawless ‘wild west’ of medical technology.
In contrast, the UK has recently moved to regulate AI scribes under medical device law, with NHS England mandating compliance with formal standards and registration with the Medicines and Healthcare products Regulatory Agency (MHRA).
This is a pathway that Australia could follow.
When it comes to AI scribes in Australian healthcare, self-regulation and the individual exercise of professional judgement are no longer tenable.
As AI scribes become embedded in clinical workflows, it is time to introduce clear, enforceable regulatory standards to ensure safety, accountability and public trust in digital healthcare.
This article was published by Pursuit.
Megan Prictor is a Research Fellow at the Melbourne Law School in the University of Melbourne. She researches in the fields of law and emerging health technologies, such as biobanking, genomic medicine, and data regulation.