A fifth of family doctors (GPs) seem to have readily incorporated generative AI into their clinical practice, despite a lack of formal guidance or clear work policies on the use of these tools, suggest the findings of a UK-wide online snapshot survey.
Doctors and medical trainees need to be fully informed about the pros and cons of AI, especially because of the inherent risks of inaccuracies (‘hallucinations’), algorithmic biases, and the potential to compromise patient privacy, conclude the researchers.
Following the launch of ChatGPT at the end of 2022, interest in large language model-powered chatbots has soared, and attention has increasingly focused on the clinical potential of these tools, say the researchers.
To gauge current use of chatbots to assist with any aspect of clinical practice in the UK, in February 2024 the researchers distributed an online survey, with a predetermined sample size of 1000, to a randomly chosen sample of GPs registered with Doctors.net.uk, a clinician marketing service.
The doctors were asked whether they had ever used any of the following in any aspect of their clinical practice: ChatGPT; Bing AI; Google’s Bard; or ‘Other’. They were then asked what they used these tools for.
Some 1006 GPs completed the survey: just over half the responses came from men (531; 53%), and a similar proportion of respondents (544; 54%) were aged 46 or older.
One in five (205; 20%) respondents reported using generative AI tools in their clinical practice. Of these, more than 1 in 4 (47; 29%) reported using the tools to generate documentation after patient appointments, and a similar proportion (45; 28%) said they used them to suggest a differential diagnosis. One in four (40; 25%) said they used the tools to suggest treatment options.
The researchers acknowledge that the survey respondents may not be representative of all UK GPs, and that those who responded may have been particularly interested in AI, whether favourably or unfavourably, potentially introducing bias into the findings.
Further research is needed to find out more about how doctors are using generative AI and how best to implement these tools safely and securely into clinical practice, they add.
“These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases,” they say.
They point out: “[These tools] may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather.
“While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice.”
In conclusion, they comment: “The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarising information but also the risks in terms of hallucinations [plausible-sounding but incorrect or fabricated outputs], algorithmic biases, and the potential to compromise patient privacy.”
To view the full paper, see: “Generative artificial intelligence in primary care: an online survey of UK general practitioners”, published in BMJ Health & Care Informatics.