As the artificial intelligence train barrels on with no signs of slowing down — some studies have predicted that the AI market will grow by more than 37% per year between now and 2030 — the World Health Organization (WHO) has issued an advisory calling for "safe and ethical AI for health."
The agency recommended caution when using "AI-generated large language model tools (LLMs) to protect and promote human well-being, human safety and autonomy, and preserve public health."
ChatGPT, Bard and BERT are currently some of the most popular LLMs.
In some cases, the chatbots have been shown to rival real physicians in terms of the quality of their responses to medical questions.
While the WHO acknowledges that there is "significant excitement" about the potential to use these chatbots for health-related needs, the organization underscores the need to weigh the risks carefully.
"This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation," the agency said.
The agency warned that adopting AI systems too quickly without thorough testing could result in "errors by health care workers" and could "cause harm to patients."
In its advisory, WHO warned that LLMs like ChatGPT could be trained on biased data, potentially "generating misleading or inaccurate information that could pose risks to health equity and inclusiveness."
There is also the risk that these AI models could generate incorrect responses to health questions while still coming across as confident and authoritative, the agency said.
"LLMs can be misused to generate and disseminate highly convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content," WHO stated.
Another concern is that LLMs might be trained on data without the consent of those who originally provided it — and that these systems may not have the proper protections in place for the sensitive data that patients enter when seeking advice.
"While committed to harnessing new technologies, including AI and digital health, to improve human health, WHO recommends that policy-makers ensure patient safety and protection while technology firms work to commercialize LLMs," the organization said.
Manny Krakaris, CEO of the San Francisco-based health technology company Augmedix, said he supports the WHO’s advisory.
"This is a quickly evolving topic and using caution is paramount to patient safety and privacy," he told Fox News Digital in an email.
Augmedix leverages LLMs, along with other technologies, to produce medical documentation and data solutions.
"When used with appropriate guardrails and human oversight for quality assurance, LLMs can bring a great deal of efficiency," Krakaris said. "For example, they can be used to provide summarizations and streamline large amounts of data quickly."
He did highlight some potential risks, however.
"While LLMs can be used as a supportive tool, doctors and patients cannot rely on LLMs as a standalone solution," Krakaris said.
"LLMs generate data that appear accurate and definitive but may be completely erroneous, as WHO noted in its advisory," he continued. "This can have catastrophic consequences, especially in health care."
When creating its ambient medical documentation services, Augmedix combines LLMs with automatic speech recognition (ASR), natural language processing (NLP) and structured data models to help ensure the output is accurate and relevant, Krakaris said.
Krakaris said he sees a lot of promise for the use of AI in health care, as long as these technologies are used with caution, properly tested and guided by human involvement.
"AI will never fully replace people, but when used with the proper parameters to ensure that quality of care is not compromised, it can create efficiencies, ultimately supporting some of the biggest issues that plague the health care industry today, including clinician shortages and burnout," he said.