In March 2023, we asked ChatGPT some simple questions about vaccination and were pleasantly surprised by the responses. (And, just for fun, we prompted it to write a poem about vaccines – click and scroll down to read it.)
Since then, researchers have been conducting rigorous assessments of how well large language models (LLMs) such as ChatGPT perform when faced with questions about vaccination.

A study led by the University of Sassari in Italy tested how ChatGPT handled the World Health Organization’s 11 ‘myths and misconceptions’ about vaccinations. The team put the same questions to GPT-3.5 (the free version of ChatGPT) and GPT-4 (the paid version), then asked two experts to rate the replies.
Overall, the responses were judged easy to understand and 85.4% accurate, although one of the questions was misinterpreted. The paid version of the tool was rated more accurate, clearer and more exhaustive.
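For readers curious about how such a comparison works in practice, the sketch below shows one possible way to pose the same questions to two model versions and collect the replies for expert review. It is not the Sassari team’s actual code; the question list, model names and output file are illustrative assumptions, and it assumes the OpenAI Python client with an API key set in the environment.

```python
# Minimal sketch (not the study's code): ask the same questions to two model
# versions and save the replies in a spreadsheet for later expert rating.
# Assumes the OpenAI Python client and an OPENAI_API_KEY environment variable;
# the question list and model names are illustrative placeholders.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "Are vaccines safe?",          # placeholder questions: the study used the
    "Do vaccines cause autism?",   # WHO's 11 'myths and misconceptions'
]
MODELS = ["gpt-3.5-turbo", "gpt-4"]  # free vs paid tiers, as in the comparison

with open("replies_for_expert_review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["model", "question", "reply"])
    for model in MODELS:
        for question in QUESTIONS:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": question}],
            )
            writer.writerow([model, question, resp.choices[0].message.content])
```

In a set-up like this, the spreadsheet of replies would then be handed to human experts, who rate each answer for accuracy, clarity and completeness.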
AI & health information: pros and cons
We asked the team behind the study how AI might be used to address misinformation – and whether there are any risks associated with the rapid adoption of AI-driven tools.
‘Artificial Intelligence—and Large Language Models (LLMs) in particular—can play a significant role today in disseminating accurate information and countering misinformation,’ said Marco Dettori, Associate Professor at the University of Sassari.
‘Their potential lies in their ability to quickly process and synthesise vast amounts of data, making complex content more accessible to diverse audiences. In the health field, for example, these tools can support institutional communication, facilitate access to up-to-date scientific knowledge, and help improve public understanding by adapting language and format to the specific characteristics of different user groups.’
Dr Giovanna Deiana, a medical doctor at the University Hospital of Sassari, says this technology may also have a role in addressing online misinformation. ‘LLMs can also be actively used to counter the spread of false information through automated fact-checking systems, generating accurate content in response to distorted or misleading claims, and identifying problematic narratives circulating on social media,’ she said. ‘In this sense, they represent valuable tools for strengthening the quality of online information, promoting health literacy, and supporting public communication campaigns.’

However, the rapid emergence of AI chatbots raises some concerns. ‘Firstly, LLMs are not infallible and can generate content that appears credible but is incorrect, incomplete, or misleading,’ Dr Deiana said. ‘This phenomenon is particularly problematic in the context of scientific or health-related information, where precision is essential. There is also a risk that, in the absence of verified sources or proper oversight, these tools may reflect or even amplify existing biases or misinformation.’
Furthermore, Prof Dettori highlights potential inequities in accessing the latest tools. ‘The most advanced and high-performing versions of LLMs are often available only through paid services, limiting access for segments of the population or institutions with limited financial or infrastructural resources. This digital divide can exacerbate information inequalities between high- and low-income countries, and between individuals with differing levels of digital literacy.’

Finally, there is a risk that LLMs may be used for purposes contrary to the public good – such as automated disinformation campaigns, manipulation of public opinion, and the systematic spread of misleading narratives.
‘Moreover, the growing trust many people place in these tools may lead to the perception that LLM outputs are neutral, objective, and fact-checked—even when responses are based on probabilistic inference and unverifiable sources,’ he said.
The impact of this fast-moving technology will depend on how it is designed, regulated, and distributed, as well as on the social, economic, and cultural contexts in which it is used.
Room for improvement
In a separate paper published in npj Digital Medicine, Professor Fernández-Pichel and colleagues at the University of Santiago de Compostela in Spain set out to compare how search engines and LLMs handled common health questions – including queries about vaccine safety.
‘We found that LLMs often provided more focused and relevant answers – largely because search engines tend to return cluttered results that don’t always directly address the question,’ Prof Fernández-Pichel told Vaccines Today.
‘That said, the performance of LLMs is not yet where it needs to be for high-stakes areas like health. Around 10–15% of their responses were inaccurate, which is too high when it comes to medical information that people may rely on to make important decisions.’
Read more on this study