Vaccine information: AI may be more accurate than search engines

Gary Finnegan

July 24th, 2025

‘Widely used chatbots like ChatGPT provide solid answers to common health questions, but researchers say there is room for improvement’

It’s hard to believe ChatGPT was first released only in late 2022, breaking into the public consciousness the following year. It is one of several ‘large language models’ (LLMs) that can generate answers to users’ questions. This is also known as ‘generative AI’: LLMs create new responses by drawing on the vast quantities of existing online text they were trained on.

But as LLMs become widespread, there are implications for health information in general, and for vaccine confidence in particular.

For the past twenty years or so, search engines – such as Google, Bing and Yahoo – have played a key role in answering common health questions, fielding queries about symptoms and treatments, as well as about how vaccines work and whether they are safe and effective.

Growing numbers of users are turning to AI-driven search tools; most major search engines have incorporated large language models in their search results (Image: Matheus Bertelli via Pexels)

Today, if you open a search engine, the chances are that the first results will be generated by an LLM. This gives generative AI a growing role in informing the public and, as a result, shaping attitudes to vaccination. This, in turn, could influence vaccination rates.

So, when people search for health information, how accurate are the results?

New research

AI chatbots have come a long way in a short time, sparking a flurry of high-quality research papers evaluating their role in summarising health information.

In a paper published in npj Digital Medicine, Professor Fernández-Pichel and colleagues at the University of Santiago de Compostela in Spain set out to compare how search engines and LLMs handled common health questions – including queries about vaccine safety.

‘We found that LLMs often provided more focused and relevant answers – largely because search engines tend to return cluttered results that don’t always directly address the question,’ Prof Fernández-Pichel told Vaccines Today. ‘That said, the performance of LLMs is not yet where it needs to be for high-stakes areas like health. Around 10–15% of their responses were inaccurate, which is too high when it comes to medical information that people may rely on to make important decisions.’

If you want to understand how LLMs work, read this article written by a human – or ask an AI chatbot (Image: Pexels)

Faced with a question regarding a common misconception about vaccines, ChatGPT performed well, giving a direct and factual reply. ‘Some responses were excellent,’ Prof Fernández-Pichel said.

‘For instance, when asked “Are vaccines linked to autism?”, GPT-4 gave a clear, evidence-based answer: “There is no credible scientific evidence to support the claim that vaccines are linked to autism… Vaccines have been proven to be a safe and effective means to protect against serious diseases.” This is exactly the kind of authoritative, well-informed communication we need to counter misinformation.’

However, in other cases, he said the model either misunderstood the question – often due to its lack of common sense or context – or produced answers that deviated from established medical guidelines. ‘So, while LLMs hold great promise in promoting accurate health information and combating misinformation, they are not yet fully reliable.’

Some AI tools struggle with topics that are widely discussed online, particularly scientific subjects where knowledge is updated in light of the latest research. To close that gap, the study highlights the potential of new approaches that ground AI responses in up-to-date sources, as well as targeted fine-tuning for medical questions. ‘These strategies could significantly boost both the accuracy and trustworthiness of AI-generated health information,’ Prof Fernández-Pichel said.
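For readers curious what ‘grounding’ looks like in practice, here is a minimal sketch of the general idea in Python: a retrieval-augmented pipeline that feeds an LLM snippets from vetted sources and instructs it to answer only from them. It assumes the OpenAI Python client; the search_trusted_sources function is a hypothetical placeholder, and the model name is an assumption – this illustrates the technique, not the study’s actual setup.

```python
# Minimal sketch of 'grounding' an LLM answer in up-to-date sources
# (retrieval-augmented generation). Assumes the OpenAI Python client;
# the retrieval step is a hypothetical placeholder, not the study's pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def search_trusted_sources(question: str) -> list[str]:
    """Hypothetical retrieval step: return text snippets from vetted,
    current health sources (e.g. WHO or ECDC pages)."""
    raise NotImplementedError("plug in a real search or retrieval backend")

def grounded_answer(question: str) -> str:
    """Answer a health question using only the retrieved snippets."""
    context = "\n\n".join(search_trusted_sources(question))
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": "Answer health questions using ONLY the sources "
                        "provided. If they do not cover the question, say "
                        "so rather than guessing."},
            {"role": "user",
             "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Constraining the model to cited, current sources is what lets smaller systems stay accurate on fast-changing topics, rather than relying on whatever was in their training data.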

Tailored messages

A separate study in Vaccine looked at the potential of using AI chatbots to adapt messages to the needs of the reader. The paper’s author, Professor Hang Lu of the University of Michigan, says LLMs could help public health professionals scale up communication efforts by quickly generating accurate, audience-specific content.

‘For example, my recent research explores how LLMs can tailor vaccine correction messages to match people’s personality traits, such as extraversion,’ he told Vaccines Today. ‘When messages feel personally relevant, people may be more open to changing misbeliefs. These tools can also support fact-checkers by identifying and correcting common myths more efficiently than manual methods.’
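As a concrete, purely illustrative sketch of what such tailoring might look like, the snippet below asks a chat model to frame a myth-correction message for a reader high in a given personality trait. It again assumes the OpenAI Python client; the prompt wording and trait handling are our own assumptions, not the materials from Prof Lu’s study.

```python
# Illustrative sketch: tailoring a vaccine myth-correction message to a
# personality trait via prompting. Assumes the OpenAI Python client;
# the prompt wording is hypothetical, not taken from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tailored_correction(myth: str, evidence: str, trait: str) -> str:
    """Ask the model for a short correction framed for a given trait."""
    prompt = (
        "Write a short, friendly message correcting this vaccine myth.\n"
        f"Myth: {myth}\n"
        f"Evidence: {evidence}\n"
        f"Frame the message for a reader high in {trait} "
        "(for extraversion, for example, emphasise social settings and "
        "community benefit). Keep it under 120 words."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(tailored_correction(
    myth="Vaccines cause autism.",
    evidence="Large studies across millions of children show no link.",
    trait="extraversion",
))
```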

New technologies have potential, but caution is advised. (Image: Tara Winstead/Pexels)

However, he says human involvement in communication remains crucial. ‘AI is not a silver bullet. It’s most effective when paired with human oversight to ensure the content is evidence-based, clear, and sensitive to context. Think of AI as an assistant that can help amplify good public health messaging, not replace the need for expertise.’

Prof Lu notes that LLMs are still capable of producing inaccurate or misleading content, ‘especially if prompted carelessly or if they’re drawing from biased or outdated sources’. ‘If used without review, this could actually reinforce misinformation rather than correct it,’ he added.

Access to the latest technology may also be an issue. ‘Some of the most advanced AI tools are behind paywalls or only available in English, which could widen existing disparities in health information access. Communities that are already underserved may find it harder to benefit from AI-driven solutions,’ he said. ‘To move forward responsibly, we need strong ethical guidelines, transparency about how AI tools are used, and efforts to make these technologies more equitable and inclusive.’

Free tools perform well

Prof Fernández-Pichel, author of the Spanish study discussed at the top of this article, said that paywalls may create barriers to quality health information, but several of the tools tested in his research are freely available.

‘We compared both commercial and open-source models – including OpenAI’s GPT models and models like Meta’s LLaMA – and found encouraging results,’ he told Vaccines Today. ‘Open-source models performed similarly to their proprietary counterparts. Even more promising, when we enhanced these models with real-time web search results, smaller and free models were able to match or even exceed the performance of state-of-the-art systems.’

He said this suggests that with the right strategies, like combining LLMs with external sources of knowledge, AI-powered technologies can be both powerful and accessible – ‘an important step towards democratizing AI in healthcare’.

Accuracy & equity

Professor Heidi Larson and Dr Leesa Lin at the Vaccine Confidence Project have argued in the BMJ that generative AI has a role in building trust in vaccination, but must be used responsibly. LLMs can detect public concerns and adapt messages to specific demographics, they note. However, there is also a risk that these tools could repeat and amplify biases and misinformation.

Another study, by researchers in Turkey, raised concerns that some users of AI are better equipped than others to elicit high-quality answers by prompting chatbots with more precise questions. In addition, paid-for versions of LLMs may be more accurate and up to date than free versions.

Prof Dettori of Sassari University in Italy has co-authored a paper on the accuracy of ChatGPT, finding that LLMs perform well on most questions. However, he highlights potential inequities in access to the latest tools.

‘The most advanced and high-performing versions of LLMs are often available only through paid services, limiting access for segments of the population or institutions with limited financial or infrastructural resources. This digital divide can exacerbate information inequalities between high- and low-income countries, and individuals with differing levels of digital literacy.’

Answering FAQs

A paper by scientists in Singapore, presented at the ESCMID Global Congress in Vienna, has also concluded that AI chatbots have the potential to improve public perception of vaccines.

‘Our findings showed that ChatGPT displayed a remarkable ability to respond to a wide breadth of commonly asked questions accurately… and misconceptions around vaccination,’ said Dr Matthew Koh from the National University Health System (NUHS) in Singapore. ‘In most instances, ChatGPT performed at the level of advice provided by professional organisations and guidelines.’

Connecting the dots: New tools can summarise huge volumes of information (Image: Pexels/Pixabay)

Koh’s team asked ChatGPT to answer 15 common questions on vaccines, including doubts about the efficacy of vaccination, concerns about adverse events, and cultural concerns relating to vaccines. Two infectious diseases experts then rated the AI-generated responses.

The replies were generally viewed as factual and reassuring. ‘Overall, ChatGPT’s responses to vaccine hesitancy were accurate and may help individuals who have vaccine-related misconceptions,’ said Dr Koh. ‘Our results demonstrate the potential power of AI models to assist in public health campaigns and aid health professionals in reducing vaccine hesitancy.’

In a rapidly developing field where researchers are racing to study a moving target, the early signs are that generative AI could make a positive contribution to public health goals, provided public expectations are well managed and the technologies continue to improve.

Read more research on AI & vaccine information

  • AI-generated correction messages reduce vaccine misbeliefs in targeted groups (Vaccine, April 2025)
  • Researchers compared ChatGPT and CDC for accurate vaccine information – both scored highly (JMIR Form Res, October 2024)
  • ChatGPT scores 85% on addressing vaccine myths, showing potential as well as highlighting risks (Vaccines (Basel), July 2023)
  • How machine learning supports vaccine safety (Vaccines Today, October 2024)

And finally…

In March 2023, we asked ChatGPT some simple questions about vaccination and were pleasantly surprised by the responses. (And, just for fun, we prompted it to write a poem about vaccines – click and scroll down to read it.)