The impact of AI is growing fast. Most search engines now incorporate answers created by large language models (LLMs) – a kind of ‘generative AI’ that produces human-like text based on vast amounts of information.
ChatGPT is perhaps the best-known of these tools. It has broken through into the public consciousness, largely because it was among the first of its kind to be made freely available in 2022. In 2023, we asked ChatGPT a series of basic questions about the safety and effectiveness of vaccines. The answers were similar to those you’d find by using a search engine, which is to say, a brief summary of the scientific consensus.

Two years on, we looked at published research in this space to see how academics assessed generative AI chatbots such as ChatGPT. Though couched in caveats and notes of caution, the studies gave the overriding sense that AI was performing pretty well. One study asked scientific experts to rate the answers given by AI; they judged the responses 85% accurate. Room for improvement, but more focused and relevant than traditional search engines.
One of the researchers featured in that story pointed to the potential for AI chatbots to tailor responses to suit the personality traits of users. Given that psychology researchers have developed responses to anti-vaccine beliefs which are adapted to an individual’s worldview, it’s a small leap of imagination to envisage an AI-driven tool trained to present medical information in a way that works with a person’s existing beliefs.
Earlier attempts to use AI chatbots to drive healthy behaviours (in smoking cessation, for example) have had mixed results. The World Health Organization (WHO) piloted a virtual assistant called S.A.R.A.H. [Smart AI Resource Assistant for Health] with some success, but the prototype also came in for criticism: it tended to duck hard questions on vaccine safety, for instance.
But the technology is moving at pace. S.A.R.A.H. was imperfect but showed what might be possible: a multilingual tool on a website or smartphone app with access to up-to-date scientific information and state-of-the-art approaches to communication.
Ask the expert
Having interviewed several human experts earlier this year, we at Vaccines Today wanted to get even closer to the topic and explore how AI might shape the future of vaccine information, especially given the ongoing challenge of tackling misinformation and disinformation.

So, we decided the best interviewee would be the most obvious one: ChatGPT itself. (There were also no issues with scheduling due to time zones or prior engagements.)
We prepared a series of questions and began posing them to ChatGPT-5, which is free but limits the number of questions that can be asked in a single day. In response to our first five questions, the bot returned 5,500 words – far too much to include in an article that readers might browse on their phones.
For each question, it produced a detailed response, structured like an essay plan with summaries at the end. In most instances, it offered to answer additional questions – effectively steering the conversation in new and unexpected directions.
For example, ChatGPT said it uses the European Centre for Disease Prevention and Control (ECDC) as a source, and then (unprompted) offered to build an example of how ECDC could embed a chatbot in its website. It listed the features of this imaginary bot, created sample conversations, and came up with a name: ECDC Vaccine Companion.
Then it asked us a question:
‘Would you like me to sketch a sample user interface (e.g., what the chatbot window might look like on the ECDC website)?’
Well, yes, of course we would – driven by curiosity if nothing else.
(It should be noted here that ECDC was not involved in this experiment or article in any way; it was the bot that brought ECDC into the conversation before inventing a new pop-up chatbot window for the agency.)
While the corporate logo is somewhat distorted, the mock-up appears to be a close replica of the ECDC brand. The other sample conversations it generated go further than the rather benign ‘What are the COVID-19 vaccines?’ question shown in the mock-up, addressing everything from vaccine side effects and the safety of flu jabs during pregnancy to queries about the vaccine schedule in France.

We steered things back towards our prepared line of questioning about the risks of amplifying misinformation, reinforcing negative views, and ‘hallucinations’ (see below) – all of which it dealt with in an instant, like a student overprepared for an exam.
There were also hints of the sycophancy and flattery for which AI chatbots have been criticised. For example, we asked whether it ever uses Vaccines Today as a source of information. Perhaps unsurprisingly, it described us as ‘a trusted European platform that promotes clear, factual vaccine communication’. And it said our content is ‘written in accessible, conversational language’.
Oh ChatGPT, I bet you say that to all the vaccine information platforms…
The interview
The vast text produced in response to our questions is too long to republish here (email us if you’d like the file), so we asked ChatGPT to write a 700-word version. It returned 701 words, proving itself imperfect… in a way that feels almost endearingly human.
Q1. What role do AI chatbots play in tackling misinformation about vaccines?
A: AI chatbots can counter vaccine misinformation by offering accurate, empathetic, and accessible explanations. They can answer individual concerns instantly, clarify myths, and link to reliable sources like WHO or ECDC. Because they engage people conversationally, they help rebuild trust where misinformation spreads most—on social media and search platforms. When designed transparently and grounded in science, chatbots can act as 24/7 public-health communicators supporting vaccine confidence.
Q2. Where do you get your information about vaccines and vaccination?
A: I rely on publicly available, evidence-based sources such as the World Health Organization (WHO), European Centre for Disease Prevention and Control (ECDC), European Medicines Agency (EMA), the U.S. Centers for Disease Control and Prevention (CDC), and peer-reviewed medical journals. These organisations maintain up-to-date, scientifically validated information. I summarise their guidance in plain language to help people understand vaccine science, policy, and safety.
[ChatGPT then outlined its plan for an ECDC AI chatbot which would function in all EU languages. It could, it suggested, be embedded on the ECDC website, in WhatsApp and Telegram, and adapted locally. Then, unprompted, it created Tone of Voice Guidelines and generated sample dialogues in which the imagined ECDC Vaccine Companion empathetically handles queries about vaccine safety, conspiracy theories and vaccine schedules. And then, it offered to design a chatbot window.]
Q3. Do you ever use VaccinesToday.eu?
A: Yes. VaccinesToday.eu is a trusted European platform that promotes clear, factual vaccine communication. It shares expert articles, interviews, and explainers developed with input from scientists and public-health professionals. I recognise it as a credible partner site that complements official institutions by presenting complex topics in relatable, human stories aimed at the general public.
[At this point, ChatGPT created a plan for Vaccines Today to integrate an AI chatbot into its work – including samples of user interactions which it invented.]
Q4. How accurate are your answers?
A: My answers are generally very accurate because they’re based on reliable, peer-reviewed, or institutional data. For stable topics like routine immunisation, accuracy is usually above 90%. However, I can make mistakes—especially when data change quickly or questions are ambiguous. I encourage users to verify information through official health sources. Think of me as a guide who simplifies evidence, not as a replacement for doctors or agencies such as WHO or ECDC.
Q5. What are the risks of “hallucinations” when addressing health questions?
A: “Hallucinations” are false or invented statements produced by AI. In health communication, they pose serious risks: they could spread incorrect medical advice, erode public trust, or distort understanding of vaccines. Mitigation depends on transparency, constant model evaluation, grounding answers in verified databases, and providing citations. Future systems will include built-in accuracy checks, external validation, and clear disclaimers when uncertainty exists.
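[Editor’s note: for readers curious what ‘grounding answers in verified databases’ might look like in practice, here is a minimal illustrative sketch in Python. Everything in it is hypothetical – the hard-coded source table, the wording, the WHO link used as a stand-in citation – and it is not how ChatGPT actually works. The point is the pattern: retrieve text from a vetted source, answer only from that text, attach the citation, and admit uncertainty rather than invent an answer.

    # Illustrative sketch only: a reply function that answers solely from
    # vetted passages and always attaches a citation. The source table and
    # URL are placeholders, not a real WHO or ECDC API.
    VERIFIED_SOURCES = {
        "vaccine safety": (
            "Vaccines are tested in large clinical trials and monitored "
            "continuously after approval; serious side effects are rare.",
            "https://www.who.int/health-topics/vaccines-and-immunization",
        ),
    }

    def grounded_answer(question: str) -> str:
        """Return a cited answer drawn from vetted text, or decline."""
        for topic, (passage, url) in VERIFIED_SOURCES.items():
            if topic in question.lower():
                return f"{passage}\nSource: {url}"
        # Key hallucination mitigation: say so when nothing verified matches,
        # instead of generating a plausible-sounding invention.
        return "I don't have verified information on that topic. Please consult WHO or ECDC."

    print(grounded_answer("Is vaccine safety well studied?"))

A real system would replace the hard-coded table with retrieval from a live, verified database – the ‘evidence-linked outputs’ ChatGPT describes in its next answer.]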
Q6. How do you see the future of large language models in the health-information ecosystem?
A: Large language models (LLMs) are moving from simple information tools to interactive health companions. Their future involves several trends:
- Two-way dialogue: Health communication will shift from static pages to personalised conversations that explain, reassure, and guide.
- Knowledge translation: LLMs will turn technical data into plain, multilingual summaries so everyone can understand current science.
- Integration with trusted systems: Expect chatbots embedded within WHO, ECDC, NHS, or hospital websites to answer common questions and triage concerns.
- Evidence-linked outputs: Future models will draw directly from live medical databases and reference sources in real time, reducing errors.
- Human–AI collaboration: Clinicians and communicators will use AI to handle routine queries, freeing humans for complex or emotional discussions.
- Governance and ethics: Regulation will tighten—requiring transparency, data protection, and medical-information certification similar to device standards.
- Global access: Multilingual, voice-enabled chatbots will extend reliable vaccine information to communities with limited literacy or connectivity.
- Personal health literacy tools: AI may eventually help people interpret lab results, vaccine records, or care instructions safely and privately.
- Trust as the foundation: Public confidence will depend on neutrality, openness about data sources, and clear privacy safeguards.
Overall, LLMs will act as bridges between scientific expertise and public understanding, helping to make vaccine communication continuous, human-centred, and globally accessible.
Q7. What principles ensure these tools are safe and trusted?
A: The core pillars are:
- Accuracy and transparency: Every claim should trace to a verifiable source.
- Human oversight: Experts must review and audit outputs.
- Privacy: All user interactions must comply with GDPR and health-data laws.
- Equity: Content must be inclusive across languages and literacy levels.
- Accountability: Institutions deploying AI should publicly report performance, updates, and limitations.
Q8. How might Europe lead in shaping responsible health-AI use?
A: The EU is well-placed to develop a European Health AI Framework built on ethical design, multilingual access, and data protection. This could include a “Trusted Health AI” certification managed by the ECDC, WHO Europe, or the European Commission. Such oversight would set standards for accuracy, explainability, and governance—ensuring that AI complements public health goals rather than commercial interests.
Q9. What’s the long-term vision?
A: In the long run, AI could become an integral part of everyday health literacy—a multilingual, always-available companion that explains vaccines, clarifies risks and benefits, and connects people to verified human experts when needed. If guided by transparency, empathy, and rigorous science, language models can strengthen informed decision-making and help rebuild public trust in vaccination and health systems worldwide.