What was the opportunity?
Claudia Lehmann, Boehringer Ingelheim (CL): We had already seen considerable efficiency and quality gains from the rule-based automation work we had been doing over the last five years or so, but we felt there was still an untapped opportunity, which Generative AI (GenAI) would help us address. We had always wanted to harness AI as part of our pharmacovigilance (PV) activities.
In time, we expect to exploit the technology in aggregate report writing, in analytics and in other tasks. But the important thing was to get started – to build experience and expertise in a relatively safe space, with the aim of broadening GenAI’s application over time. We identified case processing/case intake as good initial use cases – types of application that are easily controllable via human review.
Lucinda Smith, ArisGlobal (LS): The danger of delaying AI uptake until a more optimal moment is that you forfeit the early wins, such as operational efficiencies, as well as the chance to build knowledge. The longer you wait, the greater the risk of falling behind.
Scoping the deployment
CL: Because we were starting small, we deliberately defined this as a technical initiative rather than a “project” – giving us scope to control it and keep monitoring it. We wanted to proceed quickly and flexibly, with decision-making kept close to the topic, enabling continuous interaction.
Our computer system validation group guided us too – in how to factor in use of AI in our validation and testing plans, and in the risk assessment we would need to do (what we would have to document; and how we would assess mitigation and outline quality control processes). You can’t just jump into AI without understanding and planning for all of this.
We had to factor in our case processing vendor here too, because they would be performing the quality control and would need to understand where the information is coming from, and how AI-enabled activity would differ from existing, validated, rule-based automation. Taking the time to work through this also ensured the vendor’s team didn’t see the technology as a threat.
Smoothing issues and optimising adoption
CL: When we started testing, we encountered some issues arising from differences in the interfaces.
This is because the AI functionality was integrated into existing workflow automation. It was a useful reminder of the need to consider and re-validate not just individual elements but also the overall process when introducing or enhancing automation. Process qualification means testing that everything that goes into the AI engine comes back, for instance; that the fields are extracted into the right data points in the system; and that the whole process still works end to end. Initially that wasn’t the case but, with adjustments to the system, we made it work.
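The round-trip check described above – every field that goes into the AI engine comes back, mapped to the right data point – can be sketched in a few lines. This is purely an illustrative sketch, not Boehringer Ingelheim’s or ArisGlobal’s actual tooling; the field names and mapping are hypothetical.

```python
# Hypothetical process-qualification check: confirm that every input field sent
# to an AI extraction step reappears in the output record, mapped to the
# expected target data point. All field names here are illustrative.

EXPECTED_MAPPING = {
    "reporter_name": "case.reporter",       # source field -> target data point
    "event_term": "case.adverse_event",
    "drug_name": "case.suspect_drug",
}

def qualify_extraction(input_fields: dict, output_record: dict) -> list:
    """Return a list of qualification failures; an empty list means pass."""
    failures = []
    for source, target in EXPECTED_MAPPING.items():
        if source not in input_fields:
            continue  # field absent from this case; nothing to verify
        if target not in output_record:
            failures.append(f"{source}: missing target field {target}")
        elif output_record[target] != input_fields[source]:
            failures.append(f"{source}: value changed in transit")
    return failures
```

In practice such checks would run over a validated test set as part of the testing plan, with any non-empty failure list routed to human review rather than silently corrected.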
LS: Given the inevitable changes to the way people operate, to their mindsets, to processes, and to culture, it was also important to apply change management – to build trust, as well as skills.
CL: Because we had started our automation journey at least five years earlier, we had an advantage here. We already had experience of where and when reviews of data are needed, for example, and how to engender trust as we move away from manual processes and change the associated controls. We have adopted the “Four Eyes” principle (a second set of eyes), for instance, which has helped establish confidence. This ensures an optimum level of control over data as we defer increasingly to AI.
We need to be mindful of the risks, of course, so we are also providing a wealth of guidance for users on everything from “What is AI?”, “What is an algorithm?”, and “What is inference?”, to “What are hallucinations?”, and “What is the risk?” Although not everyone needs to become an AI developer, teams must understand safe use and the personal accountability that sits with each user. There’s a middle ground between blindly trusting AI and being so risk-averse that you reject the technology. When AI suggests something, we have to understand it and be able to rationalise it, using our logical minds. People need to learn to discern where the technology genuinely adds value.
Repositioning the role of human teams
CL: AI presents an opportunity to think about the way we work, and to suggest ideas if there is a process we don’t like. Even now there are practices we have carried forward from the times when they were manual and paper based. Introducing AI presents a chance to review whether there is scope to reinvent a process in an electronic/digital context. The end goal, though, is always good-quality data and a robust PV system.
It’s also in this context that we need to think about the evolution of PV roles. If we look across a whole process, where do our experts need to jump in; where does AI help; where and how does rule-based automation contribute; and where will targeted training help people make a positive difference? That could be in analysing exceptions, for instance, as technology takes over more of the transactional work.
The same scrutiny should apply at the case processing vendor’s side. The more that these companies can harness automation options including AI to streamline transactional work, the greater the scope for their own teams to add new value.
Key takeaways
CL: My advice to other pharma organisations considering a similar journey would be to seize the day, get started and build experience. If you start small, you have a chance to iron out issues before extending AI-based automation to larger work volumes or new use cases.
The wider opportunity is to capture rich information that might otherwise be missed from free-text patient narratives. Every patient who calls us and tells us their story adds to our understanding of a drug’s safety profile. Even a non-serious case might include something in the free text that points to serious event information, and we cannot ignore that. We owe it to the safety of our patients to distil and harness more of those critical insights.
References and notes
*An in-depth podcast exploring Boehringer Ingelheim’s deployment of AI in PV is available on Apple Podcasts: the Life Science GenAI Exchange.
ABOUT THE AUTHORS
Claudia Lehmann1, Lucinda Smith2
1. Head of Global Pharmacovigilance Operations at Boehringer Ingelheim
2. Chief Safety Product Officer, ArisGlobal

Claudia Lehmann is Head of Patient Safety and Pharmacovigilance Operations at Boehringer Ingelheim.

Lucinda Smith is Chief Safety Product Officer at ArisGlobal. She previously worked in frontline scientific and strategic Pharmacovigilance and Drug Safety roles at a major pharma brand for more than two decades.
