September / October 2024

Bridging AI and GMP Standards in the Future of Pharma


ELKE WIESER1, STEFAN PAULI2
1. VTU Österreich, Vienna, Austria
2.VTU Engineering Schweiz AG, Muttenz, Switzerland

ABSTRACT

This article examines the integration of Artificial Intelligence (AI) in pharmaceutical manufacturing, particularly with regard to Good Manufacturing Practice (GMP) standards. While AI offers significant potential, challenges remain in regulation and validation. Regulatory bodies such as the FDA and European authorities are working on incorporating AI into GxP processes, but specific guidelines are still in development. The article highlights recent advancements, including a proposed validation framework and examples of successful AI applications in GxP-compliant production. Overall, AI adoption in pharmaceutical manufacturing is advancing, but it should be approached cautiously to maintain product quality and patient safety.

CONCEPTS AND DEFINITION OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) generally refers to machines that partially replicate human intelligence by artificial means. A fitting definition by Elaine Rich from 1983 describes AI as “the study of how to make computers do things at which, at the moment, people are better.” AI encompasses a variety of technologies, including machine learning (ML) algorithms that learn independently from collected data. These range from simpler methods such as linear regression or random forests to more complex ones such as neural networks. The latter connect artificial neurons in a network modeled after the structure of the human brain, in which each neuron (nerve cell) is linked to a network of other neurons. Large neural networks with many layers are referred to as deep learning; they excel at processing images, text, and speech, but require large amounts of data and computational power. Deep learning is therefore the closest approximation to the brain, although even here much is simplified or adapted, since the brain cannot be replicated one-to-one: neurotransmitters such as dopamine are not simulated, and the artificial neurons are arranged in orderly layers rather than in the more “chaotic” network found in the brain.
The term Artificial Intelligence thus leads, via intermediate terms such as machine learning, to the subcategory of deep learning.
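
As a minimal illustration of these method families, the following sketch (assuming scikit-learn is available; the data are synthetic and purely illustrative) fits a linear regression, a random forest, and a small neural network to the same task:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Synthetic stand-in for process data: three inputs, one nonlinear target.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(500, 3))
    y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 500)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    models = {
        "linear regression": LinearRegression(),
        "random forest": RandomForestRegressor(n_estimators=100, random_state=0),
        "neural network": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(f"{name}: R^2 on held-out data = {model.score(X_test, y_test):.3f}")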

 

HISTORICAL OVERVIEW OF AI

The desire for intelligent behavior in machines is not new. An early example is the mechanical automaton of the Swiss watchmaker Jaquet-Droz, who in 1770 developed machines capable of writing letters – purely mechanically and without intelligence. The term Artificial Intelligence itself was introduced at a conference in 1956 (1). From the outset, AI was used as a marketing term to secure public funding for research into such technologies – with success, as significant investment and research into AI took place in the 1960s. However, the high expectations of that era were not met, leading to the so-called “AI winter” in 1987, when interest in AI waned significantly due to a lack of results. A decade later, an AI result gained international attention for the first time: the chess computer Deep Blue defeated the then reigning world chess champion Garry Kasparov. Further advances, such as speech processing with Siri in 2011 and the release of a beta version of GPT-3 in 2020, accelerated AI development and created the impression of rapid change in this field. The question now is how such rapidly changing AI can be combined with Good Manufacturing Practice (GMP), which has developed comparatively slowly in recent years, as shown below.

 

THE EMERGENCE OF GMP

The history of Good Manufacturing Practice (GMP) is marked by major drug scandals that called the safety of pharmaceuticals into question, and by the effort to improve the quality and safety of these products. The Thalidomide tragedy (2, 3) of the 1950s and 1960s led to severe birth defects and marked a turning point in pharmaceutical regulation. In response, stricter approval procedures and improved safety standards were introduced in the pharmaceutical industry, leading to the emergence of GMP.

 

GMP IN THE MODERN COMPUTER-AIDED PHARMACEUTICAL INDUSTRY
In 1992, the EU introduced GMP Annex 11, which specifically regulates computerized systems in pharmaceutical production. This was followed in 1997 by the introduction of 21 CFR Part 11 in the USA, which regulates electronic records and signatures in the pharmaceutical industry. The differences between EU GMP Annex 11 and 21 CFR Part 11 concern their scope, their legal binding force, and specific requirements. Companies operating in both the USA and the EU must comply with both sets of regulations to meet international GMP requirements, even though the two follow different regulatory approaches and specifications.

 

INTEGRATION OF AI INTO GMP-COMPLIANT PROCESSES

The integration of AI into a GMP-compliant environment presents particular challenges. First, the data on which AI is built must itself be compliant, for example with regard to data integrity. Second, the AI models themselves must meet strict regulatory requirements to ensure safe and effective production. This requires continuous monitoring and adaptation of AI systems to ensure that they operate within validated parameters.
At the same time, some of the necessary regulations have yet to be defined.

 

FDA AND AI

In recent years, the FDA has specifically addressed AI algorithms and published two reports on the topic. In a 2019 article on AI in “Software as a Medical Device” (4), the FDA examined its approach to AI with fixed algorithms compared to AI with self-learning or evolving algorithms. In 2023, the FDA recognized the growing importance of AI and cloud technologies for improving patient safety (5) and has since sought a constructive dialogue to develop best practices and guidelines for their effective and safe use.
Since 2014, the FDA has supported the use of advanced manufacturing technologies, including AI, through its Pharmaceutical Quality Initiative and Emerging Technology Program to make the pharmaceutical industry more efficient and flexible. The application of AI in drug manufacturing requires adjustments to the regulatory framework to enable the use of real data and feedback for continuous improvements.
The role of AI in optimizing production processes, intelligent monitoring, and trend analysis is particularly emphasized. The FDA discusses how existing regulations can be adapted to AI applications in drug manufacturing and underscores the importance of public feedback. The challenges of integrating AI into pharmaceutical manufacturing, such as the need for clarity regarding regulatory oversight and adapting AI models to real-time data, are also highlighted.
A new approach to the regulatory framework is required, one that considers the entire product lifecycle while promoting innovation without compromising patient safety. The FDA is exploring how advanced technologies and AI can improve the pharmaceutical industry by focusing on optimizing production processes and ensuring product quality while addressing the regulatory challenges and the need for continuous adjustments and improvements.

 

INDUSTRY APPROACHES FOR THE VALIDATION AND APPLICATION OF AI USING A MATURITY MODEL

The DACH working group of the International Society for Pharmaceutical Engineering (ISPE) has developed an industry-specific AI maturity model to establish guidelines for validating AI applications. This maturity model serves as a first step and foundation for developing further risk assessment and quality assurance measures. It relates to AI system maturity, which describes the extent to which an AI system can take control and evolve, considering user and regulatory requirements.
The model is based on control design, which represents the system’s ability to take over controls that ensure product quality and patient safety. It also considers the system’s autonomy, which describes the feasibility of automatic updates and thereby facilitates improvements. As shown in Figure 1, the model is defined in a two-dimensional matrix to measure AI maturity and identify the scope of validation activities.
The fundamental maturity model influences the risk assessment of AI applications. The model details the necessary validation activities for AI systems with varying control mechanisms and levels of autonomy. The goal is to identify clusters with similar validation requirements, defined by the matrix with the dimensions of autonomy and control design.
Stages 1-5 of control design, shown on the X-axis, represent the different design stages of control, as illustrated in Table 1.
Currently, there are no systems in pharmaceutical production at stages 4 or 5, but it is expected that applications at these levels will emerge with increasing industry experience.

Autonomy stages 0-5, shown on the Y-axis in Figure 1, represent the different stages of autonomy granted to AI, as illustrated in Table 2.
From the various autonomy and control stages, the clustered matrix results in the six AI validation levels (Figure 2). These describe, at a high level, the minimal control measures necessary to achieve regulatory compliance. Detailed quality assurance requirements should be defined individually on the basis of this categorization, depending on the intended use and risk profile of the AI system.

 

Systems at AI Validation Level I do not impact product quality, patient safety, or data integrity; hence, validation is not mandatory. However, the human factor in applications within this category should not be underestimated. If a system is designed to provide recommendations and operates parallel to the normal process over an extended period, safety precautions should be in place. These should ensure that the operator critically evaluates the results and does not use them to justify decisions without further scrutiny.

 

Systems at AI Validation Level II are AI applications that do not rely on machine learning (ML) and therefore do not require training. The results are purely code-based and deterministic, which allows them to be validated using a conventional approach for validating computer-controlled systems.

 

Systems at AI Validation Level III rely on mechanisms such as ML or deep learning. They require training with data to generate their outputs. Systems in this category operate in a locked state until retraining is conducted.
For validation, AI-specific measures must be undertaken, focusing on the data model and the data used, in addition to the conventional validation of computer-controlled systems. The integrity of the training data must be verified, ensuring that the data used for development is suitable for generating specific results and is neither biased nor corrupt.
AI validation documents should cover the following aspects (a short data-check sketch follows the list):

  • Risk Analysis for ETL Process Steps: An analysis of all steps in extracting, transforming, and loading (ETL) data to identify potential risks.
  • Data Transformation Assessments: An evaluation of data changes and their potential impact on data integrity.
  • Label Creation Procedures: Clear procedures for how labels are created and quality-assured.
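
How such data checks might be automated is shown in the following minimal sketch (assuming pandas is available; the file name, the column names “batch_id”, “temperature” and “label”, and the numeric limits are hypothetical):

    import pandas as pd

    def check_training_data(df: pd.DataFrame) -> list:
        """Return a list of findings; an empty list means no issue was detected."""
        findings = []
        if df.isna().any().any():
            findings.append("missing values present")
        if df.duplicated(subset="batch_id").any():
            findings.append("duplicate batch records")
        if not df["temperature"].between(20.0, 40.0).all():  # hypothetical valid range
            findings.append("temperature outside the expected range")
        label_shares = df["label"].value_counts(normalize=True)
        if label_shares.min() < 0.10:                         # crude imbalance flag
            findings.append("label distribution strongly imbalanced")
        return findings

    # Hypothetical ETL output loaded for review before training.
    df = pd.read_csv("training_data.csv")
    for finding in check_training_data(df):
        print("FINDING:", finding)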

Additionally, the quality of the model must be verified during both the development and operational phases. During development, the following must be ensured (a brief cross-validation sketch follows the list):

  • Algorithm Selection: Verification that the chosen algorithm is appropriate for the use case.
  • Technical Performance of the Model: Verification that the trained model delivers expected results based on input data.
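
The technical performance could, for instance, be checked with cross-validation, as in the minimal sketch below (assuming scikit-learn is available; the synthetic data and the random forest stand in for the quality-checked data and the chosen algorithm):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for the quality-checked training data.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(200, 4))
    y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)

    # Five-fold cross-validation of the chosen algorithm (here a random forest).
    scores = cross_val_score(RandomForestRegressor(random_state=0), X, y, cv=5)
    print(f"mean R^2 over 5 folds = {scores.mean():.3f} +/- {scores.std():.3f}")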

In the operational phase, these additional aspects must be considered and defined (a short monitoring sketch follows the list):

  • Quality Metrics for Model Monitoring: Establishing appropriate quality metrics to monitor model performance.
  • Conditions for Retraining: Defining the conditions required to initiate retraining based on model performance.
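
A minimal sketch of such an operational control is shown below (the mean absolute error metric and the threshold of 0.5 are hypothetical and would have to be defined during validation):

    import numpy as np

    RETRAIN_THRESHOLD_MAE = 0.5   # hypothetical limit fixed during validation

    def needs_retraining(y_true, y_pred) -> bool:
        """Compare current performance on recent data against the validated limit."""
        mae = float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))
        print(f"current MAE = {mae:.3f} (limit {RETRAIN_THRESHOLD_MAE})")
        return mae > RETRAIN_THRESHOLD_MAE

    # Usage (illustrative): evaluated periodically on recent, quality-checked data.
    # if needs_retraining(recent_targets, model.predict(recent_inputs)):
    #     initiate the defined retraining and re-validation procedure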

 

For retraining, it is desirable that the structure of the model’s input data remains consistent. Otherwise, a new evaluation of the methodological setup from the development phase might be required. To ensure that the system operates only within a validated range, input data must be monitored during operation. For systems in this category and beyond, transparency becomes an important issue, as the reasoning behind the outputs generated from different input data may not be immediately obvious.
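
One simple way to monitor this, sketched below under the assumption that the validated range can be summarized by the minimum and maximum of each training feature, is to compare every live input against the envelope of the training data:

    import numpy as np

    class InputRangeMonitor:
        """Flags inputs outside the envelope of the data used for validation."""

        def __init__(self, X_train):
            self.low = np.asarray(X_train).min(axis=0)
            self.high = np.asarray(X_train).max(axis=0)

        def in_validated_range(self, x) -> bool:
            x = np.asarray(x)
            return bool(np.all(x >= self.low) and np.all(x <= self.high))

    # Usage (illustrative): build the monitor from the training data and check
    # every new sample before its prediction is used.
    X_train = np.array([[20.0, 5.0], [30.0, 7.5], [40.0, 10.0]])
    monitor = InputRangeMonitor(X_train)
    print(monitor.in_validated_range([25.0, 6.0]))   # True: inside the envelope
    print(monitor.in_validated_range([55.0, 6.0]))   # False: outside, flag for review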

 

Systems at AI Validation Level IV exhibit greater autonomy since various aspects of the update process are automated, including the selection of new training data. As a result, there is a strong need to focus on controlling performance metrics that reflect model quality during operation. The quality outputs of the model should be monitored to ensure they remain within the validated range.

 

Systems at AI Validation Level V have greater process control. Therefore, stronger system controls must be in place during operation.
This can be achieved through regular repeat testing with defined test datasets. Furthermore, the self-regulating mechanism should be reviewed during the validation phase.
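
Such a repeat test could, for example, look as follows (the frozen test dataset and the R² acceptance limit of 0.90 are hypothetical and would be fixed during validation):

    import numpy as np
    from sklearn.metrics import r2_score

    ACCEPTANCE_R2 = 0.90   # hypothetical acceptance limit fixed during validation

    def repeat_test(model, X_frozen, y_frozen) -> bool:
        """Re-run the defined test dataset against the current model version."""
        score = r2_score(y_frozen, model.predict(X_frozen))
        print(f"repeat test R^2 = {score:.3f} (acceptance limit {ACCEPTANCE_R2})")
        return score >= ACCEPTANCE_R2

    # Usage (illustrative): run on a schedule, e.g. after every automatic update.
    # if not repeat_test(current_model, X_frozen, y_frozen):
    #     revert to the last accepted model version and escalate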

 

Systems at AI Validation Level VI are self-learning systems. It is expected that strategies for controlling continuous learning systems will become available in the near future. Currently, no validation concept exists to ensure regulatory compliance for systems in this category.
In summary, the framework describes a trade-off between the organizational effort required to control the AI system during operation – more pronounced at the lower levels of the framework – and the technical requirements and correspondingly expanded validation activities needed to secure an increasingly autonomous AI system.

 

PRACTICAL APPROACHES FOR AI WITH A LOW VALIDATION LEVEL

A comprehensive validation process unites various disciplines such as data science, machine learning, statistics, process engineering, and GMP compliance. The goal is to verify the suitability of an AI algorithm with a low degree of autonomy, in particular its ability to make accurate predictions of process parameters. The validation process developed for this purpose (Figure 3) consists of three stages and considers all aspects crucial to ensuring the model’s suitability for its intended purpose.
First, the quality of the data is assessed, as data quality has a significant impact on the quality of the resulting AI model. It is crucial at this stage to ensure that datasets are representative, sufficiently large, and free from bias to enable accurate and reliable predictions.
Next, the structure of the model is analyzed to ensure that it meets the requirements of the manufacturing process. An optimal model structure is essential for the algorithm’s ability to capture complex relationships between process parameters and make accurate predictions. A comprehensive analysis of these structural features ensures the robustness and effectiveness of the developed model.
Finally, various validation methods are applied to assess the model’s prediction quality. A careful analysis of metrics such as accuracy and precision is crucial for evaluating the performance of the AI algorithm. These methods enable a comprehensive assessment of prediction accuracy and ensure that the model can deliver practically relevant results.
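
As an illustration, the sketch below computes these two metrics for a hypothetical binary classification of batches as within or out of specification (all values are invented):

    from sklearn.metrics import accuracy_score, precision_score

    # Hypothetical reference outcomes and model predictions (1 = within
    # specification, 0 = out of specification); all values are invented.
    y_true = [1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

    print("accuracy :", accuracy_score(y_true, y_pred))    # share of correct predictions
    print("precision:", precision_score(y_true, y_pred))   # share of predicted 1s that are truly 1
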
This validation approach can serve as a guide for a variety of AI applications, particularly in the area of process monitoring and control. Careful validation not only ensures the reliability of AI models but also forms the basis for integrating these technologies into highly regulated industries such as pharmaceutical production.

 

CONCLUSIONS
It is undeniable that the time has come for the use of AI in pharmaceutical production.
The increased interest in the industry reflects the recognition of the potential that AI offers in terms of efficiency gains and quality improvement. Nevertheless, compliance with GMP rules remains of paramount importance.
The collaboration between industry and authorities such as the FDA underscores the shared interest in developing and implementing AI technologies that meet the highest standards. In this way, the path to GMP-compliant AI can be navigated cautiously and prudently, allowing pharmaceutical production to benefit from the advantages of this innovative technology without compromising regulatory compliance. It is the responsibility of the industry to ensure that AI applications in pharmaceutical production are not only effective but also compliant and safe.

 

Figure 1. Control design of applied control (X-axis) and autonomy granted to AI (Y-axis) (Source: (6)).

 

Figure 2. Fundamentals for AI validation, adapted from (6).

 

Figure 3. Our validation proposal for an AI algorithm with low autonomy (AI Validation Level III) (Source: VTU).

 

Table 1. Control Design with the 5 proposed stages (adapted from (6)).

 

Table 2. The autonomy stages with the 5 proposed levels (adapted from (6)).

 

REFERENCES AND NOTES

  1. Kline, R. Cybernetics, automata studies, and the Dartmouth conference on artificial intelligence. IEEE Annals of the History of Computing 33.4 (2010): 5–16.
  2. Vargesson, N. Thalidomide-induced teratogenesis: History and mechanisms. Birth Defects Research Part C: Embryo Today: Reviews 105.2 (2015): 140–156.
  3. Grünenthal. The history of the Thalidomide tragedy. https://www.thalidomide-tragedy.com/the-history-of-the-thalidomide-tragedy
  4. U.S. Department of Health and Human Services, U.S. Food and Drug Administration. Artificial Intelligence and Medical Products: How CBER, CDER, CDRH, and OCP are Working Together. March 15, 2024. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device
  5. Office of Pharmaceutical Quality, Center for Drug Evaluation and Research, U.S. Food and Drug Administration. Discussion Paper: Artificial Intelligence in Drug Manufacturing. 2023. https://www.fda.gov/media/165743/download
  6. Erdmann, N., Blumenthal, R., Baumann, I., Kaufmann, M. AI Maturity Model for GxP Application: A Foundation for AI Validation. Pharmaceutical Engineering, March/April 2022.

ABOUT THE AUTHOR

Elke Wieser has over 20 years of experience in the pharmaceutical industry. She has specialized in Bioengineering and Bioinformatics, with a strong focus on compliance. As Team Lead for Computer System Validation at VTU, she concentrates on the implementation of computer systems within the regulated GxP environment. Additionally, she is an expert on FDA and European compliance regulations, as well as data integrity issues. She plays a crucial role in VTU’s AI developments, ensuring their applicability in regulated areas.

Dr. Stefan Pauli is an expert in Data Science, Machine Learning, and AI. Since 2018, he has been working as a Senior Data Scientist / SME at VTU. After earning his PhD in Computational Science from ETH Zurich, he further honed his skills in algorithm development across various industries and start-ups. His diverse background enables a seamless integration of algorithmic solutions with industrial practice. At VTU, his focus is on implementing data analysis projects in chemical and pharmaceutical production.
