Ongoing Analytical Procedure Performance Verification—Stage 3 of USP <1220>

Article
Pharmaceutical Technology, March 2023
Volume 47
Issue 3
Pages: 40–44

Analytical procedure performance can be continually verified by risk-based monitoring of performance-related data.

Paulista/stock.adobe.com – For high-risk analytical procedures, control-chart monitoring of selected performance data can provide early warning of changes in performance.

The principles of quality by design (QbD) and life cycle management, as outlined in International Council for Harmonisation (ICH) Q8–Q12, can equally be applied to the development, validation, and ongoing verification of analytical procedures. The first publicly available drafts of ICH Q14 (1) and Q2(R2) (2) issued in March 2022 cover the lifecycle management of analytical procedures and provide guidance on the development and validation stages of the lifecycle. There is little mention, however, of the ongoing procedure performance verification stage. United States Pharmacopeia’s (USP’s) general chapter <1220>, entitled “Analytical Procedure Life Cycle” (3), which became official in May 2022, holistically pulls together the design, qualification, and ongoing performance verification of analytical procedures in a single document via a three-stage approach similar to FDA’s lifecycle-based process validation guidance (4).

The focus of USP <1220> is to ensure that the analytical procedure is fit for purpose across the entirety of the analytical procedure’s life cycle. The requirements that the analytical procedure has to meet can be defined using the analytical target profile (ATP), a concept explained in both USP <1220> and ICH Q14 as well as in the literature (5–8). The ATP is a prospective description of the desired purpose and performance of an analytical procedure that is used to measure a quality attribute; it defines the required quality of the reportable value produced by the procedure (3). In Stage 3 of USP <1220>, the ATP can also be used to determine the acceptance criteria needed to verify the performance of the analytical procedure during routine use (e.g., through a system suitability test [SST] or verification criteria applicable to procedure changes and transfers) and to ensure that the procedure continues to be fit for purpose. Performance verification can still take place even if an ATP has not previously been defined; in these situations, the verification acceptance criteria can be derived from validation or system suitability criteria. The principles in Stage 3 of USP <1220> can therefore be applied equally to procedures that have been formally designed/developed via an enhanced approach (where an ATP exists) and to procedures where a minimal approach was taken. For high-risk analytical procedures (see the later section on how risk is determined), control-chart monitoring of selected performance data, generated during routine use of the procedure, can provide early warning of changes in performance. These data could be direct measures of accuracy and precision (e.g., through the analysis of a quality control sample) or indirect measures (e.g., chromatographic resolution between peaks, signal-to-noise ratio, peak tailing factor). Where the risk is low, performance verification may simply involve monitoring of atypical results and/or system suitability failures.

In addition to ensuring that the analytical procedure is fit for purpose during routine use, ongoing verification can be used to evaluate the impact of any changes (9) to the procedure across the lifecycle and to ensure that the performance of the procedure is maintained at an acceptable level over the procedure’s lifetime. Ongoing verification can also act as a preventive measure by providing an early indication of potential performance issues or adverse trends and by aiding identification of required changes to the analytical procedure.

Determining performance monitoring via risk assessment

The performance of all analytical procedures used in the product control strategy should be assessed across the lifecycle of the product to ensure they remain fit for their intended purpose. This assessment should include routine monitoring (for prioritized procedures) as well as the evaluation of changes made to procedures. The extent of the routine monitoring required can be defined using a risk-based approach (10). It is important to consider which measures of analytical procedure performance or risk are readily available as inputs to a risk assessment. Two risk assessment approaches are described below.

Simple risk assessment. This approach may focus only on the type and complexity of the analytical procedure and its intended use. This approach would enable the establishment of a platform risk assessment per analytical procedure (technique) type. A complexity factor can be used to ensure that any unusually complex procedures are not underestimated. Analytical procedures that do not form a critical part of the control strategy can be treated as low risk by default.

Data-driven risk assessment. This approach may include an assessment of the current performance of the manufacturing process (which includes variability from both the product and the analytical procedure) through a process performance index (Ppk [11]), provided enough representative batches of the manufacturing process (and analytical procedure) are available. A more involved risk assessment may include an assessment of the capability (precision) of the specific analytical procedure relative to the product specification or ATP. The precision to tolerance ratio (P/TOL [12]) and Z-score (13) are metrics that can be calculated to aid such an assessment. Conformity and validity rates and/or procedure robustness and reproducibility can also be used. Where procedures have been shown to be sensitive to changes (e.g., analyst, equipment, environment, reagent lots, or small changes in procedure parameters), this sensitivity presents a higher risk. Figure 1 depicts the sources of variability associated with Ppk and with procedure capability metrics (such as P/TOL and Z-score).

Figure 1. Product and analytical procedure sources of variability. P/TOL is precision to tolerance ratio. (All figures are courtesy of the authors.)

Manufacturing process performance assessment through Ppk. Ppk (11) is a process performance index that can be used as a worst-case surrogate measure for analytical procedure precision; it reflects the combination of manufacturing process and analytical procedure variability over the long term. If a drug product has good overall process performance (Ppk) for a particular quality attribute (i.e., if the test results show little variation versus the specification), then it can be assumed that the analytical procedure precision is good (see Equation 1):

Ppk = min[(USL − µ)/(3σoverall), (µ − LSL)/(3σoverall)]   [Eq. 1]

where USL is the upper specification limit, LSL is the lower specification limit, µ is the process mean, and σoverall is the combined process and analytical procedure standard deviation.
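To make the calculation concrete, a minimal Python sketch follows (an illustration added here, not part of the published article; the batch results are simulated and the specification limits are invented):

```python
import numpy as np

def ppk(results, lsl, usl):
    """Process performance index (Equation 1) from long-term batch results.

    The overall standard deviation (ddof=1) combines manufacturing-process
    and analytical-procedure variability, as described in the text.
    """
    mu = np.mean(results)
    sigma_overall = np.std(results, ddof=1)
    return min((usl - mu) / (3 * sigma_overall),
               (mu - lsl) / (3 * sigma_overall))

# Hypothetical example: assay results (% label claim) for 20 batches,
# specification 95.0-105.0% label claim
rng = np.random.default_rng(1)
results = rng.normal(99.5, 0.8, size=20)  # simulated data for illustration
print(f"Ppk = {ppk(results, 95.0, 105.0):.2f}")
```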

Analytical procedure performance assessment through P/TOL or Z-score. P/TOL is described by Chatfield and Borman (12) and is defined as shown in Equation 2:

P/TOL = 6σa/(USL − LSL)   [Eq. 2]

where σa is the analytical procedure standard deviation. Low P/TOL values are desirable.

The Z-score (13) is similar to P/TOL; it is defined as the number of analytical procedure standard deviations between the process mean and the nearest specification limit (see Equation 3):

Z = |SL − µ|/σa   [Eq. 3]

where SL is the specification limit nearest to the process mean (µ). High Z-scores are desirable.
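The two procedure capability metrics can be computed in the same way (again a hedged sketch, assuming the reconstructed definitions above; the numerical inputs are invented for illustration):

```python
def p_tol(sigma_a, lsl, usl):
    """Precision to tolerance ratio (Equation 2): the fraction of the
    specification window consumed by analytical variability. Low is good."""
    return 6 * sigma_a / (usl - lsl)

def z_score(mu, sigma_a, lsl, usl):
    """Z-score (Equation 3): analytical standard deviations between the
    process mean and the nearest specification limit. High is good."""
    return min(abs(usl - mu), abs(mu - lsl)) / sigma_a

# Hypothetical example: intermediate precision of 0.4% on a mean of 99.5%,
# specification 95.0-105.0%
print(f"P/TOL = {p_tol(0.4, 95.0, 105.0):.2f}")            # 0.24
print(f"Z-score = {z_score(99.5, 0.4, 95.0, 105.0):.1f}")  # 11.2
```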

A data-driven risk assessment that considers both process and analytical procedure capability is shown in Table I.

Table I. Data-driven risk assessment for identification of high-risk analytical procedures. P/TOL is precision to tolerance ratio.

When performing a risk assessment, it is possible for a low-risk procedure to become a high-risk procedure at a subsequent periodic risk assessment, due to changes in the manufacturing process that shift the process mean or to changes to the specification limits, even though the analytical procedure precision (σa) remains constant. In such cases, it may not be possible or practical to improve the procedure performance to reduce the risk of out-of-specification data, but it may be appropriate to consider manufacturing process improvements or a review of the specification limit(s).

Development of risk-based performance-monitoring plans

Three types of procedure performance indicators categorized by Ermer et al. (10) are described below:

  • Conformity: the number of out-of-specification results with an analytical procedure root cause in a given time period. The number and types of analytical procedure errors provide information on the reliability of the analytical procedure.
  • Validity: the number of invalid test results due to failures of system suitability criteria (established during Stage 1 of the lifecycle approach) in a given time period.
  • Measurements collected during each analytical procedure test occasion and monitored using control charts: these could be critical analytical procedure attributes or parameters specified in the analytical procedure (with well-defined acceptance criteria or ranges, respectively), or attributes or parameters selected because of a potential link with procedure performance.

“Conformity” monitoring is usually sufficient for low-risk analytical procedures, whereas for high-risk procedures “validity” monitoring and monitoring of informative analytical procedure attributes and/or parameters can provide early warning of potential changes in procedure performance.
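As a simple illustration of how the conformity and validity indicators might be tallied per period (a hypothetical sketch; the outcome labels and log structure are assumptions, not from the chapter):

```python
from collections import Counter

# Hypothetical results log: (period, outcome), where outcome is "valid",
# "invalid_sst" (failed system suitability), or "oos_analytical"
# (out-of-specification with an analytical procedure root cause)
log = ([("2023-Q1", "valid")] * 58 + [("2023-Q1", "invalid_sst")] * 2
       + [("2023-Q2", "valid")] * 55 + [("2023-Q2", "oos_analytical")] * 1)

counts = Counter(log)
for period in ("2023-Q1", "2023-Q2"):
    total = sum(n for (p, _), n in counts.items() if p == period)
    print(f"{period}: conformity = {counts[(period, 'oos_analytical')]}"
          f"/{total} OOS, validity = {counts[(period, 'invalid_sst')]}"
          f"/{total} invalid")
```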

A useful mechanism to identify the most value-adding performance attributes or parameters to monitor for high-risk procedures is to perform a procedure performance cause-and-effect review (PPC&ER). This review provides the analytical team with an understanding of likely root causes of any high risks identified from the risk assessment. Potential improvement ideas and tailored procedure performance indicators can then be documented for further consideration (and potentially taken forward for inclusion in the monitoring plan).

Both USP <1220> and ICH Q14 describe the use of Ishikawa or fishbone diagrams (11) for risk identification. These are highly useful tools for collectively brainstorming potential sources of variability associated with a procedure. Gembas (11) (or “procedure walk-throughs,” as used in the implementation of lean principles) are also very important for this activity, and the analysts who are most familiar with the procedures should be included. It can also be useful to include independent, experienced analytical scientists who are not familiar with the particular procedure under assessment and/or technique subject matter experts. Augmented reality technology can be a useful way to orientate all meeting participants to the procedure under assessment and to demonstrate any particularly problematic steps. Once sufficient analytical procedure platform knowledge has been acquired, generic Ishikawa diagrams and process maps can prove useful starting points and can significantly accelerate this step. As it is not practical to monitor all potential sources of variability, it is important to prioritize the most value-adding procedure performance indicators for the monitoring plan.

A PPC&ER can be performed for new high-risk procedures that have been developed following the guidance in Stage 1 of USP <1220> or ICH Q14 (as part of procedure development), with a monitoring plan developed at the same time as the design of the analytical procedure control strategy. For established analytical procedures where USP’s Stage 1 was not followed/documented, a new PPC&ER can be a good first step towards the design of a monitoring plan. Once the procedure performance indicators for routine monitoring have been selected, a decision on the most appropriate method of monitoring can be taken. The use of control charting, in a similar fashion to statistical process control (SPC), should be considered to facilitate the identification of any drifts in the performance of high-risk analytical procedures over time. The authors recommend the adaptation of SPC tools (14) and have termed this approach “Statistical Analytical Procedure Performance Control” (SAPPC). Well-established SPC control chart types such as Shewhart (14), cumulative sum (Cusum) (14), and exponentially weighted moving average (EWMA) (14) can be applied to analytical procedures. Performance data compiled over time allow for the calculation of average parameters, for example, repeatability from replicate sample preparations or intermediate precision from control sample analysis. Such average parameters are very reliable estimates of long-term performance (10) and facilitate the identification of atypical results.
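A minimal sketch of what SAPPC charting could look like in practice is shown below (illustrative only: the simulated control-sample recoveries are invented, and the individuals-chart limits and EWMA recursion are standard SPC constructions [14] rather than specifics from this article):

```python
import numpy as np

def shewhart_limits(x):
    """Individuals (I) chart limits: sigma is estimated from the average
    moving range (divided by d2 = 1.128 for subgroups of two)."""
    x = np.asarray(x, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    center = x.mean()
    return center, center - 3 * sigma, center + 3 * sigma

def ewma(x, lam=0.2):
    """Exponentially weighted moving average: more sensitive than a
    Shewhart chart to small, sustained drifts in procedure performance."""
    z = np.empty(len(x))
    z[0] = x[0]
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    return z

# Simulated control-sample recoveries (%) from successive test occasions,
# with a small upward drift introduced from occasion 16 onward
rng = np.random.default_rng(7)
recoveries = np.concatenate([rng.normal(100.0, 0.5, 15),
                             rng.normal(100.8, 0.5, 10)])
center, lcl, ucl = shewhart_limits(recoveries)
outside = (recoveries < lcl) | (recoveries > ucl)
print(f"center = {center:.2f}, limits = ({lcl:.2f}, {ucl:.2f}), "
      f"points outside: {outside.sum()}")
print("EWMA (last 3):", np.round(ewma(recoveries)[-3:], 2))
```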

Continuous improvement during USP <1220> Stage 3

Analytical procedure performance monitoring will enable opportunities for proactive improvement of procedure performance to be identified over time. Improvements could be relatively small, such as updating training plans or rewording method documentation to improve the clarity and consistency of procedure operation. Improvements could also relate to instrument or reference standard management or, occasionally, to more fundamental procedure changes that require regulatory pre-approval (e.g., changes to critical analytical procedure parameters or analytical technology).

Opportunities for improvement can be considered as part of a periodic review (e.g., reviews may occur during scheduled revisits of the analytical procedure risk assessment and PPC&ER, or in response to a procedure performance trend alert during routine performance monitoring).

Case study 1: quantification of API polymorphic purity

Barry et al. (15) describe how ongoing verification of a solid-state nuclear magnetic resonance (ssNMR) procedure used to quantify the polymorphic purity of an API enabled an unusual effect, relating to a standard used in the procedure, to be identified and remediated promptly through a detailed PPC&ER. The level of an unwanted form (Form 3) in the standard was observed to increase gradually over time. Following a thorough PPC&ER, corrective actions were implemented that significantly reduced the variability of the procedure due to the standard, to within an acceptable range of ±1%. Following implementation of the remedial actions, it was agreed to monitor the amount of Form 3 as part of the routine procedure performance monitoring plan to ensure the integrity of the standard was maintained.

Case study 2: ion chromatography procedure performance monitoring

Ion chromatography instrument failures were observed for the same analytical procedure at two independent quality control laboratories after years of normal operation. These failures resulted in lengthy investigations, system downtime, and wasted resources. A PPC&ER determined the root cause to be an aging component within the ion chromatography system. As the aging of the instrument component led to a slow decline in procedure precision over time, calibration standard injection % RSD and % sample duplicate difference have now been added to the routine procedure performance monitoring plan for this procedure to provide an early detection mechanism. Figure 2 provides examples of how these procedure performance indicators are trended. Drift in performance can be identified and the aging component replaced before batch results are impacted, and before system failure occurs.
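For illustration, the two indicators could be derived per run and then trended on SAPPC charts as above (a hypothetical sketch; the function names and peak-area values are invented, not taken from the case study):

```python
import numpy as np

def pct_rsd(replicates):
    """Percent relative standard deviation of replicate injections."""
    x = np.asarray(replicates, dtype=float)
    return 100 * x.std(ddof=1) / x.mean()

def pct_duplicate_difference(a, b):
    """Percent difference between duplicate sample preparations."""
    return 100 * abs(a - b) / ((a + b) / 2)

# Hypothetical peak areas from six calibration-standard injections in one run
areas = [10210, 10185, 10240, 10120, 10265, 10190]
print(f"calibration %RSD = {pct_rsd(areas):.2f}")
print(f"duplicate %difference = {pct_duplicate_difference(99.1, 99.6):.2f}")
```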

Figure 2. Drift in precision detected over time for an ion chromatography procedure. RSD is relative standard deviation.

Conclusion

Ongoing analytical procedure performance verification can be used to ensure that procedures continue to be fit for their intended purpose across the lifecycle of their use. It can be applied equally to analytical procedures that have been developed via an enhanced approach (where an ATP has been created) and to established procedures where USP <1220> Stage 1 was not followed/documented. The extent of monitoring should be risk-based, and assessment of both manufacturing and analytical capability is recommended to determine which analytical procedures carry the greatest risk. A detailed PPC&ER employing tools such as fishbone diagrams and gembas can be used to identify the most value-adding performance attributes or parameters to monitor (control charting is recommended for high-risk procedures). Performance verification can provide an early indication of potential performance issues and highlight opportunities to improve procedures. It can also be used to assess any changes made to analytical procedures over the lifecycle.

References

1. ICH, Q14 Analytical Procedure Development, Step 2 version (2022).
2. ICH, Q2(R2) Validation of Analytical Procedures, Step 2 version (2022).
3. USP, USP General Chapter <1220>, Analytical Procedure Lifecycle. Pharmacop. Forum 2021, 46 (5).
4. FDA, Guidance for Industry, Process Validation: General Principles and Practices (CDER, CBER, 2011).
5. Barnett, K.L.; McGregor, P.L.; Martin, G.P.; et al. Analytical Target Profile: Structure and Application Throughout the Analytical Lifecycle. Pharmacop. Forum 2016, 42 (5).
6. Jackson, P.; Borman, P.; Campa, C.; et al. Using the Analytical Target Profile to Drive the Analytical Method Lifecycle. Anal. Chem. 2019, 91 (4), 2577–2585.
7. Rignall, A.; Borman, P.; Hanna-Brown, M.; et al. Analytical Procedure Lifecycle Management: Current Status and Opportunities. PharmTech. 2018, 42 (12), 18–23.
8. Borman, P.; Campa, C.; Delpierre, G.; et al. Selection of Analytical Technology and Development of Analytical Procedures Using the Analytical Target Profile. Anal. Chem. 2022, 94 (2), 559–570.
9. Chatfield, M.J.; Borman, P.J.; Damjanov, I. Evaluating Change During Pharmaceutical Product Development and Manufacture-Comparability and Equivalence. Qual. Reliab. Eng. Int. 2011, 27 (5), 629–640.
10. Ermer, J.; Aguiar, D.; Boden, A.; et al. Lifecycle Management in Pharmaceutical Analysis: How to Establish an Efficient and Relevant Continued Performance Monitoring Program. J. Pharm. Biomed. Anal. 2020, 181, 113051. DOI: 10.1016/j.jpba.2019.113051.
11. Ishikawa, K. What is Total Quality Control? The Japanese Way; Prentice-Hall, Englewood Cliffs, NJ, 1985.
12. Chatfield, M.J.; Borman, P.J. Acceptance Criteria for Method Equivalency Assessments. Anal. Chem. 2009, 81 (24), 9841–9848.
13. Vukovinsky, K.; Watson, T.; Ide, N.; et al. Statistical Tools to Aid in the Assessment of Critical Process Parameters. PharmTech. 2016, 40 (3), 34–44.
14. Suman, G.; Prajapati, D. Control Chart Applications in Healthcare: a Literature Review. Int. J. Metrol. Qual. Eng. 2018, 9 (5).
15. Barry, S.J.; Pham, T.N.; Borman, P.J.; Edwards, A.J.; Watson, S.A. A Risk-Based Statistical Investigation of the Quantification of Polymorphic Purity of a Pharmaceutical Candidate by Solid-State 19F NMR. Anal. Chim. Acta 2012, 712, 30–36.

About the authors

Phil Borman is vice-chair of the USP Measurement and Data Quality Expert Committee (USP M&DQ EC); Amanda Guiraldelli Mahr is scientific affairs manager, US Pharmacopeia; Jane Weitzel is chair of the USP M&DQ EC; Sarah Thompson is principal scientist—Analytical Project Expert, AstraZeneca; Joachim Ermer, Stephanie Sproule, and Jean-Marc Roussel are members of the USP M&DQ EC; Jaime Marach is a member of the USP Analytical Procedure Life Cycle Joint Subcommittee; and Horacio Pappa is senior director, US Pharmacopeia.

Citation

When referring to this article, please cite it as Borman, P.; Mahr, A.G.; Weitzel, J.; et al. Ongoing Analytical Procedure Performance Verification—Stage 3 of USP <1220>. Pharmaceutical Technology 2023, 47 (3), 40–44.
