The design of accurate and robust analytical methodology is instrumental to developing orally inhaled and nasal drug products (OINDPs) and their appropriate control programmes. Analytical methods generate data that determine the selection of APIs and the screening and selection of suitable excipients and container closure systems that form the OINDP. They are a key part of the quality management system that underwrites the quality, safety and efficacy of clinical and commercial products. Analytical methods, including on-line and in-line systems, monitor and control the manufacturing process for the API and finished OINDP, and are used to perform the required end-product specification testing.
A Quality by Design (QbD) development programme uses a systematic approach that utilizes designed experiments and multivariate statistical tools to assemble a product and process design space and, where possible, link any defined critical parameters to the demonstrated product safety and efficacy.1,2
Appropriate measurement systems will be required to gain greater understanding of the product and process, and to establish this product and process design space. The greater understanding, enhanced knowledge and increased ability to control product and process more efficiently, will ensure consistent and high quality OINDPs, may help gain regulatory flexibility and will facilitate continual improvement.3 A comprehensive method development programme that generates the required analytical knowledge to support the quality management system and design space establishment is, therefore, integral to this QbD effort.
To gain full understanding of the capability of analytical methods, a life cycle approach to their development and validation is recommended where a core set of initial development and validation information is augmented throughout the method life cycle to demonstrate its continued fitness for purpose in all of the different environments and situations encountered in OINDP manufacture and control.
Analytical methods suitable for their intended purpose, and rugged and efficient in their normal operating environment, are a critical element of a quality management system. These methods are used to select and monitor critical process parameters during manufacture, as well as the critical quality attributes in the resulting pharmaceutical product. For these reasons, the advantages of applying QbD to the development, validation and life cycle management of an analytical method during the overall development programme should be considered. Identification of critical method parameters and demonstration of how changes in these influence the method outcome aid the establishment of the analytical method design space (the boundary values for the combination of method parameters inside which the method performs as intended). An understanding of the variability associated with an analytical method will also provide insight into the contribution this makes to the overall variability of the OINDP. Key to this approach is the process of distinguishing between analytical robustness testing and analytical ruggedness testing. This has been very well described by Borman.4 Analytical method robustness testing typically involves evaluating the influence of small changes in the operating conditions. Ruggedness testing identifies the degree of reproducibility of test results obtained by the analysis of the same sample under various normal test conditions such as using different laboratories, analysts and instruments.
The types of tests conducted for an OINDP vary widely from delivered dose uniformity and aerodynamic particle size distribution (APSD) measurements, to foreign particles and microbiological testing. These requirements are elaborated in regulatory and pharmacopoeial guidance.5–8 This paper discusses some general points applicable to most methods, but mainly focuses on issues unique to OINDP that arise because of the combination of the drug formulation and user-operated delivery device in one pharmaceutical product.
The development process for OINDPs integrates the usual proof of concept, safety and clinical efficacy development phases with medical device design control requirements.9,10 This can be illustrated in terms of a method development life cycle (Figure 1) where validation criteria are aligned with the typical product development phases, and elements of the design control and device development life cycle. The design control aspects will ideally link to critical development milestones, but this may not always be the case. The entry point in the design control life cycle is different depending on whether formulations use an established delivery platform design or a new design. More than one iteration through design control process steps may occur.
Figure 1
The analytical method development life cycle can be summarized as identifying the method objective, selecting sample preparation and quantification principles, and then developing and validating a suitable procedure based on these selected principles. This should include identifying any critical method parameters and assessing method variability under proposed use conditions.
The complex nature of some OINDP methods and the repetitiveness of some of the manual processes emphasize the need to consider apparatus design and ergonomics to minimize the impact of human factors on method variability. Using validated automated systems may be appropriate (e.g., for efficient waste firing and systematic dose collection). Where automated systems are used, a manual reference method should be provided, together with a demonstration that the two methods give comparable results.
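Where such a comparability demonstration is based on a statistical equivalence assessment, two one-sided tests (TOST) are one option. The following is a minimal sketch only, assuming paired delivered-dose results from the two methods; the data, equivalence margin and significance level are illustrative assumptions, not values from this article.

```python
# Minimal sketch: two one-sided tests (TOST) to assess whether an automated
# dose-collection method is equivalent to the manual reference method.
# The data, equivalence margin and alpha are illustrative assumptions.
import numpy as np
from scipy import stats

# Paired results (e.g., % label claim) for the same units tested by each method
manual    = np.array([98.2, 101.5, 99.7, 100.4, 97.9, 102.1, 99.3, 100.8])
automated = np.array([99.0, 101.1, 100.2, 100.9, 98.5, 101.6, 99.8, 101.2])

delta = 2.0          # assumed equivalence margin, % label claim
alpha = 0.05

d = automated - manual
n = d.size
d_bar, se = d.mean(), d.std(ddof=1) / np.sqrt(n)

# H0a: mean difference <= -delta   vs   H1a: mean difference > -delta
p_lower = 1.0 - stats.t.cdf((d_bar + delta) / se, df=n - 1)
# H0b: mean difference >= +delta   vs   H1b: mean difference < +delta
p_upper = stats.t.cdf((d_bar - delta) / se, df=n - 1)

equivalent = max(p_lower, p_upper) < alpha
print(f"mean difference = {d_bar:.2f}, p_lower = {p_lower:.4f}, "
      f"p_upper = {p_upper:.4f}, equivalent = {equivalent}")
```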
Close control of environmental factors (particularly temperature, humidity and electrostatics) during testing is particularly important for OINDPs.
As with all dosage form types, laboratory apparatus and instruments must be regularly inspected for wear and tear, and be appropriately calibrated and proactively maintained.
Identifying the method objective and target operating criteria. When the need to develop an analytical method to measure an OINDP attribute or process parameter is identified, target operating criteria embodying the intended use of the analytical method should be defined, taking into account:
Once finalized, the target operating criteria determine the choice of analytical principle, the quantification process and the validation approach.
Selection of the analytical principle, development and preliminary validation. The analytical principle covers the 'analytical unit operations' that will form the basis of the procedure.
Sample collection. For OINDPs, sample preparation, such as dose collection, should be performed in a way that produces data that are relevant to normal patient usage. Points to consider are:
Specific sample collection apparatus for use with OINDPs are detailed in the US and European Pharmacopoeias.7,8
Understanding how device handling affects performance is a fundamental goal of the extensive product characterization studies performed during development.7,11 Preliminary aspects of this work help define any dosing instructions described in the method; for example, the number of priming shots required for optimum dosing characteristics should be determined and stated.
Cause and effect analysis, risk assessment tools and designed experiments may be used to understand the impact of device handling and how it influences sample collection variability.
Sample preparation. The preparation of the analyte solution will involve wash-down of the apparatus and volumetric dilution. This operation is common to other product types and should be kept as simple as possible to minimize variability.
Quantification. Selection of the quantification principle will be based on the nature of the sample, and the required selectivity, precision, sensitivity and accuracy.
When considering analytical methods associated with different formulations and/or delivery platforms, some common attributes may exist; for example, HPLC is widely used to quantify active species and the associated impurities and degradation products. A general understanding of the capability of this quantification method when analytes are dissolved in solution is widely applicable. A large body of literature exists on the use of design of experiments (DoE) to develop robust HPLC separation of multiple analytes using multivariate optimization modelling software. Such experimentally derived resolution maps will underwrite the method selectivity and help define the analytical method design space.
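As an illustration of this kind of multivariate approach, the sketch below fits a simple factorial model to critical-pair resolution and flags the region of factor space where a resolution criterion is met. The factors, coded levels and measured resolution values are hypothetical placeholders, not data from this article; dedicated chromatography modelling software would normally be used for a real resolution map.

```python
# Minimal sketch: fitting a resolution model to a 2^3 factorial HPLC design and
# mapping where predicted resolution stays >= 2.0. Factor ranges and measured
# resolution values are placeholder assumptions for illustration only.
import numpy as np
from itertools import product

# Coded levels (-1/+1) for pH, % organic modifier and column temperature
design = np.array(list(product([-1, 1], repeat=3)), dtype=float)
# Critical-pair resolution measured at each run (hypothetical values)
rs = np.array([2.6, 2.1, 2.4, 1.8, 2.9, 2.3, 2.7, 2.0])

def model_matrix(x):
    """Intercept, main effects and two-factor interactions."""
    ph, org, temp = x.T
    return np.column_stack([np.ones(len(x)), ph, org, temp,
                            ph * org, ph * temp, org * temp])

coef, *_ = np.linalg.lstsq(model_matrix(design), rs, rcond=None)

# Predicted resolution over the coded pH / % organic plane (temperature at centre)
grid = np.array([[p, o, 0.0] for p in np.linspace(-1, 1, 21)
                             for o in np.linspace(-1, 1, 21)])
pred = model_matrix(grid) @ coef
inside = pred >= 2.0   # candidate region of the analytical method design space
print(f"{inside.mean():.0%} of the explored region keeps resolution >= 2.0")
```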
Preliminary validation. After establishing sample preparation and quantification principles, the method should be validated according to accepted practice. Linearity, accuracy, reproducibility, and the detection and quantification limits should also be assessed.12
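For the detection and quantification limits, ICH Q2(R1) allows estimation from the calibration line as 3.3σ/S and 10σ/S respectively, where σ is the residual standard deviation of the line and S its slope. A minimal sketch with hypothetical concentrations and peak areas:

```python
# Minimal sketch: linearity assessment and detection/quantification limits from
# a calibration line, using the ICH Q2(R1) relationships LOD = 3.3*sigma/S and
# LOQ = 10*sigma/S. Concentrations and peak areas are hypothetical.
import numpy as np
from scipy import stats

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # ug/mL
area = np.array([10.4, 20.9, 41.2, 83.0, 165.5, 331.8])   # peak area

fit = stats.linregress(conc, area)
residuals = area - (fit.intercept + fit.slope * conc)
sigma = residuals.std(ddof=2)      # residual standard deviation of the line

lod = 3.3 * sigma / fit.slope
loq = 10.0 * sigma / fit.slope
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.4f}")
print(f"LOD ~ {lod:.2f} ug/mL, LOQ ~ {loq:.2f} ug/mL")
```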
As the development programme progresses, additional assessments of precision, robustness and ruggedness will be undertaken. Specific experimental designs and measurement systems analysis tools may be used, and these experiments may also reveal sources of method variability.
Linking it all together. Critical method parameters are identified based on prior knowledge of the method and the product, as well as data generated during the extended validation programme. A formal risk assessment, which identifies the parameters most likely to affect method performance, will provide a useful baseline and can be revisited in the light of any changes.
In addition to the typical parameters associated with sample solution quantification, parameters linked to sample handling and preparation are often additional critical aspects for OINDPs. Furthermore, interactions among analytical parameters should be explored to understand all aspects of method performance and to establish a comprehensive design space. When multiple parameters are involved, DoE techniques may be used to assess their impact: how they influence the method outcome and how they interact to enhance or reduce each other's effects. One example is the combined effect of humidity and triboelectricity on the variability of cascade impaction methods. Another example is the application of DoE approaches for method robustness analysis. In the traditional one-factor-at-a-time (OFAT) approach, the effects of several critical factors on method performance (e.g., pH, temperature, mobile phase composition, shaking time and orientation, flow rates) are studied separately, whereas DoE techniques vary the factors simultaneously and can therefore detect interactions between them.
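As a minimal sketch of the humidity/electrostatics example, the code below estimates the two main effects and their interaction from a 2x2 factorial. The factor levels and %RSD responses are hypothetical, and in practice replication and statistical testing of the effects would be required.

```python
# Minimal sketch: estimating main effects and the interaction from a 2x2
# factorial, e.g., relative humidity and antistatic treatment of the collection
# apparatus, with cascade-impaction variability (%RSD) as the response.
# Levels and responses are hypothetical.
import numpy as np

# Coded levels: humidity (-1 = 35% RH, +1 = 65% RH),
#               antistatic (-1 = untreated, +1 = treated)
humidity   = np.array([-1,  1, -1,  1])
antistatic = np.array([-1, -1,  1,  1])
rsd        = np.array([7.8, 4.9, 5.1, 4.6])   # observed %RSD per run

# Effect = mean response at +1 minus mean response at -1
effect_humidity    = (rsd * humidity).mean() * 2
effect_antistatic  = (rsd * antistatic).mean() * 2
effect_interaction = (rsd * humidity * antistatic).mean() * 2

print(f"humidity effect:    {effect_humidity:+.2f} %RSD")
print(f"antistatic effect:  {effect_antistatic:+.2f} %RSD")
print(f"interaction effect: {effect_interaction:+.2f} %RSD")
```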
Figure 2
Generally, there are two advantages of DoE approaches: the factors are varied simultaneously, which requires fewer experiments than studying them one at a time, and interactions between factors can be detected and quantified.
A detailed method robustness case study applying a DoE setup has been published,14 which also describes a series of numerical and graphical analysis tools and techniques. More general introductions and theoretical aspects of statistical experimental designs can be found in the literature.13,15,16
Additional aids to identifying critical method parameters are standard risk assessment and risk management tools.17 Processes such as failure mode and effect analysis (FMEA),18,19 and cause and effect analysis may be employed. Once the critical parameters are identified, the effect of altering these parameters on the performance of the analytical method can be assessed using appropriate experimental design.
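A minimal sketch of the FMEA element, ranking candidate method failure modes by risk priority number (severity x occurrence x detectability), is shown below; the failure modes and 1-10 scores are illustrative assumptions only, not a recommended scoring scheme.

```python
# Minimal sketch: ranking candidate method failure modes by FMEA risk priority
# number (RPN = severity x occurrence x detectability). The failure modes and
# 1-10 scores are illustrative assumptions.
failure_modes = [
    # (failure mode,                  severity, occurrence, detectability)
    ("flow rate drift",                      7,          4,             3),
    ("leak in impactor stack",               8,          3,             5),
    ("incomplete actuator wash-down",        6,          5,             4),
    ("mobile phase pH error",                5,          2,             2),
]

ranked = sorted(((s * o * d, name) for name, s, o, d in failure_modes),
                reverse=True)
for rpn, name in ranked:
    print(f"RPN {rpn:4d}  {name}")
```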
Suitable controls for the critical method parameters, such as instrument performance checks, run qualification procedures and method system suitability criteria, should be established.
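Such controls are often implemented as simple pass/fail checks before a run is accepted. The sketch below illustrates the idea with assumed acceptance criteria that are not taken from any specific monograph or method.

```python
# Minimal sketch: a run-qualification check comparing system suitability
# results against predefined acceptance criteria before a run is accepted.
# The criteria values are illustrative assumptions.
import numpy as np

replicate_areas = np.array([1012, 1008, 1021, 1015, 1010])  # standard injections
tailing_factor  = 1.3
resolution      = 2.4

rsd = 100 * replicate_areas.std(ddof=1) / replicate_areas.mean()

checks = {
    "replicate %RSD <= 2.0": rsd <= 2.0,
    "tailing factor <= 2.0": tailing_factor <= 2.0,
    "resolution >= 2.0":     resolution >= 2.0,
}
for name, passed in checks.items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")
print("run accepted" if all(checks.values()) else "run rejected")
```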
Traditionally it has been acceptable to make changes to method parameters that lie within the validated range for that particular parameter, but there are further benefits to adopting a QbD approach for analytical methods. Sharing how the critical analytical method parameters have been identified, and demonstrating how they are controlled, will form part of a productive science-based regulatory interaction. Establishing a design space for analytical methods will allow the impact of any changes occurring during the life cycle of the particular method to be scientifically assessed. Any changes to noncritical parameters or to critical parameters within the established analytical method design space may not require regulatory notification. Once the capability of a particular OINDP analytical method is fully understood, the established critical parameters may be used to develop alternative and potentially nontraditional measurement systems with comparable capability, thereby facilitating continual improvement. This could take the form of controlling a specific OINDP quality attribute upstream during manufacture by transferring the capability requirement to a PAT control system.
A cause and effect diagram, also known as a Fishbone or Ishikawa diagram, can be used after a method walk-through to facilitate brainstorming of all the potential factors that may influence method performance criteria. A tool known as CNX can help classify all the factors in the Fishbone diagram. A decision is made on which factors should be controlled (C), which are potential noise factors (N) and which should be experimented on (X) to determine acceptable ranges. A prioritization matrix is used to score and prioritize the X-type factors. Robustness studies are performed for the highest risk parameters, for which acceptable ranges will need to be identified. DoE is used to assess the multidimensional combination and interaction effects of these factors. For the highest risk N factors, a ruggedness study is performed using a measurement systems analysis (MSA) design (see the sketch below). The risk assessment process is represented diagrammatically in Figure 3.
Figure 3
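As a minimal sketch of such an MSA-style ruggedness assessment, the code below estimates repeatability and analyst-to-analyst variance components from replicate analyses of one homogeneous sample by several analysts, using one-way ANOVA mean squares. The results are hypothetical, and a full gauge study would typically cross additional factors such as instrument and day.

```python
# Minimal sketch: measurement systems analysis for a high-risk noise factor
# (analyst), estimating repeatability and analyst-to-analyst variance
# components from replicate analyses of one homogeneous sample.
# The results (% label claim) are hypothetical.
import numpy as np

results = {  # analyst -> replicate results on the same sample
    "analyst A": [99.1, 99.5, 98.8, 99.3],
    "analyst B": [100.4, 100.1, 100.6, 99.9],
    "analyst C": [99.6, 99.2, 99.8, 99.4],
}

data = np.array(list(results.values()), dtype=float)   # shape (analysts, reps)
a, n = data.shape
grand = data.mean()

# One-way ANOVA mean squares
ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (a - 1)
ms_within  = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (a * (n - 1))

var_repeatability   = ms_within
var_reproducibility = max((ms_between - ms_within) / n, 0.0)

print(f"repeatability SD:      {np.sqrt(var_repeatability):.3f}")
print(f"analyst-to-analyst SD: {np.sqrt(var_reproducibility):.3f}")
```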
As an example, critical quality attributes for OINDPs typically include delivered dose uniformity and APSD, where the device is actuated into specific collection apparatus using airflow to entrain and collect the dose. Traditional validation work and prior knowledge have highlighted factors associated with the collection apparatus, and the way the device is prepared and actuated, as critical method parameters. Applying a QbD approach to an APSD method would involve performing a risk assessment. Examples of typical C-, N- and X-type factors are:
The risk assessment highlights the C, N and X factors to focus on during method development and validation to ensure appropriate controls are put in place. These factors should also be the focus during technology transfer activities associated with the analytical method, resulting in optimum use of resources.
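For context, APSD results from the cascade impaction method discussed above are commonly summarized as a mass median aerodynamic diameter (MMAD) and geometric standard deviation (GSD) derived from the stage depositions. The sketch below uses a simplified log-linear interpolation of the cumulative undersize distribution; the stage cut-off diameters and deposited masses are illustrative assumptions, and compendial data treatment may differ.

```python
# Minimal sketch: deriving MMAD and GSD from cascade impactor stage depositions
# by log-linear interpolation of the cumulative undersize distribution.
# Cut-off diameters and deposited masses are illustrative, not from a real run.
import numpy as np

cutoff_um = np.array([0.54, 0.94, 1.55, 2.30, 3.99, 6.12, 9.00])  # stage cut-offs
mass_ug   = np.array([  12,   38,   95,  120,   88,   41,   16])  # drug per stage
filter_ug = 6.0                                       # mass below the last cut-off

total = mass_ug.sum() + filter_ug
# Cumulative % of impactor mass finer than each stage cut-off diameter
below = np.concatenate(([0.0], np.cumsum(mass_ug[:-1])))
cum_undersize = 100 * (filter_ug + below) / total

log_d = np.log(cutoff_um)
mmad = np.exp(np.interp(50.0, cum_undersize, log_d))
d84  = np.exp(np.interp(84.1, cum_undersize, log_d))
d16  = np.exp(np.interp(15.9, cum_undersize, log_d))
gsd  = np.sqrt(d84 / d16)

print(f"MMAD ~ {mmad:.2f} um, GSD ~ {gsd:.2f}")
```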
Defining analytical methods as a set of criteria and attributes that must be met to control critical quality attributes of an OINDP or its critical manufacturing process parameters may also provide further flexibility. If these capabilities are filed as the core measurement process, control of critical product quality attributes and process parameters can then be achieved using a variety of validated analytical unit operations, provided they meet the predefined capabilities and attributes, and lie within the established analytical method design space described for the core measurement process.20
Understanding the capability of analytical methods is a key component of QbD, where risk-based multivariate method design and development, supported by the required core set of validation criteria, can be augmented with additional data derived from well-designed robustness experiments to gain greater understanding of method capability 'in the field' (outside the originating laboratory).
A greater appreciation of the contribution that method variability makes to the control of critical process parameters and critical quality attributes allows suitable measures to be taken to reduce and control measurement system variability during the product development life cycle, leading to improved product and process understanding and, consequently, a better definition of the product and process design space.
Additionally, developing a design space for the analytical method will result in more regulatory flexibility with respect to changes within the established analytical method design space, thereby promoting continual improvement.
Andy Rignall is an Associate Director in Pharmaceutical and Analytical R&D at AstraZeneca (UK).
David Christopher is Associate Director, Statistics, at Schering Plough (USA).
Andrew Crumpton is Manager, Inhalation Analytical Science & Strategy, at GlaxoSmithKline (UK).
Kevin Hawkins is a Regulatory and Early Development Manager at Teva Pharmaceuticals (UK).
Svetlana Lyapustina is a Senior Science Advisor in the Pharmaceutical Practice Group of Drinker Biddle & Reath LLP (WA, USA).
Holger Memmesheimer is Director of the Drug Delivery Department at Boehringer Ingelheim (Germany).
Adrian Parkinson is Manager of the Analytical R&D Department of 3M (UK).
Mary Ann Smith is an Associate Director of Regulatory Affairs at Nektar Therapeutics (CA, USA).
Bruce Wyka is a Fellow at Schering Plough (NJ, USA).
Sebastian Kaerger is a Principal Scientist at Novartis (UK).
1. ICH Q8 — Pharmaceutical Development, November 2005. www.ich.org
2. Chi-wan Chen, "Implementation of Quality-by-Design: ONDQA Initiatives", at the meeting of the Advisory Committee for Pharmaceutical Science, October 2006. www.fda.gov
3. M. Nasr, "Quality by Design (QbD) — A Modern System Approach to Pharmaceutical Development and Manufacturing — FDA Perspective", at FDA Quality Initiatives Workshop, February 2007. www.aaps-ispe.org
4. P. Borman et al., Pharmaceutical Technology, 31(10), 142–152 (2007).
5. FDA Draft Guidance — Metered Dose Inhaler and Dry Powder Inhaler Drug Products, October 1998. www.fda.gov
6. FDA Guidance for Industry — Nasal Spray and Inhalation Solution, Suspension, and Spray Drug Products — Chemistry, Manufacturing, and Controls Documentation, July 2002. www.fda.gov
7. Chapter 601, United States Pharmacopeia (2008). www.usp.org
8. Chapter 6.2, European Pharmacopoeia (2008). http://online.edqm.eu
9. FDA, "From Test Tube to Patient", January 2006. www.fda.gov
10. FDA Guidance for Industry — Design Controls for Medical Device Manufacturers, March 1997. www.fda.gov
11. Joint EMEA/Health Canada Guidance for Industry on Pharmaceutical Quality of Inhalation and Nasal Products (2006). www.hc-sc.gc.ca
12. ICH Q2(R1) — Validation of Analytical Procedures: Text and Methodology, November 2005. www.ich.org
13. D.C. Montgomery, Design and Analysis of Experiments (Wiley & Sons, New York, NY, USA, 1996).
14. J. Ermer and J.H. Miller, Method Validation in Pharmaceutical Analysis (Wiley-VCH, Weinheim, Germany, 2005).
15. T.P. Ryan, Statistical Methods for Quality Improvement (Wiley & Sons, New York, NY, USA, 2000).
16. W. Kleppmann, Taschenbuch Versuchsplanung — Produkte und Prozesse optimieren (Carl Hanser Verlag, München, Germany, 2006).
17. ICH Q9 — Quality Risk Management, November 2005. www.ich.org
18. D.H. Stamatis, Failure Mode and Effect Analysis: FMEA From Theory to Execution (ASQ Quality Press, Milwaukee, WI, USA, 2003).
19. R.E. McDermott, R.J. Mikulak and M.R. Beauregard, The Basics of FMEA (Productivity Press, New York, NY, USA, 1996).
20. Discussions with the PhRMA ATG Quality by Design for Analytical Methods Working Group (May, 2007).