Calculating the Reportable Result from Retest Data

Article | Pharmaceutical Technology, 02-02-2013, Volume 37, Issue 2

Two methods to evaluate retest data following out-of-specification results.

The importance of United States vs. Barr Laboratories (1) in the handling of out-of-specification (OOS) results in the pharmaceutical laboratory cannot be overestimated. It took 13 years after the judgement for FDA to issue its final guidance (2) on the matter in 2006. The European Compliance Academy (ECA) Analytical Quality Control Working Group (3) recently launched version 2 of its standard operating procedure (SOP) on OOS investigations (4), which harmonizes the FDA and European regulatory approaches.

Chris Burgess, PhD

In an earlier column, the question of retest sample size selection was discussed and proposals (5) were made. The article, however, did not address the vexed question of how to calculate the reportable result. Retesting is only performed if a root cause for the OOS cannot be established. In such cases, the FDA requirement is specific:

If no laboratory or calculation errors are identified in the first test, there is no scientific basis for invalidating initial OOS results in favor of passing retest results. All test results, both passing and suspect, should be reported and considered in batch release decisions (2).

The key word is considered. How can we take into account all test results in a rational and scientific manner as required by 21 CFR §211.160 (b)? There are two well-established ways of taking an OOS result into consideration in arriving at a reportable result.

Standard confidence interval approach

Firstly, we will look at isolating the OOS result using a standard confidence interval approach. Take as an example that the registered specification for an assay is 95.0 to 105.0% of the claim and that an OOS result of 94.7% was obtained in the laboratory. An investigation failed to identify a root cause. The quality unit authorized a retesting protocol of six retests, and the results obtained (%) were 98.0, 97.0, 96.1, 96.5, 97.4, and 96.2. In this example, the decision was made that six passing retest results out of seven would lead to the isolation of the OOS result, rather than the seven-out-of-eight example from the Barr case.

The question that remains, however, is how to demonstrate that the OOS result has been successfully isolated and how to calculate the reportable result. One thing we cannot do to arrive at a reportable result is take the average of all the results because:

In the context of additional testing performed during an OOS investigation, averaging the result(s) of the original test that prompted the investigation and additional retest or resample results obtained during the OOS investigation is not appropriate because it hides variability among the individual results (2).

There is also a good statistical reason for not calculating this average: the arithmetic mean is sensitive to outlying results, so including the OOS result would bias the reportable result. A better approach is to use a statistical method to calculate a robust mean and a robust standard deviation.

The idea behind the isolation method using a confidence interval is that, if the 95% confidence interval of the arithmetic mean of all the results lies above the lower specification limit (LSL), excluding the OOS result from the reportable result calculation is justified, although the OOS result must still be recorded in the laboratory record as required by regulation.

The lower 95% confidence limit (CL) of the mean is calculated for all results (n = 7) using the conventional equation:

lower 95% CL = x̄ − t(0.05, n−1) × s/√n

where x̄ is the mean of all seven values, t(0.05, n−1) is the value of the t distribution for 6 degrees of freedom at 95% confidence, s is the calculated sample standard deviation, and n is the number of results.

Hence, for our example, substituting the mean and standard deviation of all seven results (x̄ = 96.56, s = 1.06) gives a lower 95% confidence limit above the LSL of 95.0%, and the isolation of the initial OOS result is therefore achieved.

The reportable value is then the mean of the six retest results. This situation is shown in Figure 1.
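
As an illustration only, and not part of the original article, the calculation can be scripted in a few lines of Python rather than a spreadsheet. The data and specification limit are those of the example above, the function name lower_confidence_limit is arbitrary, and the t value used is the standard two-sided 95% value for 6 degrees of freedom (the one-sided value, 1.943, gives a slightly higher limit and the same conclusion).

import statistics
from math import sqrt

def lower_confidence_limit(results, t_value):
    # Lower confidence limit of the mean: x_bar - t * s / sqrt(n)
    n = len(results)
    x_bar = statistics.mean(results)
    s = statistics.stdev(results)  # sample standard deviation
    return x_bar - t_value * s / sqrt(n)

# All seven results: the original OOS value plus the six retests
results = [94.7, 98.0, 97.0, 96.1, 96.5, 97.4, 96.2]
LSL = 95.0

# t value for 6 degrees of freedom at 95% confidence; 2.447 is the
# two-sided value, 1.943 the one-sided value -- both give a limit
# above the LSL for this data set
lcl = lower_confidence_limit(results, t_value=2.447)
print(f"Lower 95% CL = {lcl:.2f}%  ({'isolation succeeds' if lcl > LSL else 'isolation fails'})")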

Figure 1: Illustration of a successful out-of-specification (OOS) "isolation" using the confidence interval approach. LSL is lower specification limit, USL is upper specification limit.

Note in Figure 1 and, in particular, Figure 2, the effect of bias on the mean caused by inclusion of the original OOS result and the broadening of the normal distribution. There is a statistical disadvantage to this method: if the OOS result lies far enough from the retest values, the "isolation" fails. For example, if the OOS result had been 92.7% rather than 94.7%, repeating the calculation would yield a lower 95% confidence limit of 94.9%. In this instance, the "isolation" would fail (see Figure 2).
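
Reusing the illustrative lower_confidence_limit sketch above with this more extreme OOS value shows the isolation failing. The exact limit obtained (about 94.7% with the two-sided t value) differs marginally from the 94.9% quoted, depending on rounding and the choice of t value, but the conclusion is the same.

# Same illustrative function, with 92.7% in place of the original 94.7%
failing = [92.7, 98.0, 97.0, 96.1, 96.5, 97.4, 96.2]
lcl = lower_confidence_limit(failing, t_value=2.447)
print(f"Lower 95% CL = {lcl:.2f}%  ({'isolation succeeds' if lcl > 95.0 else 'isolation fails'})")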

Figure 2: Illustration of out-of-specification (OOS) "isolation" failure using the confidence interval approach. LSL is lower specification limit, USL is upper specification limit.

H15 approach

A more statistically sound approach is to include the OOS result directly using a robust procedure based on the median and deviations from it, because medians are less influenced by outlying values.

The method briefly described here is based on Huber's H15 method (6). This method is more difficult to calculate than the standard confidence interval as it relies on an iterative procedure. However, it is easily accomplished using an Excel spreadsheet without the necessity for macros. The mathematical details are given in Ellison, Barwick, and Duguid Farrant (7) and are not covered here.

This robust statistical procedure has a major advantage in that the calculation includes the outlier as part of the data set and provides values for the robust mean and robust standard deviation. This calculation procedure meets the FDA requirement outlined earlier because active consideration is given to the OOS result. In addition, it calculates a 99% confidence interval rather than a 95% one, which provides greater assurance of compliance with the specification.

Briefly, the algorithm used is as follows (a code sketch of the procedure is given after the list):

1. Calculate the initial estimates of the robust mean (the median) and the robust standard deviation (derived from the deviations from the median).

2. Evaluate all the data points against the current robust estimates: points lying outside the calculated 99% confidence limits are replaced with the values at those limits, while all other values are left unchanged.

3. Update the estimates of the robust mean and the robust standard deviation from the modified data, applying to the standard deviation a correction factor, β, for the normal distribution.

4. Repeat steps 2 and 3 with the updated data until the values for the robust mean and robust standard deviation no longer change by more than, for example, 0.1% (i.e., the calculation converges).

5. Calculate the final 99% confidence interval as 2.96 Ŝi/β, where Ŝi is the robust standard deviation at the iteration, i, at which the calculation converges.
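
The following Python sketch is illustrative only and is not the author's spreadsheet implementation. It uses the commonly published Huber constants (a cut-off of 1.5 robust standard deviations, the 1.4826 scaling of the median absolute deviation, and the associated normal-distribution correction factor of 1.134) rather than the 99% limits and β factor described above, and the function name huber_h15 is simply a label, so its numerical output will differ slightly from the article's.

import statistics

def huber_h15(data, tol=0.001, max_iter=100):
    # Robust mean and standard deviation by iterative winsorisation
    # (Huber-type H15). The constants 1.4826, 1.5 and 1.134 are those
    # commonly quoted for this procedure, not necessarily those used
    # in the article's spreadsheet, which works with 99% limits and beta.
    x_hat = statistics.median(data)
    s_hat = 1.4826 * statistics.median(abs(x - x_hat) for x in data)
    for i in range(1, max_iter + 1):
        lo, hi = x_hat - 1.5 * s_hat, x_hat + 1.5 * s_hat
        # Pull any point outside the range back to the nearer limit
        pseudo = [min(max(x, lo), hi) for x in data]
        new_x = statistics.mean(pseudo)
        new_s = 1.134 * statistics.stdev(pseudo)
        if abs(new_x - x_hat) < tol and abs(new_s - s_hat) < tol:
            return new_x, new_s, i   # converged (the article uses a 0.1% criterion)
        x_hat, s_hat = new_x, new_s
    return x_hat, s_hat, max_iter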

The process is easier to understand graphically (Figure 3) than through the mathematics, and it is illustrated here using the same data set as the original example. The initial OOS result is shown as a red circle and the retest data are shown as green circles (initial data line). The LSL is also marked on the plot. The first iteration moves both the initial OOS result and one retest result to the calculated robust 99% confidence limits, shown as blue dots with a red outline. Note that five of the six retest results remain unchanged throughout all the iterative calculations. As the iterations proceed, the robust mean and robust standard deviation change until the values converge; in our example, 28 iterations were needed to obtain convergence. The largest change during the iterations is to the robust standard deviation. The final 99% confidence interval is well within the specification, and the robust mean value for the reportable result (based on all seven data points) is 96.64.
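
Applying the sketch above to the same seven results gives, under those assumed constants, a robust mean close to (but not identical with) the 96.64 reported here; the small difference arises from the different cut-off and correction factor.

# Same data set as the worked example
results = [94.7, 98.0, 97.0, 96.1, 96.5, 97.4, 96.2]
robust_mean, robust_sd, n_iter = huber_h15(results)
print(f"Robust mean = {robust_mean:.2f}, robust SD = {robust_sd:.2f} "
      f"(converged after {n_iter} iterations)")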

Figure 3: Illustration of the H15 approach for calculating the robust mean and robust standard deviation. LSL is lower specification limit.

Conclusion

Two established approaches for evaluating retest data following an OOS result have been described, together with a method for generating a reportable result. These two approaches are not the only ones available; companies are free to select and defend their own statistical approaches. Whichever approach is selected, the choice must be documented, justified, and described in an SOP.

References

1. United States vs. Barr Laboratories, Inc. Civil Action No. 92-1744, US District Court for the District of New Jersey: 812 F. Supp. 458. 1993 US Dist. Lexis 1932; Feb. 4, 1993, as amended Mar. 30, 1993.

2. FDA, Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical Production (Rockville, MD, Oct. 2006).

3. European Compliance Academy (ECA) Analytical Quality Control Working Group website, www.gmp-compliance.org/eca_aqc_aboutus.html, accessed Jan. 7, 2012.

4. C. Burgess and B. Renger, ECA Standard Operating Procedure 01, Laboratory Data Management; Out of Specification (OOS) Results, Version 2, Aug. 2012.

5. L.D. Torbeck, Pharm. Tech. 35 (12), 28, 54 (2011).

6. M. Thompson and P.J. Lowthian, Notes on Statistics and Data Quality for Analytical Chemists, Section 7.6: Huber's H15 Method, Imperial College Press, 2011, ISBN-10 1-84816-617, and references therein.

7. S.L.R. Ellison, V.J. Barwick, and T.J. Duguid Farrant, Practical Statistics for the Analytical Scientist, Section 5.3: Robust Statistics, Royal Society of Chemistry, 2009, ISBN 978-0-85404-131-2.

Chris Burgess, PhD, is an analytical scientist at Burgess Analytical Consultancy Limited, 'Rose Rae,' The Lendings, Startforth, Barnard Castle, Co Durham, DL12 9AB, UK; Tel: +44 1833 637 446; chris@burgessconsultancy.com; www.burgessconsultancy.com
