Is Your Calibration Really a Good Straight Line?

Publication
Article
Pharmaceutical Technology, July 2, 2015
Volume 39
Issue 7
Pages: 52–55

Statistical procedures give statistical answers, not analytical judgement.

Linear calibration is usually considered easy, particularly for high-performance liquid chromatography (HPLC) methods. After all, the procedure and the rules have been unchanged for 20 years (1).

The International Conference on Harmonization (ICH) requirements (1) are simple and well known:

  • “A linear relationship should be evaluated across the range of the analytical procedure.

  • “Linearity should be evaluated by visual inspection of a plot of signals as a function of analyte concentration or content.

  • “If there is a linear relationship, test results should be evaluated by appropriate statistical methods, for example, by calculation of a regression line by the method of least squares over a minimum of five concentrations over the analytical range.

  • “Data from the regression line itself may be helpful to provide mathematical estimates of the degree of linearity:

      • The correlation coefficient

      • The y-intercept

      • The slope of the regression line and residual sum of squares

  • An analysis of the deviation of the actual data points from the regression line may also be helpful for evaluating linearity.”

The ICH requirements appear to be straightforward. Simply carry out the required experiments and put the numbers into a statistical package or even Excel. How is one to know, however, if it is a good straight line? This column discusses some of the issues, assumptions, and limitations of ordinary least squares (OLS) regression, illustrated using an extraordinary data set.

Issues, assumptions, and limitations of OLS regression
Issue one: Proving the calibration is a good straight line. One cannot prove the calibration is a good straight line. The problem comes from inferential statistics, in that the null hypothesis is that it is a good straight line; however, one cannot prove the null hypothesis. All one can do is show that the alternative hypothesis (i.e., that it is in fact a curve) is unlikely at a given degree of probability.
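One statistical route to this question is a lack-of-fit F-test, which requires genuine replicates at each concentration. The sketch below uses hypothetical triplicate data (all values are assumed, for illustration only); note that a large p-value fails to reject linearity, it does not prove it.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate calibration data (assumed values, for illustration)
x = np.repeat([0.5, 0.75, 1.0, 1.25, 1.5], 3)
y = np.array([1.00, 1.02, 0.98,
              1.51, 1.49, 1.50,
              2.01, 1.99, 2.00,
              2.52, 2.48, 2.50,
              3.01, 2.99, 3.00])

slope, intercept = np.polyfit(x, y, 1)

levels = np.unique(x)
# Pure-error sum of squares: scatter of replicates about their level means
ss_pe = sum(((y[x == c] - y[x == c].mean()) ** 2).sum() for c in levels)
# Lack-of-fit sum of squares: distance of the level means from the fitted line
ss_lof = sum((x == c).sum() * (y[x == c].mean() - (slope * c + intercept)) ** 2
             for c in levels)

df_lof, df_pe = len(levels) - 2, len(y) - len(levels)
F = (ss_lof / df_lof) / (ss_pe / df_pe)
p_value = stats.f.sf(F, df_lof, df_pe)
# A small p-value is evidence of curvature; a large one is NOT proof of linearity
```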

Issue two: The assumption that the bigger the value of the correlation coefficient r2, the better the straight line. This assumption is wrong. r2 measures the amount of variation accounted for by the regression model, not linearity. This truth is not always recognized by analytical chemists or by quality assurance (QA) personnel who may rely on r2.

Table I: Anscombe's Quartet data set (2).

More than 40 years ago, Anscombe wrote a paper illustrating this issue of slavishly following calculated statistical parameters, particularly r2 (2). He showed a table of four X,Y datasets (Table I), which when submitted to OLS regression gave the same calculated statistical parameters within rounding error (Table II).

 

Table II: Anscombe's Quartet; some calculated statistical parameters (2).

He then showed the plot of the four data sets (Figure 1). It is readily apparent that blind obedience to calculated parameters is not sensible.

 

Figure 1: Anscombe’s Quartet of ‘Linear’ Regression data set (GraphPad Prism v6.05). All figures are courtesy of the author.
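The quartet's identical summary statistics are easy to reproduce. The following sketch fits each dataset with NumPy, using the values as published in reference 2:

```python
import numpy as np

# Anscombe's Quartet, values as published in reference 2
x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = [
    (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    ([8] * 7 + [19] + [8] * 3,
     [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
]

results = []
for x, y in quartet:
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    results.append((slope, intercept, r2))
# All four fits agree to roughly three significant figures (slope ~0.50,
# intercept ~3.0, r2 ~0.67), yet only a plot distinguishes the straight line,
# the curve, and the two outlier-dominated sets
```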

Issue three: The OLS model assumes that all the error is in the Y (or response) variable. Issue three is another assumption that is not always valid or recognized. In HPLC, the response variable (Y) is usually peak area and the X variable is concentration. Are the concentrations really error free? 
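When the concentration errors are not negligible, errors-in-variables methods such as Deming regression are an option (not mentioned in ICH; shown here purely as an illustration). A minimal sketch of the closed-form Deming slope:

```python
import numpy as np

def deming_slope(x, y, lam=1.0):
    """Slope allowing for error in both variables (Deming regression).
    lam is the assumed ratio of y-error variance to x-error variance;
    as lam grows (negligible x error) the result approaches the OLS slope."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    return (syy - lam * sxx
            + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)

# On an exact line both estimators agree regardless of lam
slope = deming_slope([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```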

Issue four: The model residuals are normally distributed. This assumption is often overlooked, although ICH suggests plotting the residuals. A residual is simply the difference between the observed value and the value predicted by the regression model. Evaluating normality is difficult because the amount of calibration data is usually relatively small. However, it is easy to construct a normal probability plot using standard statistical packages, for example, Minitab 17. Ideally, all the data should lie closely scattered around the center line.

Perhaps surprisingly, the plot of the residuals from Anscombe’s Quartet shows only one point beyond the 95% confidence interval (see Figure 2). The wave-like data shape should give cause for suspicion, however, and prompt further investigation.

 

Figure 2: Residuals normal probability plot from Anscombe's Quartet data (Minitab 17).
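The residual calculation and a basic normality check are straightforward to script. A sketch using hypothetical single-replicate calibration data (all values assumed):

```python
import numpy as np
from scipy import stats

# Hypothetical single-replicate calibration (assumed values)
x = np.array([0.5, 0.75, 1.0, 1.25, 1.5])
y = np.array([1.01, 1.48, 2.02, 2.49, 3.00])

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Shapiro-Wilk: a small p-value is evidence against normally distributed
# residuals; with so few points the test has little power, so the plot matters
w_stat, p = stats.shapiro(residuals)

# The ordered residuals vs. theoretical normal quantiles are the coordinates
# behind a Minitab-style normal probability plot
(theoretical_q, ordered_res), _ = stats.probplot(residuals)
```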

Issue five: The variance is constant over the analytical range (the homoscedasticity requirement). For main-peak assays with large response values over a limited analytical range, for example 80% to 120% of target, this is a good assumption. Even over larger ranges, for example 20% to 150% of target, this may well be a reasonable assumption. For low concentrations of impurities requiring calibration from the reporting threshold to 120% of target, however, this is likely to be untrue, and alternative least-squares procedures should be used (e.g., weighting). It should be apparent that statistical procedures deliver statistical answers, not analytical judgement.
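One common alternative is 1/x² variance weighting, which NumPy's `polyfit` supports through its residual weights. A minimal sketch with hypothetical impurity data (assumed values):

```python
import numpy as np

# Hypothetical impurity calibration (assumed values): absolute scatter
# grows with concentration, so the variance is not constant
x = np.array([0.05, 0.1, 0.25, 0.5, 0.75, 1.0, 1.2])
y = np.array([0.052, 0.099, 0.252, 0.51, 0.74, 1.03, 1.17])

# OLS lets the high-concentration points dominate the fit
ols_slope, ols_intercept = np.polyfit(x, y, 1)

# np.polyfit weights multiply the residuals, so w = 1/x assumes a standard
# deviation proportional to concentration (i.e., 1/x^2 variance weighting),
# restoring influence to the low-concentration standards
wls_slope, wls_intercept = np.polyfit(x, y, 1, w=1.0 / x)
```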


Example
A user of the Minitab Network on LinkedIn posted the following example of a linearity data set (3). The raw data are shown in Table III. Triplicate standard preparations were made at six concentrations covering a range of 0.004 mg L-1 to 0.030 mg L-1, representing 20% to 150% of a target concentration. The OLS regression plot is shown in Figure 3.

Table III: Linearity data set with normalized response recently posted on the Minitab Network on LinkedIn.

 

Figure 3: OLS regression plot of Table III data with 99% Confidence contours of regression (GraphPad Prism v6.05).

 

It can be seen that r2 is 0.999618, and the intercept is indistinguishable from zero at 95% confidence. How about that for a straight line? However, as the data set poster remarked, how are the residuals distributed (see Figure 4)? Definitely not normally distributed, but why?

Figure 4: Normal probability plot of the residuals from OLS plot of data from Table III (Minitab 17).

 

First, let’s look at the residual plot, as suggested by ICH and shown in Figure 5. Clearly there is something strange about the concentration data at 0.016 mg L-1. Let us assume that we have evidence of a preparation error at this concentration; therefore, we exclude it from the analysis (Figure 6). Is everything okay now?

 

 

Figure 5: Residual plot of the mean values from the regression data in Table III (GraphPad Prism v6.05).

Figure 6: Normal probability plot of the residuals from OLS plot of data from Table III excluding the data at 0.016 mg L-1 (Minitab 17).

 

It appears this has solved the problem, but is there more to learn? Let’s look at the revised regression data and its plot (Figure 7).

 

Figure 7: OLS regression plot of Table III data with 99% confidence contours of regression excluding data at 0.016 mg L-1 (GraphPad Prism v6.05).

Well, the slope hasn’t changed much, and r2 has even increased to almost unity. The intercept, however, has changed: it is now more negative and significantly different from zero. The intercept is small, approximately 0.4% of the response at 100% of target, but the possibility of a negative peak area for a positive concentration should worry chromatographers.

Help is at hand, because there is a useful, but little-known, simple diagnostic plot not mentioned in ICH, known as a sensitivity plot (4). If the peak area response is divided by the concentration (a normalized response), theoretically you should get a constant value or, in practice, a straight line parallel to the X axis with a small random scatter due to measurement noise. The mean normalized response is the slope of the calibration line. The sensitivity plots for the data in Table III, with and without the values at 0.016 mg L-1, are shown in Figure 8.

Figure 8: Sensitivity plots of Table III data including and excluding data at 0.016 mg L-1 (GraphPad Prism v6.05).
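The sensitivity plot takes only a few lines to compute. The sketch below uses hypothetical data (assumed values; Table III itself is not reproduced here) with a deliberate dip at one concentration level:

```python
import numpy as np

# Hypothetical calibration data (assumed values) with a dip at one level
conc = np.array([0.004, 0.008, 0.012, 0.016, 0.020, 0.030])   # mg/L
area = np.array([0.41, 0.81, 1.20, 1.55, 1.98, 2.96])

# Normalized response: divide each response by its concentration. For a
# truly linear, zero-intercept calibration this is constant; any trend or
# step exposes problems that r2 and the regression plot can hide
sensitivity = area / conc

# The suspect level stands out as the lowest normalized response
suspect = conc[np.argmin(sensitivity)]
```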

 

Three important features are readily apparent in Figure 8:

  • The normalized responses are not linear.

  • Excluding the 0.016 mg L-1 data merely shifts the mean line (the slope of the calibration line) slightly downwards (approximately 0.5%).

  • The spread of values below 0.016 mg L-1 is much larger than that above, suggesting the variance may not be constant over the analytical range (issue five). The statistical term for this is heteroscedasticity (the violation of homoscedasticity): the size of the variance differs across values of an independent variable, in this instance, concentration.
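A simple way to probe this suspicion is to compare the scatter of the replicates level by level. A sketch with hypothetical triplicates (assumed values) in which the absolute scatter shrinks as concentration rises:

```python
import numpy as np
from scipy import stats

# Hypothetical triplicates (assumed values): scatter shrinks with concentration
groups = {
    0.004: [0.39, 0.42, 0.45],
    0.008: [0.79, 0.82, 0.84],
    0.020: [2.000, 2.005, 2.010],
    0.030: [2.998, 3.000, 3.004],
}

# Per-level standard deviations: a strong trend with concentration is the
# signature of heteroscedasticity
sds = {c: np.std(v, ddof=1) for c, v in groups.items()}

# Levene's test: a small p-value is evidence that the variances differ
# across levels (with only triplicates, the power is low)
stat, p = stats.levene(*groups.values())
```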

Conclusion
Statistical procedures give statistical answers, not analytical judgement. The statistical tools, in this instance OLS linear regression, require sound interpretation by the analytical practitioner, not slavish adherence to statistical significance of parameters, especially r2. Always look at the data and do sensitivity plots. Given the findings herein, it would be most interesting to investigate the laboratory records of the example data set to try to establish root cause(s) for the observed calibration data issues.

References
1. ICH, Q2(R1), Validation Of Analytical Procedures: Text And Methodology (ICH, Oct. 27, 1994, Nov. 6, 1996, incorporated November 2005).
2. F.J. Anscombe, American Statistician 27, 17-21 (1973).
3. Gamal M. Mohamed, data set posted on the Minitab Network on LinkedIn, Feb. 12, 2015, www.linkedin.com/grp/post/166220-5970966595894288384.
4.  J. Ermer and P. Nethercote (Eds), Method Validation in Pharmaceutical Analysis, 2nd Edition, Section 5.5.1.2.1, p. 153 (Wiley-VCH, 2015). 


Citation:
When referring to this article, please cite it as C. Burgess, "Is Your Calibration Really a Good Straight Line?," Pharmaceutical Technology 39 (7) 2015.
