Analytical Method Comparability in Registration and Post-Approval Stages: A Risk-Based Approach

Publication
Article
Pharmaceutical Technology, Volume 38, Issue 10 (October 2014)

A risk-based approach is recommended for analytical method comparability for HPLC assay and impurities methods.

To ensure the quality of drug products and patient safety, pharmaceutical companies need to manage and justify changes in chemistry, manufacturing, and controls (CMC). Analytical methods are integral parts of CMC, and common reasons to change analytical methods used in regulated environments include applying new analytical technologies and accommodating changes in chemical or formulation processes. When changes are made to analytical methods, pharmaceutical companies need to compare the new method with the existing method and demonstrate that the new method will provide performance equivalent or superior to that of the existing method.

There are two concepts related to comparing two analytical methods for the same test: analytical method comparability and analytical method equivalency. Chambers et al. considered that analytical method comparability refers to studies to evaluate similarities and differences in method performance characteristics between two analytical methods (i.e., accuracy, precision, specificity, detection limit, and quantitation limit), while analytical method equivalency is a subset of analytical method comparability and refers to studies to evaluate similarities between two analytical methods in regard to generating results for the same sample (1). In other words, analytical method equivalency evaluates whether the new method can generate equivalent results to those from the existing method. In a recent publication, Chatfield et al. suggested another way to differentiate these two concepts, where analytical method equivalency is restricted to a formal statistical study to evaluate similarities in method performance characteristics (2). In a United States Pharmacopeia (USP) stimuli paper, Hauck et al. also discussed other approaches to assessing equivalence such as “result equivalence” (3).

Unlike analytical method validation, where clear regulatory guidelines, such as International Conference on Harmonisation (ICH) Q2, are available, there is little regulatory guidance on how or when to perform analytical method comparability or equivalency. In a draft guidance published by FDA in 2003, Comparability Protocols-Chemistry, Manufacturing, and Controls Information, the agency stated that proper validation is required to demonstrate that the new analytical method will provide similar or better performance compared with the existing method. On the other hand, the agency also stated that whether an analytical method equivalency study is needed, and how to perform it, will depend on the extent of the proposed change, type of product, and type of test (e.g., chemical, biological) (4). Another relevant document on analytical method comparability is USP General Chapter <1010> "Analytical Data-Interpretation and Treatment," which was first published in USP 28 in 2005 (5).

In this general chapter, under the section Comparison of Analytical Methods, statistics-based approaches to comparing method precision and accuracy are discussed with some examples. Most recently, an FDA draft guidance on analytical method validation was issued in 2014 (6), in which the concept of analytical method comparability is discussed. However, this draft guidance recommends the use of formal statistical studies whenever a change is made to an analytical method that has stability-indicating properties. While this approach is valid and rigorous, the draft guidance does not consider less formal but suitable alternatives for method comparison assessments that leverage a quality- and risk-based assessment strategy.

Based on a 2003 Pharmaceutical Research and Manufacturers of America (PhRMA) workshop, Chambers et al. published a position paper on analytical method equivalency (1). The authors summarized industry practice at the time and provided perspectives on when and how analytical method equivalency should be performed. They noted that a wide range of practices was used by pharmaceutical companies for analytical method equivalency, from validation of the new method only, through side-by-side result comparison against pre-defined acceptance criteria, to formal statistical demonstration. Such wide variation is clearly not preferred by either regulatory agencies or the industry and may lead to delays in regulatory review.

The International Consortium for Innovation and Quality in Pharmaceutical Development (IQ) is a pharmaceutical industry association formed in 2010 with the mission of advancing science-based and scientifically driven standards and regulations for pharmaceutical and biotechnology products worldwide. In 2011, an IQ working group, Analytical Method Comparability, was formed to assess current practices by IQ member companies and to recommend best practices on analytical method comparability for analytical methods used in regulated environments. Harmonized approaches to analytical method comparability would not only improve industry compliance to ensure product quality and patient safety, but would also align with quality-by-design (QbD) principles and could reduce the regulatory filing burden of method change control, which in turn may encourage innovation in pharmaceutical analyses.

As a starting point, the working group purposely limited its initial scope to changes in high-performance liquid chromatography (HPLC) assay and impurities methods in the registration and post-approval stages. The rationale for this decision is that new developments in instrumentation and column technologies have led to the birth of ultra-high pressure liquid chromatography (UHPLC) (7). An impurities analysis that used to take 30 min or more by conventional HPLC can now be achieved within a few minutes by UHPLC, with better sensitivity and less solvent consumption. Though UHPLC has gained increasing popularity among pharmaceutical companies for new products, its implementation for legacy products has been slow, partially due to the heavy regulatory filing burden associated with method change control. CMC-related changes, including changes in analytical methods, are regulated most stringently in the registration and post-approval stages. While HPLC assay and impurities methods were the starting point, it is reasonable to anticipate that the general concepts developed in this position paper can be applied to other analytical methods.

The working group conducted a survey on analytical method comparability among IQ member companies, and the results are summarized in the next section. Based on the survey results and discussions among working group participants, the working group recommends a risk-based approach for analytical method comparability of HPLC assay and impurities methods in registration and post-approval stages. The views in this article are those of the authors and do not reflect the official policies of their respective companies.

Survey summary
A survey of industry practice on analytical method comparability for HPLC assay and impurities methods in registration and post-approval stages was carried out among IQ member companies. The survey contained 17 questions covering the following aspects of industry practice on analytical method comparability: background and internal governing documents, regulatory interactions, general practice, quality assurance (QA) and statistician involvement, and detailed procedures. Nine questions required a "Yes" or "No" answer, with the option to provide additional written details; written responses were required for the remaining eight questions. A total of 19 responses were received from 15 IQ member companies; three companies provided multiple responses from different divisions within the same company. The survey results were discussed during subsequent working group meetings. Though this is a relatively small survey and the percentages undoubtedly have a large margin of error, the authors believe that the results, described in the following section, provide a meaningful picture of current industry practice on analytical method comparability for HPLC assay and impurities methods.

Background and internal governing documents
With regard to the definitions of analytical method comparability and analytical method equivalency, 68% of participants thought that these terms represented two different concepts. From subsequent working group discussions, it was clear that most participants agreed with Chambers et al. that analytical method equivalency is the evaluation of whether equivalent results can be generated by two different analytical methods; analytical method comparability is a broader and less defined term (1). Meanwhile, some participants also indicated that analytical method equivalency should be restricted to a statistical demonstration of equivalence, as described by Chatfield et al. (2).

With regard to internal governing documents, 79% of participants indicated that there are no specific standard operating procedures (SOPs) or guidelines on analytical method comparability within their company. On the other hand, 53% of participants mentioned that there are some general discussions of analytical method comparability within their company's change control policies or SOPs. Some participants, for example, indicated that evaluating whether an analytical method equivalency study should be performed when a change is made to an analytical method is part of their risk assessment workflow.

Regulatory interactions
With regard to regulatory interactions, 68% of participants indicated that they have had successful regulatory reviews of analytical method comparability packages for a registration filing or post-approval change. Additional written comments showed that a complete analytical method comparability package typically includes method information, method validation, equivalency data, and the reason for the change. Meanwhile, 47% of participants indicated that they have received questions on analytical method comparability from health authorities. Additional written comments suggested that regulatory authorities expect analytical method equivalency to be demonstrated when changes are made to an analytical method; however, the feedback also indicated that there were no consistent requirements on what kind of data package should be provided. In general, more data are required when the method change is more significant.

In one example, when an HPLC assay method was replaced with a UHPLC assay method, method equivalency was demonstrated using a side-by-side comparison of both methods for three lots of material; this approach was accepted by FDA. In this example, the change was considered minor because the separation mechanism (i.e., reversed-phase HPLC) was not changed. In another example, however, when a normal-phase HPLC assay and impurities method was replaced by a reversed-phase HPLC method during registration stability testing, several questions were received from FDA requesting detailed explanations due to the significant difference in methodology, which impacted the impurity profile and the specifications. The agency requested detection limits and quantitation limits of all impurities by both methods and full disclosure of the impurity profiles by both methods for all lots tested. To help justify the change, the company also provided overlapping stability data for three to six stability time points to demonstrate that the reversed-phase HPLC method provided better characterization of the drug product.

General practices
With regard to general practices, all participants stated that in addition to validation, they would also evaluate whether an analytical method comparability or equivalency study is needed when a change is made to an analytical method, while 63% of participants specified that not all method changes will need a comparability or equivalency study. From additional written comments, it was clear that risk-based approaches are adopted by most companies when evaluating whether an analytical method comparability or equivalency study is needed for method changes. The risks are typically evaluated based on the types of the changes, and in general, the more significant the change, the more likely a comparability or equivalency study will be needed.

For example, as indicated by some participants, neither validation nor equivalency will be necessary for compendial HPLC method changes if the changes are within the ranges allowed in USP General Chapter <621> "Chromatography." For non-compendial HPLC methods, such as those included in a new drug application (NDA) filing, no equivalency study will be needed if the change is within the established method robustness ranges. On the other hand, a comparability or equivalency study is typically required if the changes are outside of these ranges, such as a change in liquid chromatography (LC) stationary-phase chemistry or a change in detection technique (e.g., from ultraviolet to mass spectrometry). Some companies also evaluated the risks based on the stage of development. One participant mentioned that equivalency studies are performed for any change to an HPLC method in the post-approval stages.

If an analytical method comparability or equivalency study is deemed necessary, different practices still exist among the participants. A majority of participants (74%) agreed that, in addition to validation of the new method per ICH guidelines, a simple side-by-side comparison of representative batch results generated by both methods against pre-defined acceptance criteria (some companies refer to this practice as cross-validation or as a bridging study) is usually sufficient. A more conservative practice (16%) is a demonstration of method equivalency through formal statistical analysis. One participant indicated that whether such a formal statistical analysis is necessary will also be evaluated based on the type of change as part of the risk assessment workflow. Another, less common, practice (5%) is to conduct a full validation of the new method; the two methods are considered equivalent if they are both validated.

QA and statistician involvement
With regard to QA and statistician involvement, 68% of participants indicated that QA is involved in analytical method comparability or equivalency studies, such as by approving study protocols, while only 26% indicated that statisticians might be involved. One participant indicated that statisticians are involved in the design of the equivalency study, setting acceptance criteria, data evaluation, and report writing, while other participants stated that statistician assistance is sought only when deemed necessary by analytical scientists.

Detailed procedures
With regard to detailed procedures, though the survey was on analytical method comparability for HPLC methods, responses from almost all participants focused only on analytical method equivalency. This suggests that analytical method equivalency (i.e., demonstration of no practically significant bias in mean results) is the primary goal when a change is made to an HPLC method. Most participants did not use formal statistical approaches, but for those who did, statistical approaches were used throughout the study to ensure a statistically meaningful outcome.

An analytical method equivalency study typically consists of acceptance criteria, study design (sample selection, sample size, and test plan), data evaluation, and documentation. With regard to acceptance criteria, most participants started with generic acceptance criteria derived from internal SOPs on analytical method validation or method transfer, such as 1.5-3.0% for assay and 10-30% relative for impurities (or an absolute difference, such as 0.03%, for levels close to quantitation limits). Such acceptance criteria could be loosened or tightened based on the specific method and product involved.
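
The decision logic behind such generic impurity criteria, a relative difference at normal levels but an absolute difference near the quantitation limit, can be sketched in a few lines of code. The following is a minimal illustration using the example limits above (a 20% relative difference and a 0.03% absolute difference); the 0.10% low-level cutoff is an assumption for illustration only.

```python
# Illustrative check of one impurity result against generic equivalency
# criteria: relative difference at normal levels, absolute difference for
# levels close to the quantitation limit. All thresholds are examples.

def impurity_difference_acceptable(existing: float, new: float,
                                   rel_limit: float = 0.20,
                                   abs_limit: float = 0.03,
                                   low_level_cutoff: float = 0.10) -> bool:
    """Compare one impurity result (area %) from each method."""
    diff = abs(new - existing)
    if max(existing, new) < low_level_cutoff:
        return diff <= abs_limit            # absolute criterion near the QL
    return diff / existing <= rel_limit     # relative criterion otherwise

# Example: 0.25% by the existing method vs. 0.28% by the new method
print(impurity_difference_acceptable(0.25, 0.28))  # True (12% relative difference)
```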

On the other hand, some participants indicated that a method-specific criterion will be set based on a statistical evaluation of the precision of the method and the acceptance criteria of the relevant product specifications. With regard to study design, representative batches are typically used, with spiking of impurities if necessary. Some participants also suggested that stability samples over a few time points could be used (cross-over testing), though this practice was discouraged by Chambers et al. to avoid raising regulatory compliance concerns (1).

A wide range of practices exists among participants in terms of sample size (number of batches) and test plans. Examples include six replicate preparations of one or two batches; three replicate preparations of three batches in three runs (with fresh reference standard preparations for each run); and one preparation for each of 20-30 batches. Some participants indicated that statistical approaches are used to determine the sample size (a statistical power calculation based on the acceptance criterion and method precision) and the test plan (evaluation and control of other factors not related to the change). Some of these statistical approaches were discussed by Borman et al. (8).
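
As an illustration of such a power calculation, the sketch below uses a common normal-approximation formula for the per-method sample size of a TOST equivalency study with an assumed true difference of zero; the precision and margin values are illustrative assumptions, and this is not necessarily the exact approach described in reference 8.

```python
# Approximate per-method sample size for a TOST equivalency study using
# the normal approximation with an assumed true difference of zero:
#   n ~= 2 * sigma^2 * (z_{1-alpha} + z_{1-beta/2})^2 / delta^2
# where sigma is the method precision (SD) and delta the acceptance criterion.
import math
from scipy.stats import norm

def tost_sample_size(sigma: float, delta: float,
                     alpha: float = 0.05, power: float = 0.80) -> int:
    beta = 1.0 - power
    z_a = norm.ppf(1.0 - alpha)
    z_b = norm.ppf(1.0 - beta / 2.0)  # beta/2: both one-sided tests must pass
    n = 2.0 * sigma**2 * (z_a + z_b)**2 / delta**2
    return math.ceil(n)

# Example: intermediate precision of 1.0% (SD) and a 1.5% acceptance criterion
print(tost_sample_size(sigma=1.0, delta=1.5))  # 8 preparations per method
```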

With regard to data evaluation, most participants simply compared the difference in mean results against pre-defined acceptance criteria, while some participants indicated that a statistical test would be used. Of the specific statistical tests, the two one-sided t-test (TOST) is most often used (9), while the two-sample t-test is also used by some participants. With regard to documentation, most participants use a pre-approved study protocol and provide a study report once the study is completed.
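
For readers unfamiliar with the TOST, the following is a minimal sketch of how it could be applied to assay results from the two methods; the equivalence margin and the data are illustrative assumptions.

```python
# Minimal TOST (two one-sided t-tests) sketch: both one-sided tests must
# reject for the two methods to be declared equivalent within +/- theta.
import numpy as np
from scipy import stats

def tost_p_value(existing, new, theta):
    """Return the TOST p-value; a small value supports equivalence."""
    existing, new = np.asarray(existing, float), np.asarray(new, float)
    n1, n2 = len(existing), len(new)
    diff = new.mean() - existing.mean()
    # Pooled standard error, assuming comparable precision for both methods
    sp2 = ((n1 - 1) * existing.var(ddof=1) +
           (n2 - 1) * new.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    p_lower = 1.0 - stats.t.cdf((diff + theta) / se, df)  # H0: diff <= -theta
    p_upper = stats.t.cdf((diff - theta) / se, df)        # H0: diff >= +theta
    return max(p_lower, p_upper)

existing = [99.8, 100.1, 99.6, 100.3, 99.9, 100.0]  # assay, % label claim
new = [100.0, 100.4, 99.9, 100.2, 100.1, 99.8]
print(tost_p_value(existing, new, theta=1.5))  # << 0.05: equivalent within +/- 1.5%
```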

In summary, it is fair to say that, compared with analytical method validation, analytical method comparability or equivalency is a more science-driven and less regulated field. The survey results show that risk assessments are commonly performed by pharmaceutical companies to evaluate the impact of changes, including whether an analytical method comparability study is necessary. If analytical method comparability is deemed necessary, different practices exist among pharmaceutical companies on how to perform the study. Though formal statistical approaches are used by some companies to provide a statistically meaningful declaration of whether the two methods are equivalent, a simple side-by-side comparison against pre-defined acceptance criteria remains the most popular approach for changes made to HPLC methods. On the other hand, the survey results also show that though regulatory authorities expect analytical method equivalency to be demonstrated when changes are made to an analytical method, there seem to be no consistent requirements on what kind of data package should be provided. In general, more data are required when the extent of the change is more significant.

Overall, the survey results suggest that risk-based approaches should be adopted for analytical method comparability. In other words, whether an analytical method comparability or equivalency study is needed and how it is performed should be decided based on the risks associated with the changes. In the FDA draft guidance on comparability protocols, the agency states that “The need and plan for providing product testing to compare the two (analytical) procedures could vary depending on the extent of the proposed change, type of product, and type of test (e.g., chemical, biological)” (4). This statement suggests that risk-based approaches are expected by FDA as well. In the next section, a risk-based approach for analytical method comparability for HPLC assay and impurities methods in registration and post-approval stages is proposed based on the survey results and the subsequent working group discussions.

A risk-based approach for analytical method comparability
When a change is made to an analytical method, a risk assessment should be performed to evaluate the potential impact. Though analytical method equivalency (i.e., no practically significant bias in mean results) is an important factor, other method performance characteristics, such as precision, should be evaluated as well. The impact on method performance depends upon the nature of the method as well as the type and extent of the change. With regard to HPLC, the mechanistic understanding of both the thermodynamic and kinetic aspects of HPLC separation processes has been well developed over the past few decades. Over the same period, improvements in LC instrumentation and column technologies have made HPLC one of the most reliable techniques in analytical laboratories around the world. This knowledge of HPLC separation and the reliability of modern HPLC instruments enable analytical scientists to better understand the potential impact on method performance of certain types of changes to HPLC methods. Therefore, changes to HPLC methods can be categorized based on the risks associated with the changes (e.g., low risk, medium risk, or high risk), with a commensurate amount of work to assess comparability, as depicted in Table I.

 

Table I: Risk categorization of changes to HPLC methods.

Change categorization | Examples | Validation | Comparability
Low risk | Small changes to the high-performance liquid chromatography (HPLC) operating parameters within accepted ranges (e.g., USP <621> or within established method robustness ranges). | No | No
Medium risk | Changes to the HPLC operating parameters outside of low-risk ranges, but not involving any change to the underlying principles of the existing analytical method (e.g., column chemistry). | Yes | Yes (side-by-side comparisons)
High risk | Changes to the underlying principles of the existing analytical method (e.g., liquid chromatography-ultraviolet [LC-UV] to liquid chromatography-mass spectrometry [LC-MS]). | Yes | Yes (formal statistical studies)

 

Low-risk changes are small changes to the HPLC operating parameters, such as changes allowed by USP General Chapter <621> "Chromatography" for a compendial HPLC method or changes within the robustness ranges established during analytical method validation (10-11). Two examples of low-risk changes are a column temperature within ±10 °C or a flow rate within ±50% of the target values of a compendial isocratic HPLC method. In low-risk situations, additional method validation and comparability studies should not be required, as the changes have already been assessed within the initial validation experiments and any impact is generally understood and predictable based upon mechanistic models.
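
Such pre-screens are simple enough to encode in a change-control checklist. The sketch below captures only the two isocratic examples just cited (column temperature within ±10 °C, flow rate within ±50% of target); USP <621> covers additional adjustable parameters and conditions not represented here.

```python
# Pre-screen for the two low-risk examples above (isocratic compendial
# method). This is illustrative only and does not restate USP <621>.

def is_low_risk_isocratic_change(target_temp_c: float, new_temp_c: float,
                                 target_flow: float, new_flow: float) -> bool:
    temp_ok = abs(new_temp_c - target_temp_c) <= 10.0           # within +/- 10 C
    flow_ok = abs(new_flow - target_flow) <= 0.5 * target_flow  # within +/- 50%
    return temp_ok and flow_ok

# 30 C -> 35 C and 1.0 mL/min -> 1.3 mL/min stay within the cited ranges
print(is_low_risk_isocratic_change(30.0, 35.0, 1.0, 1.3))  # True
```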

Medium-risk changes are changes to the HPLC operating parameters that fall outside the robustness ranges established during method validation but do not involve any change to the underlying principles of the existing analytical method. One example of a medium-risk change is an HPLC column change to a smaller particle size and/or a shorter or narrower column, with appropriate scaling considerations, to speed up a gradient HPLC method (i.e., conversion from HPLC to UHPLC methodology without appreciable change in separation mechanism). Other medium-risk changes might include a change in column stationary phase from C18 to C8 or a change in the organic solvent in the mobile phases from acetonitrile to methanol, as long as the method performance characteristics (e.g., specificity, accuracy, precision) remain practically unaffected, which can be confirmed through successful completion of the associated method validation activities prior to implementation of any comparability studies.
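
To make the "appropriate scaling considerations" concrete, the sketch below applies commonly used geometric scaling rules for transferring a gradient method to a smaller column (constant linear velocity relative to particle size, constant injection load per column volume, and constant gradient length in column volumes). The column dimensions are an invented example; real transfers should still be verified experimentally, and instrument dwell-volume differences are not handled here.

```python
# Geometric scaling rules often used for HPLC-to-UHPLC gradient transfers.
# L = column length (mm), d = internal diameter (mm), dp = particle size (um).

def scale_method(flow, inj_vol, grad_time, L1, d1, dp1, L2, d2, dp2):
    flow2 = flow * (d2 / d1) ** 2 * (dp1 / dp2)        # keep reduced velocity
    inj2 = inj_vol * (L2 * d2 ** 2) / (L1 * d1 ** 2)   # keep load per column volume
    grad2 = grad_time * (flow / flow2) * (L2 * d2 ** 2) / (L1 * d1 ** 2)
    return flow2, inj2, grad2                          # keep gradient column volumes

# 150 x 4.6 mm, 5 um at 1.0 mL/min, 10 uL, 30 min gradient -> 50 x 2.1 mm, 1.7 um
flow2, inj2, grad2 = scale_method(1.0, 10.0, 30.0, 150, 4.6, 5.0, 50, 2.1, 1.7)
print(f"{flow2:.2f} mL/min, {inj2:.1f} uL, {grad2:.1f} min")  # ~0.61, ~0.7, ~3.4
```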

Finally, high-risk changes are those that either involve a change to the underlying principles of the existing analytical method or a change in a critical method parameter that has a high impact on method performance. Examples of high-risk "underlying analytical principle" changes include conversion of a microbiological potency method to a reversed-phase HPLC potency method, a normal-phase HPLC method to a reversed-phase HPLC method, UV detection to mass-spectrometry detection, or an HPLC assay method to a near-infrared (NIR) real-time release method.

The HPLC risk categorization examples discussed previously should serve as a general guideline for risk assessment, and it is recommended that each HPLC method change be evaluated and categorized on a case-by-case basis. The application of QbD principles to analytical methods has been discussed recently (12-16) and can assist method change assessments. Identifying critical method parameters through a comprehensive risk assessment is an integral step in an analytical QbD workflow. The knowledge gained through analytical QbD should also be applied in risk assessment and categorization when a change is made to an existing HPLC method. For example, a change in detection wavelength for an HPLC impurities method may be either a medium-risk or a high-risk change. If all impurities are quantified by calibration with external standards of each impurity, the change should be of medium risk; however, quantification of impurities based upon response-corrected areas could represent a high-risk change if relative response factors are assumed to be 1.0, a common industry practice for impurities with calculated response factors between 0.8 and 1.2 (11). In such a situation, a change in detection wavelength could lead to an appreciable bias in impurity results.
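
The wavelength-change risk can be illustrated with a simple calculation. For a low-level impurity quantified by area percent with an assumed RRF of 1.0, the reported result is approximately the true content multiplied by the impurity's actual response factor at the chosen wavelength; the numbers below are invented for illustration.

```python
# Illustration of the bias introduced when an impurity's relative response
# factor (RRF) is assumed to be 1.0: for a low-level impurity, the reported
# area-% result is approximately true content x actual RRF at that wavelength.

def reported_impurity_pct(true_pct: float, actual_rrf: float) -> float:
    return true_pct * actual_rrf

true_level = 0.20  # actual impurity content, %
print(reported_impurity_pct(true_level, actual_rrf=1.1))  # ~0.22% at old wavelength
print(reported_impurity_pct(true_level, actual_rrf=0.6))  # ~0.12% at new wavelength
```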

For changes in the low-risk category, the potential impact on method performance is usually low. The risks are further mitigated by the system suitability evaluation, such as the resolution between critical pairs, performed in each HPLC run; therefore, low-risk changes can take place without additional studies. Neither additional validation nor analytical method comparability studies are necessary. This approach agrees with the requirements of USP and ICH.

For any changes beyond low risk, justification and validation of the affected method parameters are generally required. Additionally, both health authorities and pharmaceutical companies expect that analytical method comparability or equivalency studies should be performed. A question of interest is whether a formal statistical demonstration of analytical method equivalency is always necessary. Chambers et al. discussed the differences between validation and equivalency, and emphasized that validation alone cannot guarantee equivalence because validation experiments are not designed to detect a small bias between mean results (1). This statement is fair, but it should be emphasized that the risk of non-equivalency decreases once both methods have been successfully validated.

For medium-risk changes, where the underlying principles remain the same for both methods (e.g., reversed-phase HPLC methods with UV detection), the risk of a practically significant difference in mean results is rather low when both HPLC methods are successfully validated in the same manner against the same pre-defined criteria. For example, a co-elution of two impurities that would fail equivalency would also fail validation on method specificity. In addition, validation studies confirm improvement or similarity of other method performance characteristics, such as precision. Therefore, the authors recommend that a side-by-side comparison against pre-defined acceptance criteria should be sufficient for medium-risk changes.

Though not a formal statistical approach, the side-by-side comparison study should be designed carefully. Acceptance criteria could be based on internal SOPs on analytical method validation or method transfer. Generic limits of 1.0-2.0% for drug-substance assay, 2.0-3.0% for drug-product assay, and 10-30% relative for impurities can be considered a starting point; such criteria can be loosened or tightened based on the specific method involved. Representative batches, preferably covering a wide range of results, should be chosen for the study. Spiking of impurities should be considered if no suitable batches containing the impurities at appropriate levels (e.g., the specification limit) can be identified. QA should be consulted if active GMP samples, such as GMP release and registration stability study samples, are to be used for comparison. Material limitation is usually not a concern at the registration and post-approval stages, and typically three or more batches should be used for the study. Factors affecting intermediate precision and/or reproducibility, such as analyst, instrument, and environment, should be included in the study if possible. For example, a common test plan is triplicate preparations of three batches in three runs, with each run using fresh standard preparations and altering factors such as analyst, instrument, and/or environment.
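
A minimal sketch of the data evaluation for such a side-by-side comparison is shown below; the batch results and the 2.0% assay criterion are illustrative assumptions, not recommended values.

```python
# Side-by-side comparison sketch: per-batch mean assay results from the
# existing and new methods are compared against a pre-defined criterion.
batches = {
    "Batch A": {"existing": [99.7, 100.0, 99.9], "new": [100.1, 99.8, 100.2]},
    "Batch B": {"existing": [98.9, 99.2, 99.0], "new": [99.4, 99.1, 99.3]},
    "Batch C": {"existing": [100.5, 100.2, 100.6], "new": [100.3, 100.7, 100.4]},
}
CRITERION = 2.0  # maximum allowed |mean difference|, % label claim

for name, results in batches.items():
    mean_existing = sum(results["existing"]) / len(results["existing"])
    mean_new = sum(results["new"]) / len(results["new"])
    diff = abs(mean_new - mean_existing)
    print(f"{name}: |difference| = {diff:.2f}% -> "
          f"{'pass' if diff <= CRITERION else 'fail'}")
```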

Because high-risk changes usually involve a change to the underlying principles between the existing and new methods, the risk of non-equivalence is high, even with successful validations of both methods. In some cases, the validation plan and acceptance criteria for the two methods have to be modified to accommodate the changes to the underlying principles. For example, the specificity evaluation for an NIR assay method will differ (in test procedure and acceptance criterion) from that for an HPLC assay method. Therefore, the authors recommend that statistical principles be followed throughout the study and a formal statistical assessment of the data be applied for high-risk changes to ensure a statistically meaningful outcome. This includes selection of the acceptance criterion, design of the test plan, data validity evaluation (e.g., outlier testing), and an equivalency test (e.g., TOST). Note that the two-sample t-test should not be used as an equivalence test (or as an acceptance criterion for a side-by-side comparison), as it is designed to detect a difference in means, not to establish whether the means are equivalent based on a measure of practical importance (1, 9). Besides method equivalency, other method performance characteristics, such as precision, should also be evaluated (though note that a large difference in precision is likely to cause the TOST to fail) (9). In other words, an analytical method comparability study should be performed for high-risk changes. Detailed procedures for statistically guided analytical method comparability and equivalency studies can be found in USP General Chapter <1010> and in external scientific publications (2, 5, 8, 9, 17). Due to the involvement of advanced statistics, statistician assistance should be sought. The proposed analytical comparability activities for a specific HPLC risk categorization do not preclude a more stringent assessment being applied (e.g., if the analytical method is to be applied in a situation where even a very small difference in means could be problematic).
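
The caution against the two-sample t-test can be demonstrated directly: very precise data with a tiny, practically irrelevant bias "fail" the t-test (a difference is detected), while noisy, inconclusive data "pass" it. The data below are invented for illustration; an equivalence procedure such as the TOST sketched earlier asks the right question instead.

```python
# Why the two-sample t-test is the wrong tool for equivalence: it rewards
# noisy data and penalizes precise data with negligible bias.
from scipy import stats

# Precise methods with a 0.2% bias (irrelevant against a 1.5% margin)
precise_old = [100.00, 100.02, 99.98, 100.01, 99.99, 100.00]
precise_new = [100.20, 100.22, 100.18, 100.21, 100.19, 100.20]

# Noisy methods with identical means but little information either way
noisy_old = [99.0, 101.5, 98.5, 102.0, 99.5, 100.5]
noisy_new = [100.5, 98.0, 102.5, 99.0, 101.0, 100.0]

for label, a, b in [("precise", precise_old, precise_new),
                    ("noisy", noisy_old, noisy_new)]:
    t_stat, p = stats.ttest_ind(a, b)
    verdict = "difference detected" if p < 0.05 else "no difference detected"
    print(f"{label}: p = {p:.3f} -> {verdict}")
```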

Conclusion
Analytical method comparability or equivalency remains a more science-driven and less regulated field, and different practices currently exist among pharmaceutical companies. Harmonized practices will not only improve industry compliance to ensure product quality and patient safety, but will also reduce the regulatory filing burden of method change control, which in turn may encourage innovation in pharmaceutical analysis. In this position paper, the IQ working group on analytical method comparability recommends a risk-based approach to analytical method comparability for HPLC assay and impurities methods in the registration and post-approval stages. It is the working group's hope that this paper will help stimulate further discussion between industry and health authorities to define acceptable industry practices for assessing analytical method comparability or equivalency in general.

Authors’ note
This position paper was prepared with the support of the International Consortium for Innovation and Quality in Pharmaceutical Development (IQ). IQ is a pharmaceutical industry association formed in 2010 with the mission of advancing science-based and scientifically driven standards and regulations for pharmaceutical and biotechnology products worldwide. Visit www.iqconsortium.org for more information.

References
1. D. Chambers et al., Pharm. Technol., 29 (9), 64-80 (2005).
2. M.J. Chatfield et al., Quality and Reliability Engineering International, 27 (5), 629-640 (2011).
3. W.W. Hauck et al., Pharmacopeial Forum, 35 (3), 772-781 (2009).
4. FDA, Draft Guidance, Comparability Protocols-Chemistry, Manufacturing, and Controls Information (Rockville, MD, February 2003).
5. USP 36-NF 31, General Chapter <1010>, "Analytical Data-Interpretation and Treatment," 452-464.
6. FDA, Draft Guidance, Analytical Procedures and Methods Validation for Drugs and Biologics (Rockville, MD, February 2014).
7. B. Kleintop and Q. Wang, "Practical Application of Very High-Pressure Liquid Chromatography across the Pharmaceutical Development-Manufacturing Continuum," in Characterization of Impurities and Degradants Using Mass Spectrometry, B. Pramanik, M.S. Lee, and G. Chen, Eds. (John Wiley & Sons, 2011), pp. 213-229.
8. P.J. Borman et al., Analytical Chemistry, 81 (24), 9849-9857 (2009).
9. G.B. Limentani et al., Analytical Chemistry, 77 (11), 221A-226A (2005).
10. ICH, Q2(R1) Validation of Analytical Procedures: Text and Methodology (2005).
11. USP 36-NF 31, General Chapter <621>, "Chromatography," 268-275.
12. P.J. Borman et al., Pharm. Technol., 34 (4), 72-82 (2010).
13. P.J. Borman et al., Pharm. Technol., 34 (10), 142-152 (2010).
14. P.F. Gavin and B.A. Olsen, Journal of Pharmaceutical and Biomedical Analysis, 46 (3), 431-441 (2008).
15. S. Karmarkar et al., Journal of Chromatographic Science, 49 (6), 439-446 (2011).
16. M. Schweitzer et al., Pharm. Technol., 34 (2), 52-59 (2010).
17. M.J. Chatfield and P.J. Borman, Analytical Chemistry, 81 (24), 9841-9848 (2009).

About the authors
Hafez Abdel-Kader is a director at Sanofi; Mark Argentine is a senior research advisor, and Jeff Hofer is a research advisor, both at Eli Lilly; Nancy Benz is a director at AbbVie; Rick Burdick is a quality engineering director at Amgen; Marion Chatfield is a manager statistician at GlaxoSmithKline; Frank Diana is a vice-president at Endo Pharmaceuticals; Hui Fang and Yuan Huang are both senior scientists at Eisai; Shreekant Karmarkar and Lakshmy M. Nair are both senior research scientists at Baxter; Theresa Natishan is a director at Merck; Andrea M. Pless is an associate director at Teva; Qinggang Wang* is a principal scientist at Bristol-Myers Squibb, qinggang.wang@bms.com; and Zeena Williams is a senior associate director at Boehringer Ingelheim. * To whom all correspondence should be addressed.
