Risk-Based Validation of Commercial Off-the-Shelf Computer Systems

Publication
Article
Pharmaceutical Technology, Nov. 1, 2005
Volume 2005 Supplement
Issue 6

Risk analysis and evaluation of software and computer systems are good tools for optimizing validation costs by focusing effort on the systems with the greatest impact on business and compliance.

This article describes how to adopt risk-based approaches for the validation of commercial computer systems used in the regulated pharmaceutical industry. It guides readers through a logical, risk-based approach to computer system validation and offers recommendations on how to define risks for different systems, validation tasks, and risk categories across the entire life of a computer system. The scope of this paper is limited to commercial off-the-shelf (COTS) systems and does not include the risks typically involved during software development.

The article contains two parts. Part one deals with risk assessment, in which we discuss approaches to categorizing computer systems into high, medium, and low-risk levels. (These levels serve as an example. Any ranking of levels of risk that is relevant to the product and the manufacturer may be substituted. The thought process of ranking is the same.) Part two offers recommendations for validation steps for the different categories as defined in part one.

Introduction

Computer systems are widely used in the pharmaceutical industry for instrument control and data evaluation in laboratories and manufacturing. They are also widely used for data transmission, documentation, and archiving. When used in regulated environments, they should be formally validated. The main compliance-related purpose of validation is to ensure the accuracy and integrity of data created, modified, maintained, archived, retrieved, or transmitted by the computer system. In addition, computer validation is typically a prerequisite for reliable system operation and high system uptime, which are business requirements of the industry. Depending on a system's complexity and functionality, validation can be a huge task.

The effort spent on validation should be balanced against the benefits: the amount of work should be in line with the problems that can occur if the system is not fully validated. The mechanism for balancing benefits against investment is risk assessment, in which we define the extent of validation according to the risk a specific computer system poses to data integrity and, ultimately, to product quality and safety. The risk-based approach should enhance industry's ability to identify and control the critical functions that affect product quality and data integrity.

Industry task forces have recommended risk-based approaches to validation for a long time. For example, Good Automated Manufacturing Practice (GAMP) addresses the topic in its "Guide for Validation of Automated Systems in Pharmaceutical Manufacture" (1). The US Food and Drug Administration has also recognized the importance of risk-based compliance. This became most obvious when FDA announced its science- and risk-based approaches as part of its Pharmaceutical CGMPs for the Twenty-First Century initiative in 2003 (2).

"We will focus our attention and resources on the areas of greatest risk with the goal of encouraging innovation that maximizes the public health protection," said FDA Commissioner Mark McClellan at an FDA–industry training session (8). David Horowitz added, "there are two elements to a risk-based approach to inspections: We need to go to the right places and we need to look at the right things" (8).

One reason for this risk-based approach is that FDA lacks the resources to inspect all manufacturing sites every two years.

"We have over 6000 domestic drug facilities and the number of GMP inspections that we have been able to inspect has declined by about two thirds in the last 20 years. So we can't take the chance that we are squandering our limited resources on lower risk facilities. That would prevent us from doing a minimum level of scrutiny and oversight and working with the higher risk facilities," Horowitz said (8).

In the meantime, FDA has begun to allocate its resources based on risk. For example, beginning in the fall of 2004, FDA began using a risk-based approach for prioritizing domestic manufacturing site inspections for certain human pharmaceuticals. This approach should help the Agency predict where its inspections are likely to achieve the greatest public health impact (2).

FDA is not only taking advantage of risk-based approaches itself, but also encourages industry to do so, for instance, in software and computer system validation. The industry guidance on General Principles of Software Validation states:

The selection of validation activities, tasks, and work items should be commensurate with the complexity of the software design and the risk associated with the use of the software for the specified intended use (3).

The same guidance also gives specific recommendations on what is expected for lower-risk systems:

For lower risk devices, only baseline validation activities may be conducted. As the risk increases, additional validation activities should be added to cover the additional risk.

The "FDA Part 11 Guidance on Scope and Application" states:

We recommend that you base your approach (to implement Part 11 controls, e.g., validation) on a justified and documented risk assessment and a determination of the potential of the system to affect product quality and safety, and record integrity.

The most specific advice for risk-based compliance of computer systems comes from the Pharmaceutical Inspection Convention's "Good Practices for Computerized Systems Used in Regulated Environments" (5), which makes several recommendations related to risk:

For critical GXP applications, it is essential for the regulated user to define a requirement specification prior to selection and to carry out a properly documented risk analysis for the various system options. This risk-based approach is one way for a firm to demonstrate that it has applied a controlled methodology, to determine the degree of assurance that a computerized system is fit for its intended purpose.

The inspector will consider the potential risks, from the automated system to product/material quality or data integrity, as identified and documented by the regulated user, in order to assess the fitness for purpose of the particular system(s). The business/GXP criticality and risks relating to the application will determine the nature and extent of any assessment of suppliers and software products (5).

Basically, this means that FDA and other agencies expect a risk assessment for each computer system; otherwise, full validation is required. Companies without justified risk assessments will not be able to defend their selected level of validation. The real value of a comprehensive risk-based validation approach lies in doing exactly the right amount and detail of validation for each system.

The principle is clearly illustrated in Figure 1. Validation costs increase when going from no validation to 100% validation. Full validation of a COTS system would mean, for example, testing each function of the software under normal and high load, across and beyond the expected application range, and doing so for each possible system configuration. In addition, whenever the system is changed, whether in computer hardware, operating system, or application software, full revalidation would require that the same tests be rerun.

Figure 1: Risk vs. validation costs.

In today's rapidly changing computer environment, this could mean that the system is used entirely for testing. As testing increases, the risk of unexpected system failure decreases, because errors found during testing can be corrected or work-around solutions can be found and implemented.

The optimum amount of testing is, obviously, somewhere between zero and 100%. Where it falls depends on the impact the software or system has on (drug) product quality. For example, a system used in early drug development has a lower impact and requires less validation than a system used in pharmaceutical quality control.

In the past, companies frequently have applied the principles of such risk-based validation, but the rationale behind it was not documented and the approach was not implemented consistently within a company. The extent of validation depended more on individual validation professionals than on a structured rationale. As explained earlier, in new guidance, the FDA suggests that industry base the extent of its validations on a 'justified and documented' risk assessment.

Most confusing to industry has been finding a structured way to prioritize risks. The FDA has frequently been asked to prepare a matrix of regulated processes indicating the level of risk associated with each. The FDA has made it very clear that this will not happen, because each situation is different. The agency has, however, released the criteria to be used in making these determinations: impact on product quality and patient safety.

General advice came from FDA's John Murray when he answered questions about the agency's expectations at the Institute of Validation Technology (IVT) Computer System Validation conference in May 2004:

Really, I don't recommend you do a detailed risk assessment on every record in the building. I think you need to set up a systematic way of doing it—and you are going to put certain records in certain categories from the very beginning. If a record is used to release product and this record is incorrect and you release an unsafe product – I would make that your highest category, direct impact to public health (15).

Risk-based validation takes two steps: Define the risk category—for example, high, medium, and low—and define the extent of validation for each category according to guidelines as laid out by the company.

One final comment before we turn to risk-based approaches. The model proposed in this paper has two objectives. The first is to help companies get started quickly and take immediate advantage of the risk-based approach: start with a qualitative risk assessment based on experience with the same or similar systems, and gain the experience needed for full risk management later. The second is to fulfill the FDA requirement of basing the extent of validation for each level on a justified and documented risk assessment.

It is quite obvious that there are no generally accepted models to copy and no universal solution. Each company must figure out the answers for itself, because success really does depend on the company's unique situation. The model suggested in this article is just one example of an implementation; the FDA would allow many others. For example, this model suggests three risk categories: high, medium, and low. It also would be acceptable to have only two (high and low), or five or more. All models would be accepted as long as the approach is justified and documented.

Approaches for risk assessment and management

The National Institute of Standards and Technology (NIST) has defined the term risk as:

The probability that a particular threat-source will exercise (accidentally trigger or intentionally exploit) a particular information system vulnerability and the resulting impact if this should occur (12).

The types of risks a pharmaceutical company deals with include patient risk (the safety and efficacy of drugs), regulatory risk [FDA 483s, warning letters (WLs), product recalls, etc.], and financial risk due to, for example, the inability to get products approved for marketing, the inability to ship finished products, or the consequences of unauthorized disclosure of trade secrets and private information.

Risk management is the entire process, from identifying and evaluating risks to defining risk categories and taking steps to reduce risks to acceptable levels. Risk assessment comprises the first two parts: risk analysis and risk evaluation.

There are a number of standard risk assessment techniques available and widely used in the industry. The most important ones include the Failure Mode and Effects Analysis (FMEA) approach, Fault Tree Analysis (FTA), and the application of Hazard Analysis and Critical Control Point (HACCP) methodology. All three methods have been described in brief by H. Mollah (9).

An approach widely used in the medical device industry is based on International Organization for Standardization (ISO) standard 14971 (10). While FMEA and FTA are based more on quantitative, statistical data, the ISO approach is more qualitative in nature. The concept is to determine risk factors based on their likelihood and severity, to mitigate those risks, and to monitor and update the process as necessary.

The model described by GAMP (1) is similar but adds detectability as another criterion: the more likely a problem is to be detected, the lower the risk. Labcompliance has developed an extensive risk management master plan (11) using the concept described in the ISO standard (10).

In this publication, we follow the approach described in the ISO standard. The model presented here is more qualitative than quantitative and relies heavily on the experience of users, validation groups, and auditors with the same or similar systems. We introduce readers to the concept of full risk management but then focus only on risk assessment. Bear in mind, however, that some current validation tasks, such as vendor assessment and even testing, are already steps toward mitigating the risks associated with computer systems.

Risk management overview

Risk management of a commercial computer system starts when the system is specified and purchased, continues with installation and operation, and ends when the system is taken out of service and all critical data have been successfully migrated to a new system.

The approach we take is to divide risk management into phases, as illustrated in Figure 2.

Figure 2: Risk management (used with permission).

The phases include risk analysis, risk evaluation and assessment, and ongoing evaluation and control.

Risk analysis. Define computer system components and software functions. Identify potential hazards and harms using inputs from system specifications, system administrators, system users, and audit reports.

Risk evaluation and assessment. Define the severity, probability, and risk of each hazard, for example, by using past experience from the same or similar systems. Determine acceptable levels of risk and identify the hazards that would need mitigation to reach those levels. Identify and implement steps to mitigate risks.

On-going (re)evaluation and control. On an on-going basis, evaluate the system for new hazards and changes in risk levels. Adjust risk and mitigation strategy as necessary.

These activities should follow a risk management plan and the results should be documented in a risk management report.

Risk analysis

The first step in the risk management process is the risk analysis, sometimes called risk identification or Preliminary Hazard Analysis (PHA). The output of this phase is the input for risk evaluation. Inputs for risk analysis include:

  • specifications of equipment including hardware and software;

  • user experience with the same system already installed;

  • user experience with similar systems;

  • IT staff experience with the same or similar network equipment;

  • experience with the vendor of the system;

  • failure rates of the same or similar system (mean time between failures) and resulting system downtime;

  • trends of failures;

  • service records and trends;

  • internal and external audit results.

Inputs, for example, can come from operators, the validation group, Information Technology (IT) administrators, or from Quality Assurance (QA) personnel as the result of findings from internal or external audits.

The project manager collects input on potential hazards and the possible harm they could cause. For consistent and complete documentation, forms should be used. The forms should have entry fields for the relevant data: the individual who made the entry, the risk description, possible hazards and harms, the probability of occurrence, and possible methods of mitigation. An example is shown in Table I.

Table I: Template for the identification of risks.
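As a minimal sketch of how entries from such a form might be captured consistently in software, assuming illustrative field names taken from the template fields described above (this is not part of the article's templates or any standard):

# Sketch of a risk-identification entry; field names are assumptions
# based on the template fields described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskEntry:
    entered_by: str                # individual who made the entry
    risk_description: str          # short description of the risk
    hazards_and_harms: List[str]   # possible hazards and resulting harms
    probability: str               # estimated probability of occurrence
    mitigation_options: List[str] = field(default_factory=list)  # possible methods of mitigation

# Illustrative entry for one of the hazards listed below
entry = RiskEntry(
    entered_by="IT administrator",
    risk_description="Hard drive failure on the server computer",
    hazards_and_harms=["severe system downtime", "loss of data"],
    probability="occasional",
    mitigation_options=["redundant (RAID) storage", "daily backup"],
)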

Occasional problems and harms with computer systems include, but are not limited to, the following:

  • Hard drive failure on local personal computers (PCs) or on the server computer can cause severe system downtime and loss of data.

  • Loss of network connectivity due to hardware failure, for example, of the network interface card, can cause system downtime.

  • System overloads can cause a slow-down of operations and system downtime.

  • Inadequate vendor qualification or the absence of vendor-support specifications in purchasing agreements can result in reduced uptime because of missing support in the case of hardware, firmware, or software problems.

  • Inadequate or absent installation documentation can make it difficult to diagnose a problem.

  • Inadequate or absent verification of security access functions can result in unauthorized access to the system.

  • An insufficient or absent plan for system backup can result in data loss in case of system failure.

  • Poor or absent documentation of hardware and software changes can make it difficult to diagnose a problem.

  • Inadequate quality assurance policies and procedures or inadequate reviews can lead to poor system quality.

Risk evaluation process

This phase categorizes and prioritizes risks from a business, compliance, and health standpoint.

Data should be entered into a form with entry fields for risk descriptions, business (continuity) impact, product quality, safety, and compliance impact, as well as probability of occurrence. An example is shown in Table II and the various impacts are described below.

Table II: Template for risk evaluation.

Impact on business continuity. This is related to a company's ability to market a new product and its reliance on system uptime for continuous shipment of product. Evaluating these issues answers the question: in monetary terms, how large would the losses be from delays in new-product approval or from shipment stoppages?

Impact on product quality. The question here is whether the system has an impact on product quality, that is, whether it affects the identity, strength, safety, or efficacy of a drug. A direct impact on product quality means that a failure cannot be corrected before a new drug is approved for marketing or before a batch is released for shipment.

For a "high-risk" classification, the probability of detecting the problem would be low or zero. An example is an analysis system used in quality control where analysis results are used as criteria for the release of product.

Impact on human health and safety. This includes consumer safety and environmental hazards. An example of high severity would be a circumstance in which poor product quality could adversely affect the health of patients or users.

Note: Because an impact on health and safety can only occur when there is also an impact on product quality, we combine both factors.

Impact on compliance. This is related to the risk of failing regulatory inspections and receiving single or multiple WLs or inspectional observation reports. A typical compliance issue is the insufficient integrity of regulated data.

There are also indirect effects when the health of a patient or a worker is affected, such as claims against the company, product recalls, and a damaged company reputation.

Information from these categories is used to calculate an overall risk factor. In our example, the risk levels are converted into numeric values: high = 3, medium = 2, and low = 1 (see Table III).

Table III: Template to determine the overall risk factor.

Risk factors are calculated using the following formula:

(Business Impact + Safety + Compliance Impact) × Probability of Occurrence = Risk Factor
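A minimal sketch of this calculation in code, assuming for illustration that the probability of occurrence is scored on the same three-level scale (the function and variable names are illustrative, not from the article or any standard):

# Sketch of the overall risk-factor calculation described above.
# High/medium/low map to 3/2/1 as in Table III; scoring the probability of
# occurrence on the same three-level scale is an assumption for illustration.
LEVELS = {"high": 3, "medium": 2, "low": 1}

def risk_factor(business_impact: str, quality_safety_impact: str,
                compliance_impact: str, probability: str) -> int:
    """(Business Impact + Safety + Compliance Impact) x Probability of Occurrence."""
    return (LEVELS[business_impact]
            + LEVELS[quality_safety_impact]
            + LEVELS[compliance_impact]) * LEVELS[probability]

# Example: a quality-control data system with high quality/safety and compliance
# impact, medium business impact, and medium probability of failure
print(risk_factor("medium", "high", "high", "medium"))   # (2 + 3 + 3) * 2 = 16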

Factors contributing to risk

High-risk factors. Examples of factors contributing to high-risk levels include those related to product quality and health and safety, business continuity, and regulatory compliance.

Product quality and health and safety.

  • Systems used to monitor, control, or supervise a drug manufacturing or packaging process.

  • Systems used in a production environment for testing, release, labeling, or distribution of products.

  • Users interact manually with the system and its data and have the ability to manipulate data.

  • System failure can have a direct impact on product quality.

  • There is no or low probability that a problem will be detected or can be corrected.

  • Product quality problems may lead to death or serious and permanent injury.

Business continuity.

  • System must run 24 hours a day, 7 days a week.

  • Highly complex hardware, software, and system configuration.

  • Highly customized.

  • Unskilled operators.

  • No work-around solutions.

  • Vendor not recognized in the pharmaceutical industry; no support from vendor, e.g., no documented evidence of validation during development, or no phone or on-site support in case of problems.

Compliance.

  • Used for GXP-regulated applications.

  • System failure can impact data integrity or can cause loss of data.

Low-risk factors. Factors contributing to low severity risk levels include those related to product quality, health and safety, business continuity, regulatory compliance, and probability.

Product quality.

  • System is used in early product development stage.

  • System is fully automated and relies on well-validated processes.

  • High probability that problem will be detected and can be corrected.

Health and safety.

  • System failures or lack of data integrity do not have any impact on human health.

Business continuity.

  • used occasionally;

  • highly skilled operators;

  • widely used commercial systems;

  • no customization;

  • work-around solutions available;

  • full support from recognized vendor (e.g., documented evidence on validation during development, local language phone support or on-site support in case of problems).

Compliance.

  • Not used in regulated applications.

  • Failure of the system does not have an impact on data integrity and cannot cause loss of data.

Probability. Probability should answer the question: what is the likelihood that the system will fail, generate wrong data, or lose data?

Probability should be expressed as frequency of occurrence within a set time period. We recommend using five categories:

  • Frequent (e.g., once every month);

  • Probable (e.g., once in 1–3 months);

  • Occasional (e.g., once in 3–12 months);

  • Improbable (e.g., once in 1–3 years);

  • Impossible.

We use past experiences from the same or similar systems to estimate probability.
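As a sketch, such an estimate could be derived from service records as follows; the thresholds simply restate the example frequencies above, and counting failures per year from service records of the same or similar systems is an assumption made for illustration:

# Sketch: derive the probability category from observed failure frequency.
# Thresholds restate the example frequencies above; the per-year failure count
# is assumed to come from service records of the same or similar systems.
def probability_category(failures_per_year: float) -> str:
    if failures_per_year == 0:
        return "impossible"
    if failures_per_year >= 12:    # about once every month or more often
        return "frequent"
    if failures_per_year >= 4:     # about once in 1-3 months
        return "probable"
    if failures_per_year >= 1:     # about once in 3-12 months
        return "occasional"
    return "improbable"            # about once in 1-3 years or less often

print(probability_category(2))     # -> "occasional"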

Importance of a risk management master plan

The most significant task in the risk assessment process is defining the criteria for criticality, which determine the final risk level. A question that frequently comes up is: what if an inspector questions my decision? There are no absolute measures, so a dispute may occur. This discussion is similar to the industry discussion of 10 to 15 years ago concerning computer validation, when the frequently asked question was: how much validation is enough?

That question about validation was largely answered by the development of the Validation Master Plan (VMP). Companies developed such master plans at a fairly high level to guide validation specialists through the validation process by explaining the procedure for easy understanding, offering templates for convenient implementation, and giving examples of what to validate for different systems.

In the meantime, such VMPs have become a legal requirement in Europe through Annex 15 of the European GMP directive (16). The US FDA may not ask for a VMP by name; the inquiry may instead concern the company's approach to validation. The VMP, with the examples it contains, has become the ideal document for answering questions about the level of validation.

An equivalent document in the area of risk assessment is a risk management master plan. Such a document should be developed at a fairly high level within the company. It should describe the company approach to risk management and assessment and should include templates for risk identification, evaluation, mitigation, and control. It also should include criteria and examples for severity and probability. The master plan can be used to derive risk management plans for individual projects. The main advantages are increased efficiency, and, even more importantly, consistent implementation.

A risk management master plan should also include examples of factors that affect risk categories. This is important to ensure a consistent approach to risk assessment within the company. An example with some recommendations is shown in Table II.

Examples

Examples are quite useful for getting an idea of which types of systems fall into the different categories. A frequently posed question is whether, for example, a laboratory management system or a documentation system falls into the high, medium, or low-risk category; sometimes systems from specific vendors are even mentioned. This is the wrong question. The risk depends less on the system itself than on the records created, evaluated, transmitted, or archived by the system.

A Laboratory Information Management System (LIMS) in a non-regulated research department is not a high-risk system, at least not from a compliance view. On the other hand, a LIMS in a pharmaceutical quality control laboratory is most likely a high-risk system because the records have a high impact on product quality.

Both the International Society for Pharmaceutical Engineering (ISPE) and the Pharmaceutical Research and Manufacturers of America (PhRMA) have given examples of what may qualify as high risk. PhRMA wrote a letter to the FDA on Nov. 29, 2001, related to the "Proposed FDA Guidance on the Scope and Implementation of 21 Code of Federal Regulations (CFR) Part 11." The letter included a ranking of five systems according to their risk to product quality. Those with the highest risk were manufacturing batch records and manufacturing LIMS and quality assurance (QA) systems (13).

ISPE wrote a white paper on the "Risk-Based Approach to 21 CFR Part 11" recommending that efforts focus on records that have a high impact, i.e., those records upon which quality decisions are based. Examples of high-impact records include batch records and laboratory test results (14).

Examples of records with low impact include environmental monitoring records not affecting product quality, training records, and internal computerized system information such as setup and configuration parameters. Other examples are planning documents and Standard Operating Procedures (SOPs) for non-critical operations.

GAMP has published a good practice guide, "A Risk-Based Approach to Compliant Electronic Records and Signatures" (17). This document gives examples of records that have high, medium, and low impact on risk.

In general, systems fall into the high-risk category when they have a direct impact on product quality and patient safety. Examples are systems used in pharmaceutical manufacturing and quality control, such as electronic batch record systems and analytical control systems, as well as document management systems and databases containing high-risk records. For example, wrong analytical test results used as a criterion to release a batch are highly critical, because there is no further testing and the product is released to the market immediately. An example of a system with a high impact on patient safety is a distribution record system: if a product must be recalled because adverse effects on patients have been identified and some of the distribution records are lost or incorrect, the product cannot be completely removed from the market, with a correspondingly high impact on patients.

Examples of systems in the medium-risk category include systems that are used to qualify and monitor the systems defined as high-risk. These would also include configuration management software.

Examples of low-risk systems include word processing systems that are used, for example, to generate validation records. The reasons for relegating these systems to the low-risk category include the relatively low likelihood that they would have errors, the likelihood that errors would be detected by proofreading, and, in this case, the likelihood that such errors would have no direct impact on product quality or patient safety.

Validation tasks

Once the risk level is identified, validation steps can be defined. Risk level information is used for considerations such as:

  • In what detail do we specify the system? For example, for a low-risk system, we only prepare a high-level system description and for high-risk systems we develop detailed system requirement specifications.

  • How extensively do we test the computer system? For example, high-risk systems will be tested under normal and high load conditions. Test cases should be linked to the requirement specifications.

  • How much equipment redundancy do we need? For example, for high-risk systems we should have validated, redundant hardware for all components. For medium-risk systems, redundancy of the most critical components is enough, and for low-risk systems, there is no need for redundancy.

  • How frequently must we back-up data generated by the system? While a daily back-up is a must for high-risk systems, weekly incremental back-up is sufficient for low-risk systems.

  • What type of vendor assessment is required? For example, high-risk systems will require vendor audits while, for medium and low-risk systems, an audit checklist and documented experience from the vendor should be enough.

  • Which Part 11 requirements should be implemented in the computer system? For example, for high-risk systems, computer-generated audit trails should be implemented, while for low-risk systems, a paper-based, manual audit trail is enough.

Validation tasks should be defined for each phase starting from planning through specification settings, vendor qualification, installation, testing, and on-going system control.

The tasks should be consistent within an organization for each risk category. They should be well documented and be included either in the risk management master plan or in the validation master plan.
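As an illustration of how such category-dependent requirements could be codified for consistency, the sketch below paraphrases the examples above; the medium-risk entries and all names and values are illustrative assumptions, not prescribed requirements:

# Sketch: validation requirements looked up by risk category.
# High- and low-risk entries paraphrase the examples above; the medium-risk
# entries and all field names are assumptions for illustration only.
VALIDATION_TASKS = {
    "high": {
        "specification": "detailed system requirement specifications",
        "testing": "normal and high load; test cases traced to requirements",
        "redundancy": "validated redundant hardware for all components",
        "backup": "daily",
        "vendor_assessment": "vendor audit",
        "audit_trail": "computer-generated audit trail",
    },
    "medium": {
        "specification": "system requirement specifications",
        "testing": "functions critical to the application",
        "redundancy": "most critical components only",
        "backup": "weekly full, daily incremental",
        "vendor_assessment": "audit checklist and documented vendor experience",
        "audit_trail": "computer-generated or manual audit trail",
    },
    "low": {
        "specification": "high-level system description",
        "testing": "baseline functional checks",
        "redundancy": "none required",
        "backup": "weekly incremental",
        "vendor_assessment": "documented experience with the vendor",
        "audit_trail": "paper-based, manual audit trail",
    },
}

print(VALIDATION_TASKS["high"]["backup"])   # -> "daily"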

Table IV summarizes examples with validation activities for each validation phase and task.

Table IV: Examples of validation tasks.

For some validation tasks, other factors should be considered besides the impact of the system on product quality. One such factor is vendor qualification. The question of how much to invest depends on two factors: product risk and vendor risk. Factors that affect vendor risk include the following (a simple scoring sketch follows the list):

  • experience with the vendor (software quality, responsiveness and quality of support);

  • size of the company;

  • company history;

  • represented and recognized in industry, e.g., Bio/Pharma;

  • expertise with (FDA) regulations;

  • future outlook: how likely is the company to stay in business?
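A simple scoring sketch based on the factors listed above; the equal weighting and the 1-to-3 scale are assumptions chosen purely for illustration, not an established method:

# Sketch: a simple vendor risk score built from the factors listed above.
# Equal weighting and a 1 (low risk) to 3 (high risk) rating per factor are
# assumptions for illustration only.
VENDOR_FACTORS = [
    "experience_with_vendor",   # software quality, responsiveness, support quality
    "company_size",
    "company_history",
    "industry_recognition",     # represented and recognized in bio/pharma
    "regulatory_expertise",     # expertise with (FDA) regulations
    "future_outlook",           # likelihood of staying in business
]

def vendor_risk_score(ratings: dict) -> float:
    """Average the 1 (low risk) to 3 (high risk) rating over all factors."""
    return sum(ratings[f] for f in VENDOR_FACTORS) / len(VENDOR_FACTORS)

ratings = {f: 1 for f in VENDOR_FACTORS}
ratings["regulatory_expertise"] = 3   # e.g., little experience with FDA regulations
print(round(vendor_risk_score(ratings), 2))   # -> 1.33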

Table IV is recommended as a starting point for a commercial, networked off-the-shelf (OTS) system with user-specific configurations (e.g., network configurations). Such an automated system would fall into category four as defined by GAMP (1). The table can be extended to a third dimension to include systems or software developed for a specific user (GAMP category five) and systems that do not require any user-specific configuration (GAMP category three). GAMP categories indicate the level of system customization: the extent of validation is lower for GAMP category three and higher for GAMP category five.

The principle is shown for the early validation phases in Figure 3. Of course, generating such information is more time consuming and only makes sense if several systems in GAMP categories three or five are to be validated.

Figure 3: Validation tasks for system risks and GAMP categories, showing only validation phases.

Conclusion

Risk analysis and evaluation of software and computer systems are good tools for optimizing validation costs by focusing on systems with a high impact on business and compliance. Substantial cost savings are possible for medium and low-risk systems. Validation activities for a low-risk system can be limited to documenting which systems have been used. The risk depends less on the type of system than on the type of records generated by the system. For example, a LIMS used in a research environment has a lower compliance risk than the same system used in pharmaceutical quality control.

Regulatory agencies require companies to base the extent of validation they complete on a justified and documented risk assessment. To do this efficiently, we recommend the following steps:

1. Develop a risk management master plan. This describes the company's approach to risk assessment and has templates and examples for easy and consistent implementation. This plan should also include validation tasks for each risk category.

2. Develop a risk management project plan for each computer system validation project. Use the risk management master plan approach as a source to define steps, owners, and deliverables.

3. Identify risks, possible hazards and harms and define the risk category, for example: high, medium, and low. This should be based on likelihood and severity. To estimate the severity, look at the records handled by the system and at their impact on product quality and consumer safety.

4. Determine validation tasks for each lifecycle phase. Use the approach, templates, and examples from the risk management master plan.

5. Document your results, together with a sound justification, in a risk management report.

For the long term, we recommend that risk assessment be extended to full risk management with an action plan for risk mitigation and on-going review and control.

Ludwig Huber, PhD, is a compliance program manager at Agilent Technologies, tel. 1 49 7243 602 209, ludwig_huber@agilent.com

References

1. "GAMP Good Automated Manufacturing Practice, Guide for Validation of Automated Systems in Pharmaceutical Manufacture," Version 3, March 1998, Version 4, December 2001.

2. US Food and Drug Administration,"Pharmaceutical CGMPs for the Twenty-First Century: A Risk-Based Approach," www.fda.gov/oc/guidance/gmp.html and "FDA Issues Final Report on its '21st Century' Initiative on the Regulation of Pharmaceutical Manufacturing," www.fda.gov/bbs/topics/news/2004/NEW01120.html (Rockville, MD, Sept. 2004).

3. US FDA, General Principles of Software Validation: Final Guidance for Industry and FDA Staff, (FDA, Rockville, MD, Jan. 2002).

4. US FDA, Guidance for Industry. Part 11, Electronic Records; Electronic Signatures—Scope and Application (FDA, Rockville, MD, Aug. 2003).

5. Pharmaceutical Inspection Convention, Good Practices for Computerized Systems Used in Regulated Environments (PIC/S, Geneva, Switzerland, Jan. 2002).

6. US FDA, Code of Federal Regulations, Title 21, Food and Drugs, Part 11, "Electronic Records; Electronic Signatures; Final Rule," Federal Register 62 (54), 13429–13466.

7. Pharmaceutical Inspection Convention, Good Practices for Computerized Systems in Regulated "GXP" Environments (Draft) (PIC/S, Geneva, Switzerland, Jan. 2002).

8. DIA/FDA Industry Training Session, May 2003.

9. H. Mollah, "Risk Analysis and Process Validation," BioProcess Int., 2 (9),(2004).

10. ISO 14971:2000, "Medical Devices—Application of Risk Management to Medical Devices," (ISO, Geneva, Switzerland, 2000).

11. Labcompliance, Risk Management Master Plan, 2004.

12. G. Stoneburner, A. Goguen, and A. Feringa, "Risk Management Guide for Information Technology Systems: Recommendations of the National Institute of Standards and Technology," NIST Special Publication 800-30 (NIST, Gaithersburg, MD, July 2002).

13. PhRMA, "Letter to the FDA, Related to Proposed FDA Guidance on the Scope and Implementation of 21 CFR Part 11," on Oct. 29, 2001.

14. International Society for Pharmaceutical Engineering, White Paper, "Risk-Based Approach to 21 CFR Part 11," (ISPE, Tampa, FL, 2003).

15. J. Murray at the Institute of Validation Technology "Computer System Validation" conference, May 2004.

16. "Qualification and Validation," Annex 15 to the EU Guide to Good Manufacturing Practice, 2001.

17. ISPE, GAMP Good Automated Manufacturing Practice, Good Practice Guide: A Risk-Based Approach to Compliant Electronic Records and Signatures (ISPE, Tampa, FL, Feb. 2005).
