Nearly eight years have elapsed since the US Food and Drug Administration's 21 CFR Part 11 regulations on the use of electronic records and electronic signatures went into effect (1). In August 2003, FDA issued a guidance document covering the scope and application of Part 11, which described how the agency intends to interpret and enforce the requirements during its ongoing re-examination of the regulations (2). Many in the pharmaceutical industry view the guidance as a positive development that will lead to a simplified FDA approach to Part 11 and a significant reduction in the industry's compliance burden. But this shift in FDA's interpretation and its intended use of enforcement discretion has not ended the controversy and confusion surrounding Part 11 and its requirements.
To understand the controversy and find clarity in complying with Part 11 requirements, it is helpful to examine the basic reasons for the disconnects between FDA and the industry. It also is useful to examine the application of basic good software and systems engineering practice and the use of hazard control methodologies as the key scientific solutions to the problems.
The root causes
Industry observers have suggested many reasons for the controversy and confusion surrounding Part 11. These discussions typically have focused on FDA's efforts and on the industry's argument that the agency failed to provide timely, sufficiently specific guidance to fill in the details that lurk beneath the high-level principles outlined in the regulations. The root causes of the problems, however, point to both FDA and the industry, and can be grouped roughly into three categories: FDA's reality gap; the industry's limited transfer of process-based quality concepts to electronic systems; and a widespread lack of understanding of some of the basic requirements of good software and systems engineering practice.
FDA's reality gap. Contrary to popular belief, FDA did not create a novel or unique standard when it issued the final rule in 1997. Most of the Part 11 requirements are based upon well-established, basic concepts and principles of good software and systems engineering practice. FDA's original intent was for Part 11 to reflect a common-sense approach to ensuring data integrity through the routine application of those basic concepts and principles.
The agency, however, was not very effective in helping the regulated industries to understand that point. Indeed, everyone would have been much better off from the start if Part 11 had been entitled, "Good Software and Systems Engineering Practices," instead of starting the cycles of debate by focusing on electronic records and electronic signatures.
From the outset of its efforts to develop Part 11, FDA fell into a classic trap by failing to recognize the vast difference between its perception and reality. The agency did not realize the substantial gap between what it perceived or presumed to be the level of software and systems engineering practice in the pharmaceutical industry and the reality of the industry's approach to computerized systems. Despite some exceptions, the industry as a whole had not fully embraced or internalized many of the basic, fundamental concepts of good practice in this area. As a result, the common level of practice was far below what FDA perceived as the industry's starting point.
This reality gap set the stage for the controversy and confusion that followed the issuance of the final rule. It also explains why FDA underestimated the economic impact of the new regulations and why the industry has struggled to understand and consistently comply with key requirements ever since. Even though the reality gap has narrowed through years of Part 11 remediation efforts, many regulated companies still fall far short of basic good practice.
Process-based quality concepts and computerized systems. To understand why the reality gap occurred, one must look at where the industry was in the early 1990s, when the first discussions about the issues that culminated in Part 11 were taking place. During the 1980s, both the industry and the regulatory authorities devoted considerable effort to the burgeoning concepts and requirements for process validation. As a result of that experience, the pharmaceutical industry adopted a new approach to managing quality and control, and eventually shifted from its longstanding product-based (quality control [QC]) focus to a more process-based (quality assurance) focus. Since then, the industry has developed a tremendous body of knowledge and experience in dealing with process-based issues. (The industry now is facing another significant step in its evolution and is moving from a process-based to a systems-based approach to quality.)
Today, pharmaceutical firms do not question the need to validate production processes. It has become second nature for companies to validate manufacturing processes in a planned, structured, controlled, and carefully documented way. In this regard, the industry readily accepts and understands the notion that one cannot test quality and integrity into a product. In addition, companies now understand that product quality and integrity result from following a sound, scientifically challenged, well-controlled process against which defined, confirmatory spot-checks are performed through QC testing.
What the industry did not appreciate in the early 1990s—and still seems to struggle with—is that the same concepts apply to software and computerized systems, and that validation in the electronic context also results from following a planned, rational, controlled, and documented process. In other words, the industry generally has not effectively transferred its basic understanding of process-oriented quality concepts into the electronic environment. This lack of transference and the inconsistent, substandard level of software and systems engineering practice helped to create and continue to feed the industry's share of the root causes of the controversy and confusion.
Understanding the basic requirements. Despite FDA's guidance and enforcement efforts and all of the public discussions about Part 11 to date, many in the industry still consider the basic requirements and their perceived costs unnecessary and unreasonably burdensome. This view continues to be driven by serious and widely held misunderstandings. For example, validation often is viewed as an externally imposed regulatory add-on, rather than as a fundamental component of good software and systems engineering practice.
Validation is a fairly straightforward scientific concept that has been in use for a long time: it is nothing more than developing and maintaining software and systems in a way that provides assurance of data integrity and evidence that the software reliably does what it was intended and designed to do. The assurance and evidence that validation provides are not simply a matter of good science. They also are basic attributes of good business practice when storing information and making important business decisions by using software and electronic data.
Because of the lack of understanding about the nature and meaning of validation, companies waste tremendous effort and resources in straining to avoid it or in going above and beyond what is rationally necessary based on intended use. Validation must be more widely understood as a flexible, scalable, subjective concept that, by definition, is based on the potential effects and risks of each system. Thus, validation cannot be reduced to an all-or-nothing approach or a set of rote, cookbook steps that eliminate the need for fact-specific and application-specific scientific judgment.
Companies often point to numerous other examples of FDA's inspectional criticism of practices related to code review, traceability, and documentation of requirements and test results. The industry ordinarily perceives such observations as bureaucratic regulatory nitpicking, instead of recognizing that FDA focuses on these areas simply because they are basic components of good software and systems engineering practice.
Shifting the culture
The differences in views and understanding between the industry and FDA regarding good practices in this area continue to cause inspectional problems and the pain and expense of regulatory enforcement. Since 1997, Part 11 has been a high-profile center of attention and a catalyst of a significant, but grudgingly accepted, culture change in the industry's approach to software and computerized systems.
Why is such a culture change necessary? Because good software and systems engineering practices are not yet embedded in the industry's culture. A widespread lack of understanding remains about the logical and scientific basis for some of the key requirements, which fuels companies' and individuals' resistance to adopting practices that will routinely meet those requirements. The Part 11 requirement for an audit trail is a prime example. Many people in the industry still view this rule as an additional, unnecessary requirement imposed by the regulators to facilitate inspections and to deter and detect fraud. This mindset is misguided: an audit trail is a rational, basic component of good practice and nothing more than a record of changes, and its primary purpose is the company's benefit, not the regulators'.
An accurate and complete audit trail is critically important to the pharmaceutical firm when a manufactured product fails to meet its final release specifications and must be rejected. When a failure occurs, the company must conduct a detailed failure investigation to determine what went wrong and when. Smart companies even recognize that this is as much a matter of good business practice as it is a clear requirement under the good manufacturing practice regulations. If an accurate and complete audit trail isn't available, the company may not be able to pinpoint a definitive root cause of the failure and assure itself or the regulatory authorities that it can continue to manufacture without running into the same unacceptable variation in the finished product.
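To make the concept concrete, the following minimal sketch shows an audit trail as an append-only log of changes that captures who changed what, when, why, and the before and after values. The record structure and names are hypothetical illustrations, not a prescribed implementation; a production system also would need enforced immutability, secure computer-generated timestamps, and controlled user authentication.

```python
import datetime

audit_log = []  # append-only; entries are never edited or deleted


def audit_entry(user, record_id, field, old_value, new_value, reason):
    """Build one audit-trail entry: who changed what, when, and why."""
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "record_id": record_id,
        "field": field,
        "old_value": old_value,
        "new_value": new_value,
        "reason": reason,
    }


def change_result(record, field, new_value, user, reason):
    """Apply a change to a record, logging it before the old value is lost."""
    audit_log.append(
        audit_entry(user, record["id"], field, record[field], new_value, reason)
    )
    record[field] = new_value


sample = {"id": "LOT-042-ASSAY", "result": 98.7}  # hypothetical QC record
change_result(sample, "result", 99.1, user="jdoe",
              reason="transcription error corrected")
print(audit_log[0])  # the change history a failure investigation needs
```

In a failure investigation, this history is precisely what lets the company reconstruct what changed, by whom, and why, rather than guessing at the root cause.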
As another example, experienced systems developers in the pharmaceutical industry usually defend their standard practices as sufficient and "fit for purpose," even if they don't always document their work or adhere to other stated regulatory expectations. In this regard, developers often say the requirements of Part 11 are cumbersome and technically unnecessary. Nonetheless, when asked whether they would feel comfortable having their standard practices used to develop the guidance system for a commercial airliner, they typically answer "absolutely not." They then become visibly uncomfortable when reminded that if an airliner's guidance system fails to work as intended, hundreds of people could die. In contrast, if some of the systems they are developing don't work properly, drug products could be inappropriately released and thousands, if not hundreds of thousands, could be harmed. The fact that this is not clearly and consistently understood as a driving factor for improving the level of software and systems engineering practices in the industry further underscores the need for a far-reaching culture change.
FDA's guidance and the industry's expectations
Following the issuance of Part 11, piecemeal guidance flowed out of FDA in official (but draft) guidance documents and in unofficial statements and opinions offered as "podium policy" by agency officials at various meetings and conferences. Many of the industry's complaints about the lack of agency clarity are well-founded. Indeed, FDA's interpretations of the requirements shifted over time, and some of them led to scientifically irrational and unnecessary extremes.
FDA's inconsistent and occasionally over-reaching guidance contributed to the pharmaceutical industry's difficulties, but it was the underlying state of the industry's practices that presented the core problems. Because the industry did not fully understand or adhere to basic good software and systems engineering practices, companies expected (and, in many cases, needed) FDA to be much more explicit, prescriptive, and definitive in its Part 11 guidance. But, it was unreasonable and unrealistic to expect that FDA would or even could teach the industry about software and systems engineering, especially because the concepts and standards were not new and the industry had much greater technical capabilities than the regulatory authorities could apply to these issues. In this context, the more it appeared that FDA was vacillating or making up the rules as it went along, the more the industry struggled and became entangled in internal debates about the basic expectations and how to effectively meet them.
Reliance on vendors for software and computerized systems
For the most part, the pharmaceutical industry has relied very heavily on vendor-developed and vendor-maintained software and systems. In general, two key observations apply in this regard. First, the industry historically has not applied the same degree of rigor and control to its management and qualification of software vendors that it has routinely applied to suppliers of ingredients or other components of the manufacturing process. Second, the industry tends to have a tremendous amount of blind (or at least very nearsighted) faith in its vendors and in both its in-house-developed and its purchased systems.
As an example of the industry's blind faith, a company that had developed a system in-house nearly 20 years earlier had been relying on an outside vendor to maintain the system for several years. Because the firm had used the system without any apparent problems for a long period of time, the company was very confident about its integrity and reliability and proudly defended the system when FDA challenged its validation. Much to the company's surprise, when forced to take a hard look at the system, the company discovered that a fundamental design flaw had been built into the way the software rounded numbers. As a result, many years of QC laboratory analyses and results were called into question, and it required an extraordinary, time-consuming, and costly effort to reassess the system's data and convince FDA that no further regulatory action was necessary with regard to the products that were on the US market.
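The article does not describe that firm's specific flaw, but a rounding discrepancy of the kind at issue is easy to illustrate. In the hypothetical Python sketch below, two common rounding conventions, round-half-to-even (Python's built-in behavior) and round-half-up (a convention many laboratory SOPs specify), disagree exactly at the halfway values where borderline results live:

```python
from decimal import Decimal, ROUND_HALF_UP


def round_half_up(value):
    """Round half-up, the convention many SOPs expect (illustrative only)."""
    return int(Decimal(str(value)).quantize(Decimal("1"),
                                            rounding=ROUND_HALF_UP))


for reading in [0.5, 1.5, 2.5, 3.5]:
    print(reading,
          "half-to-even:", round(reading),    # Python's built-in round()
          "half-up:", round_half_up(reading))

# round(2.5) == 2 under round-half-to-even, but round_half_up(2.5) == 3.
# A system that silently applies a convention different from the SOP can
# pass or fail the same raw measurement at a specification boundary.
```

A design flaw this small, buried in a system trusted for years, is exactly the kind of defect that can silently call years of QC results into question.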
Common organizational approach
In general, the information systems (IS) or information technology (IT) function has not been treated as a core competency in the pharmaceutical industry. In most companies, IS/IT departments are established and managed primarily as a service function to the rest of the business. In other words, the IS/IT organization ordinarily is viewed as an administrative or infrastructure necessity that is there to respond to and support the company's more critical core business demands in research and development, manufacturing, and commercial operations. As a result, the IS/IT departments' objectives and goals typically are driven more by system delivery deadlines and budgets than by quality and good practice.
The typical corporate structure also has been a key problem in fueling companies' difficulties with software and computerized systems. Most companies have an organizational structure in which the development function and the business (i.e., system use and maintenance) function are separated in terms of their operational responsibilities and financial controls. As a result, the budgets for IS/IT development and the use and maintenance phases of systems are managed separately by independent parts of the company.
Because of this separation of fiscal responsibility, systems tend to be developed and then "tossed over the fence" to the business function, which must then deal with the consequences of any development shortcomings. Under this approach, the development organization is considered to have met its objectives if it delivers a new system on time and within its operating budget, even if key development steps (such as integration or system testing) were skipped to meet the delivery deadline. Despite its shortsighted nature, this approach is not uncommon, and companies often incur change and maintenance expenses that far outstrip what it would have cost to develop the software or system properly in the first place.
Continued debate and dance around the predicate rules
In its Part 11 scope and application guidance, FDA described its "narrowed" interpretation of the regulations and explained how the agency intends to apply its enforcement discretion during the ongoing re-examination of Part 11 (2). In summary, the guidance more explicitly stated what FDA had been saying for some time: until the agency decides how it intends to go forward with Part 11, it will not enforce many of the Part 11 requirements per se, but will fully enforce the requirements of the predicate rules. For some FDA-regulated industries, this shifted enforcement focus will significantly reduce the effect of Part 11 requirements on their businesses. That is not the case, however, for the pharmaceutical, medical device, and biologics industries because many of the Part 11 requirements are also effectively required under the predicate rules.
FDA's primary focus on the predicate rules makes good regulatory and scientific sense. The problem is that the industry still has widely varying views of the need for, and the mechanism of, adequately ensuring system and data integrity beyond what is and is not explicitly stated in the various other regulations in Title 21 of the CFR. For example, FDA certainly understands that validation is and has been required and enforced for decades in areas where the published language of the predicate rules does not include the term validation (e.g., the Part 211 pharmaceutical GMP and Part 58 GLP regulations). It should be clear by now that many of the predicate rules implicitly require validation. For example, both Parts 58 and 211 include requirements for equipment to be of "appropriate design." The term equipment has been interpreted to include computers, and validation is the way to demonstrate that a computer is of "appropriate design." Not all companies understand these implicit requirements, however, and some adhere to the mistaken belief that if the predicate rules do not explicitly mention validation, then validation is not required. Both FDA and the industry must recognize this disparity and exercise care in making simple references to the "validation requirements of the predicate rules."
A science-based solution: using hazard control
FDA is in the process of adopting and applying a risk-based (what otherwise might be stated as a hazard control) approach to Part 11. This approach should bring a breath of fresh scientific air to both FDA and the regulated industries. Some companies recognize that this was a rational and inevitable shift in FDA's thinking, and that hazard control is not just the latest quality fad. In fact, those who have experience in other industries understand that hazard control is a common fact of life in engineering design activities.
As noted previously, the concepts and standards of good software and systems engineering practices are not new. The same can be said with regard to formal hazard control methodologies, all of which have roots in methods that were originally developed by the US military in the 1940s. They include various long-standing and common engineering tools that involve the systematic identification and evaluation of potential hazards in a system or process, and the careful consideration, development, and implementation of design changes or other controls intended to avoid the occurrence or mitigate the consequences of each identified hazard. This kind of approach also is intended to accommodate and address the full scope of a "computerized system" (which FDA defines as software, hardware, procedures, and people) so that additional control mechanisms and other safeguards beyond those designed into the software (such as additional procedural controls and confirmatory checks) are properly understood and incorporated into the use of the whole system.
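As an illustration of what such a methodology produces, the sketch below applies an FMEA-style analysis, one common hazard control tool, to a computerized system. The hazards, rating scales, scores, and action threshold are all hypothetical examples: each identified hazard is scored for severity, probability of occurrence, and detectability, and the resulting risk priority number (RPN = severity × occurrence × detectability) indicates which hazards demand design changes or additional procedural controls.

```python
from dataclasses import dataclass


@dataclass
class Hazard:
    """One row of a hypothetical FMEA-style hazard register."""
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (remote) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detectability."""
        return self.severity * self.occurrence * self.detection


hazards = [
    Hazard("Calculation error in potency result",
           severity=9, occurrence=3, detection=7),
    Hazard("Audit trail silently disabled",
           severity=8, occurrence=2, detection=8),
    Hazard("Typo on an SOP title page",
           severity=2, occurrence=5, detection=2),
]

ACTION_THRESHOLD = 100  # hypothetical cutoff; a real analysis must justify it

for h in sorted(hazards, key=lambda h: h.rpn, reverse=True):
    action = ("mitigate: design change or added procedural control"
              if h.rpn > ACTION_THRESHOLD else "accept and monitor")
    print(f"RPN {h.rpn:4d}  {h.description} -> {action}")
```

The output is a documented, prioritized basis for control decisions, which is precisely what makes the resulting design defensible during an inspection.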
FDA's suggestion for the development and use of a "justified and documented risk assessment" for key computerized systems issues (2) also makes sense from both a scientific and a regulatory perspective. The difficulty is that the agency's conceptual rationale once again may be heading for another manifestation of the reality gap described previously. The concepts and methods of hazard control and risk mitigation are not well understood or routinely used for software development in many pharmaceutical companies. Unlike the knowledge and capabilities that have been developed in the medical device industry regarding the understanding and management (i.e., appropriate mitigation) of risk, the pharmaceutical industry generally has much more limited experience in the formalized application of hazard control methodologies. This has often created problems in FDA inspections, simply because the decisions about what was done or should have been done (e.g., with regard to the scope of testing) were not grounded in good science or logic or based on a thorough understanding of the design and intended functionality of the software and system. Without the scientific and logical foundation provided by the rigorous application of a hazard control methodology, this kind of approach appears arbitrary and is very difficult for the inspected company to defend.
FDA and the industry do not need to reinvent the wheel or struggle to create an FDA-specific set of expectations for justified and documented risk assessments. The proper use of a hazard control methodology will lead to documented, scientifically justified bases for the design and level of controls to be applied to a specific computerized system, in accordance with its intended use. In other words, each system would be required to be appropriately designed and controlled based on documented use- and risk-based scientific reasoning—a notion that is completely consistent with the flexible and scalable nature of validation. In this regard, a word processing system used to generate hard-copy standard operating procedures that are verified by proofreading, review, and approval would require fewer controls than a system that has a greater potential to affect product, process, record, and data integrity and is less capable of having its results and outputs fully verified.
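The word-processor example suggests how controls might scale. The sketch below is entirely hypothetical (the impact categories and control sets are illustrative, not a regulatory scheme), but it captures the logic: required controls grow with a system's potential impact and shrink when its outputs can be fully verified by independent means.

```python
# Hypothetical illustration of risk-scaled validation controls.
BASE_CONTROLS = ["documented requirements", "installation qualification"]


def required_controls(impact, output_fully_verified):
    """Scale the control set to intended use and risk (illustrative only)."""
    controls = list(BASE_CONTROLS)
    if impact in ("medium", "high"):
        controls += ["traceability matrix",
                     "system testing against requirements"]
    if impact == "high":
        controls += ["code review", "audit trail review",
                     "periodic re-evaluation"]
    if output_fully_verified and "code review" in controls:
        # e.g., hard-copy SOPs that are proofread, reviewed, and approved:
        # independent verification of every output reduces reliance on
        # controls built into the software itself.
        controls.remove("code review")
    return controls


# Word processor producing proofread hard-copy SOPs:
print(required_controls("low", output_fully_verified=True))
# System directly affecting product, process, and data integrity:
print(required_controls("high", output_fully_verified=False))
```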
The application of formal hazard control methods admittedly becomes more complex in instances in which the industry has limited knowledge of the details of the software design, structure, and data flows, which is often the case with vendor-developed and vendor-supplied software (especially for large, complex systems such as enterprise resource planning [ERP] systems). This places even greater emphasis on the science behind design and implementation decisions, and requires subjective, rational assumptions to be made and included in the analysis.
Positioning a company for success
To position itself for success in its Part 11 compliance, a company should, first and foremost, focus on logic and good science in everything it does, instead of treating compliance itself as the target. The company should ensure it has sufficient internal capabilities and experience with good software and systems engineering practices, and then adopt them as its standard approach to ensuring reliability and data integrity in electronic systems.
Companies should embrace the fact that the formal application of hazard control methodologies is a core competency that will be as critical to an organization's success as its development pipeline—not just with regard to software and systems, but also for process and formulation development, manufacturing changes, and the entire life cycle of a medical product in the twenty-first century.
Gordon B. Richman is vice-president of strategic compliance consulting and general counsel at EduQuest, Inc., 1896 Urbana Pike, Suite 14, Hyattstown, MD 20871, tel. 301.874.6031, gordonbrichman@EduQuest.net.
References
1. Code of Federal Regulations, Title 21, Food and Drugs (General Services Administration, Washington, DC, April 1, 2005), Part 11.
2. US Food and Drug Administration, Guidance for Industry, Part 11, Electronic Records; Electronic Signatures—Scope and Application (FDA, Rockville, MD, August 2003).
*This article is an update to "The End of the 21 CFR Part 11 Controversy and Confusion?" Pharm. Technol. Eur., 15 (9), 51–56 (2003).