A Risk-Based Approach to Data Integrity

Article
Pharmaceutical Technology, July 2, 2015
Volume 39, Issue 7, pp. 46–50

Heightened regulatory scrutiny of data integrity highlights the need for comprehensive procedural reviews and strategies for managing mission-critical information.

The regulatory requirement for data integrity is not new; it was stated in Title 21 of the Code of Federal Regulations (CFR), Part 11 in 1997 (1). In the area of cGMP, regulatory focus on the integrity of electronic and paper-based data has increased sharply, and systems that formerly received only superficial reviews have come under intense scrutiny. Standalone raw-data-generating systems and business processes, as well as interfaced business and production control systems, present a large pool of crucial business information in which data integrity issues can occur.

Reviewing regulatory citations concerning data integrity, including FDA warning letters and European Medicines Agency statements of GMP non-compliance, invariably leads to the conclusion that regulators currently focus strongly on systems involved in the generation of quality-control data. Numerous early citations were prompted by fraudulent behavior; focus has therefore also fallen on the few tools that can detect such behavior after the fact. The primary detection tool is a system's capability to write a detailed audit trail subject to the rules of 21 CFR Part 11, so some industry approaches to ensuring data integrity concentrate on these types of systems and their respective audit trails.

Two other important aspects of data integrity are the validated state of a process or computerized system (ensuring the accuracy of generated or recorded data) and the management of critical authorizations (protecting data against integrity breaches during operation).

When implementing measures to establish, maintain, and review data integrity across an organization, the following steps should be followed:

Awareness. It is crucial that employees at all levels understand the importance of data integrity and the influence they can have on data through the authorizations assigned to their job roles. This understanding can be achieved with a relatively short, simple training session across an organization. More detailed sessions are required for process, system, and data owners; this training should describe the responsibilities for data within each employee's remit, as well as accountability for, and the consequences of, accidental or intentional integrity breaches.

Standardization. The standardization step should be based on available regulatory guidance, such as the definitions (2) from the UK’s Medicines & Healthcare products Regulatory Agency (MHRA), to ensure a common understanding of terms and concepts. This step should include, but not be limited to, interpretation of available government regulations and guidelines, internal procedures, terminology and concepts, as well as levels of risk for data.

Gap analysis. A subsequent gap analysis of processes and systems, with emphasis on the existing controls for data integrity and their compliance with regulations, will yield the basis for the next step in the process: the determination of the risk associated with each process or system and the data generated or modified by it. As with any risk assessment, thresholds for mitigating action should be set before assigning criticalities to the individual data elements and their controls. In general, any such risk-based approach should be based on accepted standards, such as ICH Q9, Quality Risk Management (3).

Risk determination. The completed risk determination provides the basis for implementing newly required controls in addition to existing ones; businesses that are already GMP-compliant will often have data integrity under good control. The determined level of risk should be taken into account when deciding whether to implement technical or procedural controls.
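As an illustration of how such a risk determination might be mechanized, the following sketch applies an ICH Q9-style risk priority calculation with thresholds fixed in advance, as recommended above. The factor names, 1–5 scales, and threshold values are assumptions made for illustration, not prescribed values.

```python
# Minimal sketch of a risk determination step in the spirit of ICH Q9.
# The factors, 1-5 scales, and action thresholds are illustrative
# assumptions; a real assessment must define them before scoring begins.

from dataclasses import dataclass

@dataclass
class DataElement:
    name: str
    severity: int       # impact of an integrity breach on patient safety (1-5)
    occurrence: int     # likelihood of a breach given current controls (1-5)
    detectability: int  # 1 = easily detected ... 5 = hard to detect

def risk_priority(e: DataElement) -> int:
    """Classic risk priority number: severity x occurrence x detectability."""
    return e.severity * e.occurrence * e.detectability

# Thresholds agreed *before* criticalities are assigned, as the text notes.
ACTION_THRESHOLD = 40   # above this: implement additional controls
REVIEW_THRESHOLD = 15   # above this: include in the periodic review

elements = [
    DataElement("chromatography result (release test)", 5, 2, 4),
    DataElement("warehouse bin location", 1, 3, 2),
]

for e in elements:
    rpn = risk_priority(e)
    if rpn > ACTION_THRESHOLD:
        action = "mitigate: add technical or procedural controls"
    elif rpn > REVIEW_THRESHOLD:
        action = "accept: include in periodic review"
    else:
        action = "accept: no additional action"
    print(f"{e.name}: RPN={rpn} -> {action}")
```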

Once implemented, the systems, controls, and data should be reviewed periodically, at a frequency commensurate with the determined risk, type of system, and industry guidance/regulatory requirements. Table I indicates the difficulties associated with the different types of systems.

Table I: Data integrity challenges for the different types of systems.

| Aspect | Quality control lab (data acquisition) | Manufacturing (data acquisition) | Business (data processing) |
| --- | --- | --- | --- |
| Validated state | Stable, usually easy to control and maintain (no frequent changes) | Stable, usually easy to control and maintain (for single-product facilities) | Highly variable, frequent changes, numerous interfaces |
| Audit trails | Limited, usually compact and relatively easy to analyze | Limited, usually easy to analyze (operator logs); critical data in batch records | Extensive, difficult to separate by batch/product, difficult to analyze |
| Critical authorizations (physical and logical access) | Diversified, difficult to manage centrally, frequent access by diverse people | Limited, easy to manage (exception: package units, skids) | Extensive, difficult to manage and control |

Audit trails
According to agency warning letters, FDA expects reviews of audit trails to be performed as part of the release of each batch. Equally widespread is the notion that audit trails from analytical release tests should be reviewed for every test. It is obvious, though, that indiscriminate application of such rules will generate a large number of reviews of perfectly accurate audit trails, revealing no untoward activity and, ultimately, not generating value, ensuring product quality, or improving patient safety.

To perform meaningful audit trail reviews, it is important to ask what the review aims to accomplish. A review to determine whether an audit trail is functioning correctly should be limited to the initial validation phase of a computerized system. Typically, for the systems listed in Table I, audit trail functionality is a standard, off-the-shelf feature, possibly configurable to define which activities the end user wants to record. There is no need to requalify the correct function of an audit trail on a periodic basis: a proper operational qualification test will establish valid functionality, and requalification should be limited to system tests after major software upgrades. Prior to implementation, however, an audit trail should be shown to be capable of recording events with sufficient granularity; if it is not, it will be difficult to perform meaningful, value-adding reviews of the recorded information.

Reviewing an audit trail to establish data integrity requires prior definition of the critical items to be reviewed. For this definition to be meaningful, the relationship should be established between the audit trail elements and either the critical quality attributes (CQAs) tested by an analytical instrument or the critical process parameters controlled, and measurements recorded, by a control system during manufacturing. There should be a strong relationship between the criticality of test results and the frequency and depth of review of the associated audit trails; audit trails for analytical tests that have no bearing on CQAs do not have to be reviewed to the same degree. To support this practice, a scientifically sound definition of the CQAs is required prior to implementation of the review cycle, and development of analytical methods, manufacturing processes, or business processes should include the definition of these critical attributes. If these attributes are not defined, it will be difficult to decide at a later time which data integrity breaches are critical in terms of patient safety and which are not.

For a chromatography data system, for example, analysts require some flexibility to work with the data acquired by the system and the connected instrument to account for changes in system performance. Extensive manipulation, however, can have the effect that results that were out of specification during data acquisition are later within specification. For such analytical tests, the critical audit-trail entries to be reviewed for high-risk (release) samples must include manual integration events and changes to processing-method integration settings.
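A review filter built on such a definition could look like the following sketch. The export format, column names, and event labels are hypothetical, because audit-trail exports differ between chromatography data systems.

```python
# Sketch: pull the critical entries from a chromatography data system's
# audit-trail export for high-risk (release) samples. The CSV columns and
# event names are assumptions; real exports vary by vendor.

import csv

CRITICAL_EVENTS = {
    "manual integration",
    "processing method changed",
    "integration settings changed",
}

def critical_entries(audit_csv_path, release_sample_ids):
    """Yield audit rows for release samples that carry a critical event."""
    with open(audit_csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if (row["sample_id"] in release_sample_ids
                    and row["event"].lower() in CRITICAL_EVENTS):
                yield row

# Example usage (assuming the hypothetical export exists):
# for row in critical_entries("cds_audit_trail.csv", {"REL-2015-0712"}):
#     print(row["timestamp"], row["user"], row["event"], row["sample_id"])
```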

In another example, the completion of sample well templates on microtiter plate readers after data acquisition causes a misordering of events, which will only appear in the audit trail. Audit trail entries indicating such deviations from established procedure should be included in the review.
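A misordering of this kind can be detected mechanically. Assuming the audit trail records timestamped events for data acquisition and template completion (the event names below are hypothetical), a simple check flags any template completed after its plate was read:

```python
# Sketch: flag sample well templates completed *after* data acquisition.
# The event names and (plate_id, event, timestamp) layout are assumptions
# about the plate reader's audit-trail export.

from datetime import datetime

def templates_completed_after_acquisition(events):
    """events: iterable of (plate_id, event_name, iso_timestamp) tuples."""
    acquired_at = {}
    for plate, name, ts in sorted(events, key=lambda e: e[2]):
        t = datetime.fromisoformat(ts)
        if name == "data_acquired":
            acquired_at.setdefault(plate, t)
        elif name == "template_completed" and plate in acquired_at:
            if t > acquired_at[plate]:
                yield plate, ts  # template filled in after the read: review

events = [
    ("P-001", "template_completed", "2015-07-02T08:55:00"),  # normal order
    ("P-001", "data_acquired",      "2015-07-02T09:00:00"),
    ("P-002", "data_acquired",      "2015-07-02T09:10:00"),
    ("P-002", "template_completed", "2015-07-02T09:25:00"),  # out of order
]
print(list(templates_completed_after_acquisition(events)))
# [('P-002', '2015-07-02T09:25:00')]
```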

In system audit trails or logs (as opposed to data audit trails), certain patterns of activity should be reviewed. Repeated failed logins, which may indicate fraudulent break-in attempts, are a prime example. Built-in algorithms for detecting such activities should be enabled; a review of the output of such an algorithm replaces the physical review of the raw audit trail for these events. Read/write errors to and from data storage, which could indicate a breach of data integrity due to hardware failures, should also be checked.
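Where a system does not offer such a built-in algorithm, the failed-login pattern is straightforward to detect on an exported log. The sketch below assumes a simple (user, outcome, timestamp) log layout and illustrative thresholds:

```python
# Sketch: detect repeated failed logins in an exported system log.
# The log layout, outcome label, and thresholds are illustrative.

from collections import deque
from datetime import datetime, timedelta

MAX_FAILURES = 3
WINDOW = timedelta(minutes=10)

def suspicious_accounts(log_entries):
    """log_entries: time-ordered iterable of (user, outcome, iso_timestamp)."""
    recent = {}      # user -> timestamps of failures inside the window
    flagged = set()
    for user, outcome, ts in log_entries:
        if outcome != "login_failed":
            continue
        t = datetime.fromisoformat(ts)
        q = recent.setdefault(user, deque())
        q.append(t)
        while q and t - q[0] > WINDOW:
            q.popleft()
        if len(q) >= MAX_FAILURES:
            flagged.add(user)
    return flagged

log = [
    ("jdoe", "login_failed", "2015-07-02T10:00:00"),
    ("jdoe", "login_failed", "2015-07-02T10:02:00"),
    ("jdoe", "login_failed", "2015-07-02T10:04:00"),
]
print(suspicious_accounts(log))  # {'jdoe'}
```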

Events that are not on the list of critical items, and whose review would not improve patient safety or compliance or add value, should be excluded from any review by default. This exclusion becomes especially important for the review of audit trails from enterprise resource planning (ERP) software used in the supply chain and materials management; when configured accordingly, these systems can generate large amounts of audit-trail data from normal daily transactions.

Business process steps executed automatically by the system, such as the promotion of a document from one status to the next after a user's electronic signature releases it for production, can generate many audit trail entries. In a company with hundreds of system users, this activity may result in hundreds of megabytes of audit trail information to review. A restrictive filtering process is needed to eliminate all non-critical events and focus the review on the critical entries; on these types of business systems, a clear definition of the critical entries is the only way to perform a meaningful review in support of data integrity.
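In code, such a restrictive filter amounts to exclusion by default: only entries whose type appears on the predefined critical list survive into the review set. A minimal sketch, with hypothetical ERP event types:

```python
# Sketch: exclude-by-default filtering of an ERP audit trail. Anything not
# on the predefined critical list is dropped; the event-type names are
# hypothetical, since they depend on the ERP system's configuration.

CRITICAL_ENTRY_TYPES = {
    "batch_record_changed",
    "material_status_override",
    "electronic_signature_deleted",
}

def review_set(entries):
    """Keep only critical entries; automatic workflow steps fall through."""
    return [e for e in entries if e["type"] in CRITICAL_ENTRY_TYPES]

entries = [
    {"type": "document_status_promoted", "user": "SYSTEM"},   # excluded
    {"type": "material_status_override", "user": "jsmith"},   # reviewed
]
print(review_set(entries))
```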

Critical authorizations
As shown in Table I, data acquisition systems in manufacturing may require the least effort to control access security, because these systems typically are physically separated from other systems (non-networked) and, in many cases, are accessible only through physical access controls (badges, keys).

Access to laboratory systems and processes will be highly varied, with stand-alone instruments and linked instrument computers. For each instrument, the requirements for data integrity apply, whether the system complies with 21 CFR Part 11 or the test results are printed and the electronic information is discarded. Commonly, access to data and instruments is based on what the drug manufacturer considers to be critical authorizations; these permissions warrant periodic review. As with the audit trail events, critical authorizations must be defined prior to launching the review cycle to generate consistent and useful reviews. Notably, the review of critical authorizations will not include analysis for fraudulent access attempts, but only the status and history of authorization levels and issued permissions.

System administrator permissions for all systems, regardless of the area in which they are used, should be reviewed periodically. System administrators can act outside enforced business processes and have direct access to database tables via management tools that may not include input validation or other data-integrity safeguards. It is crucial to control these authorizations tightly to avoid intentional or accidental data corruption.

Super-user permissions, which may be needed to change recipes on process control systems or test methods on laboratory systems, also should be considered critical, because activities of these users may cause incorrect data further along the management process.

Authorizations needed to create records in a system, or to initiate the generation or acquisition of information, need not necessarily be classed as critical authorizations and, as such, would not be included in the periodic review (except for business reasons, such as license retirement).
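One workable form of the periodic authorization review described above is a diff of currently held permissions against the approved baseline. The sketch below assumes simple user-to-roles exports and hypothetical role names:

```python
# Sketch: periodic review of critical authorizations as a baseline diff.
# The role names and user->roles export format are assumptions; real
# systems expose this information in vendor-specific ways.

CRITICAL_ROLES = {"system_administrator", "super_user"}

def authorization_findings(approved, current):
    """Report critical roles held without a matching baseline entry."""
    findings = []
    for user, roles in current.items():
        unapproved = (roles - approved.get(user, set())) & CRITICAL_ROLES
        if unapproved:
            findings.append((user, sorted(unapproved)))
    return findings

approved = {"mjones": {"super_user"}}
current = {
    "mjones": {"super_user"},            # matches the baseline
    "tbrown": {"system_administrator"},  # not in the baseline: finding
}
print(authorization_findings(approved, current))
# [('tbrown', ['system_administrator'])]
```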

Validated state
A properly validated computer system or business process will support the maintenance of data integrity. It is therefore crucial that the baseline validation is kept up to date to show that all changes have been tested in accordance with assigned risk criteria. Often, the primary aim of a review of the validated state of a system is to show that it still complies with regulations for computerized systems, such as 21 CFR Part 11 or EU GMP Annex 11 (4). For systems subject to few or even no changes, there may not be added value in reviewing this documentation periodically. Unless there is a degradation of performance of the system due to its nature or mode of action, a review of the validated state at an appropriate frequency will confirm the state of control and compliance required by regulations.

In particular, business systems such as large-scale ERP or document management systems are often subject to frequent changes in configuration and functionality. For these types of systems, a periodic review covering all change control records, service tickets, and documentation changes will be extensive, and determining the cumulative effect of changes is difficult and not always entirely accurate under these circumstances. It is, therefore, necessary when collecting information for the review to select only those changes and modifications with a potential impact on data integrity in the areas of patient safety and product quality. For a cGMP assessment of data integrity, human-resources or financial data may not be applicable and can be excluded from review; tickets and change requests in these areas can be omitted. It has proven useful to tag change requests as GMP-relevant or business-relevant during the impact assessment performed by the quality unit as part of the regular change control process. Such a tag allows easy filtering during review of the validated state, and the same principle should be applied to business process deviations and help-desk tickets.
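With such tagging in place, assembling the review scope becomes a simple filter over the change-control records. The field names below are assumptions about how the tag might be stored:

```python
# Sketch: select only GMP-relevant changes for the validated-state review.
# The record fields ("tag", "summary") are hypothetical; the tag itself is
# assigned by the quality unit during change control, as described above.

changes = [
    {"id": "CR-101", "tag": "gmp",      "summary": "batch record layout change"},
    {"id": "CR-102", "tag": "business", "summary": "payroll interface update"},
    {"id": "CR-103", "tag": "gmp",      "summary": "audit trail configuration"},
]

review_scope = [c for c in changes if c["tag"] == "gmp"]
for c in review_scope:
    print(c["id"], c["summary"])  # CR-101 and CR-103 only
```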

For all types of reviews and all types of systems, the actual detection of a data integrity breach should cause a process deviation to be raised, followed by corrective and preventive action. This action can include, if warranted, an increase in the frequency of the particular review cycle. Table II offers guidance for review frequencies based on the Good Automated Manufacturing Practice (GAMP 5) software categories of a system (5) and an arbitrary scale of risk (high/medium/low). The scale should be determined according to the proximity of the system to the regulated product and the potential impact a data integrity breach would have on patient safety and product quality.

Table II: Suggested review frequencies for software by risk class.

| Risk class | GAMP category 5 | GAMP category 4 | GAMP category 3 | GAMP category 1 |
| --- | --- | --- | --- | --- |
| High | 3 months | 6 months | 12 months | 24 months |
| Medium | 6 months | 12 months | 24 months | For cause |
| Low | 24 months | 36 months | For cause | For cause |
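The schedule in Table II can be encoded directly as a lookup, which makes it easy to drive a review calendar from a system inventory. The sketch below reproduces the suggested frequencies, with "for cause" represented as no scheduled review:

```python
# Sketch: review-frequency lookup mirroring Table II. Returns the number
# of months between reviews, or None for "for cause" (no scheduled review).

REVIEW_MONTHS = {  # (risk class, GAMP software category) -> months or None
    ("high", 5): 3,    ("high", 4): 6,     ("high", 3): 12,   ("high", 1): 24,
    ("medium", 5): 6,  ("medium", 4): 12,  ("medium", 3): 24, ("medium", 1): None,
    ("low", 5): 24,    ("low", 4): 36,     ("low", 3): None,  ("low", 1): None,
}

def review_interval(risk_class, gamp_category):
    return REVIEW_MONTHS[(risk_class.lower(), gamp_category)]

print(review_interval("High", 5))  # 3 (months)
print(review_interval("Low", 3))   # None -> review for cause only
```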

Conclusion
The number of computerized data acquisition and processing systems in the pharmaceutical industry is growing quickly, and with it the number of records inextricably linked to the regulated products the industry manufactures. The situation is complicated by the integration of systems, interfaces between systems, and the conversions, calculations, and compression of information that may take place during transmission. The knowledge generated from these data is used directly in the manufacture of drugs; data integrity and its practical maintenance are therefore crucial to the safety of patients and the quality of healthcare products. By using a risk-based approach and the principles presented here, it is possible to generate meaningful reviews and proof of data integrity.

References
1. Code of Federal Regulations, Title 21, Food and Drugs (Government Printing Office, Washington, DC), Part 11.
2. MHRA, GMP Data Integrity Definitions and Guidance for Industry (March 2015).
3. ICH, Q9 Quality Risk Management, Step 4 version (2005).
4. EMA, EudraLex Volume 4: Good Manufacturing Practice, Annex 11: Computerised Systems (revision January 2011).
5. ISPE, GAMP 5: A Risk-Based Approach to Compliant GxP Computerized Systems (International Society for Pharmaceutical Engineering, 2008).

Citation: K. In Albon, D. Davis, and J.L. Brooks, "A Risk-Based Approach to Data Integrity," Pharmaceutical Technology 39 (7) 2015, pp. 46–50.
