Accelerating Time-to-Insight Throughout the BioPharma Lifecycle

Pharmaceutical Technology's In the Lab eNewsletter, September 2022
Volume 17, Issue 9

Innovation is driven by a strong digital backbone.

The next generation of biopharmaceutical therapies promises to significantly advance human health by providing new ways to tackle some of the most complex diseases. While the innovation behind these novel therapies is cutting edge, many of the processes involved in their development and manufacturing have failed to keep pace with the modern approaches adopted by other industries.

The lifecycle and processes required to bring a therapy to market are complex and involve significant milestones such as regulatory approval. However, the vast majority of these processes are still supported by manual tasks and by silos of data and knowledge. In an industry where patient safety is paramount and speed to market has a direct impact on human health, data and the insights derived from it hold the key to accelerating timelines and unlocking innovation.

To bring promising new therapies to the patients who need them faster, an approach is needed that brings people, processes, and data together across the full lifecycle. By ingraining scientific understanding and know-how into a data backbone that supports the therapy on its journey from research to manufacturing, many of the bottlenecks and quality risks triggered by manual tasks can be engineered out. This approach results in a solid data foundation that unlocks the power of artificial intelligence (AI) and digital twins, delivering the insights needed to drive innovation, accelerate regulatory filing and technology transfer, and ultimately bring high-quality therapies to market faster.

Post-pandemic acceleration

The need to bring new therapies to market faster has always existed, but it became even more urgent with SARS-CoV-2. The Pfizer–BioNTech vaccine, which went from initial research to the first vaccination in nine months, raised the question, “Does it really need to take 10 years and $2 billion to deliver life-changing therapies to patients?” (1).

By the end of 2021, 74% of the novel drugs approved by FDA (2) had undergone an expedited development and quality assessment process. Interestingly, not all of them were COVID-19 vaccines, demonstrating a fundamental shift toward accelerating development and manufacturing.

The urgency is further amplified by the number of products that will go off patent in the near future. Over the next eight years, 190 drugs and 69 blockbusters (with $236 billion in revenue) are expected to face a patent cliff (3). This places immense pressure on the biopharmaceutical industry to bridge the gap with next-generation blockbuster drugs.

Current state of biopharma lifecycle

Despite the huge investment in R&D and groundbreaking innovations in genomic medicine, the typical timeline for bringing a new therapy to market can be as long as 10 to 15 years. Many of the patients who require these cutting-edge personalized therapies are sick and need access to these treatments much sooner than that.

When focusing on the key components of the lifecycle, we find many manual, paper-based processes and data silos that are not well connected. The industry has adopted some digital technologies, but often these tools are not working in harmony. This results in experts wasting 20% of their time on data administration rather than improving and accelerating processes (4). Due to data inaccessibility, especially at the development stages of the lifecycle, 10–20% of the work has to be repeated (5), wasting time and resources unnecessarily. At the operational level, this is highly inefficient and introduces significant quality risks. From a strategic perspective, the opportunity to use high-quality data to drive innovation is lost, and, with so many manual tasks, only a small proportion of the data is typically used for advanced analytics.

Beyond the operational foundations, the milestones involved in regulatory submission and technology transfer from clinical production to commercial manufacturing require rapid access to high-quality data and the insights that help define and control the process as it scales. With the growing trend towards contract manufacturing, the problems of managing and transferring data and insight are extending beyond the corporate firewall.

The potential returns are significant for biopharmaceutical companies that can bring order to this complex data landscape. The International Council for Harmonisation (ICH) has laid out guidelines that underline the importance of data, insight, and quality throughout the lifecycle of a drug. By demonstrating a robust understanding of the process, the level of regulatory scrutiny can be reduced. In turn, this presents an opportunity to further optimize processes post-approval, which can drastically improve yield while reducing the costs and time needed to get therapies to patients.

Some companies have tried to integrate their legacy systems through integration layers and data lakes. However, while a data lake does bring silos together in a single location, it does not inherently capture how those data sets relate to each other or how they affect process performance and product quality. Data scientists still need ontologies and semantic enrichment to contextualize the data, so the data lake alone cannot drive the concrete understanding needed for a meaningful data backbone. Even if a data science team eventually manages to align and contextualize the data in its lake, it will have spent significant time doing so.
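To make that contrast concrete, the following is a minimal sketch, in Python, of what semantic enrichment might look like in practice: raw instrument tags landing in a data lake are mapped through a small, hypothetical ontology onto named process parameters and the quality attributes they influence. The tag names, ontology entries, and values are illustrative assumptions, not a real vendor schema.

```python
from dataclasses import dataclass

# Hypothetical ontology: each raw sensor tag is mapped to a named process
# parameter and the quality attribute it is known (or suspected) to influence.
ONTOLOGY = {
    "TT-101": {"parameter": "bioreactor temperature (degC)", "affects": "product titer"},
    "PH-204": {"parameter": "culture pH",                    "affects": "charge variants"},
    "DO-305": {"parameter": "dissolved oxygen (%)",          "affects": "aggregate level"},
}

@dataclass
class ContextualizedReading:
    batch_id: str
    parameter: str
    affects: str
    value: float

def contextualize(batch_id: str, raw_readings: dict[str, float]) -> list[ContextualizedReading]:
    """Attach process/quality context to raw sensor tags; flag unknown tags."""
    enriched = []
    for tag, value in raw_readings.items():
        meta = ONTOLOGY.get(tag)
        if meta is None:
            print(f"warning: tag {tag} has no ontology entry and stays uncontextualized")
            continue
        enriched.append(ContextualizedReading(batch_id, meta["parameter"], meta["affects"], value))
    return enriched

# Example: a raw instrument export from one batch, as it might land in a data lake.
raw = {"TT-101": 36.8, "PH-204": 7.05, "XX-999": 12.0}
for reading in contextualize("B-2022-014", raw):
    print(reading)
```

The point of the sketch is that the mapping step, not the storage location, is what turns pooled data into a backbone a process scientist can reason over.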

Better data management

To overcome the drawbacks of current lab data management systems, a transition from legacy systems to cloud-based platforms is needed. Such platforms enable data integration from lab equipment and electronic notebooks without sacrificing the insight that the data conveys. The resulting digital data backbone could highlight the connection between process parameters and quality attributes, minimizing the time spent aligning and contextualizing data and allowing companies to shift focus to business outcomes, process optimization, and regulatory compliance.

A more complete digital data backbone enables better predictive analytics in the manufacturing process, reducing the number of failed batches. In addition, the higher quality of the resulting data reduces the administrative burden on scientists and the need for excessive quality assurance.
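As an illustration of the kind of predictive analytics such a backbone enables, the sketch below fits a simple classifier relating contextualized process parameters to historical batch outcomes and then scores an in-progress batch. The parameters, records, and pass/fail labels are invented for the example, and scikit-learn's logistic regression stands in for whatever model a real deployment would use.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical batch records: rows are batches, columns are
# contextualized process parameters (temperature degC, pH, dissolved oxygen %).
X_history = np.array([
    [36.8, 7.05, 40.0],
    [37.2, 6.90, 35.0],
    [36.5, 7.10, 45.0],
    [38.0, 6.70, 25.0],
    [37.9, 6.75, 28.0],
    [36.9, 7.00, 42.0],
])
# 1 = batch met its quality specification, 0 = batch failed.
y_history = np.array([1, 0, 1, 0, 0, 1])

# Fit a simple classifier relating process parameters to batch outcome.
model = LogisticRegression().fit(X_history, y_history)

# Score an in-progress batch from its current parameter readings.
current_batch = np.array([[37.6, 6.80, 30.0]])
risk_of_failure = 1.0 - model.predict_proba(current_batch)[0, 1]
print(f"estimated probability of batch failure: {risk_of_failure:.2f}")
```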

Tech transfer with streamlined management

As explained in the ICH Q10 guidance (6), technology transfer is essential for sharing product and process knowledge between suppliers and manufacturers, especially when outsourcing is involved. With hundreds of new contract manufacturing agreements signed in 2021, outsourcing is undeniably on the rise.

Imagine a healthcare supplier that provides its contract development and manufacturing organization (CDMO) partner with poorly contextualized data. The CDMO would not be able to make full use of that data and would most likely set false expectations around scale and timeline. Faulty production would be detected too late down the line, meaning the entire team would have to suspend its current projects and repeat previous work to address the discrepancies.

A central cloud-based digital data backbone can mitigate the risks associated with outsourcing to a CDMO partner. By streamlining data management, biopharmaceutical companies improve their chances of partnering with CDMOs that have the right expertise, helping ensure that the drug manufacturing process is efficient, cost-effective, and fast. Companies can then focus on their business goals while gaining access to the CDMO’s equipment and facilities.

However, CDMOs are only one part of the equation. A digital data backbone is also extremely useful for seamless data sharing across all parties in the biopharmaceutical lifecycle. A good example is the collaboration between Cytiva and Biogen, which needed to assess the impact of raw material variability on manufacturing outcomes. By integrating process data from the manufacturer with raw material data from the supplier, the companies gained a systemic perspective on the process and could detect potential risks arising from raw material variability. These findings led to process optimizations that account for that variability (7). Similar integrative approaches pave the way for more reliable supplier–manufacturer relationships.
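A minimal sketch of that integrative approach, with invented data: supplier lot attributes and manufacturer batch outcomes, held in separate silos, are joined on a shared lot identifier so the effect of raw material variability on yield can be quantified. The column names and figures are hypothetical and are not drawn from the Cytiva–Biogen work.

```python
import pandas as pd

# Hypothetical supplier data: attributes of each raw material lot.
raw_material = pd.DataFrame({
    "lot_id":       ["RM-01", "RM-02", "RM-03", "RM-04"],
    "impurity_ppm": [12.0, 45.0, 18.0, 60.0],
})

# Hypothetical manufacturer data: which lot fed each batch and the batch yield.
batches = pd.DataFrame({
    "batch_id":  ["B-101", "B-102", "B-103", "B-104"],
    "lot_id":    ["RM-01", "RM-02", "RM-03", "RM-04"],
    "yield_gpl": [5.2, 4.1, 5.0, 3.6],
})

# Integrate the two silos on the shared lot identifier ...
combined = batches.merge(raw_material, on="lot_id")

# ... and quantify how raw material variability tracks with process outcome.
corr = combined["impurity_ppm"].corr(combined["yield_gpl"])
print(combined)
print(f"correlation between lot impurity and batch yield: {corr:.2f}")
```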

Use of digital twins

With streamlined data management, biopharmaceutical companies also gain access to more advanced technology. The premise of digital twins is particularly enticing. The ability to virtualize an entire lab experiment or manufacturing process means teams can predict outcomes and potential safety issues before they occur, giving them the opportunity to optimize at small scale. In this vein, the digital twin is a promising tool for shortening the drug development lifecycle and accelerating delivery to the patient.

However, constructing the “perfect” digital twin relies heavily on large volumes of enriched and contextualized data across different conditions. This is necessary to train the digital twin and improve its predictive and emulative power. The benefits of digital twins were demonstrated in messenger RNA (mRNA) vaccine research for SARS-CoV-2. Researchers used them to optimize the plasmid DNA (pDNA) to mRNA conversion, increase manufacturing capacity, and eliminate batch failures. In addition, establishing a digital twin allows researchers to repurpose existing data to study vaccine development for possible future variants (8).
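One common way to build the predictive core of such a twin is a data-driven surrogate model trained on historical runs. The sketch below, using invented process conditions and yields, fits a regressor and then sweeps candidate operating points in silico before any lab or plant time is committed; it is a toy illustration of the principle, not the modeling used in the cited vaccine work.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data for a process surrogate: each row is one run
# (reaction temperature degC, enzyme concentration g/L) and the observed yield.
conditions = np.array([
    [30.0, 0.5], [30.0, 1.0], [35.0, 0.5], [35.0, 1.0],
    [40.0, 0.5], [40.0, 1.0], [37.0, 0.8], [33.0, 0.7],
])
observed_yield = np.array([0.42, 0.55, 0.61, 0.78, 0.58, 0.70, 0.80, 0.60])

# The surrogate (the predictive core of a "digital twin") is trained on enriched run data.
twin = RandomForestRegressor(n_estimators=200, random_state=0).fit(conditions, observed_yield)

# Explore new operating points in silico before committing lab or plant time.
candidates = np.array([[34.0, 0.9], [36.0, 0.9], [38.0, 0.9]])
for point, pred in zip(candidates, twin.predict(candidates)):
    print(f"temperature={point[0]:.1f} degC, enzyme={point[1]:.2f} g/L -> predicted yield {pred:.2f}")
```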

Having worked on Middle East respiratory syndrome coronavirus (MERS-CoV), Moderna had a head start in vaccine development for SARS-CoV-2. This would not have been possible, however, had the company not retained comprehensive experimental data from the MERS outbreak. Using those data, Moderna could draw on its earlier insights into mRNA-based vaccine development, dramatically accelerating the research phase (9).

Digitization of quality assurance

Reducing manual quality assurance (QA) steps can also accelerate drug development. Preclinical studies in particular can be delayed by the volume of data that contract research organizations (CROs) must report, because legacy paper-based processes for recording study data involve multiple QA steps to ensure quality and reproducibility. Ultimately, this slows progress and extends timelines.

Data digitization can significantly reduce time spent on QA and overcome reporting bottlenecks (10). Digital workflows for recording data from laboratory process control, instruments, and business workflows differentiate between original, modified, and calculated versions of the data and make the original values easy to track and verify. This considerably reduces the time needed to create study reports, freeing up resources to run more studies per year.
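As a simple illustration of how a digital workflow can preserve that distinction, the sketch below keeps every recorded value in an append-only history so the original reading is never overwritten and each modification carries a reason and timestamp. The class and field names are hypothetical rather than taken from any particular informatics product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    timestamp: str
    kind: str        # "original", "modified", or "calculated"
    value: float
    reason: str

@dataclass
class Measurement:
    """A recorded value whose original reading is never overwritten."""
    name: str
    history: list[AuditEntry] = field(default_factory=list)

    def record(self, value: float, kind: str = "original", reason: str = "initial entry"):
        stamp = datetime.now(timezone.utc).isoformat()
        self.history.append(AuditEntry(stamp, kind, value, reason))

    @property
    def original(self) -> float:
        return self.history[0].value

    @property
    def current(self) -> float:
        return self.history[-1].value

# Example: an analyst corrects a transcription error; both values stay traceable.
peak_area = Measurement("peak_area_sample_7")
peak_area.record(1523.4)
peak_area.record(1532.4, kind="modified", reason="transcription error corrected")
print(f"original={peak_area.original}, current={peak_area.current}")
for entry in peak_area.history:
    print(entry)
```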

Overall impact

A well-curated digital data backbone accelerates the next generation of therapeutics through the lifecycle and assures quality, bringing products to market faster. Lengthy manufacturing and quality assessment does not in itself translate into a positive impact on patients; on the contrary, delays in bringing therapeutics to market reduce the survival rate and quality of life of patients suffering from life-threatening diseases. Furthermore, the lack of a digital data backbone could expose patients to unknown risks and adverse effects despite extensive quality-assessment efforts. Digitization of data management is therefore necessary to speed up the innovation that makes medicines safer and more effective.

Biopharmaceutical companies also benefit from advancements in data management. Research has estimated that, had rituximab for non-Hodgkin’s lymphoma and trastuzumab for breast cancer reached the market one year sooner, the value to patients, measured as willingness to pay, would have increased by $310 million and $8 billion for the two drugs, respectively (11). Another study showed that first-to-market products have a 9–12% market share advantage over their competitors for specialty and parenteral drugs (12).

Early entry into the pharmaceutical market, mediated by integrated digital lifecycle management, not only improves the chances of success but also makes post-launch modifications easier to implement, enabling more agility and opening opportunities for further cost-of-goods-sold improvements. Furthermore, thanks to shortened delivery times and increased return on investment, companies can allocate more time and resources to refining their products to serve patients with unmet needs.

References

1. J.A. DiMasi et al., Journal of Health Economics 47, 20–33 (May 2016).

2. P. Cavazzoni, “Many Important Drugs Approved in 2021 as COVID-19 Pandemic Continues,” fda.gov, Jan. 14, 2022.

3. J. Merrill, “The Next Big Patent Cliff Is Coming, and Time Is Running Out to Pad the Fall,” scrip.pharmaintelligence.informa.com, April 4, 2022.

4. IDBS, “Biopharma Development Data Management,” August 2021.

5. K. Moriss et al., “Making the Most of Drug Development Data,” pharmamanufacturing.com, Dec. 1, 2005.

6. ICH, Q10 Pharmaceutical Quality System (April 2009).

7. Cytiva, “Industry 4.0: Embracing Digital Transformation in Bioprocessing,” cytivalifesciences.com, June 20, 2022.

8. A. Schmidt et al., Processes 9 (5), 748 (April 2021).

9. C. Hill, “COVID-19’s Digital Legacy,” The Medicine Maker, July 14, 2021.

10. IDBS, “IDBS and AIT Bioscience Deliver First Paperless Bioanalysis Lab,” Press Release, Nov. 10, 2010.

11. E. Sun and T.J. Philipson, Cost of Caution: The Impact on Patients of Delayed Drug Approvals (Manhattan Institute, 2nd Edition, June 2010).

12. M. Cha and F. Yu, “Pharma’s First-to-Market Advantage,” McKinsey, Sept. 1, 2014.

About the author

Pietro Forgione is the vice-president of strategy at IDBS.
