Branching out from roots in industrial biotechnology, Riffyn Data Systems (and its CEO Tim Gardner) is taking the message of data-driven process improvement to biopharma.
Machine learning is being applied in a growing number of areas within pharma, including pre-clinical research, to focus efforts and speed up time to results. Leveraging this tool is Riffyn, whose cloud-based Process Data System, Riffyn Nexus, was designed to incorporate principles from Six Sigma (specifically its define, measure, analyze, improve, and control [DMAIC] rubric), as well as pharmaceutical quality by design (QbD) and measurement systems analysis.
Results have already been seen in industrial biotechnology companies such as the industrial enzyme developer, Novozymes. Established in 2014, Riffyn has recently been moving into more pharma applications. During the first quarter of 2020, before the COVID-19 pandemic began, CEO Tim Gardner, a specialist in synthetic biology, discussed the company’s origins and goals with Pharmaceutical Technology.
PharmTech: What made you develop the Process Data System and start the company?
Gardner: Riffyn was born out of an industrial biotech environment, and knowledge gained from the large-scale process development and manufacturing of chemicals produced by genetically engineered species. I had spent years in industrial microbiology and biotech, as vice-president of R&D for a company called Amyris, which focused on large-scale bio-manufacturing of renewable chemicals. We genetically engineered yeast to make products that ranged from detergents to cosmetic oils.
In industrial biotech, speed to market and low costs of production are key to success. Ideally, you want to minimize capital investment costs while developing core technologies, so you compete not only on performance but on price. Being able to do this required harnessing data in ways that few companies were capable of doing 10 years ago.
At that time, pharma hadn’t yet made the transition to data-driven process development and manufacturing because they weren’t under the same performance pressures with their processes.
Of course, they were under pressure from regulators and had to have the right regulatory mindset, but, in general, the focus was not on whether the process worked well but on whether it would be approved. In fact, the worst process in the world would still be OK if it received regulatory approval. This type of mindset does not drive innovation.
With Riffyn, we figured that we could take data-driven process improvement and combine it with the principles behind the ICH (International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use) guidelines adopted by FDA and EMA (the European Medicines Agency), and the QbD initiative. Those are fundamentally brilliant, some of the best technical directions for pharma that I’ve ever seen, and a recipe for data-driven process development.
Initially, few in the industry were really harnessing these concepts. There were too many difficulties entailed with practical issues (e.g., designing processes; integrating different teams of people; working on those processes and evolving them over time; capturing specifications; collecting data; and synthesizing data and specs of various forms of processes and versions). The goal was to put all of that together, to provide a holistic understanding of how manufacturing processes are working, but when QbD was introduced in 2004, it was too difficult to do.
However, it had already been done in other industries. Experiences that I had in industrial biotech showed it was not only possible to do, but that it could have a profound effect on R&D. In industrial biotech, utilizing QbD principles and data science practices led to four-to-five-fold increases in speed to market, cutting the average time from tech transfer to manufacturing down from more than a year, to three months.
We wanted to incorporate these principles into a software system that made it easy to harness them, not only for users in industrial biotech, but for pharma and biopharma development, to help usher in a transformation in speed-to-market and continuous improvement in the development and delivery of those products.
PharmTech: Since those early days, before you set up the company, have you seen a significant change in the industry’s acceptance of the fundamentals of QbD?
Gardner: We have definitely seen a shift of mindset and culture within the industry, and some change in practice. Today, the concepts of digitization and speed to market are much more commonly discussed as key transformations needed to bring biopharmaceutical organizations to the next level of progress.
These ideas are getting embedded more into practice. Recent surveys have shown that more than half of all pharma companies are using ICH guidelines around process validation in their daily work. That’s encouraging. But many of them are still using outdated technologies.
PharmTech: How would you describe the Process Data System? How does it work?
Gardner: It’s really a digitized Process Lifecycle Management environment, built around capturing production and analytical process designs and specifications and then accelerating improvement cycles for those processes until they are ready to transfer to manufacturing. The principles of DMAIC and QbD are embedded into its world view.
We had used these same principles at Amyris, to double the rate of catalyst and strain development without increasing headcount.
Measurement system analysis is another important principle we used. All these principles aim to define R&D processes explicitly, instead of leaving definition as an afterthought. That definition begins with the process that generates the data; the next steps are measuring performance, analyzing it, and improving the process to reduce variability.
Reducing variability is so important. It’s an almost unspoken but understood objective in manufacturing, but not in the research and development space.
That is why there is a lot of churn in pharma R&D. As a result, people can end up making bad decisions, chasing phantoms, and going down the wrong road because they respond to noise in the system rather than to significant developments.
Once we took QbD into R&D, we achieved such fidelity in experimental answers that predictability, speed, and efficiency all improved. As a result, the number of experiments could be reduced, along with the level of overall churn, because we could understand what the critical process parameters were and take them right to the manufacturing floor.
At Novozymes, one of our first customers, we saw dramatic gains in R&D and tech transfer efficiencies. The company has been using Riffyn Nexus for two years, during which it has increased capacity by an order of magnitude and cut time to market by a factor of two, all with half the original number of staff members. Riffyn is currently working with a number of pharmaceutical and biopharmaceutical companies on pre-clinical and other development programs, including bioassays as well as formulation and bioprocess development.
We don’t support manufacturing like a manufacturing execution system (MES) or batch-control software system. Instead, we support the development of processes and their transfer to manufacturing (e.g., catalyst and enzyme development, formulation development, analytical methods and assays used to collect data). We can transfer those designs into a manufacturing context. Right now, this is accomplished mainly by sharing documentation. It’s like transferring blueprints.
In the future, though, we hope to incorporate digital transfers so that users can transfer a design file directly from process development to manufacturing, sending the information right to the process control system (much like a CAD/CAM [computer-aided design/manufacturing] system would do in discrete manufacturing).
PharmTech: When will this capability be introduced?
Gardner: We’re prototyping and developing with pharma partners and biopharma process equipment companies. It’s probably one to two years away.
PharmTech: What results are you seeing among customers in the biopharma space?
Gardner: We helped one pharma company whose team of four was doing bioassay development to cut 2,400 hours of labor each year from their processes using the improved data system, a 50% reduction in effort while achieving better outcomes from their bioassays.
PharmTech: How has the technology helped reduce false leads?
Gardner: There are a few examples that come to mind, although we cannot give client names. In one case, a group was struggling with cell-line development. They would pick what seemed to be the best cell lines, but these would continue to fail at the next developmental stage.
We collected data from 14 of their past experiments, assessed the resulting data in aggregate, and saw that seven of the 14 experiments they had performed had used the same cell lines, so these seven points became internal controls for their assay system’s performance.
Normally you just look at the controls week by week, but we also tested and followed a number of other variables in the data set (e.g., feed lot of culture media and temperature) from week to week. We found variations in these values across the experiments. As it turned out, this variation was driving aggregate decreases in cell line performance, and not just in the cell lines that the teams had selected as the best options.
The team was chasing artifacts rather than real factors: they were reacting to higher temperatures or better media rather than to genuinely better cell lines, which is why some cell lines were failing downstream.
When we statistically corrected the data to remove all the temperature and lot effects and then renormalized the data across the experiments, the team could see that not one single cell line had been improved. If they had performed this analysis first, they could have prevented significant waste and loss. However, when they took the lessons learned from this work, they were able to improve processes and got real hits almost immediately.
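The kind of statistical correction described here can be sketched in a few lines. The following is a minimal, hypothetical Python illustration (NumPy assumed; all variable names and numbers are invented for the sketch, not taken from the client data): fit a linear model of the measured response against the nuisance factors, temperature and media lot, then subtract their fitted contributions before comparing cell lines.

```python
import numpy as np

# Hypothetical data: titers measured across experiments vary with
# incubation temperature and media feed lot, confounding apparent
# cell-line performance. Here we assume no true cell-line differences.
rng = np.random.default_rng(0)
n = 140
temp = rng.normal(31.0, 0.8, n)              # incubation temperature (deg C)
lot = rng.integers(0, 3, n)                  # media feed lot (3 lots)
lot_effect = np.array([0.0, 0.4, -0.3])[lot]
titer = 5.0 + 0.5 * (temp - 31.0) + lot_effect + rng.normal(0, 0.1, n)

# Fit a linear model: titer ~ temperature + lot (one-hot), then keep
# residuals plus the baseline, removing the nuisance contributions.
X = np.column_stack([np.ones(n), temp - 31.0, lot == 1, lot == 2]).astype(float)
beta, *_ = np.linalg.lstsq(X, titer, rcond=None)
corrected = titer - X @ beta + beta[0]

# The spread attributable to temperature and lot collapses after correction,
# so apparent "improvements" driven by run conditions disappear.
print(round(titer.std(), 3), round(corrected.std(), 3))
```

After the temperature and lot effects are removed, the residual spread reflects measurement noise rather than run-to-run conditions, which is why renormalizing across experiments can reveal that no cell line actually improved.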
PharmTech: Why is there such a problem with reproducibility in pharma R&D, especially at the pre-clinical stage?
Gardner: One problem is that people are not transparently articulating what they’re doing. The first step in establishing this transparency is defining what you did, unambiguously.
Riffyn’s platform is designed to help with clear process definition, to get organizations to articulate their processes clearly, rather than fragments of the process, the bits and pieces that they may see on data boards. This can lead to huge boosts in reproducibility.
In addition, reproducibility requires a clear understanding of critical process parameters (CPPs). In fact, in many ways, reproducing a process in the lab is exactly like scaling up a process. You need to understand the CPPs, and, if you control them, the entire process can be much more efficient.
PharmTech: Today, a growing number of biopharma companies are doing their own work in machine learning and artificial intelligence, using open-source code and developing in-house processes. Amgen is one example. Will the market be more difficult to penetrate if everyone is doing this type of work independently? Will this make the market more competitive in the future?
Gardner: It’s actually making it easier. We take an open, vendor-neutral ecosystem approach in terms of access to data and process design. We adhere to, or provide interoperability with, the S88 and S95 standards, with the goal of Riffyn becoming part of a modular ecosystem that can be plugged in or unplugged. In general, the more open, the better.
At this point, most of our work in tech transfer is partnered.
PharmTech: How does the software work, and what does it connect into at a customer facility?
Gardner: It’s cloud-based and accessed via a web browser, though we can also integrate it within customer networks so that it is embedded; all that is required is a web browser. Much as you can build flowcharts in Visio, Riffyn starts by creating a flow diagram that represents the assay, manufacturing, or formulation process.

This diagram can be replicated in the lab so that users can collect all data, whether from instruments or from manual entry. What might have taken 40 hours of data integration in the past can be done in one to two hours.
PharmTech: Does this data go into a Laboratory Information Management System (LIMS)?
Gardner: In fact, many customers are replacing LIMS with our system, because it can integrate LIMS-type data in the context of data from many other sources.
PharmTech: Are contract research and development companies a part of your client base now?
Gardner: We expect to see that part of the business grow. Because our system is cloud-based, it can provide a means for contract development and manufacturing organizations to gain real-time access to unfolding experiments.