
Pfizer has created a clinical trial modeling tool to mitigate study risk during the protocol design and study execution phases. Jonathan Rowe talks to Moe Alsumidaie about the purpose of these predictive models.

Pfizer has developed a predictive clinical trial quality risk modeling tool to mitigate study risks during the protocol design and study execution phases. We had the opportunity to interview Jonathan Rowe, Executive Director, Head of Clinical Development Quality Performance and Risk Management at Pfizer, to elaborate on these predictive models.

Moe Alsumidaie: Can you describe the purpose of the predictive models developed by Pfizer? What are they supposed to accomplish?

Jonathan Rowe: There are many models in the GCP quality performance space that we have developed and continue to refine. A relatively simple one is a correlation model, in which we correlated the performance of our clinical trial processes with select GCP outcomes as defined in ICH E6. Examples of GCP outcomes include whether patients consented properly and whether the rights, safety, and well-being of subjects were protected. We take these GCP outcomes and, in conjunction with the clinical trial quality metrics we collect, build correlation models to see whether any of these metrics can predict whether we are going to have problems achieving GCP outcomes. Additionally, we build a time scale into the models, so a study team might be able to predict the likelihood of a GCP issue months in advance. This is an early warning system that helps us make sure our GCP outcomes are good. The model was built by taking several dozen existing clinical trial process quality measures from the clinical development space and statistically assessing their relationships within and across a large cohort of studies.
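The correlation model Rowe describes can be sketched roughly as follows. This is a minimal illustration with synthetic data: the metric name, the effect sizes, and the use of a point-biserial correlation are assumptions for demonstration, not Pfizer's actual measures or method. The idea shown is simply that a process metric observed early in a study can be tested for association with a binary GCP outcome observed later.

```python
# Illustrative sketch only: the metric, data, and numbers here are
# invented, not Pfizer's actual measures. It shows the general idea of
# correlating an earlier-in-time process metric with a later binary
# GCP outcome to build an "early warning" signal.
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(0)
n_studies = 200

# Hypothetical process metric observed early in each study
# (e.g. a rate of late data entry), uniform on [0, 1].
late_entry_rate = rng.uniform(0, 1, n_studies)

# Hypothetical binary GCP outcome observed months later: 1 = issue
# occurred. The issue probability rises with the early metric, so the
# early-warning association is detectable in this synthetic data.
gcp_issue = (rng.uniform(0, 1, n_studies)
             < 0.2 + 0.5 * late_entry_rate).astype(int)

# Point-biserial correlation between the binary outcome and the metric.
r, p = pointbiserialr(gcp_issue, late_entry_rate)
print(f"point-biserial r = {r:.2f}, p = {p:.3g}")
```

In practice one would screen many candidate metrics this way, at several lead times, and keep those that show a stable, significant relationship across a large cohort of studies.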

Another, more complex model was built to help Pfizer predict the risk of certain quality events in a clinical trial. We recently began running this model on our new protocols to strengthen our understanding of quality risk areas and proactively mitigate quality risk. We call the model, appropriately, the study risk prediction model. For this model, we started by analyzing quality event data, such as standardized protocol deviations, significant quality events, and protocol changes, from 406 studies, and looked for relationships with the information we collect early in the design of a protocol. Pfizer clinical trial teams are required to perform a study quality risk review by, among other things, carefully reviewing a database of possible study risks, called a question bank, and then establishing measures to mitigate those risks. The question bank is part of our Integrated Quality Risk Management Planning (IQRAMP) for protocols. The questions range from straightforward ones about study risks, such as "Is the mechanism of action new?", "Will the study be multinational?", and "How many exclusion criteria are there?", to questions about more complex risks. The study risk prediction model statistically correlated quality events with how the question bank questions were answered, in conjunction with study attributes. What we are trying to accomplish is better quality risk planning; obviously, the more you know about your risks, the better you can mitigate them from the start. This type of analysis is perhaps the first in the industry, and we are currently working to integrate it into our development process.

MA: What is the impact of these predictive models on the study team's decisions during protocol design?
JR: Since we have the bank of questions and can correlate how those questions are answered with quality outcomes, a study team can review its protocol and take steps to reduce the risk of quality events, either by modifying a component of the protocol or by planning mitigation measures. For example, some protocols may inherently pose a quality risk due to complex dosing, and that dosing regimen may have to remain. In that case, the study risk prediction tool and question bank enable the team to thoughtfully and proactively mitigate, and to be more vigilant in the high-risk area to reduce errors, deviations, and so on. We hope to continue to pursue our goal of reducing quality events.

A shorter-term use of the study risk prediction model is to support appropriate surveillance. For example, if a study is expected to be high risk, the team may want to exercise extra diligence in monitoring, whereas a low-risk study might warrant more of a risk-based approach. Understanding risks enables better resource planning.

MA: What were the challenges you faced when developing these models?

JR: Some of the modeling challenges arise from the different paradigms or processes that may have been used in previous studies. Something that was a risk years ago may not be one today; conversely, new risks are identified that may not have been in our risk bank in the past. The models should be based on studies that reflect modern trial operations and infrastructure. We continually refresh the model, which requires a lot of thought and validation, but doing these updates ultimately produces a much better model. We have currently achieved 80% accuracy in predicting clinical trial quality issues, and as we continue to add more studies and update the model, we expect accuracy to improve to 85%. We will never achieve 95% to 100%, but if we reach 85% to 90% accuracy, it will greatly improve the predictability of trial quality performance.

MA: Do you expect to see something like this come through a data-sharing initiative (i.e.,
via TransCelerate)?

JR: Sharing the approach is not a problem. The problem is that we don't know whether our results would transfer to another company if that company has different processes for running clinical trials. In our model, we not only use data from the question bank, but we also introduce study attributes, up to 90 different variables, into the model. If our clinical trial processes differ from another company's, or if we collect variables that are unique to us, the model results may not translate; each company is unique.

MA: Pfizer is a large company and has access to many studies. Is this model also appropriate for small pharmaceutical companies that have much less data, say fewer than 10 trials? How can these companies access capabilities similar to those your team has developed?

JR: Having more data gives you more statistical power, but even if you have completed only 10 studies, the approach can work, provided you also have effective clinical trial quality risk management processes in place, because you need to be able to tie quality results to risk planning. This model combines effective planning with the quality results achieved. We used 406 studies across a number of therapeutic areas, and when we optimized our models per therapeutic area, we used maybe a fifth of that data and still had very good power to achieve the kind of predictive ability we were looking for.

In summary, this initiative is not necessarily about optimizing the protocol, although it certainly makes people think. It is really about mitigating GCP quality risk. It allows us to understand which trials are risky, and it allows teams to say, "I know what my risks are, and these are the mitigating measures that I will put in place to ensure the quality of the study."
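The overall approach described in the interview, encoding question-bank answers and study attributes as features, fitting a classifier on quality events from past studies, and checking accuracy on held-out studies, can be sketched as follows. Everything here is an assumption for illustration: the three example features echo questions quoted in the interview, the data is synthetic, and logistic regression stands in for whatever statistical method Pfizer actually uses.

```python
# Illustrative sketch only: the features, synthetic data, and the
# logistic-regression choice are assumptions, not Pfizer's actual model.
# Shape of the approach: encode question-bank answers and study
# attributes, fit on past studies, evaluate on held-out studies.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_studies = 406  # matching the study count mentioned in the interview

# Hypothetical question-bank answers (binary) and a study attribute.
new_mechanism = rng.integers(0, 2, n_studies)          # mechanism of action new?
multinational = rng.integers(0, 2, n_studies)          # multinational study?
n_exclusion_criteria = rng.integers(2, 30, n_studies)  # count attribute

X = np.column_stack([new_mechanism, multinational, n_exclusion_criteria])

# Synthetic target: 1 = a significant quality event occurred. Risk rises
# with each factor, with noise, so the signal is learnable but imperfect.
logit = -3 + 1.2 * new_mechanism + 0.8 * multinational \
        + 0.1 * n_exclusion_criteria
p_event = 1 / (1 + np.exp(-logit))
y = (rng.uniform(size=n_studies) < p_event).astype(int)

# Hold out a quarter of the studies to estimate predictive accuracy,
# analogous to the accuracy figures quoted in the interview.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

A team could then call `model.predict_proba` on the encoded risk review of a new protocol to get an estimated event probability and decide where to plan mitigations or concentrate monitoring. Refreshing the model, as Rowe describes, would amount to periodically refitting on an updated cohort of studies and revalidating the held-out accuracy.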