Sarah Williams (chair), Kimberley Haquoil, Zhizheng Wang, Johannes Cepicka, Stella Jinran Zhan
In this conference session, the speakers discuss various aspects of decision-making in confirmatory trials, including the choice of study population, the use of surrogate endpoints, an R Shiny application to support go/no-go decisions, and the two-trial paradigm.
Practical experience designing an adaptive enrichment study - Lucy Keeling, Sam Miller
"Scientific advances over recent years have enabled the discovery of targeted therapies, i.e. therapies with the potential for increased efficacy in a defined subgroup of patients. These advances bring challenges for drug developers and regulators. Development of these compounds comes with additional costs and complexity, such as the development of a reliable diagnostic. Pharmaceutical companies therefore wish to identify the best population to treat as early as possible, perhaps using a relatively short-term outcome, while regulators require efficacy to be confirmed in the population to be treated using established, possibly long-term, endpoints. The common goal is to deliver these medicines to patients as rapidly as possible, so there is a need for trial designs that enable efficient decision-making and demonstration of efficacy. Fortunately, statisticians have developed robust statistical theory that allows the flexibility to use surrogate endpoints for decision-making about the population while still ensuring rigorous type I error control for efficacy. However, before a company can apply these methods in practice, it must understand the risks and benefits of the decision rules and analysis method in the specific circumstances of its trial. We will report the results of a simulation study in which we evaluated the operating characteristics of several potential adaptive trial designs. KerusCloud was used to simulate patient-level data with multiple correlated outcomes and subpopulations and to apply interim decision rules to seamlessly select the final analysis population. As a result, our collaborators could choose the trial design in full knowledge of the practical implications. "
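KerusCloud is a commercial simulation platform, so the following is only an illustrative sketch of the kind of interim decision rule the abstract describes. All parameters (subgroup prevalence, effect sizes, outcome correlation, decision threshold) are hypothetical, not those of the actual study. The snippet simulates patient-level data with a biomarker subgroup and two correlated outcomes, applies a short-term-outcome decision rule, and estimates by Monte Carlo the probability that the rule enriches to the subgroup:

```python
import numpy as np

rng = np.random.default_rng(7)

def interim_decision(n, prev, delta_sub, delta_comp, rho, threshold=0.2):
    """One simulated interim look: returns True when the rule restricts
    (enriches) the trial to the biomarker-positive subgroup."""
    def arm(d_sub, d_comp):
        sub = rng.random(n) < prev                   # subgroup membership
        shift = np.where(sub, d_sub, d_comp)
        # correlated (short-term, long-term) outcomes per patient;
        # y[:, 1] would feed the final analysis, the interim uses y[:, 0]
        y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
        return sub, y[:, 0] + shift
    sub_t, short_t = arm(delta_sub, delta_comp)      # treatment arm
    sub_c, short_c = arm(0.0, 0.0)                   # control arm
    # decision rule: enrich when the short-term effect estimate in the
    # biomarker-negative complement falls below the threshold
    eff_comp = short_t[~sub_t].mean() - short_c[~sub_c].mean()
    return eff_comp < threshold

def prob_enrich(reps=2000, **kw):
    """Monte Carlo operating characteristic of the interim rule."""
    return float(np.mean([interim_decision(**kw) for _ in range(reps)]))

p_target = prob_enrich(n=150, prev=0.4, delta_sub=0.5, delta_comp=0.0, rho=0.6)
p_broad  = prob_enrich(n=150, prev=0.4, delta_sub=0.5, delta_comp=0.5, rho=0.6)
print(f"P(enrich | effect only in subgroup) = {p_target:.2f}")
print(f"P(enrich | effect in everyone)      = {p_broad:.2f}")
```

Running such a sketch under a grid of assumed effect scenarios is how the operating characteristics mentioned above would be tabulated before committing to a design.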
Relationship between Change in UACR and eGFR Slope, a Meta-analysis - Zhizheng Wang, Christian Källgren, Magnus Andersson
"Urine albumin-to-creatinine ratio (UACR) and estimated glomerular filtration rate (eGFR) slope are two key biomarkers for chronic kidney disease (CKD). Because CKD progresses slowly, suitable surrogate endpoints are desirable for more efficient drug development programs. Two recent separate meta-analyses (Heerspink et al. 2019; Inker et al. 2019) studied the respective biomarkers of treatment effect, but the link between the two is still missing. In this study, we sought a connection between them. We used a meta-regression approach to combine the studies included in these two meta-analyses and explored the relationship between UACR and eGFR slope, with chronic and total slope at several different timepoints. The primary interest is the relationship between reduction in UACR at six months and effect on total eGFR slope at two years, which we used to plan our Phase 3 programs and to calculate probability of technical success (PTS) based on early program results. For the primary objective we estimated an R2 of 14.74% with p-value <0.01 for UACR. For a subset of studies with patients having CKD at baseline, the model reached a peak R2 of 43.71%. The eGFR slope can be predicted from the Phase 2 UACR result, and PTS can be calculated as a conditional probability, assuming a linear relationship between eGFR slope and log mean UACR. This result guided our evaluation of different Phase 3 program designs with eGFR slope as the primary endpoint. References: Heerspink, H. J., Greene, T., Tighiouart, H., Gansevoort, R. T., Coresh, J., Simon, A. L., ... & Keane, W. (2019). Change in albuminuria as a surrogate endpoint for progression of kidney disease: a meta-analysis of treatment effects in randomised clinical trials. The Lancet Diabetes & Endocrinology, 7(2), 128-139. Inker, L. A., Heerspink, H. J., Tighiouart, H., Levey, A. S., Coresh, J., Gansevoort, R. T., ... & Greene, T. (2019). GFR slope as a surrogate end point for kidney disease progression in clinical trials: a meta-analysis of treatment effects of randomized controlled trials. "
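As a rough illustration of the meta-regression idea described above (not the authors' actual model, data, or results), the following sketch fits an inverse-variance weighted regression of synthetic trial-level eGFR-slope effects on log-UACR reductions and reports a weighted R2:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical trial-level data: x = treatment effect on log UACR at
# 6 months, y = effect on total eGFR slope at 2 years, se2 = known
# within-trial sampling variances of y (all values are synthetic)
n_trials = 20
x = rng.normal(-0.25, 0.15, n_trials)
se2 = rng.uniform(0.01, 0.05, n_trials)
y = 0.8 * x + rng.normal(0, 0.12, n_trials) + rng.normal(0, np.sqrt(se2))

# weighted least squares meta-regression: weight trials by inverse variance
w = 1.0 / se2
X = np.column_stack([np.ones(n_trials), x])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
resid = y - X @ beta

# weighted R^2: proportion of weighted between-trial variation explained
ybar = np.sum(w * y) / np.sum(w)
r2 = 1.0 - np.sum(w * resid**2) / np.sum(w * (y - ybar)**2)
print(f"slope = {beta[1]:.2f}, weighted R^2 = {r2:.2%}")
```

With a fitted line of this kind, a Phase 2 UACR result can be mapped to a predicted eGFR slope, which is the conditional-probability step the abstract uses for PTS.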
Optimal designs for phase II/III drug development programs facilitated by R Shiny applications - Johannes Cepicka, Marietta Kirchner, Heiko Götte, Meinhard Kieser, Lukas Sauer, Stella Erdmann
"In the planning of phase II/III drug development programs, sample size determination is essential, as it has a significant impact on the chances of achieving the program objective. Within a utility-based, Bayesian-frequentist framework, methods for optimal designs, i.e. optimal go/no-go decision rules (whether to stop or to proceed to phase III) and optimal sample sizes that minimize costs while maximizing the chances of achieving the program objective, were developed recently [1-4]. By expanding the framework to include further aspects of practical relevance, a wide range of scenarios can now be addressed (e.g. planning optimal phase II/III programs with several phase III trials, multiple arms or endpoints; for time-to-event, normally distributed and binary endpoints; with discounting of overoptimistic phase II results; with limited budget and/or sample sizes, etc.). By merging all aspects into a single modular framework and developing user-friendly R Shiny applications and an R package (https://web.imbi.uni-heidelberg.de/drugdevelopR/), utilization is facilitated and the methods can be adapted further for specific scenarios. As regulatory authorities require the validation of any software used to produce records within clinical research [5], a sophisticated quality assurance concept consisting of measures for archiving, versioning, bug reporting, and code documentation was followed, guaranteeing reliable results. [1] Kirchner M et al. 2016, Statistics in Medicine, 35(2), 305-316. [2] Preussler S et al. 2019, Biometrical Journal, 61(2), 357-378. [3] Preussler S et al. 2021, Statistics in Biopharmaceutical Research, 13(1), 71-81. [4] Kieser M et al. 2018, Pharmaceutical Statistics, 17(5), 437-457. [5] US Food and Drug Administration 2003, https://www.fda.gov/media/75414/download"
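The drugdevelopR package implements these methods in R; purely as a language-agnostic illustration of the underlying utility-based idea (with a hypothetical prior, cost, and reward, not the package's actual interface or models), the following sketch searches a small grid for the go threshold and phase III sample size that maximize expected utility:

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def phi(z):
    """Standard normal CDF via the stdlib erf (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# hypothetical planning inputs: a normal prior on the true effect theta,
# a phase II estimate with variance 2/n2, a per-patient phase III cost,
# and a reward on phase III success (all values are illustrative)
theta = rng.normal(0.25, 0.1, 20_000)                # prior draws
n2 = 100
theta_hat2 = rng.normal(theta, math.sqrt(2 / n2))    # phase II estimates

def expected_utility(kappa, n3, cost=0.001, reward=1.0):
    """Go to phase III when the phase II estimate exceeds kappa; average
    reward * P(phase III significant) minus phase III cost over the prior."""
    go = theta_hat2 > kappa
    z_shift = theta[go] / math.sqrt(2 / n3) - 1.96   # one-sided alpha = 0.025
    power = np.array([phi(z) for z in z_shift])
    return (reward * power.sum() - cost * n3 * go.sum()) / theta.size

# grid search for the optimal go/no-go threshold and phase III sample size
grid = [(k, n) for k in (0.0, 0.1, 0.2) for n in (100, 200, 400)]
kappa_opt, n3_opt = max(grid, key=lambda g: expected_utility(*g))
print(f"optimal go threshold = {kappa_opt}, optimal phase III n = {n3_opt}")
```

The published framework optimizes over continuous design parameters with richer cost and prior structures; the sketch only shows how a decision rule and a sample size trade off inside one expected-utility objective.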
Should the two-trial paradigm still be the gold standard in drug assessment? - Stella Jinran Zhan, Cornelia Ursula Kunz, Nigel Stallard
"Two significant pivotal trials are usually required for new drug approval by a regulatory agency; this standard requirement is known as the two-trial paradigm. However, several authors have questioned why exactly two pivotal trials are needed, what statistical error the regulators are trying to protect against, and what the alternatives might be. It is therefore important to investigate these questions to better understand regulatory decision-making in the assessment of drug effectiveness. It is common for two identically designed trials to be run solely to adhere to the two-trial rule. Previous work showed that combining the data from the two trials into a single trial (the one-trial paradigm) would increase power while ensuring the same level of type I error protection as the two-trial paradigm. However, this holds only under a specific scenario, and there has been little investigation of type I error protection across the full parameter space. In this work, we compare the two paradigms in scenarios where the two trials are conducted in identical or different populations and with equal or unequal sizes. With identical populations, the results show that a single trial provides better type I error protection and higher power. Conversely, with different populations, although the one-trial rule is more powerful in some cases, it does not always protect the type I error. Hence, there is a need for appropriate flexibility around the two-trial paradigm, and the approach should be chosen based on the questions of interest."
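The identical-populations comparison can be illustrated with a simple power calculation (a sketch in which the standardized effect mu is a hypothetical value): requiring two trials each significant at one-sided alpha = 0.025 gives an overall type I error of alpha squared, and a single pooled trial of 2n patients calibrated to that same overall level achieves higher power:

```python
from statistics import NormalDist

N = NormalDist()
alpha = 0.025                        # one-sided level of each pivotal trial
z_a = N.inv_cdf(1 - alpha)           # critical value, about 1.96

# two-trial rule: both trials significant => overall type I error alpha^2
alpha_two = alpha ** 2               # 0.000625

# one-trial rule: pool the 2n patients and test at the same overall level
z_pool = N.inv_cdf(1 - alpha_two)    # critical value, about 3.23

def power_two(mu):
    """mu = true effect / SE of a single trial's estimate."""
    return N.cdf(mu - z_a) ** 2

def power_one(mu):
    """Pooling doubles the information, so the SE shrinks by sqrt(2)."""
    return N.cdf(mu * 2 ** 0.5 - z_pool)

mu = 2.8                             # hypothetical effect (~80% power per trial)
print(f"two-trial power: {power_two(mu):.3f}")
print(f"one-trial power: {power_one(mu):.3f}")
```

This calculation reproduces only the identical-populations case; the abstract's point is that with different populations the pooled test can exceed the nominal type I error, which a single calibrated critical value does not capture.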