Past PSI Events

Conferences

PSI One-Day Scientific Meeting, South West: Designing and Analysing Adaptive Trial Design Studies

University of Bath, Bath, BA2 7AY

Date: 24th June 2019

Location: University of Bath. Time: 10:00 am to 4:00 pm (UK time)

Agenda:

Time          Topic
09:30–10:00   Registration
10:00–10:15   Welcome and Introduction to PSI Scientific Committee & South West Events
10:15–12:00   Group sequential and adaptive clinical trial designs (Professor Chris Jennison, University of Bath)
12:00–12:45   Lunch
12:45–13:30   Facilitating Personalised Healthcare With Adaptive Designs (Chris Harbron, Senior Principal Statistical Scientist, Roche)
13:30–14:15   Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design (Munya Dimairo, ACE Steering Committee, University of Sheffield)
14:15–14:30   Break
14:30–15:15   Monitoring Outcomes in Adaptive Designs (Sharon Barton, Associate Director, Oncology & Early Clinical Development, AstraZeneca)
15:15–16:00   A regulatory perspective of adaptive design trials (Beatrice Panico, Senior Medical Assessor, MHRA)
16:00–16:15   Close


Presenters:
Chris Jennison (University of Bath), Munya Dimairo (University of Sheffield), Beatrice Panico (MHRA), Chris Harbron (Roche), Sharon Barton (AstraZeneca)

Adaptive designs are clinical trial designs that allow prospectively planned modifications to one or more aspects of the trial based on accumulating data from subjects in the trial, and they can provide a number of advantages over non-adaptive designs. During this meeting we will hear about adaptive sample allocation for Phase II/III designs, a new CONSORT extension reporting guideline for adaptive designs, regulatory aspects, and case studies.

Chris Jennison (University of Bath)
Christopher Jennison is Professor of Statistics at the University of Bath, UK. His PhD research at Cornell University concerned the sequential analysis of clinical trials and he has continued to work in this area for the past 35 years. His book with Professor Bruce Turnbull, "Group Sequential Methods with Applications to Clinical Trials", is a standard text on this topic and is widely used by practising statisticians. More recently, he has written with a variety of co-authors on adaptive trial design and over-arching optimisation of the drug development process.

Professor Jennison's research is informed by experience of clinical trial analysis at the Dana-Farber Cancer Institute, Boston, and a broad range of consultancy with medical research institutes and pharmaceutical companies.

Group sequential and adaptive clinical trial designs

We shall describe group sequential methods for monitoring a clinical trial that compares a new treatment against a control. This methodology is applicable across a range of response distributions. When the primary endpoint is a time-to-event outcome, tests constructed using the error spending approach are able to accommodate the unpredictable numbers of events at each analysis. We shall see how group sequential testing can lead to an earlier conclusion of the trial and fewer patients recruited.
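As a concrete illustration of the error spending approach, the sketch below computes one-sided efficacy boundaries for a design with one interim and one final analysis. It is not code from the talk: the spending function (a rho-family rule), the information fractions and the one-sided alpha of 0.025 are all assumed purely for illustration.

```python
# Illustrative sketch only: two-analysis, one-sided group sequential boundaries
# from an error spending function. The spending family f(t) = alpha * t**rho,
# the information fractions and alpha are assumed values, not those from the talk.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

alpha, rho = 0.025, 2.0            # one-sided type I error and spending exponent
t = np.array([0.5, 1.0])           # information fractions at the interim and final analyses
spend = alpha * t**rho             # cumulative type I error spent by each analysis

# Analysis 1: spend f(t1) in the upper tail of the standardised statistic Z1.
c1 = norm.ppf(1 - spend[0])

# Analysis 2: choose c2 so that P(Z1 < c1, Z2 >= c2) equals the incremental
# spend f(t2) - f(t1); (Z1, Z2) are jointly normal with corr = sqrt(t1/t2).
corr = float(np.sqrt(t[0] / t[1]))
joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, corr], [corr, 1.0]])

def incremental_rejection(c2: float) -> float:
    # P(Z1 < c1, Z2 >= c2) = P(Z1 < c1) - P(Z1 < c1, Z2 < c2)
    return norm.cdf(c1) - joint.cdf([c1, c2])

c2 = brentq(lambda c: incremental_rejection(c) - (spend[1] - spend[0]), 0.0, 10.0)
print(f"Efficacy boundaries: c1 = {c1:.3f}, c2 = {c2:.3f}")
```

Because the spending function is indexed by the observed information fraction, the same recursion extends to any number of analyses and accommodates unpredictable numbers of events at each look.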

In some Phase III clinical trials, more than one new treatment is compared to the control. We shall consider an adaptive clinical trial in which two versions of a new treatment are to be compared with a control when the primary endpoint is overall survival. At an interim analysis, one of the two treatments will be selected based on observed progression free survival. Then, in the remainder of the trial new patients will be randomised between the selected treatment and the control.

This three-arm trial requires an adaptive design. A key element of such a design is a closed testing procedure which protects the familywise type I error rate when two different null hypotheses may be tested. Another crucial component of the design is a combination test that can merge data from before and after the interim analysis. We shall discuss closed testing procedures and combination tests in general before applying these methods to our three-arm trial. With this methodology in place, we then assess the potential benefits of treatment selection in this adaptive trial design.
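To make the interplay between the combination test and the closed testing procedure concrete, here is a heavily simplified sketch for the case where treatment 1 is selected at the interim analysis. The inverse-normal weights, the use of a Simes test for the intersection hypothesis, and all p-values are assumptions for illustration, not the design from the talk.

```python
# Simplified illustration (assumed weights, Simes intersection test and made-up
# p-values): closed testing with an inverse-normal combination test after
# selecting treatment 1 at the interim analysis of a three-arm trial.
import numpy as np
from scipy.stats import norm

def inverse_normal(p_stage1: float, p_stage2: float,
                   w1: float = np.sqrt(0.5), w2: float = np.sqrt(0.5)) -> float:
    """Combine stage-wise p-values with pre-specified weights."""
    z = w1 * norm.ppf(1 - p_stage1) + w2 * norm.ppf(1 - p_stage2)
    return float(1 - norm.cdf(z))

def simes(p_values) -> float:
    """Simes p-value for an intersection of elementary hypotheses."""
    p = np.sort(np.asarray(p_values, dtype=float))
    m = len(p)
    return float(np.min(m * p / np.arange(1, m + 1)))

alpha = 0.025
# Stage-1 p-values for H1 (treatment 1 vs control) and H2 (treatment 2 vs control),
# and the stage-2 p-value for the selected treatment 1; purely illustrative numbers.
p_H1_s1, p_H2_s1, p_H1_s2 = 0.04, 0.20, 0.01

# The intersection H12 at stage 2 uses only the selected arm's data, which is a
# valid (conservative) stage-2 p-value for H12 because H12 implies H1.
p_H12 = inverse_normal(simes([p_H1_s1, p_H2_s1]), p_H1_s2)
p_H1 = inverse_normal(p_H1_s1, p_H1_s2)

# Closed test: reject H1 only if both H12 and H1 are rejected at level alpha,
# which controls the familywise error rate in the strong sense.
print(f"p(H12) = {p_H12:.4f}, p(H1) = {p_H1:.4f}, reject H1: {p_H12 <= alpha and p_H1 <= alpha}")
```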

Munya Dimairo (University of Sheffield)
Munya is a Research Fellow in Medical Statistics within the Sheffield Clinical Trials Research Unit at the University of Sheffield. He is involved in the design, conduct, analysis, and reporting of clinical trials. He is the lead Trial Statistician on an ongoing multi-arm multi-stage adaptive trial and the IDMC Statistician on several trials. Munya is interested in the use of innovative trial designs and is collaborating on a number of initiatives to bridge gaps in the practical application of adaptive designs. For example, he is leading the development of the CONSORT Extension for randomised adaptive trials and the creation of an online platform to educate researchers across disciplines on the practical application of adaptive designs in randomised trials.

Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo on behalf of the ACE Steering Committee (m.dimairo@sheffield.ac.uk; mdimairo@gmail.com)

ACE Steering Committee: Munya Dimairo; Philip Pallmann; James Wason; Susan Todd; Thomas Jaki; Steven A. Julious; Adrian P. Mander; Christopher J. Weir; Franz Koenig; Marc K. Walton; Jon P. Nicholl; Elizabeth Coates; Katie Biggs; Toshimitsu Hamasaki; Michael A. Proschan; John A. Scott; Yuki Ando; Daniel Hind; and Douglas G. Altman

The reporting of adaptive designs (ADs) in randomised trials is inconsistent and needs improving [1–4]. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised.

We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting [1]. Delphi survey response rates were 94/143 (66%) in round one, 114/156 (73%) in round two, and 79/143 (55%) across both rounds. Members of the CONSORT Group were involved during the development process.

This talk will summarise the development process and introduce the ACE reporting guideline, focusing on new and modified reporting items. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text.

The intention is to enhance transparency and improve the reporting of AD randomised trials, so that their results are more interpretable and their methods, results, and inferences more reproducible. We also hope, indirectly, to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.

References

1. Dimairo M, Coates E, Pallmann P, et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16(1):210.

2. Stevely A, Dimairo M, Todd S, et al. An Investigation of the Shortcomings of the CONSORT 2010 Statement for the Reporting of Group Sequential Randomised Controlled Trials: A Methodological Systematic Review. PLoS One. 2015;10(11):e0141104.

3. Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150.

4. Yang X, Thompson L, Chu J, et al. Adaptive Design Practice at the Center for Devices and Radiological Health (CDRH), January 2007 to May 2013. Ther Innov Regul Sci. 2016;50(6):710-717.

Beatrice Panico (MHRA)
Maria Beatrice Panico is currently a Senior Medical Assessor in the Clinical Trials Unit at the Medicines and Healthcare products Regulatory Agency (MHRA).

She is a medical doctor, fully qualified in Neurology with a PhD in Neuroscience. She has extensive experience in pharmacovigilance in the pharmaceutical industry.

A regulatory perspective of adaptive design trials

The MHRA supports innovation, and several trials with innovative designs are already ongoing in the UK. Some of these are ‘adaptive design trials’, in which modifying the conduct of the ongoing trial increases the chance of the trial formally being a success (i.e. that the null hypothesis can be rejected). A central tenet of adaptive design protocols is that the adaptations are pre-specified in the protocol and are not made on an ad hoc basis. Trials have to be safe and scientifically sound. It is therefore crucial that sponsors of adaptive design trials provide regulators with a strong scientific rationale for why an innovative design, rather than a more traditional approach, is the best solution to address the trial objectives. The rationale should also discuss how trial integrity will be maintained despite continuous adaptations.

Adaptations that can prove challenging in the current regulatory scenario include the addition of new Investigational Medicinal Products, the addition of new trial populations, and some seamless Phase 2/3 trials.

Such changes can be introduced via substantial amendments.

However, if the proposed changes are so extensive that they change the nature of the initially approved trial (for example, they are not in line with the original research hypothesis, they make the data obtained up to the point of the amendment inadmissible, or they cause the sponsor to lose control of the Type 1 error), then a new clinical trial application would probably be necessary. The decision is always made on a case-by-case basis, both for initial applications and for amendments.

In conclusion, adaptations can be acceptable if they are safe and scientifically justified. Early engagement with regulators is strongly recommended in order to address potential issues of concern.

Chris Harbron (Roche)

Facilitating Personalised Healthcare With Adaptive Designs

Personalised Healthcare (PHC), targeting therapies to those patients most likely to benefit, is becoming an increasingly key strategy within drug development. A challenge during development is that there can be uncertainty about the optimal PHC strategy, both on the need for a selected population and on the exact definition of the subpopulation, for example which assay to use to measure a biomarker and with what cut-off. Adaptive designs provide an efficient way of mitigating this uncertainty whilst still maintaining the overall rigour and operating characteristics of the trial. I will review several published adaptive designs for biomarkers and describe in more detail a method for adapting a biomarker threshold at interim analyses to balance the power of the study and the precision in estimating the threshold.
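As a deliberately simplified caricature of threshold adaptation (not the method to be presented), the sketch below picks a biomarker cut-off at an interim analysis by trading off the estimated subgroup effect against subgroup prevalence through an approximate power calculation; the simulated data, candidate cut-offs and selection rule are all assumptions for illustration.

```python
# Illustrative caricature only: choose a biomarker cut-off at an interim analysis
# by balancing the estimated subgroup effect against subgroup prevalence.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2019)

# Simulated interim data: biomarker value, treatment indicator, continuous outcome.
n_interim = 200
biomarker = rng.uniform(0, 1, n_interim)
treatment = rng.integers(0, 2, n_interim)
# Assumed truth: benefit of 0.5 SD only for patients with biomarker above 0.6.
outcome = rng.normal(0.5 * treatment * (biomarker > 0.6), 1.0)

def approx_power(cutoff: float, n_remaining: int = 400, alpha: float = 0.025) -> float:
    """Approximate power in the biomarker-positive subgroup for the rest of the trial."""
    pos = biomarker >= cutoff
    trt, ctl = outcome[pos & (treatment == 1)], outcome[pos & (treatment == 0)]
    if len(trt) < 10 or len(ctl) < 10:          # too few patients to estimate the effect
        return 0.0
    effect = trt.mean() - ctl.mean()            # interim estimate of the subgroup effect (SD = 1 assumed)
    prevalence = pos.mean()
    n_per_arm = n_remaining * prevalence / 2    # expected future subgroup size per arm
    se = np.sqrt(2.0 / n_per_arm)
    return float(1 - norm.cdf(norm.ppf(1 - alpha) - effect / se))

candidates = [0.0, 0.2, 0.4, 0.6, 0.8]
chosen = max(candidates, key=approx_power)
print({c: round(approx_power(c), 3) for c in candidates}, "-> chosen cut-off:", chosen)
```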
Sharon Barton (AstraZeneca)

Monitoring Outcomes in Adaptive Designs

Within early clinical trials we aim to make robust decisions as early as possible, typically at a planned interim or final analysis using pre-defined decision criteria. An alternative approach is to include continuous monitoring of a key efficacy or safety endpoint in addition to these planned analyses. This approach uses a predictive power calculation to assess the chance of observing a given rate or better. The predictive power can be recalculated after each patient's outcome becomes available, and if it falls below a pre-agreed value the arm or study may be stopped. Simulation methods are used to evaluate the operating characteristics of the design, and a monitoring plan is created detailing the decision rule after each patient. An example will be shared outlining how this approach is planned to be incorporated into a trial using the rate of discontinuation due to adverse events in the first four weeks, although it can equally be applied to efficacy endpoints. Using continuous monitoring within a trial may result in quicker decisions that still have robust statistical characteristics.
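One common way to implement a "chance of observing a given rate or better" calculation is through a Beta-Binomial predictive distribution, sketched below for the discontinuation example; the prior, maximum sample size, success criterion and stopping threshold are illustrative assumptions and may differ from the exact calculation used in the planned trial.

```python
# Illustrative sketch only (assumed prior, sample size, criterion and threshold):
# continuous monitoring of discontinuations due to adverse events in the first
# 4 weeks, using a Bayesian predictive probability recalculated after each patient.
from scipy.stats import betabinom

def predictive_probability(x: int, n: int, n_max: int, max_events: int,
                           prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """P(total events at n_max <= max_events | x events observed in n patients),
    with a Beta(prior_a, prior_b) prior on the event rate and a Beta-Binomial
    predictive distribution for the remaining n_max - n patients."""
    future_allowed = max_events - x
    if future_allowed < 0:
        return 0.0
    return float(betabinom.cdf(future_allowed, n_max - n, prior_a + x, prior_b + n - x))

n_max, max_events, stop_below = 40, 8, 0.10      # illustrative design values

# Monitoring after each patient's 4-week outcome: stop if the predictive probability
# of meeting the discontinuation-rate criterion falls below the pre-agreed value.
x, n = 5, 15                                     # e.g. 5 discontinuations in 15 patients so far
pp = predictive_probability(x, n, n_max, max_events)
print(f"Predictive probability of success: {pp:.3f}",
      "-> stop" if pp < stop_below else "-> continue")
```

In practice such a rule would be pre-specified in a monitoring plan and its operating characteristics confirmed by simulation over the full monitoring path, as the abstract describes.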

Short bio: Sharon Barton is an Associate Director, Statistics Team Leader within Oncology Biometrics at AstraZeneca in Cambridge, UK.  She joined AstraZeneca in 2017 and currently leads a team of statisticians supporting early clinical development.  Prior to joining AstraZeneca, Sharon worked at GlaxoSmithKline for 14 years within both early and late phase clinical development across a broad range of disease areas.  Prior to joining GlaxoSmithKline, Sharon worked as a statistician for the contract research organisation PPD. 


Registration is now closed. 


Scientific Meetings

PSI One day Scientific meeting, South West: Designing and Analysing Adaptive Trial Design Studies

Bath University, Bath, BA2 7AY

Date: 24th June 2019

Location: Bath University, 10-4pm UK time 

Agenda:

 Time     Topic
 09:30 -10:00 Registration
 10:00-10:15 Welcome and Introduction to PSI Scientific Committee & South West Events
 10:15 -12:00 Group sequential and adaptive clinical trial designs

Professor Chris Jennison (University of Bath)
 12:00 -12:45 Lunch
 12:45 -13:30  Facilitating Personalised Healthcare With Adaptive Designs

Chris Harbron, Senior Principal Statistical Scientist, Roche
 13:30 -14:15 Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo (ACE Steering Committee, University of Sheffield)

 14:15- 14:30  Break
 14:30 -15:15 Monitoring Outcomes in Adaptive Designs

Sharon Barton, Associate Director Oncology & Early Clinical Development, AstraZeneca
 15:15 -16:00  A regulatory perspective of adaptive design trials

Beatrice Panico, Senior Medical Assessor, (MHRA)
 16:00 -16:15  Close


Presenters:
Chris Jennison (University of Bath), Munya Dimairo (University of Sheffield), Beatrice Panico (MHRA), Chris Harbron (Roche), Sharon Barton (Astrazeneca) 

Adaptive designs are clinical trials that allow for prospectively planned modifications to one or more aspects of the design based on accumulating data from subjects in the trial and can provide a number of advantages over non-adaptive designs.  During this meeting we will hear about adaptive sample allocation for phase II/III designs, a new CONSORT extension reporting guideline for adaptive designs, regulatory aspects and case studies.
Chris Jennison photo





Chris Jennison (University of Bath),
Christopher Jennison is Professor of Statistics at the University of Bath, UK. His PhD research at Cornell University concerned the sequential analysis of clinical trials and he has continued to work in this area for the past 35 years. His book with Professor Bruce Turnbull, "Group Sequential Methods with Applications to Clinical Trials", is a standard text on this topic and is widely used by practising statisticians. More recently, he has written with a variety of co-authors on adaptive trial design and over-arching optimisation of the drug development process.

Professor Jennison's research is informed by experience of clinical trial analysis at the Dana Farber Cancer Institute, Boston and a broad range of consultancy with Medical Research institutes and Pharmaceutical companies.

Group sequential and adaptive clinical trial designs

We shall describe group sequential methods for monitoring a clinical trial that compares a new treatment against a control. This methodology is applicable across a range of response distributions. When the primary endpoint is a time-to-event outcome, tests constructed using the error spending approach are able to accommodate the unpredictable numbers of events at each analysis. We shall see how group sequential testing can lead to an earlier conclusion of the trial and fewer patients recruited.

In some Phase III clinical trials, more than one new treatment is compared to the control. We shall consider an adaptive clinical trial in which two versions of a new treatment are to be compared with a control when the primary endpoint is overall survival. At an interim analysis, one of the two treatments will be selected based on observed progression free survival. Then, in the remainder of the trial new patients will be randomised between the selected treatment and the control.

This three arm trial requires an adaptive design. A key element of such a design is a closed testing procedure which protects the familywise type I error rate when two different null hypotheses may be tested. Another crucial component of the design is a combination test that can merge data from before and after the interim analysis. We shall discuss closed testing procedures and combination tests in general before applying these methods to our three arm trial. With this methodology in place, we then assess the potential benefits of treatment selection in this adaptive trial design.

2T5A9074
Munya Dimairo (University of Sheffield)
Munya is a Research Fellow in Medical Statistics within the Sheffield Clinical Trials Research Unit at the University of Sheffield. He is involved in the design, conduct, analysis, and reporting of clinical trials. He is the lead Trial Statistician of an ongoing adaptive multi-arm multi-stage adaptive trial and IDMC Statistician on several trials. Munya is interested in the use of innovative trial designs and is collaborating on a number of initiatives to bridge gaps in the practical application of adaptive designs. For example, he is leading the development of the CONSORT Extension for randomised adaptive trials and the creation of an online platform to educate researchers across disciplines on the practical application of adaptive designs in randomised trials.

Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo on behalf of the ACE Steering Committee (m.dimairo@sheffield.ac.uk; mdimairo@gmail.com)

ACE Steering Committee: Munya Dimairo; Philip Pallmann; James Wason; Susan Todd; Thomas Jaki; Steven A. Julious; Adrian P. Mander; Christopher J. Weir; Franz Koenig; Marc K. Walton; Jon P. Nicholl; Elizabeth Coates; Katie Biggs; Toshimitsu Hamasaki; Michael A. Proschan; John A. Scott; Yuki Ando; Daniel Hind; and Douglas G. Altman

The reporting of adaptive designs (ADs) in randomised trials is inconsistent and needs improving 1–4. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised.

We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting 1. Delphi survey response rates were 94/143 (66%), 114/156 (73%), and 79/143 (55%) in round one, two and across both rounds, respectively. Members of the CONSORT Group were involved during the development process.

This talk will summarise the development process and introduce the ACE reporting guideline focusing on new and modified reporting items. The ACE checklist is comprised of seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text.

The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.

References

1.        Dimairo M, Coates E, Pallmann P, et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16(1):210.

2.        Stevely A, Dimairo M, Todd S, et al. An Investigation of the Shortcomings of the CONSORT 2010 Statement for the Reporting of Group Sequential Randomised Controlled Trials: A Methodological Systematic Review. PLoS One. 2015;10(11):e0141104.

3.        Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150.

4.        Yang X, Thompson L, Chu J, et al. Adaptive Design Practice at the Center for Devices and Radiological Health (CDRH), January 2007 to May 2013. Ther Innov Regul Sci. 2016;50(6):710-717.

MBeatrice Panico

Beatrice Panico (MHRA)
Maria Beatrice Panico is currently a Senior Medical Assessor in the Clinical Trials Unit at the Medicines and Healthcare products Regulatory Agency (MHRA).

She is a medical doctor, fully qualified in Neurology with a PhD in Neuroscience. She has extensive experience in pharmacovigilance in the pharmaceutical industry.

MHRA Presentation abstract

The MHRA supports innovation and several trials with innovative designs are already ongoing in the UK. Some innovative trials are ‘adaptive design trials’: modifying the conduct of ongoing trials increases the chance of the trial formally being a success (i.e. that the null hypothesis can be rejected). A central tenet of adaptive design protocols is that the adaptations are pre-specified in the protocol and are not made on an ad-hoc basis. Trials have to be safe and scientifically sound. It is therefore crucial that Sponsors of adaptive design trials provide the regulators with a strong scientific rationale why an innovative design is the best solution to address the trial objectives rather than a more traditional approach. The rationale should also discuss how the trial integrity will be maintained despite continuous adaptations.

Adaptations that can prove challenging in the current regulatory scenarios are addition of new Investigational Medicinal Products, new trial populations and some seamless Phase 2-3 trials.

Such changes can be introduced via substantial amendments.

However, if the proposed changes are so extensive that they change the nature of the initially approved trial (for example, they are not in line with the original research hypothesis, they make the data obtained up to the point of the amendment inadmissible or make the sponsor lose control of Type 1 error) then a new clinical trial application would probably be necessary. The decision is always on a case by case both for initials and amendments.

In conclusion:  adaptations can be acceptable if safe and scientifically justified. Early engagement with regulators is strongly recommended in order to address potential issues of concerns.

 Chris Harbron (Roche) Personalised HealthCare (PHC), targeting therapies to those patients most likely to benefit is becoming an increasingly key strategy within drug development. However during development a challenge is that there can be uncertainty on the optimal PHC strategy, both on the need for a selected population and the exact definition of a subpopulation, for example which assay to use to measure a biomarker and with what cutoff. Adaptive designs provide an efficient way of mitigating this uncertainty whilst still maintaining the overall rigour and operating characteristics of the trial.  I will review several published adaptive designs for biomarkers and describe in more detail a method for adapting a biomarker threshold at interim analyses to balance the power of the study and the precision in estimating the threshold.
 Sharon Barton (Astrazeneca)  Abstract: Within early clinical trials we aim to make robust decisions as early as possible, typically at a planned interim or final analysis using pre-defined decision criteria.  An alternative approach would be to include continuous monitoring on a key efficacy or safety endpoint in addition to these planned analyses.  This approach uses a predictive power calculation to assess the chance of observing a given rate or better.  The predictive power can be recalculated after each patient’s outcome is available and if the predictive power falls below a pre-agreed value then the arm/study may be stopped.  Simulation methods are used to evaluate the operating characteristics of the design and a monitoring plan is created detailing the decision rule after each patient.  An example will be shared outlining how this is planned to be incorporated into a trial using discontinuation rate due to adverse events in the first 4 weeks, but the approach can equally apply to efficacy endpoints.  Using continuous monitoring within a trial may result in quicker decisions that still have robust statistical characteristics. 

Short bio: Sharon Barton is an Associate Director, Statistics Team Leader within Oncology Biometrics at AstraZeneca in Cambridge, UK.  She joined AstraZeneca in 2017 and currently leads a team of statisticians supporting early clinical development.  Prior to joining AstraZeneca, Sharon worked at GlaxoSmithKline for 14 years within both early and late phase clinical development across a broad range of disease areas.  Prior to joining GlaxoSmithKline, Sharon worked as a statistician for the contract research organisation PPD. 


Registration is now closed. 


Training Courses

PSI One day Scientific meeting, South West: Designing and Analysing Adaptive Trial Design Studies

Bath University, Bath, BA2 7AY

Date: 24th June 2019

Location: Bath University, 10-4pm UK time 

Agenda:

 Time     Topic
 09:30 -10:00 Registration
 10:00-10:15 Welcome and Introduction to PSI Scientific Committee & South West Events
 10:15 -12:00 Group sequential and adaptive clinical trial designs

Professor Chris Jennison (University of Bath)
 12:00 -12:45 Lunch
 12:45 -13:30  Facilitating Personalised Healthcare With Adaptive Designs

Chris Harbron, Senior Principal Statistical Scientist, Roche
 13:30 -14:15 Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo (ACE Steering Committee, University of Sheffield)

 14:15- 14:30  Break
 14:30 -15:15 Monitoring Outcomes in Adaptive Designs

Sharon Barton, Associate Director Oncology & Early Clinical Development, AstraZeneca
 15:15 -16:00  A regulatory perspective of adaptive design trials

Beatrice Panico, Senior Medical Assessor, (MHRA)
 16:00 -16:15  Close


Presenters:
Chris Jennison (University of Bath), Munya Dimairo (University of Sheffield), Beatrice Panico (MHRA), Chris Harbron (Roche), Sharon Barton (Astrazeneca) 

Adaptive designs are clinical trials that allow for prospectively planned modifications to one or more aspects of the design based on accumulating data from subjects in the trial and can provide a number of advantages over non-adaptive designs.  During this meeting we will hear about adaptive sample allocation for phase II/III designs, a new CONSORT extension reporting guideline for adaptive designs, regulatory aspects and case studies.
Chris Jennison photo





Chris Jennison (University of Bath),
Christopher Jennison is Professor of Statistics at the University of Bath, UK. His PhD research at Cornell University concerned the sequential analysis of clinical trials and he has continued to work in this area for the past 35 years. His book with Professor Bruce Turnbull, "Group Sequential Methods with Applications to Clinical Trials", is a standard text on this topic and is widely used by practising statisticians. More recently, he has written with a variety of co-authors on adaptive trial design and over-arching optimisation of the drug development process.

Professor Jennison's research is informed by experience of clinical trial analysis at the Dana Farber Cancer Institute, Boston and a broad range of consultancy with Medical Research institutes and Pharmaceutical companies.

Group sequential and adaptive clinical trial designs

We shall describe group sequential methods for monitoring a clinical trial that compares a new treatment against a control. This methodology is applicable across a range of response distributions. When the primary endpoint is a time-to-event outcome, tests constructed using the error spending approach are able to accommodate the unpredictable numbers of events at each analysis. We shall see how group sequential testing can lead to an earlier conclusion of the trial and fewer patients recruited.

In some Phase III clinical trials, more than one new treatment is compared to the control. We shall consider an adaptive clinical trial in which two versions of a new treatment are to be compared with a control when the primary endpoint is overall survival. At an interim analysis, one of the two treatments will be selected based on observed progression free survival. Then, in the remainder of the trial new patients will be randomised between the selected treatment and the control.

This three arm trial requires an adaptive design. A key element of such a design is a closed testing procedure which protects the familywise type I error rate when two different null hypotheses may be tested. Another crucial component of the design is a combination test that can merge data from before and after the interim analysis. We shall discuss closed testing procedures and combination tests in general before applying these methods to our three arm trial. With this methodology in place, we then assess the potential benefits of treatment selection in this adaptive trial design.

2T5A9074
Munya Dimairo (University of Sheffield)
Munya is a Research Fellow in Medical Statistics within the Sheffield Clinical Trials Research Unit at the University of Sheffield. He is involved in the design, conduct, analysis, and reporting of clinical trials. He is the lead Trial Statistician of an ongoing adaptive multi-arm multi-stage adaptive trial and IDMC Statistician on several trials. Munya is interested in the use of innovative trial designs and is collaborating on a number of initiatives to bridge gaps in the practical application of adaptive designs. For example, he is leading the development of the CONSORT Extension for randomised adaptive trials and the creation of an online platform to educate researchers across disciplines on the practical application of adaptive designs in randomised trials.

Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo on behalf of the ACE Steering Committee (m.dimairo@sheffield.ac.uk; mdimairo@gmail.com)

ACE Steering Committee: Munya Dimairo; Philip Pallmann; James Wason; Susan Todd; Thomas Jaki; Steven A. Julious; Adrian P. Mander; Christopher J. Weir; Franz Koenig; Marc K. Walton; Jon P. Nicholl; Elizabeth Coates; Katie Biggs; Toshimitsu Hamasaki; Michael A. Proschan; John A. Scott; Yuki Ando; Daniel Hind; and Douglas G. Altman

The reporting of adaptive designs (ADs) in randomised trials is inconsistent and needs improving 1–4. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised.

We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting 1. Delphi survey response rates were 94/143 (66%), 114/156 (73%), and 79/143 (55%) in round one, two and across both rounds, respectively. Members of the CONSORT Group were involved during the development process.

This talk will summarise the development process and introduce the ACE reporting guideline focusing on new and modified reporting items. The ACE checklist is comprised of seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text.

The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.

References

1.        Dimairo M, Coates E, Pallmann P, et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16(1):210.

2.        Stevely A, Dimairo M, Todd S, et al. An Investigation of the Shortcomings of the CONSORT 2010 Statement for the Reporting of Group Sequential Randomised Controlled Trials: A Methodological Systematic Review. PLoS One. 2015;10(11):e0141104.

3.        Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150.

4.        Yang X, Thompson L, Chu J, et al. Adaptive Design Practice at the Center for Devices and Radiological Health (CDRH), January 2007 to May 2013. Ther Innov Regul Sci. 2016;50(6):710-717.

MBeatrice Panico

Beatrice Panico (MHRA)
Maria Beatrice Panico is currently a Senior Medical Assessor in the Clinical Trials Unit at the Medicines and Healthcare products Regulatory Agency (MHRA).

She is a medical doctor, fully qualified in Neurology with a PhD in Neuroscience. She has extensive experience in pharmacovigilance in the pharmaceutical industry.

MHRA Presentation abstract

The MHRA supports innovation and several trials with innovative designs are already ongoing in the UK. Some innovative trials are ‘adaptive design trials’: modifying the conduct of ongoing trials increases the chance of the trial formally being a success (i.e. that the null hypothesis can be rejected). A central tenet of adaptive design protocols is that the adaptations are pre-specified in the protocol and are not made on an ad-hoc basis. Trials have to be safe and scientifically sound. It is therefore crucial that Sponsors of adaptive design trials provide the regulators with a strong scientific rationale why an innovative design is the best solution to address the trial objectives rather than a more traditional approach. The rationale should also discuss how the trial integrity will be maintained despite continuous adaptations.

Adaptations that can prove challenging in the current regulatory scenarios are addition of new Investigational Medicinal Products, new trial populations and some seamless Phase 2-3 trials.

Such changes can be introduced via substantial amendments.

However, if the proposed changes are so extensive that they change the nature of the initially approved trial (for example, they are not in line with the original research hypothesis, they make the data obtained up to the point of the amendment inadmissible or make the sponsor lose control of Type 1 error) then a new clinical trial application would probably be necessary. The decision is always on a case by case both for initials and amendments.

In conclusion:  adaptations can be acceptable if safe and scientifically justified. Early engagement with regulators is strongly recommended in order to address potential issues of concerns.

 Chris Harbron (Roche) Personalised HealthCare (PHC), targeting therapies to those patients most likely to benefit is becoming an increasingly key strategy within drug development. However during development a challenge is that there can be uncertainty on the optimal PHC strategy, both on the need for a selected population and the exact definition of a subpopulation, for example which assay to use to measure a biomarker and with what cutoff. Adaptive designs provide an efficient way of mitigating this uncertainty whilst still maintaining the overall rigour and operating characteristics of the trial.  I will review several published adaptive designs for biomarkers and describe in more detail a method for adapting a biomarker threshold at interim analyses to balance the power of the study and the precision in estimating the threshold.
 Sharon Barton (Astrazeneca)  Abstract: Within early clinical trials we aim to make robust decisions as early as possible, typically at a planned interim or final analysis using pre-defined decision criteria.  An alternative approach would be to include continuous monitoring on a key efficacy or safety endpoint in addition to these planned analyses.  This approach uses a predictive power calculation to assess the chance of observing a given rate or better.  The predictive power can be recalculated after each patient’s outcome is available and if the predictive power falls below a pre-agreed value then the arm/study may be stopped.  Simulation methods are used to evaluate the operating characteristics of the design and a monitoring plan is created detailing the decision rule after each patient.  An example will be shared outlining how this is planned to be incorporated into a trial using discontinuation rate due to adverse events in the first 4 weeks, but the approach can equally apply to efficacy endpoints.  Using continuous monitoring within a trial may result in quicker decisions that still have robust statistical characteristics. 

Short bio: Sharon Barton is an Associate Director, Statistics Team Leader within Oncology Biometrics at AstraZeneca in Cambridge, UK.  She joined AstraZeneca in 2017 and currently leads a team of statisticians supporting early clinical development.  Prior to joining AstraZeneca, Sharon worked at GlaxoSmithKline for 14 years within both early and late phase clinical development across a broad range of disease areas.  Prior to joining GlaxoSmithKline, Sharon worked as a statistician for the contract research organisation PPD. 


Registration is now closed. 


Journal Club

PSI One day Scientific meeting, South West: Designing and Analysing Adaptive Trial Design Studies

Bath University, Bath, BA2 7AY

Date: 24th June 2019

Location: Bath University, 10-4pm UK time 

Agenda:

 Time     Topic
 09:30 -10:00 Registration
 10:00-10:15 Welcome and Introduction to PSI Scientific Committee & South West Events
 10:15 -12:00 Group sequential and adaptive clinical trial designs

Professor Chris Jennison (University of Bath)
 12:00 -12:45 Lunch
 12:45 -13:30  Facilitating Personalised Healthcare With Adaptive Designs

Chris Harbron, Senior Principal Statistical Scientist, Roche
 13:30 -14:15 Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo (ACE Steering Committee, University of Sheffield)

 14:15- 14:30  Break
 14:30 -15:15 Monitoring Outcomes in Adaptive Designs

Sharon Barton, Associate Director Oncology & Early Clinical Development, AstraZeneca
 15:15 -16:00  A regulatory perspective of adaptive design trials

Beatrice Panico, Senior Medical Assessor, (MHRA)
 16:00 -16:15  Close


Presenters:
Chris Jennison (University of Bath), Munya Dimairo (University of Sheffield), Beatrice Panico (MHRA), Chris Harbron (Roche), Sharon Barton (Astrazeneca) 

Adaptive designs are clinical trials that allow for prospectively planned modifications to one or more aspects of the design based on accumulating data from subjects in the trial and can provide a number of advantages over non-adaptive designs.  During this meeting we will hear about adaptive sample allocation for phase II/III designs, a new CONSORT extension reporting guideline for adaptive designs, regulatory aspects and case studies.
Chris Jennison photo





Chris Jennison (University of Bath),
Christopher Jennison is Professor of Statistics at the University of Bath, UK. His PhD research at Cornell University concerned the sequential analysis of clinical trials and he has continued to work in this area for the past 35 years. His book with Professor Bruce Turnbull, "Group Sequential Methods with Applications to Clinical Trials", is a standard text on this topic and is widely used by practising statisticians. More recently, he has written with a variety of co-authors on adaptive trial design and over-arching optimisation of the drug development process.

Professor Jennison's research is informed by experience of clinical trial analysis at the Dana Farber Cancer Institute, Boston and a broad range of consultancy with Medical Research institutes and Pharmaceutical companies.

Group sequential and adaptive clinical trial designs

We shall describe group sequential methods for monitoring a clinical trial that compares a new treatment against a control. This methodology is applicable across a range of response distributions. When the primary endpoint is a time-to-event outcome, tests constructed using the error spending approach are able to accommodate the unpredictable numbers of events at each analysis. We shall see how group sequential testing can lead to an earlier conclusion of the trial and fewer patients recruited.

In some Phase III clinical trials, more than one new treatment is compared to the control. We shall consider an adaptive clinical trial in which two versions of a new treatment are to be compared with a control when the primary endpoint is overall survival. At an interim analysis, one of the two treatments will be selected based on observed progression free survival. Then, in the remainder of the trial new patients will be randomised between the selected treatment and the control.

This three arm trial requires an adaptive design. A key element of such a design is a closed testing procedure which protects the familywise type I error rate when two different null hypotheses may be tested. Another crucial component of the design is a combination test that can merge data from before and after the interim analysis. We shall discuss closed testing procedures and combination tests in general before applying these methods to our three arm trial. With this methodology in place, we then assess the potential benefits of treatment selection in this adaptive trial design.

2T5A9074
Munya Dimairo (University of Sheffield)
Munya is a Research Fellow in Medical Statistics within the Sheffield Clinical Trials Research Unit at the University of Sheffield. He is involved in the design, conduct, analysis, and reporting of clinical trials. He is the lead Trial Statistician of an ongoing adaptive multi-arm multi-stage adaptive trial and IDMC Statistician on several trials. Munya is interested in the use of innovative trial designs and is collaborating on a number of initiatives to bridge gaps in the practical application of adaptive designs. For example, he is leading the development of the CONSORT Extension for randomised adaptive trials and the creation of an online platform to educate researchers across disciplines on the practical application of adaptive designs in randomised trials.

Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo on behalf of the ACE Steering Committee (m.dimairo@sheffield.ac.uk; mdimairo@gmail.com)

ACE Steering Committee: Munya Dimairo; Philip Pallmann; James Wason; Susan Todd; Thomas Jaki; Steven A. Julious; Adrian P. Mander; Christopher J. Weir; Franz Koenig; Marc K. Walton; Jon P. Nicholl; Elizabeth Coates; Katie Biggs; Toshimitsu Hamasaki; Michael A. Proschan; John A. Scott; Yuki Ando; Daniel Hind; and Douglas G. Altman

The reporting of adaptive designs (ADs) in randomised trials is inconsistent and needs improving 1–4. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised.

We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting 1. Delphi survey response rates were 94/143 (66%), 114/156 (73%), and 79/143 (55%) in round one, two and across both rounds, respectively. Members of the CONSORT Group were involved during the development process.

This talk will summarise the development process and introduce the ACE reporting guideline focusing on new and modified reporting items. The ACE checklist is comprised of seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text.

The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.

References

1.        Dimairo M, Coates E, Pallmann P, et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16(1):210.

2.        Stevely A, Dimairo M, Todd S, et al. An Investigation of the Shortcomings of the CONSORT 2010 Statement for the Reporting of Group Sequential Randomised Controlled Trials: A Methodological Systematic Review. PLoS One. 2015;10(11):e0141104.

3.        Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150.

4.        Yang X, Thompson L, Chu J, et al. Adaptive Design Practice at the Center for Devices and Radiological Health (CDRH), January 2007 to May 2013. Ther Innov Regul Sci. 2016;50(6):710-717.

MBeatrice Panico

Beatrice Panico (MHRA)
Maria Beatrice Panico is currently a Senior Medical Assessor in the Clinical Trials Unit at the Medicines and Healthcare products Regulatory Agency (MHRA).

She is a medical doctor, fully qualified in Neurology with a PhD in Neuroscience. She has extensive experience in pharmacovigilance in the pharmaceutical industry.

MHRA Presentation abstract

The MHRA supports innovation and several trials with innovative designs are already ongoing in the UK. Some innovative trials are ‘adaptive design trials’: modifying the conduct of ongoing trials increases the chance of the trial formally being a success (i.e. that the null hypothesis can be rejected). A central tenet of adaptive design protocols is that the adaptations are pre-specified in the protocol and are not made on an ad-hoc basis. Trials have to be safe and scientifically sound. It is therefore crucial that Sponsors of adaptive design trials provide the regulators with a strong scientific rationale why an innovative design is the best solution to address the trial objectives rather than a more traditional approach. The rationale should also discuss how the trial integrity will be maintained despite continuous adaptations.

Adaptations that can prove challenging in the current regulatory scenarios are addition of new Investigational Medicinal Products, new trial populations and some seamless Phase 2-3 trials.

Such changes can be introduced via substantial amendments.

However, if the proposed changes are so extensive that they change the nature of the initially approved trial (for example, they are not in line with the original research hypothesis, they make the data obtained up to the point of the amendment inadmissible or make the sponsor lose control of Type 1 error) then a new clinical trial application would probably be necessary. The decision is always on a case by case both for initials and amendments.

In conclusion:  adaptations can be acceptable if safe and scientifically justified. Early engagement with regulators is strongly recommended in order to address potential issues of concerns.

 Chris Harbron (Roche) Personalised HealthCare (PHC), targeting therapies to those patients most likely to benefit is becoming an increasingly key strategy within drug development. However during development a challenge is that there can be uncertainty on the optimal PHC strategy, both on the need for a selected population and the exact definition of a subpopulation, for example which assay to use to measure a biomarker and with what cutoff. Adaptive designs provide an efficient way of mitigating this uncertainty whilst still maintaining the overall rigour and operating characteristics of the trial.  I will review several published adaptive designs for biomarkers and describe in more detail a method for adapting a biomarker threshold at interim analyses to balance the power of the study and the precision in estimating the threshold.
 Sharon Barton (Astrazeneca)  Abstract: Within early clinical trials we aim to make robust decisions as early as possible, typically at a planned interim or final analysis using pre-defined decision criteria.  An alternative approach would be to include continuous monitoring on a key efficacy or safety endpoint in addition to these planned analyses.  This approach uses a predictive power calculation to assess the chance of observing a given rate or better.  The predictive power can be recalculated after each patient’s outcome is available and if the predictive power falls below a pre-agreed value then the arm/study may be stopped.  Simulation methods are used to evaluate the operating characteristics of the design and a monitoring plan is created detailing the decision rule after each patient.  An example will be shared outlining how this is planned to be incorporated into a trial using discontinuation rate due to adverse events in the first 4 weeks, but the approach can equally apply to efficacy endpoints.  Using continuous monitoring within a trial may result in quicker decisions that still have robust statistical characteristics. 

Short bio: Sharon Barton is an Associate Director, Statistics Team Leader within Oncology Biometrics at AstraZeneca in Cambridge, UK.  She joined AstraZeneca in 2017 and currently leads a team of statisticians supporting early clinical development.  Prior to joining AstraZeneca, Sharon worked at GlaxoSmithKline for 14 years within both early and late phase clinical development across a broad range of disease areas.  Prior to joining GlaxoSmithKline, Sharon worked as a statistician for the contract research organisation PPD. 


Registration is now closed. 


Webinars

PSI One day Scientific meeting, South West: Designing and Analysing Adaptive Trial Design Studies

Bath University, Bath, BA2 7AY

Date: 24th June 2019

Location: Bath University, 10-4pm UK time 

Agenda:

 Time     Topic
 09:30 -10:00 Registration
 10:00-10:15 Welcome and Introduction to PSI Scientific Committee & South West Events
 10:15 -12:00 Group sequential and adaptive clinical trial designs

Professor Chris Jennison (University of Bath)
 12:00 -12:45 Lunch
 12:45 -13:30  Facilitating Personalised Healthcare With Adaptive Designs

Chris Harbron, Senior Principal Statistical Scientist, Roche
 13:30 -14:15 Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo (ACE Steering Committee, University of Sheffield)

 14:15- 14:30  Break
 14:30 -15:15 Monitoring Outcomes in Adaptive Designs

Sharon Barton, Associate Director Oncology & Early Clinical Development, AstraZeneca
 15:15 -16:00  A regulatory perspective of adaptive design trials

Beatrice Panico, Senior Medical Assessor, (MHRA)
 16:00 -16:15  Close


Presenters:
Chris Jennison (University of Bath), Munya Dimairo (University of Sheffield), Beatrice Panico (MHRA), Chris Harbron (Roche), Sharon Barton (Astrazeneca) 

Adaptive designs are clinical trials that allow for prospectively planned modifications to one or more aspects of the design based on accumulating data from subjects in the trial and can provide a number of advantages over non-adaptive designs.  During this meeting we will hear about adaptive sample allocation for phase II/III designs, a new CONSORT extension reporting guideline for adaptive designs, regulatory aspects and case studies.
Chris Jennison photo





Chris Jennison (University of Bath),
Christopher Jennison is Professor of Statistics at the University of Bath, UK. His PhD research at Cornell University concerned the sequential analysis of clinical trials and he has continued to work in this area for the past 35 years. His book with Professor Bruce Turnbull, "Group Sequential Methods with Applications to Clinical Trials", is a standard text on this topic and is widely used by practising statisticians. More recently, he has written with a variety of co-authors on adaptive trial design and over-arching optimisation of the drug development process.

Professor Jennison's research is informed by experience of clinical trial analysis at the Dana Farber Cancer Institute, Boston and a broad range of consultancy with Medical Research institutes and Pharmaceutical companies.

Group sequential and adaptive clinical trial designs

We shall describe group sequential methods for monitoring a clinical trial that compares a new treatment against a control. This methodology is applicable across a range of response distributions. When the primary endpoint is a time-to-event outcome, tests constructed using the error spending approach are able to accommodate the unpredictable numbers of events at each analysis. We shall see how group sequential testing can lead to an earlier conclusion of the trial and fewer patients recruited.

In some Phase III clinical trials, more than one new treatment is compared to the control. We shall consider an adaptive clinical trial in which two versions of a new treatment are to be compared with a control when the primary endpoint is overall survival. At an interim analysis, one of the two treatments will be selected based on observed progression free survival. Then, in the remainder of the trial new patients will be randomised between the selected treatment and the control.

This three arm trial requires an adaptive design. A key element of such a design is a closed testing procedure which protects the familywise type I error rate when two different null hypotheses may be tested. Another crucial component of the design is a combination test that can merge data from before and after the interim analysis. We shall discuss closed testing procedures and combination tests in general before applying these methods to our three arm trial. With this methodology in place, we then assess the potential benefits of treatment selection in this adaptive trial design.

2T5A9074
Munya Dimairo (University of Sheffield)
Munya is a Research Fellow in Medical Statistics within the Sheffield Clinical Trials Research Unit at the University of Sheffield. He is involved in the design, conduct, analysis, and reporting of clinical trials. He is the lead Trial Statistician of an ongoing adaptive multi-arm multi-stage adaptive trial and IDMC Statistician on several trials. Munya is interested in the use of innovative trial designs and is collaborating on a number of initiatives to bridge gaps in the practical application of adaptive designs. For example, he is leading the development of the CONSORT Extension for randomised adaptive trials and the creation of an online platform to educate researchers across disciplines on the practical application of adaptive designs in randomised trials.

Introducing the Adaptive designs CONSORT Extension (ACE) Statement to improve reporting of randomised trials that use an adaptive design

Munya Dimairo on behalf of the ACE Steering Committee (m.dimairo@sheffield.ac.uk; mdimairo@gmail.com)

ACE Steering Committee: Munya Dimairo; Philip Pallmann; James Wason; Susan Todd; Thomas Jaki; Steven A. Julious; Adrian P. Mander; Christopher J. Weir; Franz Koenig; Marc K. Walton; Jon P. Nicholl; Elizabeth Coates; Katie Biggs; Toshimitsu Hamasaki; Michael A. Proschan; John A. Scott; Yuki Ando; Daniel Hind; and Douglas G. Altman

The reporting of adaptive designs (ADs) in randomised trials is inconsistent and needs improving [1-4]. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised.

We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting [1]. Delphi survey response rates were 94/143 (66%) in round one, 114/156 (73%) in round two, and 79/143 (55%) across both rounds. Members of the CONSORT Group were involved during the development process.

This talk will summarise the development process and introduce the ACE reporting guideline, focusing on new and modified reporting items. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text.

The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and reproducibility of their methods, results and inference. We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.

References

1. Dimairo M, Coates E, Pallmann P, et al. Development process of a consensus-driven CONSORT extension for randomised trials using an adaptive design. BMC Med. 2018;16(1):210.

2. Stevely A, Dimairo M, Todd S, et al. An investigation of the shortcomings of the CONSORT 2010 statement for the reporting of group sequential randomised controlled trials: a methodological systematic review. PLoS One. 2015;10(11):e0141104.

3. Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150.

4. Yang X, Thompson L, Chu J, et al. Adaptive design practice at the Center for Devices and Radiological Health (CDRH), January 2007 to May 2013. Ther Innov Regul Sci. 2016;50(6):710-717.

Beatrice Panico photo

Beatrice Panico (MHRA)
Maria Beatrice Panico is currently a Senior Medical Assessor in the Clinical Trials Unit at the Medicines and Healthcare products Regulatory Agency (MHRA).

She is a medical doctor, fully qualified in Neurology with a PhD in Neuroscience. She has extensive experience in pharmacovigilance in the pharmaceutical industry.

MHRA Presentation abstract

The MHRA supports innovation, and several trials with innovative designs are already ongoing in the UK. Some of these are adaptive design trials, in which prospectively planned modifications to the conduct of an ongoing trial can increase the chance of the trial formally being a success (i.e. of the null hypothesis being rejected). A central tenet of adaptive design protocols is that the adaptations are pre-specified in the protocol and are not made on an ad-hoc basis. Trials have to be safe and scientifically sound. It is therefore crucial that Sponsors of adaptive design trials provide the regulators with a strong scientific rationale for why an innovative design, rather than a more traditional approach, is the best solution to address the trial objectives. The rationale should also discuss how trial integrity will be maintained despite the planned adaptations.

Adaptations that can prove challenging in the current regulatory framework include the addition of new Investigational Medicinal Products, the addition of new trial populations, and some seamless Phase 2-3 trials.

Such changes can be introduced via substantial amendments.

However, if the proposed changes are so extensive that they change the nature of the initially approved trial (for example, they are not in line with the original research hypothesis, they make the data obtained up to the point of the amendment inadmissible, or they cause the sponsor to lose control of the Type I error rate), then a new clinical trial application would probably be necessary. The decision is always made on a case-by-case basis, both for initial applications and for amendments.

In conclusion: adaptations can be acceptable if safe and scientifically justified. Early engagement with regulators is strongly recommended in order to address potential issues of concern.

Chris Harbron (Roche)

Facilitating Personalised Healthcare With Adaptive Designs

Personalised Healthcare (PHC), targeting therapies to those patients most likely to benefit, is becoming an increasingly key strategy within drug development. A challenge during development is that there can be uncertainty about the optimal PHC strategy, both on the need for a selected population and on the exact definition of a subpopulation, for example which assay to use to measure a biomarker and with what cutoff. Adaptive designs provide an efficient way of mitigating this uncertainty whilst still maintaining the overall rigour and operating characteristics of the trial. I will review several published adaptive designs for biomarkers and describe in more detail a method for adapting a biomarker threshold at interim analyses to balance the power of the study and the precision in estimating the threshold.
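
Purely as a toy illustration of what adapting a biomarker threshold at an interim analysis can look like (this is not the speaker's method, and all numbers are simulated), the sketch below picks, from a small set of candidate cutoffs, the one giving the largest approximate subgroup z-statistic at the interim. In a real trial such a selection would sit inside a framework that preserves the operating characteristics, for example a combination test like the one sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated interim data (purely illustrative): a continuous biomarker, a
# treatment indicator, and a response whose treatment benefit only appears
# above a biomarker level of about 0.4.
n = 200
biomarker = rng.uniform(0.0, 1.0, n)
treated = rng.integers(0, 2, n)
response = 0.8 * treated * (biomarker > 0.4) + rng.normal(0.0, 1.0, n)

candidate_cutoffs = [0.2, 0.3, 0.4, 0.5, 0.6]

def interim_score(cutoff):
    """Approximate z-statistic for the treatment effect in the subgroup with
    biomarker above the cutoff; a crude proxy for the power of continuing
    the trial in that subgroup."""
    in_subgroup = biomarker > cutoff
    t = response[in_subgroup & (treated == 1)]
    c = response[in_subgroup & (treated == 0)]
    if len(t) < 10 or len(c) < 10:        # too little interim data to judge
        return -np.inf
    effect = t.mean() - c.mean()
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    return effect / se

selected_cutoff = max(candidate_cutoffs, key=interim_score)
print(f"biomarker cutoff carried forward to the next stage: {selected_cutoff}")
```
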
Sharon Barton (AstraZeneca)

Monitoring Outcomes in Adaptive Designs

Within early clinical trials we aim to make robust decisions as early as possible, typically at a planned interim or final analysis using pre-defined decision criteria. An alternative approach is to add continuous monitoring of a key efficacy or safety endpoint to these planned analyses. This approach uses a predictive power calculation to assess the chance of observing a given rate or better. The predictive power can be recalculated after each patient's outcome is available, and if it falls below a pre-agreed value the arm or study may be stopped. Simulation methods are used to evaluate the operating characteristics of the design, and a monitoring plan is created detailing the decision rule after each patient. An example will be shared outlining how this is planned to be incorporated into a trial using the discontinuation rate due to adverse events in the first 4 weeks, but the approach can equally apply to efficacy endpoints. Using continuous monitoring within a trial may result in quicker decisions that still have robust statistical characteristics.
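
One common way to build such a rule for a binary endpoint is a Beta-Binomial predictive probability: after each patient, compute the probability that the final observed rate will still be acceptable, and stop the arm if it drops below a pre-agreed cut-off. The sketch below uses that construction with entirely hypothetical numbers (planned sample size, acceptable discontinuation rate, prior and stopping cut-off); it is not the presenter's actual monitoring plan, whose operating characteristics would in any case be checked by simulation as described above.

```python
from scipy.stats import betabinom

# Hypothetical monitoring rule for a binary safety endpoint: discontinuation
# due to adverse events in the first 4 weeks. Stop the arm if the predictive
# probability of finishing with an acceptably low observed rate gets too small.
n_max = 40           # planned number of patients on the arm
max_events = 8       # acceptable final result: at most 8/40 (20%) discontinuations
stop_cutoff = 0.10   # stop if the predictive probability falls below 10%
a0, b0 = 1.0, 1.0    # Beta(1, 1) prior on the discontinuation rate

def predictive_probability(events, n_observed):
    """P(final count <= max_events | data so far) under a Beta-Binomial
    predictive distribution for the remaining patients."""
    remaining = n_max - n_observed
    allowed_future = max_events - events
    if allowed_future < 0:
        return 0.0
    if remaining == 0:
        return 1.0
    post_a, post_b = a0 + events, b0 + n_observed - events
    return betabinom.cdf(allowed_future, remaining, post_a, post_b)

# Monitoring plan: smallest event count that triggers a stop at each review.
for n_observed in range(5, n_max + 1, 5):
    boundary = next(e for e in range(n_observed + 1)
                    if predictive_probability(e, n_observed) < stop_cutoff)
    print(f"after {n_observed:2d} patients: stop if >= {boundary} discontinuations")
```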

Short bio: Sharon Barton is an Associate Director, Statistics Team Leader within Oncology Biometrics at AstraZeneca in Cambridge, UK.  She joined AstraZeneca in 2017 and currently leads a team of statisticians supporting early clinical development.  Prior to joining AstraZeneca, Sharon worked at GlaxoSmithKline for 14 years within both early and late phase clinical development across a broad range of disease areas.  Prior to joining GlaxoSmithKline, Sharon worked as a statistician for the contract research organisation PPD. 


Registration is now closed. 


Careers Meetings


Upcoming Events