Video-on-Demand Library


13 July 2022

How do we combine detailed patient level data into an informed representation of the patient? Solutions to this problem are presented by Bodo Kirsch. All visualisations are available on the Wonderful Wednesday blog.

Patient-level data contain demographic information as well as exposure, concomitant medications, adverse events and laboratory data. These can be presented in one plot or in multiple aligned plots. Interactive visualisations are shown that allow selected details to be expanded and collapsed. The use of colour and pre-attentive attributes supports easy interpretation of the data. The next challenge is to visualise ranking data. See the Wonderful Wednesday homepage for more detail.

Wonderful Wednesdays are brought to you by the Visualisation SIG. The Wonderful Wednesday team includes: Bodo Kirsch, Alexander Schacht, Mark Baillie, Daniel Saure, Zachary Skrivanek, Lorenz Uhlmann, Rachel Phillips, Markus Vogler, David Carr, Steve Mallett, Abi Williams, Julia Igel, Gakava Lovemore, Katie Murphy, Rhys Warham, Sara Zari, Irene de la Torre Arenas

21 June 2022

Dr Francq discusses the need for analytical methods to deliver unbiased and precise results. She talks in detail about confidence, prediction and tolerance intervals in linear mixed models and the interpretation of statistical results.

15 June 2022

This session is a collection of presentations considering approaches to dose finding in early phase trials.

Pavel Mozgunov; Andrew Hall; Lizzi Pitt

Practical Implementation of the Partial Ordering Continual Reassessment Method in a Phase I Combination-Schedule Dose-Finding Trial  - Pavel Mozgunov 
There is a growing medical interest in combining several agents and optimising their dosing schedules in a single trial. Evaluating doses of several drugs and their scheduling simultaneously in a single Phase I trial poses a number of challenges, and specialised methods are required to tackle these. While several suitable designs have been developed and proposed in the literature, uptake of these methods is slow and implementation examples of such advanced methods are still sparse.
 
In this presentation, we will share our recent experience of developing and implementing a modified model-based Partial Ordering Continual Reassessment Method (POCRM) design for 3-dimensional dose-finding in a Phase I oncology clinical trial in patients with advanced solid tumours. In the trial, doses of two agents and the dosing schedule of one of them can be escalated. We will provide a step-by-step overview of how the POCRM design was implemented and communicated to the trial team. We will present a novel approach to specifying the design parameters that is more intuitive to communicate, together with a number of visualisation tools developed to illustrate the statistical properties of the design, covering both performance in a comprehensive simulation study and behaviour in individual scenarios. The proposed design was evaluated by health authorities and was successfully used to aid decision-making in the ongoing trial.
 
Decision making under uncertainty in PI-II dose finding trials in Oncology - Andrew Hall 
There is increased interest in dose-finding methods in oncology using both toxicity and efficacy endpoints with targeted therapies. A phase I trial design proceeds in stages, with a decision as to which dose to give the next group of patients made after every stage. Bayesian decision-theoretic approaches have previously been found to be, in theory, ethically and scientifically sound. In practice, however, it is challenging to specify a utility function that captures clinical preferences while maintaining good operating characteristics, which are sensitive to the specification.
 
Outcomes from treatments are not deterministic; utilities are a measure of preference when facing an uncertain outcome. We consider situations where preferences/utilities are defined with respect to outcome probabilities. In doing so, clinicians can account for individual patient risk while meeting wider trial objectives, i.e. identifying a recommended phase II dose. We argue that attitudes to risk in this setting follow heuristics from prospect theory: namely, they are framed from the perspective of a reference point, with a risk-averse attitude for perceived gains and risk-seeking for losses. Additionally, with loss aversion it is ethical to avoid losses more so than to pursue gains. The bivariate utility is formed by inspecting utility independence axioms to describe the trade-off between two separate utility functions for efficacy and toxicity.
 
I will explain why using heuristics from prospect theory to structure utilities around outcome probabilities is justified in dose-finding trials, and show that this leads to consistent, and in some scenarios improved, operating characteristics compared with designs specifying value rather than utility functions.
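As a rough illustration of the prospect-theory heuristics described above, the sketch below implements a reference-point value function using the classic Tversky-Kahneman (1992) parameters (alpha = beta = 0.88, lambda = 2.25). Both the function and the parameter values are illustrative assumptions, not the utility specification used in the talk.

```python
def prospect_value(x, alpha=0.88, beta=0.88, loss_aversion=2.25):
    """Value of outcome x relative to a reference point at 0.

    Concave for gains (risk averse), convex for losses (risk seeking),
    and steeper for losses than for gains (loss aversion).
    """
    if x >= 0:
        return x ** alpha
    return -loss_aversion * ((-x) ** beta)


if __name__ == "__main__":
    # Loss aversion: a loss hurts more than an equal gain pleases.
    print(prospect_value(1.0), prospect_value(-1.0))
```

Note that `prospect_value(-1.0)` has a larger magnitude than `prospect_value(1.0)`, which is exactly the loss-aversion asymmetry the abstract appeals to.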
 
Building the bridge from PhD to practice: optimising phase I trials using estimand-style formulation - Lizzi Pitt 
We have developed a framework to obtain optimal dose escalation schemes for phase I trials. The emphasis is on fully specifying the aims of the trial up front: if you tell us what you want the trial to do, we can find the optimal dose escalation scheme for your specific trial. We achieve this using dynamic programming. We have considered trials with one binary safety endpoint with a variety of aims, as well as trials that also include a binary efficacy endpoint.  
 
This research was conducted as a PhD project and focussed on a first in human trial with a particular generic structure. How do we translate this theory into practice and apply the methodology to a real trial, with a different structure?  
 
We shall present reflections on some case studies of real phase I trials with different structures and covering different therapeutic areas. Spoiler alert! The key is in the problem formulation. This framework encourages early discussion on what makes the trial a success, what quantity the trial seeks to estimate and how the information will be used in phase II. This brings the flavour of estimands to phase I trials and creates trial designs that are fit for purpose, facilitate decision making and enable us to learn more about the treatment earlier.

14 June 2022

Lisa Winstanley (AstraZeneca), Finn Janson (Roche), Mimmi Sundler (AstraZeneca)

This session, brought to you by the Data Sharing ESIG, is based on their recent paper “Synthetic data use: exploring use cases to optimise data utility” and will cover the following topics:
• Overview of synthetic data - what it is and where it could be used
• How synthetic data can be produced
• Examples where synthetic data has been used.

14 June 2022

COVID-19 presented a variety of challenges in the conduct and analysis of ongoing clinical trials, including additional protocol deviations that led to increased missing data and the occurrence of unforeseen intercurrent events. This session includes three talks covering the following aspects: 1) how regulators approached the acceptability of changes in conduct and the handling of missing data or intercurrent events; 2) a case study of a Phase 3 trial where the collection of the primary endpoint was impacted, and where dosing, analyses, data collection and site monitoring also needed to be reconsidered; 3) a case study of the development of a treatment for COVID-19, navigating emerging scientific information and evolving regulatory pathways.

Khadija Rantell (MHRA), Asa Hellqvist (AstraZeneca), Nicola Scott (GSK)

Interpretation and translation of clinical trial outcomes impacted by the COVID-19 pandemic:
COVID-19 presents a variety of challenges in the conduct and analysis of ongoing clinical trials, including additional protocol deviations that lead to increased missing data and the occurrence of unforeseen intercurrent events. In response to these difficulties, regulatory agencies issued guidance on how to assess potential impacts on ongoing clinical trials. This stressed the importance of collecting additional pandemic-related data in order to distinguish between pandemic- and non-pandemic-related intercurrent events and to select an appropriate strategy for handling them. Selecting appropriate statistical analyses, with justifiable and plausible assumptions, is also critical to delivering reliable results targeting an agreed estimand that can be translated into a clinically meaningful and interpretable treatment effect for decision making. In this talk, I will provide examples of trial results impacted by the COVID-19 pandemic, describe their approaches to handling intercurrent events and missing data, and provide regulatory feedback on their acceptability.

Delivering a clinical study during a global pandemic: Experiences from the OSTRO nasal polyps study:
The phase III OSTRO trial of Benralizumab in Nasal Polyps was ongoing at the beginning of the COVID pandemic in early 2020. The pandemic restricted the ability of subjects to travel and attend scheduled visits, and it prevented the collection of data. The nasal endoscopy procedures required to collect the co-primary endpoint Nasal Polyp Score (NPS) were put on hold by the sponsor because of the high risk, to both subjects and site staff, of exposure to COVID-19. More than 25% of subjects would miss their primary endpoint due to inability to collect the Week 56 endoscopy. The effort to overcome the difficulties introduced by the pandemic involved reconsidering dosing procedures, statistical analyses, and planned approaches to data collection and site monitoring. We introduced flexibility in dosing and data collection to allow subjects to remain at home. We allowed flexibility for data monitoring without sacrificing study integrity. The statistical testing hierarchy was updated to consider an earlier timepoint for primary testing, which allowed for nearly 100% pre-pandemic contribution from subjects. We also considered other sensitivity analyses to assess the impact of COVID by determining on a subject level when the pandemic first had an impact on their data. While a global pandemic cannot be anticipated, we can still learn lessons from these challenges and solutions. In other trials we have introduced flexibility in operations and planned for statistical sensitivity analyses. In this presentation we will discuss the OSTRO experience and how we learn from it.

Statistical and operational challenges of developing a treatment for COVID-19:
A case study of the development of a treatment for COVID-19: a story of navigating the ever-changing nature of the COVID-19 pandemic while focusing on the goal of delivering a drug to patients as soon as possible, amid fluctuating background rates of infection, evolving disease knowledge, emerging variants, a changing competitive landscape and new regulatory procedures.

13 June 2022

In this session we will introduce the new PSI Data Science SIG, its goals and current activities. Three talks will cover the following: an example of recent blog posts developed within the group to elicit engagement and discussion within the community and to help data science practitioners expand their tool kits and data modalities; a presentation introducing current methods for feature selection and feature construction and their application to clinical data, using a published project on predicting immunochemotherapy tolerability as the guiding example; and a talk focused on an application of a workflow that combines traditional survival analysis techniques, such as KM curves and Cox models, with state-of-the-art machine learning methods and explainers. As with all the activities of our SIG, our goal is to generate interest and discussion, not to advocate for particular methods, and to be introductory and practical.

Domingo Salazar (AstraZeneca), Carsten Henneges (Syneos Health)

Introduction to the PSI Data Science SIG: In this talk we will introduce the new PSI Data Science SIG, its goals and current activities. As an example, we will cover two recent blog posts developed within the group: one on the creation of effective dashboards and the other on the particularities of analysing omics datasets. In general, the goal of our blogs is to elicit engagement and discussion within the community and to help data science practitioners expand their tool kits and data modalities.

Feature Selection:
The presentation will introduce current methods for feature selection and feature construction and their application to clinical data. A published project on predicting immunochemotherapy tolerability is used as the guiding example to share learnings and to critically review and discuss approaches. Since feature selection is a widespread task, the aim is to work out and highlight the particular requirements of its application to clinical data.
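To give a concrete flavour of the topic, here is a minimal sketch of one simple filter-style approach: rank features by absolute Pearson correlation with the outcome and keep the top k. The talk covers a much broader range of selection and construction methods; this example, including its data, is purely illustrative.

```python
from math import sqrt


def pearson(x, y):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


def select_top_k(features, outcome, k):
    """features: dict of name -> values; keep the k most correlated."""
    ranked = sorted(features,
                    key=lambda name: abs(pearson(features[name], outcome)),
                    reverse=True)
    return ranked[:k]


if __name__ == "__main__":
    y = [0, 0, 1, 1, 1]                      # toy binary outcome
    feats = {
        "age":   [50, 52, 70, 75, 68],       # strongly related to y
        "noise": [3, 1, 2, 1, 3],            # unrelated to y
    }
    print(select_top_k(feats, y, 1))         # → ['age']
```

Univariate filters like this ignore interactions between features, which is one of the limitations such a presentation would typically discuss.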

Machine Learning in Survival Analysis: This talk will focus on an application of a particular tried-and-tested workflow that combines traditional survival analysis techniques, such as KM curves and Cox models, with state-of-the-art machine learning methods and explainers. As with all the other activities of our SIG, our goal is to generate interest and discussion, not to advocate for particular methods. Also, in line with our philosophy, the talk will be introductory and practical.
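For orientation on the "traditional" side of that workflow, a hand-rolled Kaplan-Meier estimator can be written in a few lines. This is only a minimal reference sketch with invented data; the ML extensions and explainers discussed in the talk are well beyond it.

```python
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event, 0 = censored.

    Returns a list of (event time, survival probability) pairs,
    stepping the product-limit estimate down at each event time.
    """
    s = 1.0
    curve = []
    for t in sorted({t for t, e in zip(times, events) if e == 1}):
        at_risk = sum(1 for ti in times if ti >= t)
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1 - d / at_risk
        curve.append((t, s))
    return curve


if __name__ == "__main__":
    times = [2, 3, 3, 5, 8, 8, 9, 10]   # invented follow-up times
    events = [1, 1, 0, 1, 1, 0, 0, 1]   # invented event indicators
    for t, s in kaplan_meier(times, events):
        print(t, round(s, 3))
```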

13 June 2022

Jonas Haggstrom (Cytel)

The COVID-19 pandemic is the greatest health challenge for a generation and has highlighted the critical importance of generating rapid and rigorous evidence for decision-making on the treatment of COVID-19. In response to this challenge the International COVID-19 Data Alliance (ICODA) was formed: an open and inclusive, globally coordinated, health data-led research initiative bringing together for-profit and non-profit companies and organisations working collaboratively to enable and empower researchers to access RCT and RWD health data in a responsible way, making use of innovative data science and contemporary tools and technology to accelerate knowledge of the prevention and treatment of COVID-19. This talk will be in two parts. The first will be less technical in nature and will focus on the ICODA journey from its inauguration in July 2020 to the current state, with 12 driver projects including more than 130 researchers working on data from over 40 different countries: the ins and outs of how to successfully bring different companies and non-profit organisations together to collaborate towards a common goal, and specifically how statisticians played an essential role in making it happen. The second part will present results from one of the ICODA projects, evaluating the efficacy and safety of existing medical interventions against COVID-19 to support drug repurposing efforts.

13 June 2022

Recording of the 2022 PSI Conference Young Statistician Session from 1:30-3:00pm on Monday 13th June.

Daniel Leibovitz, Jan Meis, Holly Jackson, Lauren Cowie & Alessandra Serra.

Talks in this session were as follows:
• Daniel Leibovitz - “The Least Bad Option: A Simulation Approach to Minimizing Bias when Accounting for Treatment Switching in RCTs”
• Jan Meis - “Performance of different estimators in adaptive two-stage clinical trials with optimized design parameters”
• Holly Jackson - “An alternative to traditional sample size determination for small patient populations”
• Lauren Cowie - “Quantifying expert judgements using Bayesian elicitation techniques”
• Alessandra Serra - “A Bayesian multi-arm multi-stage clinical trial design incorporating information about treatment ordering”

13 June 2022

This session is a collection of presentations considering theoretical aspects of the estimands topic.

Camila Olarte Parra; Martina Amongero; Ekkehard Glimm

Hypothetical estimands in clinical trials: implementation of causal inference and missing data methods - Camila Olarte Parra 
The ICH E9 addendum outlines different strategies for handling intercurrent events but does not suggest statistical methods for their estimation. In this talk, we focus on the hypothetical estimand, where the treatment effect is estimated under the hypothetical scenario in which the intercurrent event is prevented. To estimate a hypothetical estimand, we consider methods from causal inference and missing data, establishing that certain ‘causal inference estimators’ are identical to certain ‘missing data estimators’. These links may help those familiar with one set of methods but not the other. We show that hypothetical estimands can be estimated by exploiting data after intercurrent event occurrence, which is typically not used. Moreover, using potential outcome notation allows us to state more clearly the assumptions on which causal inference and missing data methods rely to estimate hypothetical estimands and which (time-varying) variables should be adjusted for. The different estimators will be applied to a clinical trial conducted in patients with type 2 diabetes, where rescue medication needs to be available for ethical reasons. Accounting for rescue medication using the hypothetical strategy in this context, we will illustrate how the different estimators can be implemented, compare their assumptions, and their results. We will also discuss how to simultaneously account for other intercurrent events such as treatment discontinuation with either the treatment policy or hypothetical strategy. 
 
Treatment policy estimands for recurrent event data with missing data: COPD Vaccine case study using IPWC - Martina Amongero 
Under the treatment policy strategy, whether an intercurrent event has occurred or not is irrelevant; the data are collected and analysed regardless. In the presence of missing measurements (informative censoring), data need to be predicted based on plausible assumptions. For example, a multiple imputation approach has been used to impute missing data based on similar subjects who remained in the trial.
In this work, we explore an alternative causal inference approach called Inverse Probability of Censoring Weighting (IPCW). As a motivating example we consider a Phase 2 COPD Vaccine study where the clinical outcome is exacerbation (recurrent event) and the informative censoring is due to withdrawals. 
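The mechanics of IPCW can be shown with a toy example: subjects who remain under observation are up-weighted by the inverse of their estimated probability of having remained uncensored, so they also "stand in" for similar subjects who withdrew. Real applications model censoring using covariates; the sketch below estimates the censoring probability marginally per interval, with invented statuses, purely to illustrate the weighting step.

```python
def ipcw_weights(status_by_interval):
    """status_by_interval[i][j] is subject i's status in interval j:
    'observed', 'censored', or None (no longer in the trial).

    Returns each subject's IPCW weight for the last interval in which
    they were still observed.
    """
    n_intervals = max(len(s) for s in status_by_interval)

    # Per-interval probability of remaining uncensored among those at risk.
    p_uncensored = []
    for j in range(n_intervals):
        at_risk = [s[j] for s in status_by_interval
                   if j < len(s) and s[j] is not None]
        censored = sum(1 for x in at_risk if x == "censored")
        p_uncensored.append(1 - censored / len(at_risk))

    # Weight = inverse of the cumulative probability of staying uncensored.
    weights = []
    for s in status_by_interval:
        w = 1.0
        for j, x in enumerate(s):
            if x != "observed":
                break
            w /= p_uncensored[j]
        weights.append(w)
    return weights


if __name__ == "__main__":
    statuses = [
        ["observed", "observed"],   # followed to the end
        ["observed", "observed"],   # followed to the end
        ["observed", "censored"],   # withdrew in interval 2
        ["censored", None],         # withdrew in interval 1
    ]
    print(ipcw_weights(statuses))
```

In this toy data the two fully observed subjects each receive weight 2, so together they represent all four subjects who started the trial.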
 
Intercurrent event time-based copy-to-reference method - Ekkehard Glimm 
The definition of intercurrent events (ICEs) for the estimand of interest is always linked to a specific strategy for the analysis of the data. For estimands with multiple ICEs that use different strategies, it can be a challenge to define the imputation method and the analysis model. This becomes even more challenging if missing data are imputed using a multiple imputation approach. We will present an analysis method using only one imputation model to impute data under the missing-at-random (MAR) or missing-not-at-random (MNAR) assumption, covering all the different strategies we use for our multiple ICEs.
 
The method that we propose is flexible, such that imputations can be done either under the MAR assumption or using a reference-based approach (MNAR), both within the same procedure in SAS® PROC MI.
 
This method has various advantages. The data are imputed in one imputation step, and hence the correlation and variability are based on the full set of patients. The method is simple to implement and intuitive: the switch to the reference group (if indicated) at the time of the ICE is implemented at the patient level and hence considers the pre-ICE and post-ICE profile of each patient. Furthermore, the method accounts for the half-life of the active treatment by including previous visits in the imputation model. Finally, it accommodates different imputation strategies without the need to run separate imputation algorithms.
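The switch-to-reference idea can be caricatured with a deterministic, mean-level sketch: after the ICE, a subject's missing visits are imputed around the reference arm's mean profile, carrying forward the subject's last pre-ICE deviation from their own arm's mean. This is a simplified illustration only; the actual method uses multiple imputation from a joint model (e.g. in SAS PROC MI) so that uncertainty is properly propagated, and all numbers below are invented.

```python
def copy_to_reference(observed, ice_visit, active_means, ref_means):
    """observed: the subject's values for visits 0 .. ice_visit-1.
    active_means / ref_means: per-visit mean profiles of the two arms.

    Returns the completed profile: observed pre-ICE values followed by
    post-ICE values imputed from the reference profile, shifted by the
    subject's last pre-ICE deviation from their own arm's mean.
    """
    deviation = observed[ice_visit - 1] - active_means[ice_visit - 1]
    imputed = [ref_means[v] + deviation
               for v in range(ice_visit, len(ref_means))]
    return list(observed) + imputed


if __name__ == "__main__":
    active_means = [0.0, 2.0, 4.0, 6.0]   # invented mean response per visit
    ref_means = [0.0, 1.0, 2.0, 3.0]      # invented reference-arm means
    # Subject discontinues active treatment after visit 1 (ICE at visit 2).
    print(copy_to_reference([0.0, 2.5], ice_visit=2,
                            active_means=active_means, ref_means=ref_means))
```

The imputed visits track the reference trajectory rather than the active one, which is exactly the behaviour the reference-based (MNAR) strategy is meant to encode.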

13 June 2022

In this conference session, the speakers discuss various aspects of decision-making in confirmatory trials, including the choice of study population, the use of surrogate endpoints, an RShiny package to help with go/no-go decisions and a discussion of the two confirmatory trial paradigm.

Sarah Williams (chair), Kimberley Haquoil, Zhizheng Wang, Johannes Cepicka, Stella Jinran Zhan

Practical experience designing an adaptive enrichment study - Lucy Keeling, Sam Miller 
Scientific advances over recent years have enabled the discovery of targeted therapies, i.e. therapies with the potential for increased efficacy in a defined subgroup of patients. These advances bring challenges for drug developers and regulators. Development of these compounds comes with additional costs and complexity, such as the development of a reliable diagnostic. The pharmaceutical company therefore desires to identify the best population to treat as early as possible, perhaps using a relatively short-term outcome. Regulators require efficacy to be confirmed in the population to be treated using established, possibly long-term, endpoints. The common goal is to deliver these medicines to patients as rapidly as possible. There is therefore a need to apply trial designs that enable efficient decision-making and demonstration of efficacy. Fortunately, statisticians have developed robust statistical theory that allows the flexibility to use surrogate endpoints for decision-making regarding the population while still ensuring rigorous type I error control for efficacy. However, before a company can apply these methods in practice, it must understand the risks and benefits of the decision rules and analysis method in the specific circumstances of its trial. We will report results of a simulation study in which we evaluated the operating characteristics of several potential adaptive trial designs. KerusCloud was used to simulate patient-level data with multiple correlated outcomes and subpopulations and to apply interim decision rules to seamlessly select the final analysis population. As a result, our collaborators could make the best decision regarding the trial design to use, in full knowledge of the practical implications.
 
Relationship between Change in UACR and eGFR Slope, a Meta-analysis - Zhizheng Wang, Christian Källgren, Magnus Andersson 
Urine Albumin-to-Creatinine Ratio (UACR) and estimated glomerular filtration rate (eGFR) slope are two key biomarkers for CKD. Due to the slow progression of CKD, it is desirable to find suitable surrogate endpoints for more efficient drug development programs. Two recent separate meta-analyses (Heerspink et al. 2019; Inker et al. 2019) have studied the respective biomarkers of treatment effect. However, the link between the two is still missing. In this study, we tried to find a connection between them. We used a meta-regression approach to combine the studies included in these two meta-analyses and explored the relationship between UACR and eGFR slope, with chronic and total slope at several different timepoints. The primary interest is in the relationship between reduction in UACR at six months and effect on total eGFR slope at two years, which we used to plan our Phase 3 programs and to calculate the probability of technical success (PTS) based on early program results. For the primary objective we estimated an R² of 14.74% with p-value <0.01 for UACR. For a subset of studies with patients having CKD at baseline, the model reached a peak R² of 43.71%. The eGFR slope can be predicted given UACR from a Phase 2 trial, and PTS can be calculated as a conditional probability given that eGFR slope has a linear relationship with log mean UACR. This result guided us in the evaluation of different Phase 3 program designs with eGFR slope as the primary endpoint.
References: Heerspink, H. J., Greene, T., Tighiouart, H., Gansevoort, R. T., Coresh, J., Simon, A. L., ... & Keane, W. (2019). Change in albuminuria as a surrogate endpoint for progression of kidney disease: a meta-analysis of treatment effects in randomised clinical trials. The Lancet Diabetes & Endocrinology, 7(2), 128-139. Inker, L. A., Heerspink, H. J., Tighiouart, H., Levey, A. S., Coresh, J., Gansevoort, R. T., ... & Greene, T. (2019). GFR slope as a surrogate end point for kidney disease progression in clinical trials: a meta-analysis of treatment effects of randomized controlled trials.
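The core meta-regression step can be sketched as inverse-variance-weighted least squares: regress each study's eGFR-slope effect on its log UACR effect, weighting by 1/SE². The study data in the demo are invented solely to show the mechanics; they are not results from the cited meta-analyses.

```python
def weighted_least_squares(x, y, w):
    """Weighted simple linear regression; returns (intercept, slope)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    slope = sxy / sxx
    return my - slope * mx, slope


if __name__ == "__main__":
    # Invented per-study treatment effects, one point per study.
    uacr_effect = [-0.40, -0.25, -0.10, -0.30]    # effect on log UACR
    slope_effect = [0.90, 0.55, 0.20, 0.70]       # effect on eGFR slope
    weights = [1 / 0.04, 1 / 0.09, 1 / 0.02, 1 / 0.05]  # 1 / SE^2
    intercept, slope = weighted_least_squares(uacr_effect, slope_effect, weights)
    # A negative fitted slope: larger UACR reductions predict larger
    # eGFR-slope benefits in this made-up data.
    print(round(intercept, 3), round(slope, 3))
```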
 
Optimal designs for phase II/III drug development programs facilitated by R Shiny applications - Johannes Cepicka, Marietta Kirchner, Heiko Götte, Meinhard Kieser, Lukas Sauer, Stella Erdmann 
In the planning of phase II/III drug development programs, sample size determination is essential, as it has a significant impact on the chances of achieving the program objective. Within a utility-based, Bayesian-frequentist framework, methods for optimal designs, i.e. optimal go/no-go decision rules (whether to stop or to proceed to phase III) and optimal sample sizes minimizing the costs while maximizing the chances of achieving the program objective, were developed recently [1-4]. By expanding the framework to include even more aspects of practical relevance, a wide range of scenarios can now be addressed (e.g. the planning of optimal phase II/III programs with several phase III trials, multiple arms or endpoints; for time-to-event, normally distributed and binary endpoints; with discounting of over-optimistic phase II results; with limited budget and/or sample sizes, etc.). By merging all aspects into a single modular framework and developing user-friendly R Shiny applications and an R package (https://web.imbi.uni-heidelberg.de/drugdevelopR/), utilisation is facilitated and the methods can be further adapted for specific scenarios. As regulatory authorities require the validation of any software used to produce records within clinical research [5], a sophisticated quality assurance concept consisting of measures for archiving, versioning, bug reporting and code documentation was followed, guaranteeing reliable results.
[1] Kirchner M et al. 2016, Statistics in Medicine, 35(2), 305-316. [2] Preussler S et al. 2019, Biometrical Journal, 61(2), 357-378. [3] Preussler S et al. 2021, Statistics in Biopharmaceutical Research, 13(1), 71-81. [4] Kieser M et al. 2018, Pharmaceutical Statistics, 17(5), 437-457. [5] US Food and Drug Administration 2003, https://www.fda.gov/media/75414/download
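A toy version of the utility-based go/no-go idea can make the framework concrete: given a (discounted) phase II effect estimate, search over phase III sample sizes for the one maximising expected utility (reward if the trial succeeds minus per-patient costs), and declare "no-go" if even the best option has negative expected utility. All numbers (reward, costs, discount factor) are invented for illustration; the drugdevelopR package implements the actual methodology.

```python
from statistics import NormalDist


def optimal_go_no_go(effect_hat, discount=0.8, alpha=0.025,
                     reward=100e6, cost_per_patient=50e3,
                     n_grid=range(50, 2001, 50)):
    """Return ('go', n, utility) for the utility-maximising phase III
    sample size per arm, or ('no-go', None, 0.0) if no option beats
    stopping. Outcomes are assumed normal with unit variance."""
    norm = NormalDist()
    z_alpha = norm.inv_cdf(1 - alpha)
    theta = discount * effect_hat   # discount an optimistic phase II estimate
    best_n, best_u = None, 0.0      # stopping ("no-go") has utility 0
    for n in n_grid:
        # Power of a two-arm phase III with n patients per arm.
        power = 1 - norm.cdf(z_alpha - theta * (n / 2) ** 0.5)
        u = power * reward - cost_per_patient * 2 * n
        if u > best_u:
            best_n, best_u = n, u
    if best_n is None:
        return ("no-go", None, 0.0)
    return ("go", best_n, best_u)


if __name__ == "__main__":
    print(optimal_go_no_go(0.3))   # promising phase II estimate → "go"
    print(optimal_go_no_go(0.0))   # null phase II estimate → "no-go"
```

Past the utility-maximising n, extra patients cost more than the marginal power they buy, which is why an interior optimum exists.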
 
Should the two-trial paradigm still be the gold standard in drug assessment? - Stella Jinran Zhan, Cornelia Ursula Kunz, Nigel Stallard 
Two significant pivotal trials are usually required for new drug approval by a regulatory agency. This standard requirement is known as the two-trial paradigm. However, several authors have questioned why we need exactly two pivotal trials, what statistical error the regulators are trying to protect against, and what the potential alternative approaches are. It is therefore important to investigate these questions to better understand regulatory decision making in the assessment of drugs' effectiveness. It is common that two identically designed trials are run solely to adhere to the two-trial rule. Previous work showed that combining the data from the two trials into a single trial (one-trial paradigm) would increase the power while ensuring the same level of type I error protection as the two-trial paradigm. However, this is true only under a specific scenario, and there has been little investigation of type I error protection over the full parameter space. In this work, we compare the two paradigms by considering scenarios in which the two trials are conducted in identical or different populations, and with equal or unequal size. With identical populations, the results show that a single trial provides better type I error protection and higher power. Conversely, with different populations, although the one-trial rule is more powerful in some cases, it does not always protect against the type I error. Hence, there is a need for appropriate flexibility around the two-trial paradigm, and the appropriate approach should be chosen based on the questions we are interested in.
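The power comparison at the heart of this talk can be sketched numerically. Under the simplification of two identically designed trials in the same population, each analysed with a one-sided z-test, the snippet below computes the overall type I error and power of the two-trial rule and of a single pooled trial tested at the matching level. This is a toy calculation under stated assumptions, not the authors' full analysis.

```python
from statistics import NormalDist

norm = NormalDist()


def two_trial(mu, alpha=0.025):
    """Overall one-sided type I error and power of the two-trial rule
    (both trials significant at level alpha); mu is one trial's drift,
    i.e. the expected z-statistic under the alternative."""
    z = norm.inv_cdf(1 - alpha)
    power = (1 - norm.cdf(z - mu)) ** 2
    return alpha ** 2, power


def pooled_trial(mu, alpha=0.025):
    """A single trial on the pooled data, tested at alpha**2 so its
    type I error matches the two-trial rule; doubling the sample size
    scales the drift by sqrt(2)."""
    z = norm.inv_cdf(1 - alpha ** 2)
    return alpha ** 2, 1 - norm.cdf(z - mu * 2 ** 0.5)


if __name__ == "__main__":
    mu = 3.2416   # drift giving roughly 90% power per trial
    print(two_trial(mu), pooled_trial(mu))
```

At this drift, both rules have one-sided type I error 0.025² = 0.000625, the two-trial rule has power 0.9² = 0.81, and the pooled test reaches roughly 0.91, in line with the abstract's finding that a single trial wins when the populations are identical.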

08 June 2022

In paediatric drug development, special interest lies in any adverse impact on children's growth. Zak Skrivanek presents examples to visually explore effects on growth. All visualisations are available on the Wonderful Wednesday blog.

There are various ways to display a course over time. Band plots show time on the horizontal axis, which can be interpreted intuitively. Animated scatter plots show the change over time in time lapse. Differences in the distribution can easily be spotted in violin plots or box-whisker plots. The next challenge is to design a graphical patient profile. See the Wonderful Wednesday homepage for more detail. 

Wonderful Wednesdays are brought to you by the Visualisation SIG. The Wonderful Wednesday team includes: Bodo Kirsch, Alexander Schacht, Mark Baillie, Daniel Saure, Zachary Skrivanek, Lorenz Uhlmann, Rachel Phillips, Markus Vogler, David Carr, Steve Mallett, Abi Williams, Julia Igel, Gakava Lovemore, Katie Murphy, Rhys Warham, Sara Zari, Irene de la Torre Arenas.

18 May 2022

Want to quickly explore your data in the browser? Looking to create, collaborate and share interactive visualizations with others? In this session, you will learn how to load data from a SQL database, view and interact with data in JavaScript, create visualizations using a variety of tools including Observable Plot/D3, configure & customize your visualizations through Inputs and much more!

Mike Freeman

Links provided during the webinar:
Starter notebook: https://observablehq.com/@observablehq/psi-workshop;
Completed notebook: https://observablehq.com/@demoteam/psi-workshop;
Plot notebook: https://observablehq.com/@observablehq/plot;
Plot cheatsheets: https://observablehq.com/@observablehq/plot-cheatsheets