Developing and evaluating methods for the analysis of clinical trials

Clinical trials are a key tool for evaluating the causal effects of interventions in healthcare. This research programme develops and evaluates methods for the statistical analysis of clinical trials, such that results are robust and make best use of the available data. This ensures we get the most accurate answers to questions about the clinical effectiveness of interventions.
A key feature of the topics below is that the statistical analyses of our clinical trials need to be pre-planned, partly so that we cannot be accused of choosing an analysis that gives favourable results.

Estimands

Clinical trials take a long time to run and are expensive to run well. It is critical to ensure they are carefully designed to address the most relevant questions, not just the questions that are easiest to answer. We clarify the relevant question in our ‘estimand’. This involves defining the population, the treatment, the outcome variable, a population summary that describes the treatment’s effect on the outcome variable, and how we will handle the ‘intercurrent events’ that complicate analysis.
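To make this concrete, the short Python sketch below records these five attributes for an invented example trial; the class, field names and example values are purely illustrative and are not part of any formal framework or of our own tools.

```python
from dataclasses import dataclass

@dataclass
class Estimand:
    """Illustrative container for the five attributes of an estimand."""
    population: str            # who the question is about
    treatment: str             # treatment condition(s) being compared
    variable: str              # outcome variable
    intercurrent_events: dict  # event -> strategy for handling it
    summary: str               # population-level summary of the treatment effect

# Hypothetical example: a two-arm trial in which some participants stop treatment
example = Estimand(
    population="Adults with newly diagnosed condition X",
    treatment="Drug A vs placebo, both added to standard care",
    variable="Change in symptom score at 12 weeks",
    intercurrent_events={"treatment discontinuation": "treatment policy",
                         "use of rescue medication": "hypothetical"},
    summary="Difference in mean change between arms",
)
print(example)
```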
We have several streams of work relating to putting the ideas behind estimands into practice. These involve: non-inferiority trials in TB, survival data, cluster-randomised and stepped-wedge designs, factorial and multi-arm designs, and trials with treatment switching.

Missing data

Whatever we do to prevent and minimise it, missing data are to be expected in the majority of studies. In randomised clinical trials, certain types of missing data can undermine the advantages of randomisation. For example, if a treatment is effective but the people whose health is worst are more likely to withdraw from the study, this will tend to make the treatment appear less effective than it really is. However, because we do not see the missing data, we cannot tell whether those who withdrew had worse outcomes or not. This poses a challenge for statistical analysis.
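The small Python simulation below illustrates this point under invented assumptions (normally distributed outcomes and a dropout probability that rises as outcomes worsen): analysing only the observed outcomes understates the true treatment effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                     # large n so sampling noise is negligible
true_effect = 1.0               # assumed true benefit of treatment

arm = rng.integers(0, 2, n)                          # 0 = control, 1 = treatment
outcome = true_effect * arm + rng.normal(0, 2, n)    # higher = better health

# Invented mechanism: people with worse outcomes are more likely to withdraw
p_withdraw = 1 / (1 + np.exp(outcome))
observed = rng.random(n) > p_withdraw

full_est = outcome[arm == 1].mean() - outcome[arm == 0].mean()
cc_est = (outcome[(arm == 1) & observed].mean()
          - outcome[(arm == 0) & observed].mean())

print(f"effect using all data:      {full_est:.2f}")  # close to 1.0
print(f"effect using observed only: {cc_est:.2f}")    # attenuated towards zero
```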
Our work on missing data has a rich history. We are currently working on issues such as how to handle missing components in composite outcomes, how to handle data truncated by death, multilevel multiple imputation, and missing data generated by wearables.
We worked with key stakeholders, including patient and public research partners and clinicians, to co-develop new guidelines on how to reduce, handle and report missing data in palliative care trials. The guidelines are relevant to all trials. You can read them here and download an infographic of the main findings here.

Sensitivity analysis

We can never be certain that our assumptions about missing data are correct, so we need to run several analyses that make different assumptions and see how much the result changes. This is known as ‘sensitivity analysis’.
Our work in this area focuses primarily on anchoring assumptions to experts’ opinions (and on how to elicit those opinions), and on basing assumptions about what the missing data would have looked like on the observed data from people randomised to other arms.
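As a generic illustration of the idea (not one of our specific methods), the sketch below uses a deliberately simplified single-imputation analysis and then shifts the imputed treatment-arm values by a range of amounts, delta, to see how sensitive the estimated effect is to assumptions about the missing outcomes; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
arm = rng.integers(0, 2, n)
outcome = 1.0 * arm + rng.normal(0, 2, n)
missing = rng.random(n) < 0.3          # 30% missing; mechanism unknown to the analyst
obs = ~missing

# Primary analysis assumption: missing values resemble observed values in the same arm
imputed = outcome.copy()
for a in (0, 1):
    imputed[missing & (arm == a)] = outcome[obs & (arm == a)].mean()

# Sensitivity analysis: suppose unobserved treatment-arm outcomes were worse by delta
for delta in [0.0, 0.5, 1.0, 1.5]:
    y = imputed.copy()
    y[missing & (arm == 1)] -= delta
    est = y[arm == 1].mean() - y[arm == 0].mean()
    print(f"delta = {delta:3.1f}  ->  estimated effect = {est:.2f}")
```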

Covariate adjustment in trials

In the very simplest cases, the only data required to analyse a trial are the randomised arm and the outcome. In reality, we tend to measure many other things, particularly just before randomising patients. These covariates tell us something about the outcomes we might expect; for example, people with more advanced disease when they join a trial might be expected to have worse outcomes. While randomisation protects trials against bias in estimating the treatment effect, it can still be useful to adjust for such covariates: doing so usually increases the precision of the estimated treatment effect, and hence the statistical power.
There are several ways to adjust for covariates and we are working on how best to choose among them, as well as seeking to understand when and how covariate adjustment can go wrong, and how to prepare for this possibility.
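As a small illustration of the precision gain, the sketch below compares an unadjusted comparison of arms with a linear-regression adjustment for a single baseline covariate, using invented data; this is just one simple approach among the several mentioned above.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
baseline = rng.normal(0, 1, n)               # prognostic covariate measured at baseline
arm = rng.integers(0, 2, n)
outcome = 0.5 * arm + 1.5 * baseline + rng.normal(0, 1, n)

def ols(X, y):
    """Return ordinary least squares coefficients and their standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

ones = np.ones(n)
b_unadj, se_unadj = ols(np.column_stack([ones, arm]), outcome)
b_adj, se_adj = ols(np.column_stack([ones, arm, baseline]), outcome)

print(f"unadjusted effect: {b_unadj[1]:.2f} (SE {se_unadj[1]:.3f})")
print(f"adjusted effect:   {b_adj[1]:.2f} (SE {se_adj[1]:.3f})")  # smaller SE
```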

Stratified medicine

While laboratory work continues to provide a rapid flow of possible strata (e.g. tumour subtypes in cancer) and associated targeted treatments, generating robust evidence of their efficacy, and of their superiority to existing standards of care, remains challenging. We are working across the continuum from settings where strata are known before the trial begins, through to settings where we use the trial data to identify strata for further investigation. For example, new information often emerges during a study, leading to increased focus on a particular patient stratum. In such settings, the trial is unlikely to have sufficient power within that stratum to resolve the question.
We are developing approaches to gain information by using expert opinion to inform borrowing from larger patient strata (or multiple strata) within the trial. Alongside this, in collaboration with the Alan Turing Institute, we are empirically evaluating recent machine learning innovations for (i) detecting strata within which treatment effects differ and (ii) quantifying the strength of evidence for these differences.
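As a rough illustration of the borrowing idea only (our work uses expert opinion to inform the borrowing, which is not shown here), the sketch below shrinks imprecise stratum-specific effect estimates towards a precision-weighted overall effect, using an assumed between-stratum variance and invented numbers.

```python
import numpy as np

# Hypothetical stratum-specific treatment effect estimates and standard errors
strata  = ["A", "B", "C (small)"]
effects = np.array([0.40, 0.55, 1.20])   # estimate within each stratum
ses     = np.array([0.10, 0.12, 0.60])   # the small stratum C is very imprecise

# Precision-weighted overall effect across strata
w = 1 / ses**2
overall = np.sum(w * effects) / np.sum(w)

# Simple partial pooling: shrink each stratum towards the overall effect,
# with shrinkage governed by an assumed between-stratum variance tau^2
tau2 = 0.05
shrink = ses**2 / (ses**2 + tau2)
pooled = shrink * overall + (1 - shrink) * effects

for s, raw, borrowed in zip(strata, effects, pooled):
    print(f"stratum {s:10s}  raw {raw:5.2f}  ->  borrowed {borrowed:5.2f}")
```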

Causal inference in trials

The primary question in a clinical trial is typically about the effect of a decision to undergo a treatment. In answering this, we account for events that happen during the period in which patients are followed up, such as participants switching treatments. However, it is sometimes of interest to ask questions such as ‘what would have happened if participants had not switched treatments?’ (handling treatment switching) and ‘how did the treatment work: did it act by first affecting a biomarker, which in turn affected the eventual outcome (e.g. an early effect on blood pressure reducing the risk of stroke)?’ (mediation analysis). These questions are easy to articulate but hard to answer, and our research focuses on extending and evaluating some of the available methods to match the complexity of real trials.
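As a simplified illustration of mediation analysis in the easiest setting (linear relationships and no unmeasured confounding of the mediator–outcome relationship), rather than the more complex methods our research addresses, the sketch below decomposes an invented total effect into a direct effect and an indirect effect acting through a biomarker.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
arm = rng.integers(0, 2, n)
# Invented mechanism: treatment lowers blood pressure (the mediator),
# which in turn lowers a stroke risk score (the outcome); there is also a direct path
bp   = -0.8 * arm + rng.normal(0, 1, n)
risk = 0.5 * bp - 0.2 * arm + rng.normal(0, 1, n)

def coefs(X, y):
    """Ordinary least squares coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
a = coefs(np.column_stack([ones, arm]), bp)[1]                    # arm -> mediator
b, direct = coefs(np.column_stack([ones, bp, arm]), risk)[1:]     # mediator -> outcome, direct path
total = coefs(np.column_stack([ones, arm]), risk)[1]

print(f"total effect:            {total:.2f}")
print(f"indirect (via mediator): {a * b:.2f}")
print(f"direct effect:           {direct:.2f}")  # total is approximately direct + indirect
```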

Simulation studies

We need to be able to compare how different statistical methods behave. For example, one method for analysing a clinical trial may be biased, systematically returning an estimate that a treatment is more (or less) effective than it truly is. But how can we tell whether a method is biased? When analytic approaches are intractable, simulation studies are a key tool for addressing such questions. We simulate hypothetical data that represent a study in the ways that matter, then analyse the simulated data in one or more ways. Doing this just once will not tell us which method is ‘right’, so we repeat the procedure many times and see whether a method is systematically wrong. Unfortunately, simulation studies are not well taught, and many of those who use them design and report their studies poorly. We have worked on a tutorial explaining how to use simulation studies and have several ongoing projects about how to improve their design and reporting.
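The sketch below is a minimal example of this procedure: it repeatedly simulates a two-arm trial with a known true effect and outcome-dependent dropout (all values invented), analyses each simulated dataset in two ways, and reports the bias of each analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
true_effect = 1.0
n_reps, n = 2_000, 200
estimates_full, estimates_cc = [], []

for _ in range(n_reps):
    arm = rng.integers(0, 2, n)
    outcome = true_effect * arm + rng.normal(0, 2, n)
    # Invented mechanism: dropout is more likely for people with worse outcomes
    observed = rng.random(n) > 1 / (1 + np.exp(outcome))

    estimates_full.append(outcome[arm == 1].mean() - outcome[arm == 0].mean())
    estimates_cc.append(outcome[(arm == 1) & observed].mean()
                        - outcome[(arm == 0) & observed].mean())

# Bias = average estimate minus the true value used to generate the data
print(f"bias, all data:      {np.mean(estimates_full) - true_effect:+.3f}")  # near zero
print(f"bias, observed only: {np.mean(estimates_cc) - true_effect:+.3f}")    # negative: effect underestimated
```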