
Development of a standardised set of metrics for monitoring site performance in multicentre randomised trials: a Delphi study

Whitham, Diane; Turzanski, Julie; Bradshaw, Lucy; Clarke, Mike; Culliford, Lucy; Duley, Lelia; Shaw, Lisa; Skea, Zoe; Treweek, Shaun P.; Walker, Kate; Williamson, Paula R.; Montgomery, Alan A.; Bevan, Simon; Devall, Adam; Fairbrother, Kathryn; Goodman, Kirsteen; Hewitt, Catherine; Hobson, Rachel; Lawton, Sarah; Lock, Stephen; McDonald, Alison; Norrie, John; O’Brien, Alastair; Pearson, Sarah; Rhodes, Shelley; Snowdon, Claire; Thomas, Kim; Wood, Jill



Site performance is key to the success of large multicentre randomised trials. A standardised set of clear and accessible summaries of site performance could facilitate the timely identification and resolution of potential problems, minimising their impact.

The aim of this study was to identify and agree a core set of key performance metrics for managing multicentre randomised trials.

We used a mixed methods approach to identify potential metrics and to achieve consensus about the final set, adapting methods that are recommended by the COMET Initiative for developing core outcome sets in health care.

We used performance metrics identified from our systematic search and focus groups to create an online Delphi survey. We invited respondents to score each metric for inclusion in the final core set over three survey rounds. Metrics scored as critical by ≥70% of participants and as unimportant by ≤50% were retained.

Round 1 of the Delphi survey presented 28 performance metrics, and a further six were added in round 2. Of 294 UK-based stakeholders who registered for the Delphi survey, 211 completed all three rounds.

At the consensus meeting, 17 metrics were discussed and voted on: 15 metrics were retained following survey round 3, plus two others that were preferred by consensus meeting participants. Consensus was reached on a final core set of eight performance metrics in three domains: (1) recruitment and retention, (2) data quality and (3) protocol compliance. A simple tool for visual reporting of the metrics is available from the Nottingham Clinical Trials Unit website.

We have established a core set of metrics for measuring the performance of sites in multicentre randomised trials. These metrics could improve trial conduct by enabling researchers to identify and address problems before trials are adversely affected. Future work could evaluate the effectiveness of using the metrics and reporting tool.


Whitham, D., Turzanski, J., Bradshaw, L., Clarke, M., Culliford, L., Duley, L., …Wood, J. (2018). Development of a standardised set of metrics for monitoring site performance in multicentre randomised trials: a Delphi study. Trials, 19, Article 557.

Journal Article Type: Article
Acceptance Date: Sep 26, 2018
Online Publication Date: Oct 16, 2018
Publication Date: Oct 16, 2018
Deposit Date: Oct 24, 2018
Publicly Available Date: Oct 24, 2018
Journal: Trials
Electronic ISSN: 1745-6215
Publisher: Springer Verlag
Peer Reviewed: Peer Reviewed
Volume: 19
Article Number: 557
Keywords: Multicentre randomised trials; Performance metrics; Delphi survey; Consensus meeting; Trial management

