The 150-day ambition is part of a wider transformation agenda under the UKCRD programme, which aims to make the UK a global leader in clinical trials. It’s a bold vision, and one I fully support. But as we chase performance indicators, we mustn’t lose sight of the deeper purpose behind them: improving the system, not just speeding it up.
Performance vs. Improvement: A Delicate Balance
There’s a tension here that anyone in research management will recognise. On one hand, we need hard metrics: clear, time-bound targets that drive accountability and focus. On the other, we need softer, more developmental measures that help us learn, adapt, and improve. Performance metrics tell us how fast we’re moving. Improvement metrics tell us why we’re moving that way, and whether we’re heading in the right direction.
Take study set-up times. The 150-day target is a powerful motivator, and it’s already catalysing change across the sector. But if we only measure the end result, we risk missing the story behind it. What were the bottlenecks? Which interventions made a difference? Where did collaboration flourish, or falter?
Learning from the Data
The NIHR Research Delivery Network (RDN) has made great strides in improving data transparency. The latest portfolio data shows over a million participants recruited across more than 4,500 studies in 2024/25. That’s a phenomenal achievement. But it’s the granularity of the data, by region, specialty, and study type, that offers real insight.
For example, we can now see where recruitment is thriving and where it’s lagging. We can track the impact of new contracting models like the National Contract Value Review (NCVR), which has helped standardise NHS pricing and reduce negotiation delays. These are the kinds of metrics that help us improve, not just perform.
Culture Matters
Metrics don’t exist in a vacuum. They shape behaviour and culture. If we focus solely on hitting targets, we risk creating a compliance mindset. But if we use metrics as tools for learning, we foster a culture of curiosity, collaboration, and continuous improvement.
That’s why I believe we need to talk more about why we measure, not just what we measure. Are we using data to punish or to empower? Are we celebrating progress, even when it’s messy or incomplete? Are we also listening to the voices behind the numbers: research nurses, trial managers, patients?
What I Think We Need to Do
We need to walk this line carefully. Yes, we should track performance against the 150-day target, but we should also invest in improvement: streamlining our internal processes, piloting new tools, and listening closely to feedback from our delivery teams.
We should also be asking questions like: What does “good” look like in study set-up? How can we make contracting faster without compromising quality? Where can we remove duplication or friction? And crucially, how do we share what we learn so others can benefit too?
Looking Ahead
As March 2026 approaches, the pressure will mount. But I hope we can keep sight of the bigger picture. The goal is not just hitting 150 days; it’s building a research system that’s faster and fairer, more efficient and inclusive, more responsive and resilient. That means embracing metrics that help us learn, not just perform. It means celebrating improvement, not just achievement. And it means remembering that behind every data point is a person, often a patient waiting for better care, better treatments, and better outcomes. Let’s make sure our metrics reflect that too.