18 December 2015

Dr Martin Bardsley

Director of Research
Nuffield Trust

Choosing which elements to measure in order to understand quality of care is much like dipping your hand into a tin of Quality Street at Christmas time. Out of the many quality measures – and indeed chocolates – we all fancy something a bit different. People have differing perspectives and are interested in different things. The QualityWatch Consensus on quality report aimed to understand which aspects of quality are most important to measure and whether different groups of people – such as patients, carers and clinicians – agree on this.

But before we get to gauging consensus, how do we decide how to measure quality of care? It’s clear that patients, professionals and the public all want high-quality care, but it’s less clear exactly what we mean by ‘quality’. As soon as you start to try and define quality, things get a bit messy. You quickly realise that there are lots of different elements to this ubiquitous little word.

There are some high-level frameworks that help to define quality in a way we can understand. For example: Donabedian’s method of breaking care into a triad of structure, process and outcomes; Maxwell’s six dimensions; and Darzi’s three (see below) – not to mention the many adaptations of these.

The single common definition of quality which encompasses three equally important parts:

  • Care that is clinically effective – not just in the eyes of clinicians but in the eyes of patients themselves;
  • Care that is safe; and,
  • Care that provides as positive an experience for patients as possible

Source: NHS England

But these high-level concepts are just the start. Things get more complicated when moving beyond defining the broad dimensions of quality to actually measuring it. There are so many different elements involved in delivering health and care services, and each has some quality dimension.

This has meant that approaches to measuring quality are often based on the pragmatic – that which is easiest to measure – resulting in many elements falling into the ‘too difficult to measure’ box. Nevertheless, if we believe that what matters should be measured (or should it be the other way around?), we need some way to break down the high-level notions of concepts like effectiveness into a series of metrics. This helps us stand some chance of obtaining relevant data or evidence to describe quality and how it’s changing. For example, the idea of prompt access to hospital services translates in part into a series of indicators about waiting times.

The process of deriving these measures is usually performed by a cadre of information specialists – people who know about the data and understand how to construct meaningful indicators. But the process of choosing which indicators are best is not just a technical exercise – implicit within the process are some value judgements about what is important. This process of indicator selection has rarely involved more than the specialists. The Consensus on quality report attempted to go further by gathering the views of specialists, patients and clinicians.

The fact that no one set of measures seemed to outshine the rest probably shows that quality is not easily boiled down into a few basic indicators. Rather, we need many different measures to understand the complex and multi-faceted nature of quality. So, just as we need the whole tin of Quality Street, we also need many measures of quality to keep everyone happy.

