International comparisons of UK care quality

Lucia Kossarova, Ian Blunt & Martin Bardsley

Nuffield Trust

What can the UK learn from international comparisons of healthcare quality? We offer an overview of our research. Download the full in-depth report and appendices.

1. Background

International comparisons of the performance of healthcare systems have become a fairly common approach to supporting or refuting arguments for change in healthcare. In fact, in the United Kingdom (UK), comparisons with other countries have been influential in major policy moves – most notably in 2000, when the then prime minister, Tony Blair, pledged to bring UK spending on health up to the European average by 2006 (Blair, 2000).

More recently, the National Health Service (NHS) Mandate describes an ambition for the health system in England to become one of the best in Europe and even the world (Department of Health, 2015), as does the Five Year Forward View (NHS England, 2014). This ambition should be understood in the context of the UK continuing to spend relatively little on healthcare when compared with similar countries of the Organisation for Economic Co-operation and Development (OECD).

There have also been examples where comparative analyses have stimulated thinking about how to improve and find better ways of doing things. For example, much of our interest in integrated care models stems from comparisons between England and the United States (US) (Curry and Ham, 2010) and has involved learning about population health systems in the US, Germany, New Zealand and Sweden (Alderwick and others, 2015).

More recently, we have seen examples of cross-sectional comparisons, which compare countries at a single point in time – most notably the work of The Commonwealth Fund, which sponsors a series of comparative surveys (Davis and others, 2014), and that of The Economist Intelligence Unit (2015).

For this research we were interested in extending the above approaches to look at change over time. We also wanted to be realistic about the strengths and weaknesses of indicators of the quality of healthcare. We do not believe that this should be a process of ranking countries; rather, it is a way of understanding how the UK is progressing over time relative to other countries and of identifying areas where more effort may need to be made. International comparisons serve as an additional lens on the state of the quality of care provided nationally.

2. Quality indicators

Although international comparisons are appealing and often newsworthy, there are many challenges involved in collecting high-quality and comparable data across countries and assessing differences in the quality of care between countries. For the analyses in this research, we chose to use data on an established set of indicators derived from national returns and collated by the OECD. These data derive from very different information and healthcare systems, and we note that the way data are collected can bias the results. However, substantial improvements have been made in the quality and consistency of the data collected, as well as in the indicators and methodologies used.

The Health Care Quality Indicators (HCQI) project – a project initiated in 2002 to measure and compare the quality of health service provision across OECD countries – includes some of the most robust indicators available. This research uses the HCQI data to understand what international comparisons tell us about changes in the quality of healthcare in the UK between 2000 and 2013 and provides a baseline for future comparisons.

When considering how the UK compares to other countries on selected quality of care indicators, we examine both where the UK is heading (trends) and where it stands relative to other countries (better/similar/worse). Ultimately, this analysis attempts to answer the crucial question: how can we use this information to improve the quality of healthcare in the UK?

It is important to note that the OECD HCQI indicators are selective and only touch upon the quality of care in the different healthcare systems where validated comparative indicators are available. In this research we use 27 internationally available and validated indicators to explore care in four sectors – primary care, acute (hospital) care, cancer care and mental health – across the following 15 countries: Australia, Belgium, Canada, France, Germany, Greece, Ireland, Italy, the Netherlands, New Zealand, Portugal, Spain, Sweden, the US and the UK. We also examine areas where the indicators are not quite ready for international comparison over time, mainly due to data quality and reporting issues.

3. Findings for the UK

Despite continuous improvements in the quality of the OECD data and indicators, we have to guard against making oversimplified statements, for example that the quality of care is either good or bad in one country or another. As we have noted many times before, quality is difficult to measure and the indicators we have capture only very specific aspects of care – not the totality.

The illustration below summarises the 27 indicators according to whether the UK's performance on them appears, in general, to be better than, similar to or worse than that of other countries, and whether trends since 2000 have been improving, stable or deteriorating.

[Infographic: the UK's performance on the 27 quality indicators, by relative position and trend since 2000.]

The UK does not consistently overperform or underperform when compared with other countries. However, looking across the 27 indicators suggests that the UK’s healthcare system is better than average in some areas, while it requires significant improvement in others.

Absolute and relative trends – that is, whether the UK is improving or deteriorating and how it is performing in relation to other countries – are also mixed. It is encouraging that the UK is stable or improving on almost all of the indicators (25 out of 27), and we would hope that the UK can at least maintain, and ideally increase, the speed of improvement. It is also encouraging that there is no indicator on which the UK performs worse than other countries and is deteriorating at the same time. However, it is worrying that the UK performs worse than most countries on 14 out of the 27 indicators and is deteriorating on two indicators.

Key findings

  • The nine indicators representing primary care are too mixed to highlight any patterns in trends or relative performance. Overall, the UK is performing better than most of the comparator countries on five out of nine indicators. However, its performance on the following two indicators is deteriorating: diphtheria, tetanus and pertussis (DTP) vaccination coverage (between 2012 and 2013) and the volume of antibiotics prescribed. Significant improvement could be made in areas where performance is average (three out of nine indicators) or low (one out of nine indicators) relative to other countries. We would particularly like to draw attention to the following:
    • Influenza vaccination rates in the UK seem to be consistently higher than in many OECD countries – an indication of a system that is capable of delivering population-wide prevention, largely through well-developed primary care.
    • Average but improving performance on childhood vaccination rates gives a small insight into the quality of services for children in the UK. More internationally comparable indicators are required to truly understand the quality of services provided to children in primary care.
  • The over-use of antibiotics is an issue of global concern. Although the volume of antibiotics prescribed in the UK is rising, overall rates tend to be lower than those in other countries, but higher than in Germany, the Netherlands or Sweden. However, there are indications that the UK is prescribing a decreasing proportion of second-line antibiotics (cephalosporins and quinolones) – restricted for situations when first-line antibiotics have failed – which is a sign of good prescribing practice.
  • Rates of notionally avoidable hospital admissions are relatively low for diabetes, but for asthma and chronic obstructive pulmonary disease (COPD) these rates are relatively high compared to the best performers.

  • With regard to cancer care, the UK is in a somewhat contradictory position. Although it seems to perform relatively well on a range of measures of population screening, survival rates for some common cancers are still relatively low in the UK. Compared with other countries, the UK performs very well on breast and cervical screening coverage. However, cervical screening coverage has been declining over time, and the UK is also stagnating or significantly lagging behind in terms of cancer survival generally, raising concerns about potential delays in patients getting a diagnosis and being able to access effective treatment. A recent study carried out by the International Cancer Benchmarking Partnership suggests that differences in cancer survival rates are associated with differences in the readiness of primary care physicians to investigate for cancer in different countries, and calls for initiatives to support primary care physicians’ ability to investigate and refer patients to specialists (Rose and others). Overall, there is clearly a need for ongoing initiatives to continue, and for redoubled efforts to understand how best to reduce the survival and mortality gap with other countries. It would also be useful if the OECD collected and provided comparative data on the quality of cancer care for children (in collaboration with other partners collecting data from cancer registries, e.g. the International Agency for Research on Cancer and the European Cancer Observatory).
  • Indicators representing acute care – stroke and acute myocardial infarction (AMI) – mainly show improvements, but the UK continues to lag behind other countries. It is important to note that inaccuracies in routine data and differences in stroke care around the world make comparisons very challenging. Nevertheless, timely provision of high-quality acute stroke and AMI care is essential for preventing long-term disability and unnecessary deaths. A recent study comparing AMI mortality in Sweden and the UK reported that mortality rates in the UK are higher than those in Sweden, and suggested that many thousands of deaths at 30 days might have been prevented or delayed had patients in the UK received the same treatment as in Sweden; at the same time, the mortality gap between the two countries has narrowed over the past decade (Chung and others, 2014). While it is right that efforts to improve the quality of acute care services are being made, the size of the gap in mortality rates between the UK and comparator countries is of some concern and needs to be better understood. It will be important to monitor whether changes in the organisation and provision of acute care services translate into further reductions in mortality rates and whether the gap with other countries can be closed.

  • While some indicators exist in the areas of mental health, patient safety and patient experiences, work has mainly focused on improving and refining data collection so that meaningful comparisons can be made in the future.

4. Lessons for policy makers and health service managers

This type of descriptive international comparison of the quality of care over time is the first step towards building the evidence base necessary to identify problems and understand changes in the quality of care in the UK and other countries – just as was done with cancer survival, which led to the work of the International Cancer Benchmarking Partnership. However, further analysis of trends in each of the areas of care (e.g. mental health, stroke and COPD) using a range of other quality of care indicators, and an analysis of what drives the UK’s performance on these indicators, are necessary before effective policies can be designed and implemented.

One danger with these sorts of comparisons is that, as with any ranking exercise, being top may not necessarily mean being good. For example, with regard to lower extremity amputations in diabetes, although hospitalisation rates in the UK are relatively low, there is still an argument that with better care they should be even lower.

Overall, we would like to emphasise three lessons for policy-makers and health service managers:

  • International comparisons can be very powerful and could be used more widely. Although the depth of internationally comparable data is limited, there remains substantial scope to increase the ways in which such data are used to assess quality of care within the UK. One good example is how some of the measures published by the OECD are included in the NHS Outcomes Framework. Moreover, data emerging from a range of specialty-based comparative research projects could be used to provide learning from other countries’ performance and policies at the national and local level.
  • The challenges of using summary international indicators are well known. Perhaps the most important thing to remember is that these indicators are better at framing questions and initiating a debate than at producing definitive judgements. Deriving useful learning means carrying out a thorough analysis involving quantitative and qualitative methods with a range of different stakeholders (e.g. researchers, patients and healthcare professionals), at different levels of the system (macro to micro), in order to validate and better understand the findings – such as the work being done through the International Cancer Benchmarking Partnership.
  • Consider the indicators in the context of the system. It is important to take a broad view of quality across measures and, if necessary, to undertake some work to test whether the differences are a true reflection of the quality of care provided. One indicator alone will not provide a complete picture. When a range of different indicators provides a consistent message, we can be more confident in the findings. Sometimes even a set of indicators does not reveal the full picture, as important data may not be collected or easily available (e.g. data on the quality of services provided to children or data on the quality of mental healthcare).

It is important to note that this analysis compares average aggregate figures for the comparator countries as well as the UK. This masks variation among the four countries of the UK, as well as regional and small area-level variation. In the future, it would be beneficial if comparisons were also made at disaggregated levels in order to improve the potential for learning.

The OECD has noted that ‘measuring quality is the first step towards improving quality and thus value in health care’ (OECD, 2010). Comparative indicators are only the start of a process of understanding what works. However, they are a necessary and valuable way forward in the identification and design of effective policies and actions to bring about change. We hope that policy-makers will use this information effectively, especially for indicators where the UK’s performance is average, low or deteriorating. Furthermore, we hope that the information will be used to better understand what the UK could learn from other countries and also what specific steps should be taken to improve performance in the next few years.