8 July 2015

Professor Peter C Smith

Emeritus Professor of Health Policy
Imperial College London

International comparison has become a sure-fire media success story. The press and the web abound with comparisons of educational attainment, quality of cuisine, most liveable cities, levels of corruption, and happiness, to name but a few. We all know that many such comparisons are unscientific and of doubtful validity, but we still appear to have an unquenchable appetite for international comparison of any sort. 

In particular, over the last twenty years, sustained interest has developed in comparing population health and the associated health services, especially amongst high-income countries. But, notwithstanding the widespread interest guaranteed by any international comparison, do such initiatives deliver any benefits, other than satisfying our inherent curiosity? Might they even create unintended adverse responses?

An important undertaking

I would argue in the strongest terms that international comparison of health systems, if conducted carefully, is an immensely important undertaking, for two fundamental reasons.

“International comparison of health systems, if conducted carefully, is an immensely important undertaking.”

First, it acts as a major instrument of accountability, under which the public can scrutinize and pass judgement on their government for its stewardship of the health system. The World Health Report 2000 was a very early example of international comparison of health systems, and it was seriously compromised by flawed data and analysis. However, the principle of reporting directly to citizens on their health system’s performance was very important, and the power of such transparency was demonstrated by the energetic response of many governments to WHR 2000.

The second reason why international comparison is so important is that it can help governments pinpoint specific areas where the health system is not performing as well as it could, identify countries that appear to be performing better, and prompt a search for ways to improve. To their credit, UK governments have been amongst the strongest users of such comparison. For example, the results from the Eurocare project on cancer survival rates across Europe in the 1990s led to an acknowledgement that the UK was lagging behind many European counterparts, and prompted a sustained search for improvement.

Finding comparable data 

There is an increasing number of good sources for international comparison. The Organisation for Economic Co-operation and Development (OECD) has been at the forefront for many years in assembling routine health system data, and over the last ten years has become the major source for comparisons of health service quality in high-income countries. Population health data are also provided by the European Commission’s European Core Health Indicators website. The Commonwealth Fund has carved out a special niche in comparing patient experience and population views on health services, with an international survey that now covers 11 countries.

To complement these quantitative sources, the European Observatory on Health Systems and Policies provides a huge qualitative resource that helps those seeking to understand the observed variations.

The start of the learning process

“The next stage is for policymakers to select the most urgent areas for review, and to start to dig beneath the bald data.”

The QualityWatch report is an excellent example of how careful analysis and presentation of existing international indicators can be used to identify causes for concern. Nevertheless, however carefully they are done, such comparisons can only be the start of a process of learning and priority setting for policy makers. Detailed scrutiny is needed to understand the reasons behind variations. The next stage is for policymakers to select the most urgent areas for review, and to start to dig beneath the bald data. This will uncover surprises and anomalies, but should also point the way to further investigation and, in the longer term, possible research, trials, and even policy reforms.

Data is only part of the picture

But no one should rush to judgement. For example, the OECD data suggest that Japan has an astonishingly high rate of ‘avoidable’ hospital admissions for diabetes patients. But this is due mainly to a deliberate policy of ‘educational hospitalization’, under which newly diagnosed diabetes patients are given intensive personalized advice about managing their condition, with the intention of improving prognosis and reducing future health service utilization. Of course, the effectiveness of that policy should be evaluated, and its relevance for other systems assessed. However, its presence does mean that there is almost nothing directly to be learnt from comparison with Japan on this particular performance indicator.

A long way to go

“We even have trouble securing comparable information on the four countries of the UK, so there is still a long way to go.”

The QualityWatch report offers an excellent summary of some of the most important comparisons currently available. It also exposes the enormous gaps in coverage, and carefully explains why many of the indicators may be affected by national institutional arrangements, counselling caution when drawing inferences.

To improve comparability, there is a major role for international organizations in standardizing definitions, promoting the assembly of data (ideally at the level of individual patients), and making access as easy as possible. The EuroReach project, funded by the European Commission, showed practically how this agenda might be pursued. Yet we even have trouble securing comparable information on the four countries of the United Kingdom, so there is clearly still a long way to go. In the meantime, initiatives such as the QualityWatch report vividly demonstrate the benefits of truly comparable information, and one hopes that they will prompt concerted action to increase the quality and coverage of indicators in the future.

Comments

The data are indeed interesting, and raise many questions. However, I remain frustrated that none of the media reports which I have heard (mainly BBC, some newspapers, and the BMJ coverage) has pointed out that 'survival' is a very fickle beast when not qualified. As Gigerenzer points out (Risk Savvy, Allen Lane, 2014), this measure depends critically on how early a specific diagnosis is made. The resulting 'lead time bias' can make survival appear better, even though the ultimate mortality rates may be no different. He cites the example of prostate cancer, where widespread PSA testing in the USA means that the diagnosis is made very early, with 5- and 10-year survival being concomitantly better there than in the UK, where the diagnosis is usually only made once symptoms appear. Ultimately, the mortality of the disease is similar in both countries. Even mortality is a measure significantly affected by the demographics and incidence of the disease within a population. None of these matters received any mention, merely that survival and mortality rates for specific cancers were worse in the UK than in several other countries. Although I am not an epidemiologist, these distinctions seem fairly basic to the discussion. The lack of any discussion of them is potentially alarming for the lay audience, and frustrating and disheartening to the medical audience.
Kit Byatt

Dear Kit, thank you for your comments. Our report tries to highlight both the benefits and challenges of comparing health systems internationally. As you point out, there are many factors that need to be examined as potential explanations for the variation between countries. In addition to reporting OECD cancer metrics, our report also highlights the in-depth studies being done by the International Cancer Benchmarking Partnership to better understand differences in cancer outcomes. With best wishes, The QualityWatch Team
The QualityWatch Team
