The limits of aggregate performance ratings

Colin Leys | October 1, 2014 | Blog


The Department of Health is consulting on a proposal to require all registered providers of health and social services to display, at their reception desks and on their websites, the ‘aggregate’ ratings that the Care Quality Commission (CQC) is about to start giving them. The government says it wants ‘to place a clear legal requirement on providers to display the rating awarded by CQC, to ensure that this clear assessment of provider quality is accessible to the people when they use services’.

But will the CQC’s ratings really tell a patient or member of the public who ‘walks through the door of their local hospital’ ‘how well it is doing’? In 2013, at the health secretary’s request, the Nuffield Trust undertook a study which found that single ‘aggregate’ ratings (especially for hospital care, and especially if they are aimed at improving patient choice) are of doubtful value, and risk discrediting both the ratings themselves and the organisation that produces them. It is not clear that these findings have been sufficiently taken into account.

In the first place, the CQC’s inspections have been at best annual, and rely on data that may already be a year old, so a rating on display may be two years out of date. Unlike, say, a lift, where a notice saying it passed its inspection last year can reasonably reassure us that it is still in good working order today, conditions in a care home or a hospital (or in individual departments of a hospital) can change fairly quickly between inspections, for better or worse, owing to changes in funding, staffing, management, organisation and, in the case of private providers, even ownership.

Second, past CQC inspections have been of very variable quality. There have been too many cases of positive reports by the CQC or its predecessor, the Healthcare Commission, on institutions that turned out to have serious problems [1. Compare two successive CQC reports on the Clementine Churchill hospital, published within 11 months of each other: http://www.cqc.org.uk/sites/default/files/old_reports/1-102643500_BMI_Healthcare_Limited_1-128758653_BMI_The_Clementine_Churchill_Hospital_20130319.pdf; and http://www.cqc.org.uk/sites/default/files/old_reports/1-128758653_BMI_The_Clementine_Churchill_Hospital_INS1-664782319_Scheduled_01-05-2014.pdf].

Since 2013 the CQC’s leadership has begun to make major improvements, but the human resources available to it will not allow these to take effect quickly.

The Nuffield report stressed that whatever organisation does the rating must be adequately funded to do the job. This is not yet the case with the CQC. One recent hospital inspection is estimated to have cost about £300,000 [2. Plausibly estimated by Roy Lilley in his nhsManagers.net blog of 17 September 2014].

There are 160 NHS hospital trusts, some of which operate several hospitals, not to mention over 200 private hospitals with overnight beds. At roughly £300,000 an inspection, covering all of these institutions even once a year would cost in the order of £100m, and the extra £25m allocated to the CQC for the new-style hospital inspections will not allow spending on that scale to be replicated routinely. The intervals between inspections therefore seem bound to lengthen, making ratings liable to become more and more out of date. An even bigger limitation may be a shortage of professionally qualified people who can be called on to act as short-term members of inspection teams. There have been troubling suggestions that even the new regime is unsustainable.

But a more fundamental problem is that the overall rating the CQC has now started giving each institution (‘outstanding’, ‘good’, ‘requires improvement’ or ‘inadequate’) is a summary of summaries, i.e. of the aggregate ratings arrived at for each of five separate dimensions of quality – ‘safety’, ‘effectiveness’, ‘caring’, ‘responsiveness’ and ‘leadership’. Even if we assume that each of these ratings is a valid indicator, what does it mean to summarise them when they differ – when, for instance, the rating for safety is ‘good’ but for responsiveness ‘inadequate’? What weight does each carry in the aggregate rating? The Nuffield report pointed out that, especially for hospitals, what patients need to know varies from case to case; to be useful to them the ratings need to be for individual departments and specialties. An overall rating of ‘good’ can mask serious problems in a particular area of a hospital’s performance, which may be the very area that matters to a prospective patient.
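To see how easily aggregation can hide a failing dimension, consider a toy scoring scheme. The sketch below is purely hypothetical: the CQC publishes no numerical formula (the weighting is exactly what is unclear), so the numeric scores and the unweighted average used here are illustrative assumptions, not the CQC’s actual method.

    # Purely illustrative: hypothetical numeric scores and an unweighted
    # average, NOT the CQC's actual aggregation method.
    SCORES = {'inadequate': 1, 'requires improvement': 2,
              'good': 3, 'outstanding': 4}
    LABELS = {v: k for k, v in SCORES.items()}

    def aggregate(ratings):
        """Average the dimension scores and round to the nearest label."""
        mean = sum(SCORES[r] for r in ratings.values()) / len(ratings)
        return LABELS[round(mean)]

    hospital = {'safety': 'inadequate',   # the dimension a patient may care most about
                'effectiveness': 'good',
                'caring': 'outstanding',
                'responsiveness': 'good',
                'leadership': 'outstanding'}

    print(aggregate(hospital))  # -> 'good', despite 'inadequate' safety

On these assumed weights, a hospital rated ‘inadequate’ for safety still comes out ‘good’ overall; any scheme that collapses five dimensions into a single label faces the same masking problem.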

The proposals for requiring the ratings to be displayed also seem remarkably weak. How they are to be reported and displayed is left up to each provider, and even the requirement to display them at all is feeble. The CQC will have the power to levy a £100 penalty for failure to display the rating and, if that doesn’t work, to take a provider to court for a fine of up to £500. It is not clear how the CQC proposes to police this, whether it is budgeting for the cost of prosecuting the dozens of providers who might fail to display poor ratings, or how much even £500 would worry the management of a hospital with a turnover of millions. There must be a significant risk, as the Nuffield report pointed out, that both the ratings and the CQC will be discredited.

 


About the author


Colin Leys

Colin is an emeritus professor at Queen’s University, Canada, and an honorary professor at Goldsmiths, University of London. Since 2000 he has written extensively on health policy. He is co-author, with Stewart Player, of Confuse and Conceal: the NHS and Independent Sector Treatment Centres.