Discussion
This study evaluates the association between hospital rating and cardiovascular readmission and mortality rates, based on the national Swedeheart registry of ACS. The assessments were performed both at the transition to public rating of hospitals and some years later, after modification of the rating score. We found no correlation between the quality index and mortality rates at the hospital level. Furthermore, we found no or only weak correlations between the quality index and readmission rates. The incidence of ACS decreased during the study period, but our results do not suggest an association between the ranking score and mortality or readmission rates for patients hospitalised for ACS.
Variations in hospital provision of guideline-recommended treatment were previously reported to explain 28% of the between-hospital variation in 30-day mortality in patients with an acute myocardial infarction included in Swedeheart 2004–2011, after adjustment for case mix.12 In the same report, variation in hospital treatment explained 22% of the variation in 30-day mortality after acute myocardial infarction in patients included in the Myocardial Ischaemia National Audit Project (MINAP) in the UK. This association is high compared with a study in the USA, where only 6% of hospital-level variation in 30-day mortality rates for patients with an acute myocardial infarction was explained by publicly reported process measures.19 The discrepancy in associations between Swedeheart and MINAP may, at least for the first study period, be explained by the coverage of these registries. At the time of the study, both registries covered only 50%–60% of all patients hospitalised for an acute myocardial infarction, and the included patients were younger and less ill, whereas we included every patient with ACS as primary diagnosis at discharge.20 21 The modified Swedeheart index (used from 2011) includes the degree of coverage, that is, the share of patients eligible for inclusion in the registry who are included, as part of the index. However, this modification did not affect the lack of association between the score and cardiovascular events after an ACS. Another explanation might be that patients with ST elevation myocardial infarction (STEMI) are over-represented in the RIKS-HIA/Swedeheart population.
A study of the association between in-hospital mortality rates and a composite index of performance measures found a weak but significant association for STEMI but not for non-STEMI patients.22 For RIKS-HIA/Swedeheart, the selection bias is in part explained by the fact that some hospitals only register patients treated in cardiac intensive care units, which was also the original intention of the registry. Of note, the indication for admission to these units (or to a cardiology unit without intensive care facilities) varies between hospitals, especially for elderly patients.
Hospital characteristics and socioeconomic status are other factors, not examined in our study, that may explain variations in mortality rates between hospitals. A multivariable model including hospital characteristics and socioeconomic status explained 17% of the variation in hospital-specific 30-day risk-stratified mortality rates after an acute myocardial infarction.23 This notwithstanding, most of the variation in mortality rates between hospitals remains unexplained and warrants further study. Given that most of the variation in cardiovascular outcome may be explained by factors other than current in-hospital treatment regimens, it is reasonable to believe that hospitals in general have limited opportunities to deviate from secular mortality trends. This said, severe non-adherence to guideline-based treatments is likely to result in adverse deviation from secular mortality trends.
In the worst-case scenario, such associations may even be reversed. Settings where public rating is linked to economic reimbursement may create an incentive to avoid caring for patients with a perceived high risk of poor outcome.24 Under such circumstances, higher ratings have been linked to higher mortality.25
The weak associations between the quality index and readmission rates are in line with a larger study including 2700 hospitals in the USA, where the association between performance measures and the 30-day all-cause readmission rate after acute myocardial infarction was low (r=0.10, although significant). Other conditions evaluated in that study (pneumonia and orthopaedic surgery) also showed weak associations between performance measures and readmission rate (r=0.07 and 0.06, respectively).26
On average, hospitals increased their rating scores during the study period. This suggests improved performance measures and a higher quality of care, or possibly differences in the reporting of the quality measures or in the way the score was calculated. As hospitals achieve a higher degree of fulfilment of process measures, the differences between them will diminish. Although this is desirable, the association between performance measures and outcomes at the hospital level will weaken when all hospitals have high compliance with guidelines. Also, the differences in rating score in Swedeheart are disproportionately large compared with the underlying differences in treatment, as points are awarded for reaching a threshold value rather than on a continuous scale. Hence, the quality index may exaggerate the relative differences in treatment between hospitals. Another reason for weak associations between performance measures and outcome at the hospital level might be that each hospital has too few cases, resulting in large variability in mortality and readmission rates per hospital.
There are important limitations of this study. First, although the National Patient Register has an almost complete degree of coverage of ACS, it contains scarce information about risk factors and other potentially important confounders of prognosis for the individual patient. Thus, the comparisons are based on crude and partly unadjusted data. However, the use of risk-adjusted mortality rates to evaluate healthcare is also associated with pitfalls.27 Further, the Swedeheart quality index is not based on adjusted patient data, and case mix in different hospitals is not taken into account. Thus, we consider the crude analysis to reflect the way the quality index is used. However, the year-wise, cross-sectional analyses in the study do not allow us to follow patients over a longer period, nor to track changes in the rating system.
Second, during the first part of the study period, RIKS-HIA/Swedeheart was focused on patients aged 80 years or less treated in coronary care or intensive care units. We deliberately included all patients with an ACS as the main diagnosis at discharge, which also encompassed patients older than 80 years and those hospitalised outside cardiac care units. We consider this to give a higher degree of coverage and a better reflection of the quality of ACS care for the individual hospital, and hence a more useful comparison between hospitals.
Third, readmissions and mortality after discharge from hospital are influenced by the follow-up of the patient. Although Sweden has rather standardised follow-up during the first year after an ACS, differences in follow-up will affect the correlations. This may be of particular importance for elderly patients or patients with many comorbidities.
Conclusion
Mortality rates and readmission rates in patients hospitalised for an ACS associate poorly with a Swedish registry-based index specifically evaluating quality of care at the hospital level. Ranking scores based on process measures might be one important dimension for better understanding, evaluating and improving hospital healthcare. However, the usefulness of ranking scores as support for patient decision-making and for enhancing quality improvement remains to be established. Further studies, focused on identifying process measures and quality indicators associated with relevant outcomes, may help to develop and improve the quality scores.