With the rise in mortality rates in the early part of 2018 remaining a major concern across the country, Martin Gladding, Managing Director at maxwell stanley, explores how Trusts often overlook the final piece of the puzzle when deciphering mortality rates and outlier alerts: the patients who have survived.

Mortality alerts are generated from a multifaceted and complicated calculation that compares the number of actual patient deaths with the number of patients who were expected to die.
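In essence, a standardised mortality ratio divides observed deaths by expected deaths, usually scaled so that 100 (or 1) means deaths were in line with expectation. Indicators such as HSMR and SHMI are built on this principle, with the expected figure modelled from each patient’s coded diagnoses, comorbidities and other characteristics; a value sitting significantly above the benchmark is what triggers an outlier alert.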

The traditional route of examining the care received by deceased patients, and how that care was recorded, is fundamental to identifying potential improvements in treatment and pathways; as a result, the ‘actual patient deaths’ figure is verified by the Trust. But if a Trust fails to verify the second component of the equation – the number of patients expected to die – how can it be confident that the resultant mortality indicator drives an accurate outlier alert?

Ultimately, if a Trust is not proactively validating its coded data for all patients before that data leaves the organisation, and the data does not reflect the true complexity of its patients, the total number of patients expected to die can be under-reported, leaving the resultant mortality calculation top-heavy: the observed deaths are measured against an understated expectation, so the ratio is artificially inflated.
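To put purely illustrative numbers on this: a Trust recording 110 deaths against 100 expected has a ratio of 110 ÷ 100 × 100 = 110. If incomplete coding means the risk model only expects 90 deaths, those same 110 deaths now produce a ratio of 110 ÷ 90 × 100 ≈ 122 – a far more alarming figure, and one generated by the quality of the data rather than the quality of the care.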

When we discuss this with the Clinicians we work with, there is often a realisation that this data drives not only the reporting of Trust performance metrics but also the reporting of their individual clinical outcomes. It can, therefore, give the public a potentially inaccurate perception of individual Clinicians’ skills and proficiency.

And of course, my personal concern is that inaccurate reporting of metrics such as mortality rates has far wider implications than a Trust’s own performance profile, because this data is used further up the chain to drive local and national healthcare decisions and to influence the strategic direction of the NHS.

So how should you validate your data before it leaves the Trust? 

As standard, proactive validation should be an in-month, targeted review of coded data, with coding amended prior to the freeze date, ensuring that all relevant patient comorbidities and procedural complexities are captured.

The validation process should also seek to identify and correct incomplete or inaccurate clinical documentation, ensuring that patients’ conditions are appropriately recorded at source.

Finally, additional data quality issues should be routinely addressed, such as the coding of signs and symptoms and the recording of palliative care, both of which also affect a Trust’s mortality rates.

The ultimate objective of an embedded, in-month data validation process is to give the Trust confidence that the data driving mortality rates and other performance metrics genuinely reflects the activity that has been undertaken and the complexity of the patients who have been treated.

So, if you want confidence in the accuracy of your Trust’s mortality rates, be sure your data is telling the complete story of all your patients, including those who survived.

If you would like some advice on how to be confident in the accuracy of your data, please contact me at martin.gladding@maxwellstanley.co.uk / 020 7582 2103.