Delivering complex service evaluations

In-depth evaluations of new services or models of care require a multi-faceted approach that draws on all the relevant stakeholders to deliver the insight that health planners need.

New services or models of care are complex in both their delivery and their impact, so evaluations must be equally multi-faceted. They cannot be one-dimensional, because any change touches a range of other individuals and services. For example, a new intervention introduced into primary care will potentially affect GPs and their staff through new ways of working, commissioners' budgets, the number of referrals into secondary care and the requirement for support through social care – and, of course, the health and wellbeing of patients.

ICHP evaluations are always robust and are planned together with our partners and their stakeholders, including patient and public involvement. Before we start an evaluation, we develop a logic model which connects inputs (resources), activities and outputs to the intended impact of a programme/technology, thereby producing an evaluation framework that identifies the associated metrics and methods for our evaluations.

To do this, we meet with partners to scope out the aims and objectives. We carry out literature reviews to identify related work, noting the outcomes and methods used previously and building on them. We also work with clinicians to understand the existing service pathway so that we can better evaluate the impact of a change.

A good example of an extensive service evaluation is our recent work with the North West London (NWL) local maternity system, one of seven early adopter sites tasked with testing and implementing specific recommendations set out in the Better Births maternity review. NWL was testing the recommendations around continuity of carer and delivering better postnatal care.

This was a complex project: the programme team needed to test its proposed new models of care against the six recommended primary outcomes identified in Sandall et al. (Cochrane review), while secondary outcomes specific to each model were also measured at the NWL sites. In the first phase we developed an evaluation framework; in the second we ran the evaluation and produced a report.

We worked closely with the dedicated NWL local early adopters team to develop a summary of each care model, a set of hypotheses, the accompanying metrics and the methodological approach. Together with our collaborators OnePlusOne (a leading research charity that supported the qualitative analysis), we also developed a qualitative evaluation outline.

We sought external validation of our work from a range of stakeholders through an interdisciplinary panel of clinical, health economics, public health and statistical experts. To assess the patient and clinician experience, OnePlusOne interviewed 20 pregnant women and 20 midwives to identify common themes and learning points, and an online survey gathered service users' feedback on their experience.

In addition to analysing primary and secondary outcomes against a matched cohort, we reviewed key performance indicators for each model, measuring the proportion of women booked onto a continuity of carer pathway and the level of continuity achieved. We also compared the costs of setting up and running teams under the new models with those of the existing models, using data supplied by the programme team.

This mixed methodology – qualitative and quantitative evaluation alongside a cost impact analysis – allowed us to create a wide-ranging and informative report for the client, with important learning points for other systems as they implement the Better Births recommendations. By involving patients, midwives, other clinicians, academics and other stakeholders in the work, we were able to paint a picture of how the changes were playing out on the ground, supporting the local system in moving forward with its new models of care.

By Dr Wayne Smith, ICHP Health Economic Lead.