
Evaluation

Evaluation helps a programme, project or any other intervention to ascertain the degree of achievement or value in relation to its aim and objectives.  The primary purpose of evaluation is to gain an insight into the situation prior to an intervention and then to reflect on the implementation of that intervention over a period of time.

There are two main types of evaluation:

  • Formative – which takes place during the design and development of the programme or project proposal
  • Summative – which takes place at the end of a completed project

There are numerous methods including:

  • After Action Review
  • Action research
  • Appreciative inquiry
  • Benchmarking
  • Clinical trials
  • Ethnography
  • Field analysis
  • Qualitative data collection – thoughts and perceptions captured via interviews, survey questionnaires, focus groups and world café events
  • Quantitative data collection – with the main emphasis on numbers and statistical analysis

For research conducted in ‘real-life’ settings, it cannot be assumed that the delivery of a complex intervention or its evaluation will be exactly as planned or intended.  It is therefore important to take a systematic approach to documenting and accounting for the actual implementation.  In order to inform appropriate and effective implementation and scale-up of health and health service interventions, evaluations need to be useful and reflect the reality of the local context.

Evaluation includes identifying the evidence base for the problem and the potential solutions, and identifying the right methodology for capturing information, such as surveys, questionnaires, interviews or ethnography.  Evaluation starts at the beginning, with the design of any intervention, and is therefore used at the testing or pilot stages and not just at the end.  It is about evaluating the effectiveness of the implementation of any idea: awareness, understanding, dissemination and long-term monitoring.  Evaluation assesses the effectiveness of the intervention and helps us to understand the change process and the variables that help and hinder it.

Key lessons for successful evaluation are:

  • Foster a shared understanding across the entire team of what is to be achieved, what data need to be collected, and the processes and goals of the intervention
  • Plan communication methods to ensure staff are aware of what is happening and engaged with the implementation of the intervention so they feel part of the change and not that the change is being done to them
  • Recognise the vital role that evaluation plays in generating meaningful results
  • Promote a continuous approach to evaluation among all team members so that they want to regularly collect and review data

In any change process, challenges arise; these all form part of the process of producing data for analysis and of interpreting the intervention.  The outcomes of complex behavioural interventions need to be understood as an interpretive process, subject to the varying influences of the people, processes and systems within which they are being implemented.

Evaluations of interventions within complex systems include negotiating a variety of day-to-day challenges that arise in the ‘doing’ of evaluation during the testing stage, and which reflect the specific dynamic networks of people, objects, and relationships. These issues are not always easily predicted during the planning or piloting phases of these types of studies, and require a number of supporting structures (e.g., mechanisms for communicating, documenting and reflecting on the reality of the evaluation).

At the heart of evaluation in complex adaptive systems is the need for ongoing, reflexive consideration of how the reality of doing evaluation can impact the meaningful interpretation of results, thus enhancing the understanding of the research problem.  Reflexivity has been recognised as an ongoing, critical ‘conversation’ about experiences as they occur, one which calls into question what is known during the research process and how it has come to be known.  The aim of reflexivity, then, is to reveal the personal perspectives, experiences, interactions and broader socio-political contexts that shape the research process and the construction of meaning.  A reflexive approach would encourage greater awareness of the processes involved in enacting the protocol for an evaluation in a real-life context, and support more detailed reporting of these processes to aid decision-making.

A reflexive approach could also facilitate a systematic consideration of the entirety of the evaluation and its activities, not just discrete stages, and acknowledge that conducting an evaluation is never as straightforward as a (comparatively) simplistic protocol would suggest on paper.  It requires effort to create the time and space to think about and carry out effective evaluation.  However, taking this time will make for better practice in the long term, and with increased practice a reflexive perspective will become easier and more established in the implementation of healthcare interventions.

Monitoring Framework

‘Measurement is always wrong though sometimes useful.’
Berwick, 2019

Measurement and monitoring are all about understanding where you are in relation to where you want to be, and it is not always about the statistics or the numbers; it can be a feeling.  We can use the models to construct questions to understand how well the programme is performing and use these to engage people.  Knowing where you are now can help with anticipating what could, should or might happen from here: what needs to happen to reach our goals, and are we on track or not?  It helps build a shared understanding between staff, and between staff and patients.  Measurement needs a purpose: to learn and to look at changes, while at the same time not worrying about the actual position.

Frameworks describe dimensions that can be included in monitoring in healthcare.  For example, in safety we can monitor:

  1. Past harm: both psychological and physical measures
  2. Reliability: building ‘failure free operation over time’; this is about measuring behaviour, processes and systems
  3. Sensitivity to operations: the information needed and capacity to monitor safety on a day-to-day basis
  4. Anticipation and preparedness: the ability to anticipate and be prepared for things that may go wrong
  5. Integration and learning: the ability to respond to and improve from safety information

These offer people at all levels of healthcare organisations and the wider system some great insights into what they need to think about, and shift them towards thinking more prospectively and proactively.  A framework pays as much attention to monitoring and ‘soft intelligence’ (observation, listening, feeling and intuition, and conversations with patients, families and staff) as it does to measurement, hard data and numbers.

Questions that can be asked:

Has patient care been safe in the past?
Study data associated with past harm but also design more nuanced measures of harm that can be tracked over time and can clearly demonstrate that healthcare is becoming safer.
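
As an illustration of what a measure of harm tracked over time might look like, the sketch below expresses harm as a rate against activity rather than a raw count, so that it can be compared fairly from month to month; the measure and all figures are invented for the example, not drawn from the text.

```python
# Illustrative only: the measure and all figures below are invented examples.

monthly_data = [
    # (month, harm events identified, occupied bed days)
    ("Month 1", 12, 9800),
    ("Month 2", 9, 9100),
    ("Month 3", 11, 10200),
    ("Month 4", 7, 9600),
]

for month, events, bed_days in monthly_data:
    rate = events / bed_days * 1000  # harm events per 1,000 bed days
    print(f"{month}: {rate:.2f} harm events per 1,000 bed days")
```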

Are our clinical systems and processes reliable?
The key issue is how the basic reliability of healthcare can be measured and monitored.  It is particularly hard because staff may not realise that they are being unreliable, and they may have accepted poor reliability as their ‘normal’.  There is a lack of feedback from the system to tell workers whether they are being safe or unsafe, and in many cases a lack of standardisation for people to be able to compare actual care (work-as-done) with the standard (work-as-prescribed).  The only clear way to measure this is to set out a number of processes that are expected to be as reliable as they can be across the whole system, establish a baseline of data for these processes, and then capture data over time to compare with the baseline.  Find some simple measures, such as documentation of allergy status, which we know should be done for all patients.
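
A minimal sketch of this baseline-and-compare approach, using documentation of allergy status as the example process; the audit figures and review periods are assumptions for illustration only.

```python
# Illustrative sketch: comparing ongoing reliability of a simple process
# (documentation of allergy status) against a baseline. All figures are invented.

def reliability(compliant: int, total: int) -> float:
    """Proportion of records where the process was completed as prescribed."""
    return compliant / total

# Baseline audit: 412 of 500 records had allergy status documented.
baseline = reliability(412, 500)

monthly_audits = {
    "Month 1": (88, 100),
    "Month 2": (93, 100),
    "Month 3": (79, 100),
}

for month, (compliant, total) in monthly_audits.items():
    current = reliability(compliant, total)
    position = "at or above" if current >= baseline else "below"
    print(f"{month}: {current:.0%} documented ({position} the baseline of {baseline:.0%})")
```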

Is care safe today?
This is described as ‘sensitivity to operations’, a term used by high reliability theorists.  It includes awareness of all the conditions, pressures and circumstances that can impact on patient care every day (work-as-done).  It is suggested that huddles, briefings and debriefings, and interviews with staff and patients can be used to understand this.

Will care be safe in the future?
This is about anticipation and preparedness and trying to predict what may happen in the future.  Risk assessment and risk registers have been traditionally used for this.  Other models include human reliability analysis and failure modes and effects analysis – two methods for systematically plotting and examining a process and the ways in which it may fail.  However, in my experience, these are rarely understood or used in healthcare. 

Are we responding and improving?
This is clearly about the learning part of measurement and monitoring.  As has been set out in the section on safety myths, learning from incident reporting and incident investigation is patchy.
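
Drawing the five questions above together, one way to keep them visible in routine reviews is a simple structure that lists each question alongside the kinds of information that might answer it; the sources shown are illustrative suggestions based loosely on the text, not a prescribed set.

```python
# Illustrative sketch: the five monitoring questions as a simple review structure.
# The example information sources are assumptions, not a mandated list.

monitoring_questions = {
    "Has patient care been safe in the past?": [
        "measures of past harm (physical and psychological) tracked over time",
    ],
    "Are our clinical systems and processes reliable?": [
        "process audits compared against a baseline",
    ],
    "Is care safe today?": [
        "huddles, briefings and debriefings",
        "conversations with staff and patients",
    ],
    "Will care be safe in the future?": [
        "risk assessments and risk registers",
        "failure modes and effects analysis",
    ],
    "Are we responding and improving?": [
        "learning from incident reporting and investigation",
    ],
}

for question, sources in monitoring_questions.items():
    print(question)
    for source in sources:
        print(f"  - {source}")
```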

Find out more about quality improvement methodology in the NHS guide "First steps towards quality improvement: A simple guide to improving services" (PDF).