What we have learned – important reflections from our practice


Welcome to the last post in our blog series on Demystifying Evaluation from Karl McGrath and Dearbháile Slane.

Over the last 4 blogs we have examined:

  • What an evaluation is and reasons for doing one
  • Different types of evaluation
  • Evaluation methods
  • Challenges and constraints of evaluation

In this post we remind you of the key points to consider when embarking on an evaluation.

What is an evaluation and why would you do one? 

As a reminder, our definition of an evaluation is:

‘A planned investigation of an intervention, according to specific questions of interest. It is carried out in a systematic and robust way, using reliable social scientific methods, to determine an intervention’s value, merit or worth.’

The reasons why you might wish to evaluate an intervention, service, policy, or practice include:

  • To assess and improve performance
  • To ensure accountability for public funding
  • To support decision-making and planning
  • To develop knowledge and understanding
  • To build organisational capacity
  • To improve quality

In our experience, it is useful to think about how you will monitor and evaluate an intervention before it is implemented, so that systems can be put in place from the start to collect the data you will need to measure change and evidence your outcomes.

It’s also never too late to evaluate: even when there is little or no existing monitoring data, there is always something that can be done to evaluate an intervention retrospectively.

The evaluation process

[Diagram: the stages of the evaluation process]

The first stage of the process is arguably the most important, where evaluators, usually in collaboration with the commissioners of an evaluation, will seek to define the boundaries of the evaluation and to gather and understand the crucial information that will inform decisions about the evaluation design.

Different types of evaluation


Formative evaluations are intended to help improve the design, implementation, and outcomes of an ongoing intervention.  

Summative evaluations are intended to assess the impacts, effects and overall merit or worth of a completed intervention, but not usually to improve its implementation.

Process evaluations look at questions about the processes involved in a programme or innovation, answering what an intervention did, and how, by whom and for whom it was developed, implemented or conducted.

Outcome evaluations look at questions about the effects and effectiveness of an intervention, to establish if (and sometimes by how much) it made a difference for the recipients or ‘beneficiaries’ of the intervention.  

Retrospective evaluations look back in time and usually take place at the end of an intervention.

Prospective evaluations start in the present and move forward in time alongside the intervention being evaluated.

Pragmatic evaluations try to apply, for each evaluation question, the best evaluation method that can feasibly be conducted within the real-world constraints of time and resources, and that provides practical knowledge for stakeholders.

Theory-based evaluations place emphasis on developing a theory of how and why an intervention is intended to work and produce outcomes. The theory can then be used to guide the development of questions, data collection and data analysis methods.  

Evaluation methods

Data collection methods usually fall into two categories: quantitative research methods and qualitative research methods. When methods are combined, this creates a third category known as mixed methods research.

Quantitative research methods use numbers as data. They are often used to identify relationships between different aspects of the data (variables) that can be generalised to a wider population.  

Qualitative research methods use words and images as data. They usually seek to understand people’s experiences within their specific contexts.  

Mixed methods research uses either multiple quantitative research methods, multiple qualitative research methods, or some combination of both.  

Data collection methods

Interviews can be used to capture in-depth individual perspectives. Interviews tend to produce rich qualitative information and to be more adaptable than questionnaires.

Focus Groups also produce rich qualitative information. Focus groups can be used to capture a diverse range of perspectives and encourage interaction between participants, or when you want to confirm your analysis with a wide variety of ‘service user’ profiles.

Questionnaires can be used to understand the general characteristics or opinions of a group of people. It is more common to see questionnaires collect quantitative data, but they can also be designed to collect qualitative data.  

Observations involve the researcher carefully ‘seeing’ things (e.g. behaviours, activities, processes, relationships, events, etc.) in specific settings and then recording what they have seen.

Monitoring Systems can be used to collect data about a service or intervention in a systematic and continuous way. The data is usually recorded by the service staff and management, rather than the researcher.

Document Analysis, like observations, is often used in combination with other data collection methods as a way to compare and triangulate findings. It involves systematically reviewing documents that are relevant to your evaluation questions.

The challenges and constraints of evaluation

Budget constraints. Evaluations do not have unlimited funding, and ‘budget constraints’ are about keeping the costs of the evaluation within its budget.

Time constraints. Evaluations will have a definite timeframe, and the timing of an evaluation may not always be ideal for answering the questions of interest. ‘Time constraints’ are about conducting the evaluation within its agreed timeframe, or when its timing is not ideal.

Data constraints. Evaluations often have to ‘make do’ with gaps or limitations in the data available to them. ‘Data constraints’ are about conducting an evaluation when critical information needed to address the evaluation questions is missing, difficult to collect or of poor quality.

Political and organisational constraints. Evaluations do not take place in a vacuum. Political and organisational constraints are almost always present, in different ways and to different extents. They can include: attempts to intentionally obstruct an evaluation; attempts to undermine the independence of an evaluation; and resistance by certain stakeholder groups to participating in the evaluation.

Ethical considerations and constraints. Ethics are about how we should, and do, conduct an evaluation. As experienced researchers, we ensure all our evaluations follow certain ethical principles:

  • Do no harm
  • Informed consent
  • Voluntary participation
  • Confidentiality and anonymity.

Knowledge and skills constraints. A question for evaluators to ask themselves when designing an evaluation is ‘what kind of evaluation do we actually have the knowledge and skills to conduct?’

So, what next?

We hope this blog series has provided you with the information you need to get started on your evaluation.  

CES has a range of free resources to help you:

The CES Guide to: Using Evidence

The CES Guide to: Working with Peer Researchers

The CES Guide to: Realist Evaluations

The CES Guide to: Inclusive Consultations

We are also offering a tailored online training course for organisations in the community and voluntary sector on ‘Evaluating Impact’. The course is split into 3 modules.

At the end of the course you will walk away with:

  • The ability to identify outputs, outcomes and indicators of success for your service
  • Knowledge of how to articulate the ‘theory of change’ of your service
  • The skills to choose the right evaluation method at different stages of your work
  • An understanding of how to develop an evaluation plan
  • The skills to analyse your data and tell the story of your evaluation findings.

Feedback from previous participants:

“I have a better understanding of evaluation outcomes and outputs. I feel confident that in my next report I can give better and more informed answers.”

“The trainers are very friendly and skilled in this field. They help relate how the learnings can be applied to working life scenarios.”

Courses will begin again in September, and you can find out more and register here.

Contact us

CES has experience of delivering evaluations across a range of service areas and interventions. Our experts will work with you to deliver a robust and effective evaluation, which you can use to assess your performance, improve your services and communicate your value and impact.

More information on our evaluation service is available here.

Get in touch to find out how we can help you at newprojects@effectiveservices.org
