Welcome to the fourth blog in our series Demystifying Evaluation from Karl McGrath and Dearbhaile Slane.
Here we look at some of the common challenges and constraints that can arise during the evaluation process, and some potential strategies you can use to overcome or mitigate them. Understanding the challenges and constraints of an evaluation is a crucial factor that feeds into its design and implementation.
In their book RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints, Michael Bamberger, Jim Rugh and Linda Mabry shine a light on four common constraints for evaluations. These are:
1. Budget constraints
2. Time constraints
3. Data constraints
4. Political and organisational constraints.
In CES’s experience, we can certainly testify to the frequency with which these constraints appear in evaluations. To them, we would add two more challenges and constraints we often address:
5. Ethical considerations and constraints
6. Knowledge and skills constraints.
Evaluations do not have unlimited funding, and ‘budget constraints’ are about keeping the costs of an evaluation within its budget.
Conducting robust evaluations that produce trustworthy findings can be resource intensive.
Sometimes budget constraints and robustness come into conflict and trade-offs must be made. The budget might be too small to apply the ‘best’ evaluation designs and methods, but the evaluator still needs to produce findings that are meaningful and trustworthy.
Bamberger and colleagues suggest the following strategies to reduce the cost of an evaluation:
Evaluations will have a definite timeframe, and the timing of an evaluation may not always be ideal for answering the questions of interest. ‘Time constraints’ are about conducting the evaluation within its agreed timeframe, or conducting it when its timing is not ideal.
In CES, there are at least three common time constraints we come across:
Some of the strategies suggested by Bamberger and colleagues to manage time constraints are similar to those for managing budget constraints, and include:
To this, we would also add, based on our experience:
Evaluations often have to ‘make do’ with gaps or limitations in the data available to them. ‘Data constraints’ are about conducting an evaluation when critical information needed to address the evaluation questions is missing, difficult to collect or of poor quality.
Examples CES regularly encounter include:
It is important to be transparent when describing your findings and conclusions. Data constraints will almost always weaken the certainty of your findings, even if strategies to reduce them are used. It can be easy, however, to fall into the trap of being overly confident in a set of findings even when the underlying data do not support that confidence. As evaluators, it’s important to be clear about the limitations of the evaluation and express an appropriate level of caution about the certainty of your findings.
Evaluations do not take place in a vacuum. Political and organisational constraints are almost always present, in different ways and to different extents. Bamberger and colleagues use the term to refer “not only to pressures from government agencies and politicians but also to include the requirements of funding or regulatory agencies, pressures from stakeholders, and differences of opinion within an evaluation team regarding evaluation approaches or methods”.
These are not necessarily bad things, though they can sometimes have a negative influence. They can also be seen as a positive indicator of the importance of an evaluation to stakeholders.
Some political and organisational constraints include:
Managing political and organisational constraints effectively requires good teamwork and communication. It requires the evaluator to demonstrate effective stakeholder management and, at times, a firm approach to protect the independence of the evaluation.
At the beginning of the evaluation, it can help to clearly define its boundaries and to conduct a collaborative stakeholder analysis. Defining evaluation boundaries with the evaluation commissioners can help to clarify where the evaluation must retain absolute independence and where it can accommodate stakeholder input and preferences.
A collaborative stakeholder analysis can also help to identify key stakeholders who might be unknown to the evaluator and understand their potential interests and influence on the evaluation.
Ethics are an essential part of our considerations and practice in every evaluation we do at CES. Ethics are about how we should, and do, conduct an evaluation.
As experienced researchers, all our evaluations follow certain ethical principles:
These are the minimum ethical standards we embed in every evaluation. When deciding which evaluation types, designs and methods are most appropriate for a particular evaluation, we have to be sure that the methods we use will adhere to these principles.
When working on sensitive topics or with vulnerable populations, there may be additional guidelines to follow. For example, the ethical standards required when working with children and young people are typically more stringent than those for adults. Another example is evaluations of palliative care. Palliative care is an especially sensitive topic with a high risk of harm to patients and their caregivers. Specific recommendations on how to conduct ethical palliative care evaluations were published in 2013 by the MORECare expert group.
At CES, we are fortunate to have a team with a wide range of evaluation knowledge and skills. However, there is such an abundance of evaluation types, designs, methods and approaches that no evaluator can be skilled in everything. So a perhaps obvious question for evaluators to ask themselves when designing an evaluation is: ‘What kind of evaluation do we actually have the knowledge and skills to conduct?’