Here are some brief descriptions of terms that you may come across on this website. 

  • A — Accountability

    The ability to demonstrate to key stakeholders that a programme works and that it uses its resources effectively to achieve and sustain projected goals and outcomes.

  • A — Activities

    What programmes develop and implement to produce desired outcomes.

  • B — Best practice

    New ideas or lessons learned about effective activities that have been developed and implemented in the field and have been shown to produce positive outcomes.

  • C — Communications/Dissemination Strategy

    A plan that sets out who you need to communicate with, the messages you want or need to communicate, and what methods you propose to use to achieve this.

  • C — Comparison Group

    A group of people whose outcomes can be measured against those of another group who are taking part in a specific service or intervention. Comparison group members have characteristics and profiles similar to those of the service group, but do not receive the same service.

  • C — Control Group

    A more rigorous form of comparison group: members of a control group are drawn from the same population as those receiving an intervention and are systematically and randomly assigned to the group, but they do not receive that intervention themselves. Comparing outcomes in the control group with outcomes for those who have received the intervention allows researchers to determine whether the intervention being studied was responsible for any changes that are observed.

  • C — Cultural Competency

    Understanding and appreciation of cultural differences and similarities within, among, and between groups.

  • D — Data

    Information collected and used for reasoning, discussion, and decision making. In programme evaluation, both quantitative (numbers) and qualitative (views, opinions and experiences) data may be used.

  • D — Data analysis

    The process of systematically examining, studying, and evaluating collected information.

  • D — Dissemination

    The deliberate attempt to spread information and encourage its use.

  • D — Dosage

    The amount of interaction or participation by an individual or group in an activity or set of activities run as part of a service or programme. Dosage can be measured by the number of contact hours, the number of sessions, and the length of each session.

  • E — Effectiveness

    The ability of a programme to achieve its stated goals and produce measurable outcomes.

  • E — Evaluation

    A process of systematic investigation, preferably done using scientifically robust research methods, and used to assess the processes, outcomes and impacts associated with a programme, service or intervention.

  • E — Evidence-based Programme

    A programme that has consistently been shown to produce positive results by independent research studies that have been conducted to a particular degree of scientific quality.

  • E — Evidence-informed practice

    Practice based on the integration of experience, judgement and expertise with the best available external evidence from systematic research.

  • E — Experimental Design

    The set of specific procedures by which a theory about the relationship of programme activities to measurable outcomes will be tested. This allows conclusions to be made about how strongly the programme’s activities influenced the outcomes that have occurred. Experimental methods always include the use of a control or comparison group.

  • E — External (or Independent) Evaluation

    Evaluation by an individual or team independent of the organisation or programme that is being evaluated.

  • F — Facilitator

    A facilitator’s job is to support everyone to do their best thinking and practice. The facilitator encourages full participation, promotes mutual understanding and cultivates shared responsibility. By supporting everyone to do their best thinking, a facilitator enables group members to search for inclusive solutions and build sustainable agreements.

  • F — Fidelity

    The degree to which the activities undertaken in a programme are true to the design of the original programme on which it is based.

  • F — Focus Group

    A small group of people with shared characteristics who typically participate, under the direction of a facilitator, in a focused discussion designed to identify perceptions and opinions about a specific topic. Focus groups may be used to collect background information, create new ideas and hypotheses, assess how a programme is working, or help to interpret results from other data sources.

  • G — Goal

    A broad, measurable statement that describes the desired impact of a specific programme.

  • I — Impact

    A statement of long-term, global or overarching effects of a programme or intervention (See also Outcomes).

  • I — Implementation Science

    The study of methods to promote the systematic uptake of research findings and other evidence into routine practice, in order to improve the quality of services.

  • I — Incidence

    The number of people within a given population who have acquired a particular characteristic or status within a specific time period, often expressed as a rate per year (See also Prevalence).
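    To make the idea of a rate concrete, an annual incidence rate can be worked out as new cases divided by the population at risk, scaled to a convenient base. The figures below are invented purely for illustration:

```python
# Hypothetical figures, for illustration only.
new_cases = 150            # people who acquired the characteristic this year
population_at_risk = 50_000

# Annual incidence expressed as a rate per 1,000 people per year.
incidence_per_1000 = new_cases / population_at_risk * 1000
print(incidence_per_1000)  # 3.0
```

    The same arithmetic, applied to everyone who has ever had the characteristic rather than only new cases in the period, would give a prevalence rate instead.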

  • I — Inputs

    Resources made available to a service or programme to help it achieve its goals, e.g. funding, staff, training, consultancy, premises or equipment.

  • I — Intermediary Organisations

    These work across the boundaries of research, policy and practice and help those who plan and deliver services to do their work in more effective, evidence-informed ways.

  • I — Internal Evaluation

    Evaluation by an individual or team from within the organisation or programme that is being evaluated.

  • I — Intervention

    An activity conducted with an individual or group, or within a community, in order to change behaviour and to prevent, or bring about improvement in, a problem.

  • K — Knowledge Transfer

    The processes by which knowledge and ideas move from the source of knowledge to other potential users of that knowledge.

  • L — Logic Model

    A series of logical connections that link problems and/or needs with the actions taken to achieve change. Involves spelling out key assumptions about how actions are related to outcomes. Usually expressed in diagrammatic form. 

  • M — Manualisation

    The process whereby written manuals or guidelines are developed which state the objectives for each activity/session and the recommended structure, organisation, sequence, and duration of each session/programme. Manualisation helps to ensure that delivery remains consistent with the original programme.

  • M — Mechanism of Change

    The specific process by which a given activity is thought to lead to a particular change or outcome.

  • M — Methodology

    A particular procedure or set of procedures used for achieving a desired outcome, including the collection of research or evaluation data.

  • M — Mission Statement

    Defines the fundamental purpose of an organisation or service, describing why it exists and what it does to achieve its vision. It is sometimes used to set out a ‘picture’ of the organisation in the future. A mission statement provides details of what is done and answers the question: ‘What do we exist to do?’.

  • M — Monitoring

    A counting process concerned with assessing whether agreed inputs have been made and whether key targets for service uptake have been achieved (for example, counting how many people use a service over a given period of time).

  • M — Multi-dimensional

    Having several levels or aspects.

  • N — Needs Analysis

    An examination of the existing strengths and deficits within a group, community or organisation. Usually involves gathering views and opinions, and factual data, and should enable those concerned to make an informed judgement about what changes are needed in order to achieve better outcomes.

  • O — Outcome Evaluation

    Systematic process of collecting, analysing, and interpreting data to assess and evaluate what outcomes a programme has achieved.

  • O — Outcome Indicators

    The measurements that will be used to determine whether the intended effect of a programme has occurred. (For example, an indicator of the desired outcome ‘improved maternal mental health’ from a service offering post-natal support might be a reduction in symptoms of depression amongst women using the service, measured using a standardised scale, over a specified period).

  • O — Outcomes

    Changes that occur as a result of interventions. Outcomes may be short-term or immediate, medium-term or intermediate, or long-term or end. Short-term outcomes may include changes in knowledge, attitudes or simple behaviours; long-term or end outcomes are likely to be the result of many or sustained interventions and include changes in complex behaviours, conditions (e.g. risk factors), and status (e.g. poverty rates).

  • O — Outputs

    Number of service units provided, such as the number of parent education classes or number of client contact hours.

  • P — Participation

    Meaningful participation involves recognising and nurturing the strengths, interests, and abilities of people being worked with, through the provision of real opportunities for people to become involved in decisions that affect them at individual and organisational levels.

  • P — Practice Development

    A continuous process of improvement which aims to achieve the best outcomes for service users. This is brought about by helping practitioners to develop their knowledge and skills in line with the most up-to-date research on what works.

  • P — Prevalence

    The total number of people within a population who have a particular characteristic at a given point in time (often expressed as a ‘lifetime’ rate, i.e. people who have ‘ever had’ a particular experience or characteristic) (See also Incidence).

  • P — Process Evaluation

    Assessing what activities were implemented, the quality of the implementation, and the strengths and weaknesses of the implementation. Is used to produce useful feedback for programme refinement, to determine which activities were more successful than others, to document successful processes for future replication, and to demonstrate programme activities before demonstrating outcomes.

  • P — Process Indicators

    Signs that an intended process or plan is ‘on track’. For example, one process indicator showing success in developing a collaborative effort may be the development of an interagency agreement.

  • P — Programme

    A set of activities with clearly stated goals, from which all activities, as well as specific, observable, and measurable outcomes, are derived. A programme may sometimes incorporate a number of different services.

  • P — Protective Factor

    An attribute, situation, condition, or environmental context that buffers an individual against the adverse effects of a particular problem.

  • Q — Qualitative Data

    Information gathered in narrative form by talking to or observing people. Often presented as text, qualitative data can serve to illuminate evaluation findings derived from quantitative methods.

  • R — Random Assignment

    A systematic but arbitrary process used in experimental research design through which eligible study participants are assigned to either a ‘control group’ or to the group of people who receive the intervention being tested.
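    As a rough sketch of the idea, random assignment can be as simple as shuffling a list of eligible participants and splitting it in two; the participant identifiers below are invented for illustration:

```python
import random

# Invented participant identifiers, for illustration only.
participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

# Shuffle so that assignment to groups is arbitrary, then split the
# list in half: one group receives the intervention, the other does not.
random.shuffle(participants)
midpoint = len(participants) // 2
intervention_group = participants[:midpoint]
control_group = participants[midpoint:]

print(len(intervention_group), len(control_group))  # 4 4
```

    Because the shuffle is arbitrary, any pre-existing differences between participants should, on average, be spread evenly across the two groups.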

  • R — Randomised Controlled Trial

    A type of evaluation study where people are allocated randomly to a group receiving a particular intervention or to a group that is receiving a different intervention, or not receiving the intervention at all. This is the best type of study design to determine whether an intervention is effective in causing change.

  • R — Replicate

    To implement a programme in a setting other than the one for which it originally was designed and implemented, with attention to the faithful transfer of its core elements to the new setting.

  • R — Resource Assessment

    A systematic examination of existing structures, programmes, and other activities potentially available to assist in addressing identified needs.

  • R — Risk Factors

    An attribute, situation, condition, or environmental context that increases the likelihood of a particular problem or set of problems, or that may lead to an exacerbation of a current problem or problems.

  • S — Sample

    In evaluation research, a fraction or sub-group of a larger population, intended to represent the larger population to a greater or lesser extent. Samples may be selected to be statistically representative (as in robust quantitative research), or they may be selected purposively (as in qualitative research) to reflect particular characteristics or issues of interest.

  • S — Stakeholder

    An individual or organisation with a direct or indirect interest or investment in a project or programme (e.g., a funder, programme champion, or community leader).

  • S — Standardised Tests

    Instruments of examination, observation, or evaluation that share a standard set of instructions for their administration, use, scoring, and interpretation and have been pre-tested for various properties that make them robust.

  • S — Statistical significance

    A situation in which a relationship between different variables occurs so frequently that it is unlikely to be attributable to chance or coincidence. The likelihood of statistically significant findings is closely related to the size of the sample being used: even small relationships between things are likely to be statistically significant in very large samples, whilst comparatively large relationships may not be statistically significant in small samples (this is known as ‘statistical power’).
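    The interplay between sample size and significance can be shown with a back-of-the-envelope calculation: the same observed difference between two group means produces a much larger test statistic, and hence a smaller p-value, in a bigger sample. The figures below are invented for illustration, and assume equal standard deviations and equal group sizes:

```python
import math

def z_statistic(mean_diff, sd, n_per_group):
    # Standard error of a difference between two independent means,
    # assuming equal standard deviations and equal group sizes.
    se = sd * math.sqrt(2 / n_per_group)
    return mean_diff / se

# The same modest difference (0.3 standard deviations) in two samples:
small = z_statistic(mean_diff=0.3, sd=1.0, n_per_group=20)    # ~0.95
large = z_statistic(mean_diff=0.3, sd=1.0, n_per_group=2000)  # ~9.49
print(round(small, 2), round(large, 2))  # 0.95 9.49
```

    Against the conventional two-sided threshold of roughly 1.96, the small-sample result would not be judged statistically significant, while the large-sample result comfortably would be.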

  • S — Strategic Plan

    A comprehensive plan for accomplishment in relation to stated goals and objectives. Ideally, the plan should cover multiple years; include targets for expected accomplishments; and propose specific performance measures used to evaluate progress towards those targets.

  • T — Targeted prevention

    Prevention efforts that most effectively address the specific risk and protective factors of a target population, and that are most likely to have the greatest positive impact on that specific population.

  • T — Target Population

    The individuals or group of individuals for whom a prevention programme has been designed and upon whom the programme is intended to have an impact.

  • T — Theory of Change

    A set of assumptions (‘hypotheses’), usually based on research, about a pathway of change, which forms the basis of the programme’s design. It outlines a causal pathway, from where things are now, to where they will be, by specifying what has to happen along the path for goals to be achieved.

  • U — Universal Prevention

    Prevention efforts targeted to the general population, or a population that has not been identified on the basis of individual risk or need. Universal prevention interventions are not designed in response to an assessment of the risk and protective factors of a specific population, but are in theory open to anyone who wants to access them.

  • V — Vision Statement

    A statement giving a broad, aspirational image of the future that an organisation is aiming to achieve.