Resources

Are you planning or already undertaking evaluation work? If so, this library of evaluation resources has been curated for your needs. It is organised as an evaluation pathway, and under each of the steps we outline you will find a variety of resources on areas such as how to manage an evaluation, understanding the ins and outs of data collection, working with service end-users, and communicating and acting on findings.

Think about what it is about your project that you need help with, and click the relevant tab below to find the resources.

If you need further bespoke assistance with your project, please check our webinars or book a drop-in clinic slot with Professor Andy Jones, a public health academic with many years of experience of undertaking public health evaluations.

Overview: Should I evaluate? To what degree? What should I consider?

If you want to understand how to undertake an evaluability assessment (a process that helps you understand whether an evaluation is appropriate and, if so, how in-depth it should be), this guide is for you. It was created by the Methods Lab, an action-learning collaboration between the Overseas Development Institute (ODI), Better Evaluation (BE) and the Australian Department of Foreign Affairs and Trade (DFAT). The guide provides an overview of evaluability assessment and how it can be used for impact evaluation, insight into how to plan for an evaluability assessment, and checklists and decision support tools to use during the assessment itself. It also covers what to do once an evaluability assessment is completed and summarises lessons learned from evaluability assessments in practice. It can be found at

https://www.betterevaluation.org/sites/default/files/Evaluability_assessment_for_impact_evaluation.pdf (2015)

The UK Department for International Development have also produced a simple tabular Evaluability Checklist that can be used as a standalone tool or incorporated as questions into an Evaluability Assessment whose results are to be summarised in a substantial written report. Download the checklist from https://www.betterevaluation.org/sites/default/files/2022-03/An%20Evaluability%20Assessment%20checklist.docx. It is extracted from pages 19-23 of the following report: Davies, R. (2013). Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations. Report of a Study Commissioned by the Department for International Development.

The United Nations Office on Drugs and Crime (UNODC) has designed an Evaluability Assessment template which takes the user through a simple set of steps to decide whether an evaluation of an intervention should take place. Whilst the template does not encompass detailed considerations around the evaluation, it is a useful quick decision-support tool for deciding whether to pursue the idea of an evaluation in more detail. To access the template go to https://www.betterevaluation.org/sites/default/files/2022-11/Evaluability_Assessment_Template.pdf
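
To give a flavour of how such a quick screening tool works, below is a minimal sketch in Python. The questions, the threshold, and the helper function are hypothetical illustrations invented for this example; they are not taken from the UNODC template or any of the guides above.

    # Hypothetical evaluability screening: answer yes/no questions and
    # tally the results to decide whether to explore an evaluation further.
    SCREENING_QUESTIONS = [
        "Is there a clearly defined intervention to evaluate?",
        "Are the intended outcomes stated and measurable?",
        "Is relevant data available or realistically collectable?",
        "Will findings arrive in time to inform decisions?",
    ]

    def worth_pursuing(answers, threshold=3):
        """Suggest a fuller evaluability assessment if enough answers are 'yes'."""
        return sum(answers) >= threshold  # True counts as 1, False as 0

    answers = [True, True, False, True]  # one yes/no answer per question above
    print(worth_pursuing(answers))       # True: consider a fuller assessment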

Planning: What am I evaluating? What questions do I hope to answer? Who should I speak to?

Logic model and theory of change frameworks

An evaluation framework is a structured and systematic approach used to assess and analyse various aspects of a project, program, policy, product, or process. A commonly used framework is the logic model. The logic model is a visual representation: a systematic tool used to illustrate the relationships between the various components of a program, project, or intervention. Logic models serve as a foundation for designing an evaluation plan: by specifying inputs, activities, and expected outcomes, the logic model can be used to define the data to collect and the indicators to measure. This facilitates the assessment of program effectiveness.
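
To make those components concrete, here is a minimal sketch of how a logic model's chain might be captured as a simple data structure. The class and the walking-programme example are assumptions invented for illustration; they are not taken from any of the guides below.

    # A logic model reduced to its core components (illustrative only)
    from dataclasses import dataclass, field

    @dataclass
    class LogicModel:
        inputs: list = field(default_factory=list)      # resources invested
        activities: list = field(default_factory=list)  # what the programme does
        outputs: list = field(default_factory=list)     # direct products of activities
        outcomes: list = field(default_factory=list)    # short- and medium-term changes
        impact: list = field(default_factory=list)      # long-term change sought

    # Hypothetical example: a community walking programme
    model = LogicModel(
        inputs=["funding", "two outreach workers"],
        activities=["weekly guided walks"],
        outputs=["40 sessions delivered", "120 residents attending"],
        outcomes=["increased weekly physical activity"],
        impact=["reduced cardiovascular risk in the community"],
    )
    print(model.outcomes)

Writing the components down in this ordered way, whether as a diagram, a table, or even code, is what lets you read off the data to collect and the indicators to measure at each step of the chain.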

The W.K. Kellogg Foundation Logic Model Guide provides a clear and accessible introduction to the underlying principles and language of the program logic model so it can be effectively used in program planning, implementation, and dissemination of results. The guide also includes a ‘Forms Appendix’ that provides blank templates to copy when developing your own logic models. Find it at https://www.betterevaluation.org/sites/default/files/2021-11/Kellogg_Foundation_Logic_Model_Guide.pdf.

Another helpful template, which can be edited and completed on screen, has been designed by the Early Intervention Foundation: https://www.eif.org.uk/files/resources/eif-logic-model-template.pdf. This forms part of their very informative “Evaluation Hub” microsite, which can be found at https://evaluationhub.eif.org.uk/

Like a logic model, a theory of change includes causal mechanisms to show why each intervention component is expected to result in the intended outcomes. Theories of change are not always linear, however, and effects can later influence causes. A logic model and a theory of change are thus two distinct concepts that help programs organise and illustrate their activities and goals; while the terms are often used interchangeably, there are some key differences between them.
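
As a simple illustration of that non-linearity, the sketch below represents a hypothetical theory of change as a small directed graph, including a feedback edge from an effect back to one of its causes. The components and the helper function are invented for illustration, not drawn from the UNICEF guide discussed next.

    # A theory of change as a directed graph: each key is expected to
    # influence the components it maps to (illustrative only).
    theory_of_change = {
        "training for staff": ["better advice given"],
        "better advice given": ["healthier choices by clients"],
        "healthier choices by clients": ["improved health outcomes"],
        "improved health outcomes": ["training for staff"],  # feedback: results refine training
    }

    def influenced_by(component, toc):
        """Return the components assumed to cause `component`."""
        return [cause for cause, effects in toc.items() if component in effects]

    print(influenced_by("healthier choices by clients", theory_of_change))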

Like logic models, theories of change can come in many forms, including a diagram or a written narrative. The Theory of Change guide written by Patricia Rogers for UNICEF provides an excellent introduction. It describes the components of a theory of change and outlines the distinctive methodological approach to their development. It also demonstrates how theories of change can be useful for identifying the data that need to be collected in an evaluation and how they should be analysed, and highlights their use as a framework for reporting. The guide can be found at https://www.betterevaluation.org/sites/default/files/Theory_of_Change_ENG.pdf

The Early Intervention Foundation has provided a very helpful template which, like their logic modelling template, can be completed online: https://www.eif.org.uk/files/pdf/eif-theory-of-change-template.pdf. This is again taken from their Evaluation Hub microsite, at https://evaluationhub.eif.org.uk/theory-of-change/

Key evaluation planning questions

When planning an evaluation, it's good to focus on answering a few high-level key questions, and in this section you'll find resources to help with that. To kick off the process you might find it useful to look at the Early Intervention Foundation '10 Steps for Evaluation Success' framework, a guide that describes how evaluation evidence can be used to turn a good idea into an intervention that 'works', and how to develop quality assurance systems so that interventions remain effective when offered at scale. This short video introduces the '10 Steps' and answers a set of questions about a theory of change: what it is, why it matters, and how to use scientific findings to test assumptions about how an intervention is designed to work. https://youtu.be/zZEvwA4_ifc

In terms of identifying evaluation questions, the following resources will be of use:

  1. Good Evaluation Questions – You can use this guide and checklist as you create evaluation questions. The resource was developed by the Centers for Disease Control and Prevention (CDC) Asthma Control Program in the USA but can readily be applied to any topic or setting. The CDC suggest that the checklist can be used either as a communication tool, to aid in developing specific questions with the evaluation planning team, or as a “double check” to review questions already developed. https://www.cdc.gov/asthma/program_eval/assessingevaluationquestionchecklist.pdf
  2. Like the CDC offering, the Evaluation Questions Checklist for Program Evaluation, developed by the Evaluation Checklist Project, is designed to aid in developing effective and appropriate evaluation questions and in assessing the quality of existing questions. https://wmich.edu/sites/default/files/attachments/u350/2018/eval-questions-wingate%26schroeter.pdf
  3. Finally, the Pell Institute and Pathways to College Network have produced “The Evaluation Toolkit: Develop Evaluation Questions”. Building on the identification of evaluation goals, the guide is designed to be used at the start of the planning process, as you discuss the evaluation purpose and select appropriate questions. http://toolkit.pellinstitute.org/evaluation-guide/plan-budget/develop-evaluation-questions/

The ‘soft’ side of an evaluation: Who should manage the evaluation and how? How do I develop and maintain relationships? Should I commission specialist skills?

In addition to considering the format and content of an evaluation, it is important to consider its governance at an early stage. This includes considerations such as who will lead the evaluation and who it is critical to involve to help ensure its success. The following resources will help with this:

1) The Evaluation Toolkit provided by the Pell Institute and Pathways to College Network includes a module that helps users select an appropriate evaluation lead for their context: http://toolkit.pellinstitute.org/evaluation-guide/plan-budget/select-an-evaluator/ Linked to this, it also provides a similar resource to help with engaging key stakeholders: http://toolkit.pellinstitute.org/evaluation-guide/plan-budget/engage-stakeholders/

2) The “Better Evaluation” organisation has created guidance on deciding who will conduct the evaluation. The resource considers the potential role of a range of different actors, including external contractors; internal staff; those involved in delivering services; peers; the community; or a combined group. The tool can be found at https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/decide-who-will-conduct-evaluation. This guidance is part of the Rainbow Framework, which organises the methods and processes used in monitoring and evaluation into tasks and colour-codes them for ease of action and use. More details can be found at https://www.betterevaluation.org/frameworks-guides/rainbow-framework

3) Better Evaluation has also created guidance on understanding and engaging stakeholders. Stakeholders are people with a stake in the evaluation, including the primary intended users and others with some buy-in, and hence form a critical group in any evaluation. The guidance can be found at https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/understand-engage-stakeholders.

4) Finally, the Robert Wood Johnson Foundation have written an excellent, detailed guide on early stakeholder engagement called “A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions”. The aim of the guide is to support evaluators in identifying and engaging their stakeholders in the evaluation process. The guide can be found at https://www.betterevaluation.org/sites/default/files/Preskill_A_Practical_Guide.pdf

Looking after evaluation participants: What ethical and legal considerations should I consider?

Whilst it can feel burdensome for individuals and organisations keen to start an evaluation, consideration of the ethical and legal issues that might be relevant is a critical precursor to the commencement of any work. Time spent getting this right will ensure that everyone involved in the evaluation is treated fairly and appropriately, and will prevent unnecessary problems once the evaluation has started.

1) The Health Research Authority (HRA), along with the NIHR INVOLVE team, has produced a statement to help guide researchers on how to appropriately plan and deliver public involvement in research. Whilst brief, the document sets out key considerations in an easy-to-understand manner. The statement can be found at https://www.invo.org.uk/wp-content/uploads/2016/05/HRA-INVOLVE-updated-statement-2016.pdf

2) Newcastle University have produced an excellent website that provides guidance for researchers on how to plan and conduct research that involves human participants. Whilst the guidance is specifically targeted at Newcastle University staff, it is available for all to use and relevant to anyone involving other people in their evaluation. The resource can be found at https://www.ncl.ac.uk/research/researchgovernance/ethics/ethicstoolkit/toolkithumans/

3) The Better Evaluation organisation has created a guide to help researchers understand and define ethical and quality standards for their evaluation. In particular, the guide contains resources to help research teams better understand the requirements of ethically sound and high-quality evaluations. The guide can be found at https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/define-ethical-quality-evaluation-standards. Like the other Better Evaluation resources, it forms part of the Rainbow Framework (https://www.betterevaluation.org/frameworks-guides/rainbow-framework).

4) Better Evaluation has also created a set of ethical guidelines that cover the conduct of evaluation, including the responsibilities of those undertaking and managing evaluations. The guidelines themselves contain links to a range of resources that provide additional support for those wanting to embed ethical considerations at the earliest stage in evaluation planning. The guidelines can be found at https://www.betterevaluation.org/methods-approaches/methods/ethical-guidelines

Using what’s out there: Can/should I use existing data? Has this type of evaluation been done before?

Searching the literature can be a daunting task, but it is a crucial step when planning an evaluation project. Finding out what is already out there, and in particular good examples of previous work relevant to your project, can be a challenge. Help is at hand, though, as guidance exists on how to search and where to look for relevant research.

Cochrane is a network dedicated to producing and disseminating high-quality evidence in the form of systematic reviews and meta-analyses of healthcare interventions. Cochrane's mission is to provide up-to-date, reliable information to help people make informed decisions about health and healthcare. Their training arm, Cochrane Training, has produced a handbook on their evidence reviews, and Chapter 4 contains useful guidance on how to search for and select studies. The focus is on Cochrane reviews, but much of what is presented is generalisable to searching across diverse sources. Find it at https://training.cochrane.org/handbook/current/chapter-04

The UK National Health Service has produced some easy-to-use guidance on literature searching. It's targeted at NHS researchers, but is of use to anyone. It can be found at https://library.medschl.cam.ac.uk/files/2019/02/Guide-NHS-Lit_search_protocols_protocols_2019.pdf. Meanwhile, the University of Minnesota has produced a simple but extremely clear guide, based on the PICO concept (Population, Intervention, Comparison, Outcome), on how to use library resources to search the literature. It can be accessed by going to https://libguides.umn.edu/LibraryResearchChecklist
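
As a simple illustration of the PICO idea, the sketch below assembles a boolean search string from PICO terms. The function, the example terms, and the output format are hypothetical; any real search should follow the syntax of your chosen database.

    # Build a boolean search string from PICO terms (illustrative only)
    def pico_query(population, intervention, comparison, outcome):
        def group(terms):
            return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"
        parts = [group(t) for t in (population, intervention, comparison, outcome) if t]
        return " AND ".join(parts)

    query = pico_query(
        population=["adults", "older adults"],
        intervention=["walking programme"],
        comparison=[],  # the comparison element is often left open
        outcome=["physical activity", "step count"],
    )
    print(query)
    # ("adults" OR "older adults") AND ("walking programme") AND ("physical activity" OR "step count")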

Finally, you might find the PRISMA expanded checklist useful for reporting the results of your literature review if you wish to take a more formal approach. PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. It primarily focuses on the reporting of reviews evaluating the effects of interventions, but can also be used as a basis for reporting systematic reviews with other objectives (e.g. evaluating aetiology or disease prevalence). You can pick up the checklist at http://prisma-statement.org/documents/PRISMA_2020_expanded_checklist.pdf?AspxAutoDetectCookieSupport=1

Approaches to gathering and analysing evidence: How might I collect and analyse data?

It is always challenging to plan the analysis methods for an evaluation, as the exact approach will depend on the nature of the evaluation task and the data available to the evaluation team. Nevertheless, the “Better Evaluation” initiative has produced a range of resources that will help with planning for analysis.

1) The Better Evaluation organisation has produced guidance to help evaluators better describe activities, outcomes, impact, and the context in which any intervention is being delivered. A consideration of all these factors will help with the task of data collection and analysis, and the resource includes support for activities including data sampling, the use of outcome measures, data collection, data management, and analysis. The guidance can be found at https://www.betterevaluation.org/frameworks-guides/rainbow-framework/describe. Like the other Better Evaluation resources, it forms part of the Rainbow Framework (https://www.betterevaluation.org/frameworks-guides/rainbow-framework).

2) Better Evaluation has also produced guidance to help evaluators better understand the causes of outcomes and impacts. In recent years there has been considerable development of methods for understanding causes in evaluations, and considerable discussion and disagreement about which options are suitable in which situations. The resource takes you through these considerations and provides tools to help you better interpret findings. It can be found at https://www.betterevaluation.org/frameworks-guides/rainbow-framework/understand-causes

3) In the third of their resources presented in this series, Better Evaluation provides an overview of how to synthesise data from one or more evaluations, thus providing a bigger-picture overview. Find it at  https://www.betterevaluation.org/frameworks-guides/rainbow-framework/synthesise

4) The Pell Institute and Pathways to College Network have published a useful piece of guidance on the treatment and analysis of different types of data. The guide covers considerations such as how to convert your raw data into usable information, which techniques can be used to analyse quantitative data, and which can be used to analyse qualitative data (a flavour of a first descriptive pass over quantitative data is sketched below). Helpfully, it also considers the different software packages that are available for analysis. You can find the guide at http://toolkit.pellinstitute.org/evaluation-guide/analyze/
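
As flagged above, here is a minimal sketch of a first descriptive pass over quantitative evaluation data, using the pandas library in Python. The dataset, column names, and summary choices are hypothetical, invented purely for illustration.

    # Summarise hypothetical attendance and satisfaction data by site
    import pandas as pd

    responses = pd.DataFrame({
        "site": ["A", "A", "B", "B", "B"],
        "attended_sessions": [8, 5, 10, 2, 7],
        "satisfaction": [4, 3, 5, 2, 4],  # 1-5 scale
    })

    summary = responses.groupby("site").agg(
        mean_sessions=("attended_sessions", "mean"),
        mean_satisfaction=("satisfaction", "mean"),
        n=("attended_sessions", "size"),  # respondents per site
    )
    print(summary)

In practice the choice of techniques and software will follow from your evaluation questions and data types, which is exactly what the guide above helps you work through.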

Communicating findings: Who do I need to communicate to? What is the best way to reach them? What type of information might they respond to?

The effective communication of research findings is a critical stage in the evaluation process, as it helps ensure that the evaluation has impact. The resources in this section provide guidance on how to be an effective communicator.

1) The Better Evaluation initiative has published guidance on “Reporting & Supporting the use of findings”. The guidance explains how the use of any evaluation often depends on how well the reporting process meets the needs and learning gaps of the primary intended users, and covers how to share the findings of an evaluation effectively with those users as well as with other evaluation stakeholders. You can find it at https://www.betterevaluation.org/frameworks-guides/rainbow-framework/report-support-use-findings. This guidance is again part of the Rainbow Framework (https://www.betterevaluation.org/frameworks-guides/rainbow-framework).

2) The Pell Institute and Pathways to College Network have published a communication plan guidance document outlining the strategies that can be used to communicate the results of any evaluation. The plan helps users to understand how best to choose which results will be communicated, how they will be communicated, and to whom. Find it at http://toolkit.pellinstitute.org/evaluation-guide/communicate-improve/de... Unusually, but really usefully, the same team has also created a guide on how to evaluate the communication plan itself, to ensure it is having the intended impact on the intended audience. You can find this at http://toolkit.pellinstitute.org/evaluation-guide/communicate-improve/evaluate-your-communication-efforts/

3) Last but not least, the Health Foundation have produced a web toolkit on the communication of research findings. The clickable and clearly presented nature of the resource makes it an easy read. You can access it at https://www.health.org.uk/publications/communicating-your-research-a-toolkit

Acting on findings: How do I get evaluation findings implemented? What would success look like?

The implementation of findings from an evaluation – ultimately the reason an evaluation is undertaken – can be challenging for a range of reasons, including difficulties with the appropriate curation of findings and staff turnover that means lessons are forgotten. These resources might help those trying to navigate such pitfalls.

1) Better Evaluation has produced guidance on document management processes and agreements. It assists stakeholders involved in an evaluation to document decisions about the management of evaluative activities, including any processes for monitoring compliance with ethical and quality standards during the evaluation, helping to maintain the success and sustainability of the findings. You can find the framework guide at https://www.betterevaluation.org/frameworks-guides/rainbow-framework/manage/document-management-processes-agreements. This guidance is again part of the Rainbow Framework (https://www.betterevaluation.org/frameworks-guides/rainbow-framework).

2) The Bruner Foundation Effectiveness Initiative has produced a document on how to use evaluation findings and enable the effective implementation of learnings. It is a short PDF, but packed with information, and can be found at https://www.betterevaluation.org/sites/default/files/EvaluativeThinking.bulletin.6.pdf