On 18 May 2023, PPHS organised the INDEED Thematic Workshop on the evaluation of long-term educational programmes. During the workshop, the group was asked to simulate the evaluation process in a guided discussion, using the previously presented INDEED Evidence-Based Evaluation Model and an initiative as a case study. The discussion aimed to support the development of an INDEED evaluation tool that helps plan and conduct evidence-based evaluations of PVE/CVE/de-radicalisation initiatives. In addition, participants had the opportunity to share their experiences, expertise, insights, and practices in evaluating long-term educational programmes, in particular in the field of PVE/CVE. Participants also exchanged key lessons learned, recommendations for improving the evaluation of such programmes, and mechanisms contributing to evaluation, with other stakeholders from similar sectors.

The main takeaways are:

  1. The process of planning, designing and conducting a comprehensive external evaluation is not a one-person job but a team effort; it is the ideal option if the programme budget can cover it.
  2. A properly planned, designed and conducted internal evaluation is also acceptable if it meets all the requirements for maintaining high quality and objectivity, particularly when an external evaluation is too expensive and would exceed the programme budget.
  3. It is important to instruct everyone directly or indirectly responsible for collecting data as part of the evaluation on how to carry out this process properly.
  4. In the area of prevention, there is still a shortage of practitioners who are also experienced in evaluation.
  5. For the evaluator to know exactly what the programme is meant to achieve and which aspects of it should be evaluated, the programme must contain a correctly formulated main goal (reflecting the diagnosed problem, e.g. related to radicalisation) and detailed operational goals (reflecting the causes of the diagnosed problem); for each of these objectives, an indicator measuring its progress must be formulated.
  6. When planning an evaluation and preparing a methodological workshop, it is worth drawing on the literature and the various approaches to evaluation it contains. This enables the use of existing, peer-reviewed scales and previously tested evaluation questions.
  7. Both success stories and failures related to evaluation are important: they show what to do and what to avoid.
  8. It is worth collecting data (answers to evaluation questions) immediately after each completed task. Evaluating after an extensive block of tasks can be difficult, as participants may struggle to remember complex material.
  9. Evaluation tools must be designed in a methodologically correct manner, including adaptation to the target group (simplicity). They cannot be too extensive: if a survey or interview contains too many questions, respondents' declining motivation may become an obstacle.
  10. It is very important to understand and remember the purpose of the programme evaluation. Evaluation results should always be disseminated to stakeholders to show what has been achieved, to inform future change, or to indicate how to correct what has not been achieved; however, it is still quite common for results to end up ‘in the drawer’.
  11. Formulating recommendations from the evaluation is mandatory.