EVALUATION PROCESSES IN WORKSHOP FORMATS
by Agnes Scharnetzky
Since its inception, JoDDiD has advised around 70 very different stakeholders from the field of political education in Saxony on their work. Some topics come up again and again; first and foremost is the question of quality assurance and measuring effectiveness. It is entirely understandable that providers and stakeholders want to legitimize their work, to report to funding bodies and apply for new funding, and, not least, to reassure themselves of the meaningfulness of what they do. Bound up with the desire for experiences of self-efficacy is the hope that their offer is effective, that it is meaningful and makes a difference - in short, that it has a perceptible and communicable added value for the political and democratic learning of the participants. This applies all the more to relatively formalized educational processes in clearly defined workshop, seminar and event situations, which are accounted for primarily in quantitative terms: political education processes in which it can be stated precisely how many participants are to be reached in how many standardized workshop formats.
A classic case, which will be the focus of the following, is a political education provider that designs an offer for schools which teachers can book during lesson time. At the end of such a measure, all parties involved (participants, teachers, providers, workshop facilitators and funding bodies) are of course interested in more than the number of workshops and participants; there is also the requirement, especially in comparison to regular classroom instruction, to show that sustainable political learning has taken place through the specific offer. The questions we then encounter in JoDDiD consultation processes, in discussions with the providers' staff who design and coordinate the educational projects, are: "How can I measure the impact of the program? And how can I evaluate, document and present this measurement for future offers, for the funding body, but also for acquisition?"
If we approach these questions through JoDDiD's typical lens of political didactics, we have to admit that there are no simple answers, because these questions are not particularly well illuminated by subject-specific didactic research.
There is hardly any impact research on specific measures in the workshop setting of extracurricular political education, especially in the context of the major state funding programs (Vielfalt tut gut, Demokratie leben, Weltoffenes Sachsen, etc.). What is available are the overarching evaluation reports on the federal and state programs for the promotion of democracy. From the perspective of the didactics of civic education, it has been noted repeatedly in recent years how little is available (cf. Becker 2011, 114 ff.; Martin/Reichart 2020, 180). A notable exception is the study Wie politische Bildung wirkt by Balzter et al. (2014). Their approach is biographical: in eight case studies, they reconstruct from the participants' later life courses where and how political education proved effective in retrospect (ibid.). This is an exciting approach. In JoDDiD consultations and workshops, it comes up again and again that at least part of the impact of civic education lies beyond the educators' field of perception, because it sometimes unfolds over a lifetime, and this effect cannot be measured at the end of an event or measure.
For various pragmatic and didactic reasons, political education programs that work with standardized workshop formats - the focus here - are often implemented by peers or teamers, i.e. freelance facilitators rather than full-time employees of the provider. These implementers often have no professional pedagogical training, or none they have yet completed; what they need for their specific assignment as political educators in the workshop, they learn in a training process designed and organized by the provider, usually one or more weekend courses. In this respect, they always have a dual role: they are learners who also act as teachers, and this is often one of the main incentives to immerse themselves so deeply in the subject matter in the first place. How is success usually measured in these peer or teamer formats? Even before any feedback form, the number of people reached by an educational program often serves political educators as a first indicator of success: the more, the better. How many people were the peers or teamers able to reach with their workshops? Obviously, mere participation in a program says nothing in itself about its effectiveness, let alone its success. Rather, one would have to determine what gains in competence the workshop participants were able to achieve. But how is this measured? Let's look at some examples:
One interesting case was a peer-to-peer project with pupils in which the provider developed a good approach to measurement. It was not the participating pupils who were surveyed but the peers, who, on the one hand, had taken part in a much more extensive program than a 90-minute workshop and, on the other, were much easier to reach thanks to the bond they had developed with the provider over the course of the project. Something that can also be observed in other political education settings is put to use here: people who take part in delivering political education become subjects of it themselves. Their gains in knowledge and competence, i.e. their learning, can become the more decisive factor for the impact and success of a civic education project. One indicator of good civic education, for example, is the young peers' descriptions of their experiences of self-efficacy, which can be very formative and empowering. At the same time, such descriptions point to the competence in political judgment and action that the peers developed in the project. In this case, data collection was implemented via a digital questionnaire to be completed individually. When designing such a questionnaire, it is advisable to give respondents the opportunity to answer open questions not only in text form but also, for example, as voice messages, in order to lower the hurdle of producing written text.
Sometimes surveys of peers and teamers yield unsatisfactory results, e.g. goals were not achieved, or not achieved sufficiently. One finding may also be that the experience of self-efficacy fell short of the workshop implementers' own expectations. Different conclusions can be drawn from such results. In several JoDDiD consultation cases, the conclusion was to reflect once more with the teamers on what their personal goal for their workshop actually was or is. This is where adjustments can be made. A simple example: peers are disappointed that the workshop participants could not reproduce all the facts at the end of the workshop, or that they stuck to their opinions, i.e. were not prepared to reconsider or change their attitude. Instead, a deliberately low-threshold goal can be that a political issue becomes a topic in the group at all, that a debate is stimulated, or that political questions become discussable. If teamers and peers can perceive this as a valid result of their involvement, it can lead to relevant experiences of self-efficacy that have a positive effect on the political education program. Knowledge transfer and reproduction then, consequently, take a back seat.
What we as the JoDDiD advisory team take away from these ongoing observations is the question of what the educators themselves - and not merely within a logic of funding or justification - actually describe as impact. This is a question worth investigating, and the answers can in turn become the basis for further education, training and advisory services.
Finally, I would like to add three theses to this conclusion. They, too, can only be initial spotlights emerging from the ongoing observation of JoDDiD's advisory work, and they need to be investigated and developed further:
- Evaluation understood as a direct measurement of the impact of individual political education events can only be presented validly if it is examined through longitudinal studies or retrospective biographical methods. Both are hardly feasible for individual projects. However, the fact that this kind of measurement hardly exists says nothing about whether a project or an educational event is useful or not.
- Participatory peer evaluation processes not only provide information about the quality of the work but also support reflection processes and the perception of self-efficacy; they can therefore lead directly to adjustments based on the evaluation results.
- In extracurricular settings, civic education takes place not only for the participants but also among teamers and peers in the context of planning, designing, implementing and reflecting on the programs.
Literature and references
Balzter, Nadine/Ristau, Yan/Schröder, Achim (2014): Wie politische Bildung wirkt. Wirkungsstudie zur biographischen Nachhaltigkeit politischer Jugendbildung (Wochenschau Verlag). Schwalbach/Ts.
Becker, Helle (2011): Praxisforschung nutzen, politische Bildung weiterentwickeln – Studie zur Gewinnung und Nutzbarmachung von empirischen Erkenntnissen für die politische Bildung in Deutschland. Ein Projekt der Arbeitsgemeinschaft deutscher Bildungsstätten (AdB) mit dem Bundesausschuss politische Bildung (bap), Teil I: Auswertungsbericht, https://www.adb.de/node/248.
Martin, Andreas/Reichart, Elisabeth (2020): Zum Einfluss der politischen Bildung an Volkshochschulen auf die Wahlbeteiligung. In: Schrader, Josef/Ioannidou, Alexandra/Blossfeld, Hans-Peter (Hrsg.): Monetäre und nicht monetäre Erträge von Weiterbildung – Monetary and non-monetary effects of adult education and training (VS Verlag für Sozialwissenschaften), S. 175-212. Wiesbaden. https://doi.org/10.1007/978-3-658-25513-8_7.