As teachers working in schools, our involvement with assessment and evaluation has mostly been in educational contexts, taking the form of scores, grades, tests, portfolios, interviews, and so on. In fact, as Vedung (2010) discusses in his article, assessment and evaluation are actually societal phenomena, or more precisely, part of the endeavor to understand what is going on at different structures, institutions, or levels of society, on the basis of which possible developments or renovations can be conducted.
There are, of course, occasions when we encounter assessment and evaluation in talks or reading materials not directly related to what we work on or are interested in, or in which we are involved only incidentally. For instance, we often take part in debates when we are with friends, attending seminars, or simply listening to or watching the news. Yet our understanding in these cases is limited and partial, so what frequently occurs is ‘fierce’ but shallow debate instead of deliberation.
Generally, we can say that we occasionally practice, read, or listen to the forms of assessment and evaluation as classified by Vedung (2010) or the perspectives suggested by Chelimsky and Shadish. One of us is a physics teacher with more frequent involvement in, and stronger attachment to, quantitative scientific inquiry and experimentation, while the other two work within social disciplines and deal more with qualitative data. Our main problem here concerns the scientific adequacy of what we do, as we have not yet been trained to read or conduct rigorous assessment and evaluation.
Regarding the functions of evaluation—building accountability, producing knowledge, and fostering societal development—we recognize them as pivotal. As teachers, we individually evaluate our students and ourselves, and through this we build accountability in what we do. Such evaluation informs us of our shortcomings and of what to do next. Collegially, we exchange information, identify general phenomena, and develop new plans at different levels—classes, school units, or our schools in general. Changes then follow, along with trials and errors, or returns to what was previously done.
Compared with the engineering model of the initiation, conduct, and use of evaluation as schematized by Vedung (2010), we must admit that what we have done so far tends to be less structured, simplified, and less organized. At the schools we work for, for instance, we have yet to conduct evaluation as a systematic means of developing our teaching and learning activities. The changes we make tend to be sporadic, dependent on new ideas that seem to work rather than on their tested workability. There are practices we ‘remember’ to preserve, while there are best practices we ‘forget’ to maintain. In short, we are not accustomed to working on the basis of data-based evaluation.
Loosely following the categorization of Chelimsky and Shadish, the following table presents some instances of the general evaluation we conduct at the school level, mostly on a yearly basis. There is indeed a tradition of evaluation; yet, as noted previously, we lack sufficient rigor and consistency.