“Tell us about a time when you enabled a learner to achieve beyond their own expectations and explain how you met their needs.”
Judgements about the effectiveness of an individual are probably nowhere more sharply focused than in the interview situation. It is in this highly charged environment that an interviewing panel draws conclusions about how likely a person is to meet the future needs of their organisation. And how do they do this? They ask the candidate to tell them the “story” of how they have behaved in the past in response to a variety of challenges and circumstances.
This interviewing technique is known as Behavioural Questioning, and it is based upon the assumption that past behaviour is the best predictor of future performance in similar situations. Of course, there is some validation of this narrative in the form of references from employers and the like.
The power of “storytelling” is also recognised in the realm of higher education where narrative methodologies are used with great effect in postgraduate studies up to doctoral level. Story collecting as a form of narrative inquiry allows the participants to put the data into their own words and reflect upon practice rather than merely relying upon the collection and processing of data.
It’s against this background that I want to explore the dominant method of evaluating the effectiveness of teachers, schools and educational systems – and the unintended consequences that such a model has generated. My argument is that we have a system weighted towards measuring (the quantitative), with storytelling (the qualitative) relegated to secondary importance – whereas I would turn that relationship on its head.
For surely the ultimate test for any education evaluation system is the improvement it leads to in outcomes for children and young people – and it is generally accepted that the factor which makes the biggest impact upon the effectiveness of that system is the quality of classroom teaching and learning.
Yet despite this knowledge, most school improvement systems are based upon the external collection and interpretation of student outcomes – with little reference to the quality of the teaching and learning process. The assumption is that it is possible to improve school performance through external challenge; the underlying premise being that self-improvement cannot be relied upon in isolation.
Such external challenge has the unintended consequence of disempowering staff within the system to the extent that they feel pressurised to improve as opposed to tapping into their professional commitment to improve.
So if the dominant counting and measuring (quantitative) model has proven ineffective, what might be the alternative? I think the answer lies in a parallel methodology that has had a transformational impact upon many of our classrooms over the last ten years. I am referring here to the Assessment is for Learning programme (AiFL).
The logic of Assessment is for Learning is based upon a realisation that simply giving a learner a mark or grade at the end of a course of study (summative assessment) does not enhance the learning, nor the teaching, process. In contrast, where a teacher (and learner) use Assessment is for Learning to provide and reflect upon ongoing feedback, revising and developing the learning and teaching process as they go, the final outcome is enhanced – as is the future effectiveness of both learner and teacher.
Now it seems to me that our school evaluation models conform to this same simplistic paradigm. We use summative assessments – class, year group, school and authority results – as the driver for change, and make only passing reference to the underlying stories which underpin those outcomes.
So what might a system look like that modelled itself upon AiFL? Let’s start by giving it a name – Evaluation is for Learning (EiFL).
I actually think we are beginning to see EiFL manifest itself in an incredibly exciting and organic manner within the Scottish education system, in the form of pedagoo.org. Pedagoo is a group of Scottish educators who are determined to describe and tell stories about their own practice in an open and transparent manner, with a view to improving the quality of the education they provide.
By tapping into what these educators are attempting to do in their own classrooms, we begin to see an alternative to the dominant quantitative methodology – one in which teachers take the lead by sharing, reflecting upon and improving their practice. Imagine a school where every practitioner was “fired up” to the same extent, enabled and encouraged to participate at such a level, and able to share their stories with confidence and a passion for learning and professional inquiry – I’d put my money on that school any time!
School evaluation could be conducted in a similar manner with external evaluation focusing upon the narrative stories of managers and teachers as they describe how they are attempting to improve the quality of the education in their school.
The relationship between the stories (qualitative) and the counting and measuring (quantitative) in EiFL is reversed, to the extent that the numbers are used for validation – not for judgement.
And before any of you think I’ve gone soft: if a teacher couldn’t answer the question posed at the top of this article, I’d have extreme reservations about their competence – regardless of the outcomes of the students in their class.