Evaluation is For Learning

“Tell us about a time when you enabled a learner to achieve beyond their own expectations and explain how you met their needs.”

Making a judgement about the effectiveness of an individual is probably nowhere more focused than in the interview situation. For it is in this highly charged environment that an interviewing panel draws conclusions about the likelihood that a person will meet the future needs of their organisation. And how do they do this? They ask the person being interviewed to tell them the “story” of how they have behaved in the past in response to a variety of challenges and circumstances.

This interviewing technique is known as Behavioural Questioning and is based upon the assumption that past behaviour is the best predictor of future performance in similar situations. Of course there is some validation of this narrative in the form of references from employers, etc.

The power of “storytelling” is also recognised in the realm of higher education where narrative methodologies are used with great effect in postgraduate studies up to doctoral level. Story collecting as a form of narrative inquiry allows the participants to put the data into their own words and reflect upon practice rather than merely relying upon the collection and processing of data.

It’s against this background that I want to explore the dominant method of evaluating the effectiveness of teachers, schools and educational systems – and the unintended consequences that such a model has generated. My argument is that we have a system weighted towards measuring (the quantitative), with storytelling (the qualitative) of secondary importance, whereas I would turn that relationship on its head.

For surely the ultimate test for any education evaluation system is the improvement it leads to in outcomes for children and young people – and it is generally accepted that the factor which makes the biggest impact upon the effectiveness of that system is the quality of classroom teaching and learning.

Yet despite this knowledge, most school improvement systems are based upon the external collection and interpretation of student outcomes – with little reference to the quality of the teaching and learning process. The assumption is that it is possible to improve school performance through external challenge. The problem with this system of school improvement is that it is based upon the premise that self-improvement cannot be relied upon in isolation.

Such external challenge has the unintended consequence of disempowering staff within the system to the extent that they feel pressurised to improve as opposed to tapping into their professional commitment to improve.

So if the dominance of the counting and measuring (quantitative) model has proven ineffective what might be the alternative? I think the answer lies in a parallel methodology that has had a transformational impact upon many of our classrooms over the last ten years.  I am referring here to the notion of the Assessment is For Learning programme (AiFL).

The logic of Assessment is for Learning is based upon a realisation that simply giving a learner a mark or grade at the end of a course of study (summative assessment) does not enhance the learning, nor the teaching, process. In contrast where a teacher (and learner) use Assessment is for Learning to provide and reflect upon on-going feedback to revise and develop further the learning and teaching process – it actually enhances the final outcome and the effectiveness of the learner and the teacher in the future.

Now it seems to me that our school evaluation models follow this simplistic paradigm. We use summative assessments – class, year group, school, authority, results – as the driver for change and make only passing reference to the underlying stories which underpin the outcomes.

So what might a system look like that modelled itself upon AiFL? Let’s start by giving it a name – Evaluation is for Learning (EiFL).

I actually think we are beginning to see EiFL manifest itself in an incredibly exciting and organic manner within the Scottish education system in the form of pedagoo.org. Pedagoo is a group of Scottish educators who are determined to describe and tell stories about their own practice in an open and transparent manner, with a view to improving the quality of education they provide.

By tapping into what these educators are attempting to do in their own classrooms we begin to see an alternative to the dominant quantitative methodology, whereby teachers take the lead by sharing, reflecting upon and improving their practice. Imagine a school where every practitioner was “fired up” to the same extent and enabled and encouraged to participate at such a level, where they could share their stories with confidence and a passion for learning and professional inquiry – I’d put my money on that school any time!

School evaluation could be conducted in a similar manner with external evaluation focusing upon the narrative stories of managers and teachers as they describe how they are attempting to improve the quality of the education in their school.

The relationship between the stories (qualitative) and the counting and measuring (quantitative) in EiFL is reversed, to the extent that the numbers are used for validation rather than judgement.

And before any of you think I’ve gone soft – if any teacher couldn’t answer the question posed at the top of this article I’d have extreme reservations about their competence – regardless of the outcomes of the students in their class.

Overcoming curriculum inertia: a zero based approach

Arguably the most difficult stage at which to implement significant curricular change is the senior phase of secondary education.

The “high stakes” nature of the upper secondary school assessment system combines with other factors – such as self-interest, fear of change, rigid staffing and timetabling models, and the avoidance of potential conflict – to create a state of “curriculum inertia” which is incredibly difficult to shift.

There is no blame to be attached to this inertia. It is a natural consequence of a system which has laid down sedimentary layers of practice, structures and expectations one upon the other, from one generation to the next – regardless of changes to examination arrangements.

In my recent post on senior phase options I listed a number of possibilities, which, if combined, could radically change the way in which we meet the needs of senior students. Yet regardless of how good these ideas might be, curricular inertia will make it incredibly difficult to see any more than a few translated into practice and certainly not enough to reach a tipping point in favour of the needs of the student – as opposed to the needs of the system.

The orthodox approach to senior phase curriculum planning has been incremental: tweaks are made from one year to the next which in reality result in minimal and cyclical change.

An alternative approach to be considered is borrowed from the world of financial budgeting – Zero based budgeting:

“Zero-based budgeting is an approach to planning and decision-making which reverses the working process of traditional budgeting. In traditional incremental budgeting, departmental managers justify only variances versus past years, based on the assumption that the “baseline” is automatically approved. By contrast, in zero-based budgeting, every line item of the budget must be approved, rather than only changes. During the review process, no reference is made to the previous level of expenditure. Zero-based budgeting requires the budget request to be re-evaluated thoroughly, starting from the zero base.”

Zero based curriculum planning adopts a similar philosophy, i.e. schools start the planning process from scratch based around the needs of students and build from that point, with no reference to what was done before.

Of course, this would be an enormous change for schools and the reality is that we ARE limited by the staffing profiles which have evolved to suit our existing curricular models, even if they don’t facilitate the needs of young people.

Nevertheless, I reckon that by adopting at least some elements of zero based curriculum planning we can begin to envisage a model of delivery which is very different from what we have today. It is only by describing that vision that we can begin to plan a coherent change strategy to translate that into practice.

Are we adding value?

I met with a couple of colleagues from schools today (one primary and one secondary) to discuss how we can make better use of the data we collect through our MIDYIS and PIPS assessment systems from the Centre for Evaluation and Monitoring based at Durham University.

We now administer these assessments for all children in P1, P3, P5, P7 and S2.  The S2 results act as very accurate predictors of how students will perform in their formal subject examinations in S4. 

Pauline Sales, our Research Principal Officer, has been doing some outstanding work to allow us to make judgements about how groups of children progress against the national averages for reading/verbal skills as they move through our school system, for example: “Do children from one primary school improve, decline or remain the same against the national average after two years of secondary school?” This data offers huge opportunities for school managers and teachers to better understand the impact they have upon children’s literacy levels.

The basic premise is that if we can develop this system further we can make judgements about how much value we add – or otherwise – throughout a child’s educational experience in East Lothian schools. It’s important to emphasise that this data is most useful for judging the progress of groups of children – rather than individuals.  Obviously the form of assessment that makes the most difference to the individual child is that of a formative kind undertaken by the teacher and used to support the learning process. We will be discussing this further with headteacher colleagues with a view to how we make best use of this data.
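The kind of cohort comparison described above can be sketched as a simple calculation: how much did a group’s mean score change between two assessment points, relative to the change in the national average over the same period? The function, scores and national means below are hypothetical illustrations, not the CEM system’s actual methodology:

```python
from statistics import mean

def cohort_value_added(baseline, followup,
                       national_baseline_mean, national_followup_mean):
    """Change in a cohort's mean score relative to the national average.

    Positive result: the cohort gained ground against the national average;
    negative: it fell behind. Assumes scores at both time points are on the
    same standardised scale.
    """
    cohort_shift = mean(followup) - mean(baseline)
    national_shift = national_followup_mean - national_baseline_mean
    return cohort_shift - national_shift

# Hypothetical reading scores for one primary school's cohort at P7 and S2
p7_scores = [98, 104, 101, 95, 110]
s2_scores = [101, 108, 103, 99, 115]

# Illustrative national means at each stage
print(cohort_value_added(p7_scores, s2_scores, 100.0, 102.0))
```

A figure above zero would suggest the cohort progressed faster than the national average between the two stages – though, as noted, such group-level measures say nothing reliable about any individual child.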

Assessment Moderation and Quality Assurance: How do we avoid creating a monster?

Curriculum for Excellence Building the Curriculum 5 a Framework for Assessment: Quality Assurance and Moderation – which must win the title for one of the longest titles of any educationally related paper – sets out the practices and purposes of quality assurance and moderation.

In the Strategic Vision and Key Principles of Curriculum for Excellence it states that:

“The aim will be to achieve consistency in standards and expectations and build trust and confidence in teachers’ judgements. Education authorities and national partners will work together to develop the most efficient and effective approaches possible for quality assurance and moderation.”

Following some discussion with colleagues on Friday I thought it might be worth trying to work out how we might avoid creating a bureaucratic assessment monster which weighs down the real business of learning and teaching.

The remarkable and ironic thing here is that we need to protect ourselves from ourselves.  For it seems to me that we are in real danger of recreating the same reporting industry which characterised 5-14 – just because it’s what we have come to know and expect. Yet if one reads the document there does appear to be enough space for us to create something which does not sink under its own weight.

However, for us to create an efficient and effective system we need to start from a basis where we trust teacher judgements – particularly where they are locally moderated.  Yet the reality is that our automatic default position is to create systems which are designed to catch the tiny minority who might be tempted to distort assessments.

Our next national CfE Implementation event will be focussing on this issue but I hope to explore this further over the next few weeks.

Developing a 3-15 Assessment and Reporting System

I have been working with a group of colleagues to develop our assessment and reporting system for East Lothian to match the new curriculum.  We need to come up with an agreed system to replace the current 5-14 testing regime. 

What we have agreed is that we – the authority – will commission a group of headteachers to devise an assessment and reporting system which will cover children aged 3-15. The tender document is in the process of development but the system must meet the diverse needs of learners, teachers, parents, headteachers, the local authority, elected members and HMIe. I’m very keen that we also involve parents, teachers, learners and elected members in the development of this system but the “starter for 10” will go to the headteacher group. Exciting stuff!

Our system must:

1. Make use of formative assessment as a key element in assisting the learning process;

Why? – this is what makes the biggest difference in the classroom. It allows learners to be clear about how they can improve their learning.

2. Provide clear information to learners about their progress;

Why? – children should be able to measure their progress against a clear set of standards where they clearly understand what they have to do to meet the next level.

3. Provide reliable information to parents which allows them to assist their child’s learning;

Why? – parents want to help their children but need clear advice from professionals about what they should do to help.

4. Provide a means of proving the reliability of assessment against an objective measure;

Why? – any doubt about the reliability of teacher or school assessment has the potential to fundamentally undermine confidence in the process – particularly at key transition points, e.g. infant to upper primary, primary to secondary.

5. Provide a clear means of measuring the value added by the teaching process;

Why? – teachers need to know if they are making a difference, and the impact of any changes they make to their practice.

6. Enable parents to judge how their child’s learning is progressing over a period of years;

Why? – parents want to know if their child is making progress in line with the “bandwidth” of child development that might be expected for someone of their child’s age.

7. Enable school managers at all levels to make judgements about the effectiveness of the curriculum and the learning and teaching process;

Why? – we need to check if what we are collectively doing is making a difference – if not, we need to try something else.

8. Enable the authority to make judgements about the capacity of a school to add value to the learning process in relation to every child’s starting point;

Why? – the public needs to have confidence that it is getting a good return on the investment made in that school. The authority also needs to have confidence that the learning needs of every child are being met by the school.

9. Provide clear statements about progress in relation to literacy, numeracy and health and wellbeing;

Why? – these are the building blocks of successful learning, but their prominence should also reinforce the collective responsibility of all teachers to promote achievement in these areas.

10. Provide a valued means of formally recognising children’s attainments and achievements at key stages between the ages of 3 and 15;

Why? – formal recognition can motivate and enhance the perceived value of certain aspects/stages of the curriculum.

11. Articulate with the Curriculum for Excellence assessment framework;

Why? – our system must comply with national guidance.

12. Be valued by all stakeholders, e.g. learners, parents, teachers, managers, authority officers, elected members, and HMIE.

Why? – this is a challenge – but what a prize!

PS – we could really do with some help in developing this tender document – so suggestions/comments are most welcome

Perhaps we do sometimes need to weigh the pig?

“You don’t fatten the pig by weighing it” – an evocative phrase used by those who would rightly challenge the concept of over-assessment or too-frequent external assessment or inspection. A head teachers’ union leader even described the English Ofsted as the “Office of Pig Weighing”. The use of the phrase has taken on a global currency, as the following examples demonstrate: Australia; England; USA – or see Google.

Let me say at the outset that I am uncomfortable with this analogy – children are not pigs – anyway Chris Thorn does a much better critique of the concept than I ever could in his blog post from 2006.

But for the sake of argument let’s just accept the pig weighing analogy and use it to make a point.  The question I’m interested in is whether or not we need external assessment, or testing regimes, earlier than the certificated courses which young people will encounter in S4 and beyond. In the current regime we have National Tests for 5-14. These have undoubtedly had an effect on how, and what, teachers – particularly primary teachers – have taught over the last 20 years.

With the introduction of A Curriculum for Excellence there is a possibility that there will be no nationally recognised testing regime to take its place for children below S4.  Now I know many people see this as a good thing and at first glance it does seem appealing but I really wonder if such a situation provides sufficient leverage in the system to change the way in which we structure and deliver learning and teaching?

In my post on reverse engineering I pondered the “trickle down” influence or leverage on the curriculum provided by examination requirements. Secondary teachers in Scotland have been enculturated into a system which takes account of the “examinable syllabus”. What is it that makes us so confident that we can make literacy and numeracy the responsibility of all in S1 – S3 simply by appealing to the professionalism of teachers?

My point here is that I feel we do need to introduce some form of summative assessment of literacy and numeracy at the end of S3. I would suggest that the internal judgements of teachers are complemented by an external test which, when combined with the internal assessment, provides an accurate judgement about a young person’s abilities at that time. I believe the external assessment would fulfil a number of functions:

  1. Validate the judgement of teachers.
  2. Where there is a discrepancy between the internal and external assessments, provide an external baseline for comparison.
  3. Provide a purpose and motivation for young people to improve their levels of literacy and numeracy.
  4. Provide a useful benchmark for schools to measure their progress.
  5. Provide a useful and validated measure of a young person’s abilities which can be used by parents and employers.
  6. Appeal to what secondary teachers “know” – i.e. teaching to the test.

Before you leap up and down at that last sentence I believe that many great teachers do teach to the test but they do so in such a way that benefits their pupils. The challenge facing us would be to create a test for numeracy and literacy which made schools teach these core skills across all areas of the curriculum and sought to test them in these self-same contexts.

We certainly don’t want to see an “Office of Pig Weighing” in Scotland but I think I could confidently predict a positive change in the way in which we teach literacy and numeracy in our secondary schools if we grasped the opportunity to create an imaginative testing system which complemented and validated our internal assessments and for which every teacher in the school was accountable – not just the Maths and English teachers. 

Qualifications – A Personal Perspective

The Scottish Government issued A Consultation on the Next Generation of National Qualifications in Scotland on 10th June 2008. The authority will be making a formal response before the closing date of 31st October 2008.

I thought it might be useful to try to work out my own perspective on the questions. In a future post I’ll consider how these responses might be reflected in a school curriculum.

Q1. Do you welcome the intention to update all qualifications at Access, Higher and Advanced Higher in line with Curriculum for Excellence? Please comment on any implications to be considered.

It seems sensible to update all qualifications to reflect A Curriculum for Excellence.

Q2. Early consultation has identified the ‘best’ features of Standard Grade and Intermediate qualifications as:

the ‘inclusive’ approach to certification contained in Standard Grade; and

the ‘unit based’ structure of Intermediate qualifications.

Are there any other features in the present Standard Grade and Intermediate qualifications which should be included in the new qualification at SCQF levels 4 and 5?

The new qualification should reflect the ‘unit based’ structure of Intermediate qualifications.

Q3. One of the proposals is to grade units. Do you agree that units should be graded A-C rather than pass/fail?

I’d support the grading of units as it would maintain motivation throughout the course and give feedback to students on progress.

Q4. Do you want graded units to count towards the final award?

I believe the grading of units should contribute to the overall award.

Q5. Which option for introducing compensatory arrangements would you most support?

Please tick one option or suggest an alternative.

Option A Extend the range of grading in course awards to grade E.

Option B Recognise unit passes only.

Option C Compensatory award at the level of the course studied with no grade awarded.

Option D Compensatory grade C award at the level of course below that studied.

Option E Compensatory grade A award at the level of course below that studied.√

Q7. Do you agree with the proposal to offer literacy and numeracy awards at a range of SCQF levels (3 to 5)? If not please offer an alternative.

I agree with the proposal – for students following a more vocational programme of study such an option would be welcomed by employers.

Q8. National Qualifications at Access 3 (SCQF level 3) do not have an external examination. Do you agree that any new awards in literacy and numeracy at SCQF level 3 should have an external examination?

I’d like to see an external examination for literacy and numeracy for all S3 students.

Q9. Should the weighting between the internal and external assessments for the literacy and numeracy awards be equal? If not should more weight be attached to the internal or external assessment? Please explain.

Difficult one! I’d go for more external weighting as it would enhance the credibility of the qualification.

Q10. When should young people be assessed for literacy and numeracy awards? Please tick one option.

Option A At the end of S3 as part of the summer diet of examinations.√

Option B In the December of S4 as part of a winter diet of examinations.

Option C At the end of S4 as part of the summer diet of examinations.

Q11. Do you agree with the proposal to allow the study of Highers and Advanced Highers over 12 months, 18 months and 2 years?

The more flexibility we can give individual students over the length of study the better.

Q12. Do you agree with the proposal to introduce a winter diet of examinations?


Q13. If you agree with the proposal to introduce a winter diet of examinations, what subjects and levels of qualifications might first be offered?

Numeracy and Literacy.

Q14. Would you agree with changes to the system which allowed the most able students to bypass qualifications at lower levels and begin study for Highers from S4 onwards?

Totally agree with this proposal. This would enable the two-year Higher course to become a reality from S4. Some students may be able to sit some Highers after one year, thereby allowing a two-year Advanced Higher course.

Q15. Do you have any other ideas for increasing flexibility within the senior phase (S4 to S6)?

I would promote the possibility of students choosing to follow some courses within a virtual learning environment.

Q16. It is intended that planning for the new curriculum should commence in 2008/09, with approaches based on the new curriculum introduced from school year 2009/10. This suggests that the new and revised qualifications and any increased flexibilities would be required from 2012/13 onwards to ensure smooth progression between the curriculum and qualifications. Is this indicative timeline realistic? Please comment on any implications to be considered.

This timeline is demanding but some key elements of the new qualifications should be in place by that time – especially Numeracy and Literacy. I don’t see the need to compromise the potential of the new curriculum by rushing things through if we aren’t ready. What all schools should have in place by that time is a curricular structure which is able to deliver the new qualifications by 2012.


Destabilising the status quo


I recently bumped into a former colleague and briefly chatted about “A Curriculum for Excellence”.  My friend has responsibility for developing learning and teaching at his school and was telling me that the school are going to give every pupil comprehensive course support materials for each of their certificated subjects – once the course has been completed.  The teachers didn’t want to put it out before they taught the course as they wanted to “remain in control”.

For me it was a timely reminder about how much work is still to be done in terms of changing our approach to learning.

If we are going to change the way in which we work then perhaps we need to destabilise the status quo thereby freeing teachers to adopt different roles and engage learners in learning as opposed to absorbing information.

Keeping this in mind I wonder if David Eaglesham, the general secretary of the Scottish Secondary Teachers’ Association, perhaps provides the catalyst when he said he doubted whether ACfE  could live up to its aims without financial input.

“It is almost inevitable to say it is the worst-resourced initiative we have ever had, because there is nothing there in the way of resources,” he said.
“It is not that people don’t want to do it, but if they don’t know what they are doing or have the resources to implement it, it could be disastrous.”

I agree that there is a need to provide resources but I wouldn’t provide them in the form that they have come in the past.  My alternative approach would be to create a virtual learning environment for every certificated course provided by the SQA.  This course could be accessed by students at a place and time of their choosing – I’d like to think GLOW could play an important role here.

I’ve been speaking to a number of my son’s friends who have just finished school and without exception they all said they would have welcomed the chance to access their entire course on-line.  That’s not to say that they didn’t want a teacher but that they wanted the teacher to work in a different way.

So what would be the outcome of such a step – surely it would just replace one form of spoon-feeding with another? Well, not according to my son’s friends who are now at university – the teacher would take on much more of a tutor’s role, where students use their tutor to expand and deepen their knowledge. In so many ways this ties in with what Jerome Bruner was talking about yesterday when he said that educational systems were “too easily routinised” and that there were too few opportunities for students to “share hypotheses”, “reflect upon alternatives” or “reflect upon controversy”.

Bruner wants teachers to seek out “inter-subjectivity” (I think I prefer this term to inter-disciplinary) by contextualising their subject within the wider world – but how often do teachers manage to do this under the pressure to get through the content of a course?

Put it this way – there appears to be an appetite amongst young people for such a change.

League Table approach and too much Testing remains Harmful to Education, says EIS


The Educational Institute of Scotland (EIS) – the biggest teaching union in Scotland – has issued a number of press releases over the holiday period.

The last of these was entitled League Table approach and too much Testing remains Harmful to Education, say EIS

“The Educational Institute of Scotland (EIS) has called for a radical rethink on the over-use of testing in schools and the damaging construction of ‘league tables’ with the data collected. The EIS believes that too many local authorities continue to place too much emphasis on narrow testing and the collation of associated data which brings little or no benefit to schools, teachers and pupils.

Commenting, EIS General Secretary Ronnie Smith said,

“Despite the end of National Tests some five years ago, many authorities seem unable to cure their addiction to excessive testing in schools and continue to favour the flawed ‘league-table’ approach to measuring school success. This is in direct contradiction to current national educational priorities and has a negative impact on learning and teaching in schools. The use of such widespread testing places additional pressure on pupils and teachers to perform well in these tests – this has the inevitable result of narrowing the scope for teachers to use their professional judgement in what they teach, with considerable pressure to ‘teach to the test’ to avoid criticism of the school when league tables are constructed. This tick-box approach to measuring school success is of little value, and serves only to provide figures for education authority statisticians to crunch while simultaneously demoralising pupils and teachers.”

I found this an interesting perspective, particularly given the direction we are taking in respect to outcome agreements. Certainly from an East Lothian point of view we have never presented school assessment data in a league table format – and I can’t think of any other authority which adopts such a “league-table” approach.

I agree with Ronnie when he warns of the danger of solely focusing upon attainment as the only means of judging the success of a school but the new HGIOS3 makes it clear that pupil achievement is just as important.

However, I have to challenge his assertion that testing and the collation of associated data brings little or no benefit to schools, teachers and pupils. Firstly, schools need to have some way of judging their progress against an external benchmark. Testing provides that benchmark. I recently wrote about the “King o’ the midden” complex, whereby it’s possible for a school, an authority, and even a country to delude itself about its progress unless it collects data, compares itself with its peers, and then interprets how that information can be used to shape its practice. For example, from the PIRLS data it is apparent that children’s reading in Scotland is not making the same rate of progress as in other countries. Such knowledge initiates a question about how we currently teach reading and might have a direct impact upon schools, teachers and pupils.

A school can only objectively reflect upon how children are making progress throughout their school careers if it has access to valid and reliable summative test data. At an authority level such data helps to provide a means of judging a school’s performance in a particular area. For example, if a school’s attainment in maths is significantly below maths attainment in neighbouring schools of a similar pupil composition, then it is legitimate to ask questions about the teaching of maths in that school. Once again, summative data leads directly back to the learning and teaching process.

I actually think the key point which Ronnie Smith is making is about how such data is used and the culture which underpins its collection, interpretation and use. My hope is that the culture we aspire to in East Lothian actually helps us to collect and use summative data where our ultimate focus is always upon the learning and teaching process, and where formative assessment plays a crucial part. The trick will be to ensure that such a balance is always achieved.

Last thought, Ronnie Smith refers to the needs of schools, teachers and pupils, but makes no mention of parents…..mmm?

Taking the PISA


The Programme for International Student Assessment (PISA) is an internationally standardised assessment that was jointly developed by participating countries and administered to 15-year-olds in schools.

The survey was implemented in 43 countries in the 1st assessment in 2000, in 41 countries in the 2nd assessment in 2003, in 57 countries in the 3rd assessment in 2006 and 62 countries have signed up to participate in the 4th assessment in 2009.

Tests are typically administered to between 4,500 and 10,000 students in each country.

Information about PISA 2006 

Sample test questions Maths, Reading and Science 

The full results will be made available on the 4th December.

My question is: could we be using this information to help us shape our Curriculum for Excellence in Scotland?