Closing the accountability gap

Public services need to be held accountable to the public they serve. Such logic is inescapable, and it leads countries across the world to create national bodies to inspect and audit services to ensure that high standards are maintained and that society gets value for the money invested in those services. In Scottish education we are fortunate to have an internationally respected body, in the form of Her Majesty's Inspectorate of Education, to provide such a function.

Yet the inspection/audit landscape of Scotland is currently under inspection itself, following the Crerar Review, which considered how we might reduce the burden of inspection both in terms of running costs and – more importantly – in terms of the additional burden that repeated forms of inspection/audit place upon public bodies such as local authorities.

It is with this in mind that I would like to explore the potential of an alternative system of inspection/audit which closes the gap between the service being inspected and those who use it.

If we consider existing inspection formats, the “gap” between the users and the service is filled by the inspection body, which undertakes to scrutinise the service “on behalf” of the public who pay for and use it. Of course, good inspections/audits make significant use of evidence gathered from such user groups, and organisational self-evaluation will often involve productive 360-degree gathering of evidence from a wide range of “stakeholders”.

But what if we could develop a model whereby the users were much more involved than simply being “consulted” about the service they use? What if they were directly involved in helping to make public judgements about the quality of that service? Of course, those of us involved in the delivery of public services can see many problems arising from such an initiative. These would certainly centre upon the reliability of non-professionals’ judgements; issues of fairness and objectivity; potential for abuses of power; small interest groups having a disproportionate effect upon a judgement (and the subsequent direction of travel); and the potential for bullying/intimidation of staff.

In addition to these concerns from professionals, there would be queries about the burden that such an expectation might place upon those stakeholders interested in participating in the process. These burdens would arise from the need to give time for training; the pressure to present a positive picture of a local service; the fear that a negative judgement on quality could backfire in some way upon them as users; and the possible stress that engagement in such a process could engender.

Nevertheless, I would argue that the potential benefits emerging from “stakeholder” involvement in inspection far outweigh any of these concerns – all of which can be addressed through a clear and well-developed model of practice.

At its most basic level the process could be described as follows:

1. A range of stakeholders are invited, volunteer, or are nominated to participate in the evaluation process.

2. These stakeholders could include some of those who deliver the service to be evaluated, but they would be in a minority compared to those who benefit from the service.

3. The process would be based upon nationally agreed quality indicators and levels of performance such as the HMIE How Good is Our School documentation.

4. The stakeholder group are provided with training to allow them to undertake the evaluation.

5. The stakeholder group are assisted by an externally appointed objective adviser (such as an HMIE inspector) who has experience of national standards of performance.

6. The stakeholder group evaluate the internal performance judgements and associated evidence provided by the service to be inspected.

7. The stakeholder group scrutinise the evidence and engage with other users in order to validate the judgements made by the service.

8. The stakeholder group submit a quality report, following a nationally agreed template, which is validated through reference to nationally available data.

9. The report is published and shared with the local and national community.

The principle of the above process clearly differs from current practice in that it uses external validation and support to empower a local community to judge the effectiveness of a local service – as opposed to taking that judgement out of their hands. In such a way it builds capacity and an ongoing improvement process which extends far beyond the legacy of the traditional “snapshot” external inspection.

My hope is that within the coming year we will have at least three schools in East Lothian where we can trial and refine a stakeholder evaluation process along the lines I’ve described. The reward for such participation would be a reduction in the authority’s external scrutiny of the school throughout the year and – more importantly – an improvement agenda arising from the process which is focused and benefits from the shared ownership of the entire community.