"Why do the media report the decline in our ranking rather than the decline in our results?"
- Dr. Rachel Wilson.
Assessment is a topic that is critically important, hot, and overdone. Yet the abstract for Rachel's presentation conveyed an attitude towards assessment that was positive and excited, which is not one I have come across before.
Rachel opened with some sobering points. We have, she began, an assessment system that is essentially external to the classroom. It has created a situation where her own daughters, currently in fourth grade, have already sat more exams than Rachel did during her entire schooling, and it has instilled a competitive streak in them that she had not expected. She made the point that research demonstrates that emotions and feelings are at the heart of learning, and that these things should therefore be at the heart of our education system. That is certainly not the case when the perception of school is that it prepares you for an exam serving only one purpose: to determine which universities and courses you are eligible for.
The media reported, quite vociferously, the recent release of the latest PISA results (for example here, here, here and here). The issue is that the stance taken bemoans our drop in ranking relative to other countries. Rachel questioned this attitude: "why does it matter if we are ranked below Kazakhstan in PISA?" She went on to acknowledge that our results across reading, writing, mathematics and scientific literacy are certainly declining, despite the near-zealous focus on standardised national testing.
We were asked to consider how often a student has been unable to answer a question or complete a task in a test situation that they have demonstrated the ability to do ordinarily. It happens quite often, and the rhetoric of "oh, I'm not a test person" demonstrates that we are aware of the impact that testing can have on our emotions and feelings. Rachel invoked Hattie's research and exhorted us to know our impact and to consider the impact that our choices have on our students.
Assessment should, we were told, engage students. It should be something that they want to complete. Consider how eager the majority of students are to learn and to engage with learning tasks in their early years of schooling. What happens that we then see the fourth grade slump and students disengaging from learning? Assessment should engage students and allow for professional judgement. As far as I can see, this is not reconcilable with the current system of mandatory A-E reporting each semester on how a student is going relative to their peers across a range of subject areas, nor with the pressures put upon teachers and students to ensure growth; but that perhaps says more about the focus of our education and schooling systems.
Rachel then took the audience on a whirlwind history tour of assessment in Australia. We have traditionally utilised three main forms of assessment. Norm-referenced assessment demonstrated where students sat on a bell curve. Criterion-referenced assessment was designed to measure student achievement against a clear set of criteria or learning standards indicating what students should know and/or be able to demonstrate. Standards-referenced assessment was designed as a process of collecting and interpreting information about students' learning, and allows for teacher professional judgement. Much of the assessment that goes on at the moment is a hybrid of all three models; however, there is another option: ipsative assessment.
Ipsative assessment was not a term I had heard before; however, the brief overview provided onscreen (captured in the above tweet) indicated that it is probably being used regularly in many classrooms, though perhaps not in the structured and formal way Rachel was describing. She went on to talk about an online system used in New Zealand that allows teachers to log on, see data across a range of curriculum areas, and quickly identify gaps in learning for planning purposes. It also allows assessment tasks to be completed on an as-needed and appropriate basis, rather than following the current Australian model of a big day or week of assessment testing each year. Being able to input student results, have them mapped to curriculum areas, and use that data for planning in a timely manner would be useful, especially given that the purpose of assessment of learning should be to inform the next steps in that area. It highlights the fact that the delay in results after NAPLAN testing renders the tests themselves redundant as a pedagogical tool, especially considering that neither the student nor the teacher is given access to the test paper to talk about what was done and use it as a feedback tool.
Rachel's talk was intriguing and seemed to be well received by the audience. I heard a few people sitting around me comment that they wanted to research ipsative assessment further and look at how they could adapt their current assessment processes, and the buzz as we moved out to lunch demonstrated that she had given many people food for thought.
If you have missed any of the articles in this series or Storifies of the Tweets from FutureSchools, you can find them here.