There’s no getting away from it: assessment, in any of its many forms, is an integral part of learning. But what this week’s learning has made clear is that not all assessment is equal, and not all assessment is for the same purpose.
Bloxham and Boyd split assessment into three key purposes: assessment can be of learning, for learning, or as learning.
Assessment of learning is the most familiar type, the kind we remember from school or college. Exams, multiple-choice questions and end-of-year essays all seek to test students on how effectively they have responded to the teaching. Assessment of learning is traditional and summative.
Assessment for learning is carried out by the tutor and allows them to understand where the student is in their learning at that point in time. The results or conclusions drawn from this type of assessment allow the tutor to modify the teaching programme to support the needs of students who are ahead of, or behind, where they might otherwise be expected to be. Assessment for learning is formative and diagnostic.
Assessment as learning is carried out by the student but, more often than not, at the prompting of the tutor. Working on assignments and carrying out revision are both examples of assessment as learning, and are where students do much of their learning. Whatever its format, assessment as learning always requires the student’s active involvement.
These types are summarised in Chapter 2 of ‘Developing Effective Assessment in Higher Education’, but it is this week’s exercises and peer discussion that have brought these ideas to life and demonstrated how they may be applied.
My main takeaway from the reading (of Chapter 2 and the following chapter) is that there is a tension between how consistent and transferable a type of assessment might be versus how well it encourages deeper learning practices. It seems to me, at this stage at least, that the more an assessment encourages deeper learning, the more specific it becomes to the students and tutors involved, and therefore the less easily its results can be compared across cohorts and across universities.
Multiple-choice questions, for example, can give consistent grades across cohorts and universities, but they are really only a measurement of learning: they encourage a surface approach and don’t necessarily require students to connect ideas and concepts, just to memorise them. An essay that critically assesses the contemporary and historical placing of one’s own work strongly encourages deep learning practices, but requires professional insight and opinion from the tutor and institution during marking, and therefore may not be a ‘test’ that is identically replicable across institutions.
Based purely on the reading, I figured that the design of assessment within a course or module would simply sit on a sliding scale from surface/comparable assessment to deep/less consistent assessment. Yet sharing critiques of our own assessments, or those we have encountered, has revealed an interesting mix of interpretations and intriguing assessment design.
I have read a critique from a peer who found multiple-choice questions to be highly effective for deep learning (I’m not sure I agree), but also a summative assessment design that combined deep, student-led learning with repeatable, consistent assessment, through the use of a student’s research notes in a measurable exam. This was the most surprising part of the week, and since it came from randomly picking a few examples from my peers, I wonder what other gems are hidden in the forums that I won’t have time to come across.
The learning outcomes for this week were… to engage with pedagogical scholarship and my colleagues’ perspectives on assessment and feedback practice, and then to critically evaluate and give feedback on my own and my colleagues’ practice, all with the aim of identifying strengths and potential actions for further development.
I feel that both LOs have been achieved, and whilst the thoughts and ideas in my head are not fully formed, there is enough to have broken open my preconceived ideas of what assessment is and could be.