Guest post: We have invited Dr Mary Richardson from UCL Institute of Education to share her research in this blog article.
Trust is also important because those who take assessments should always feel confident that the assessment is right for the context. Trust matters, above all, because assessments, particularly high-stakes tests, are truly life-changing. Torrance (2017: 90) sums this up well when he says:
Passing and failing examinations not only defines individuals as educational successes and failures but also establishes the legitimacy of the idea of being an educational success or an educational failure and all that flows from this in terms of life chances.
One of my research foci at UCL Institute of Education is how academic staff relate to the new assessment technologies increasingly employed in Higher Education Institutions (HEIs). These are currently under-researched themes, particularly in terms of how change is enacted. How we talk about assessment, how we introduce a range of ways to assess, and how we educate key stakeholders in assessment are all vital to successful implementation and critical to how far our students trust the assessments and their outcomes. Electronic or e-assessment is an exciting new feature of the working life of universities, but I’m concerned about how it is used.
It’s important to be clear about what we mean by e-assessment because it can include a wide range of practices. In this blog, I’m referring quite specifically to the use of computer-based marking on-screen, both as a means to provide feedback on students’ draft work and to mark and grade final submissions. The most popular software is Grademark, a free marking tool that is part of the Turnitin plagiarism-detection software, and some of the initial (but limited) research into online and electronic grading and feedback has suggested it is a useful addition to teaching and learning in HEIs. However, there is still very little research into the use of Grademark as a mode of electronic assessment, and yet its popularity appears to be growing; all UK universities now use this system. So what happens when we make a change in marking environments?
Many of the thousands of tutors who have always marked paper-based assignments are now being asked to put down their pencils and take up a mouse instead. Quite apart from being a significant change in practice, there is an assumption that this new format for assessment is ‘improved’ in terms of efficiency. For example, in the past, students would bring hard copies of work to their HEI and hand them in to tutors – the work would be marked and handed back at a later date. Being able to upload work into a database from which tutors download it for marking is indeed very convenient, and it provides significant savings in resources. It can also mean that tutors have more opportunities to access work during the marking period because they are not reliant on collecting and carrying hard copies. However, these things are simply indicators of convenience; they do not provide evidence that technology improves actual assessment practice. This needs some consideration.
The use of e-marking for school-level education, for example in national tests and/or qualifications externally marked by awarding bodies, is closely scrutinised, with markers subject to training, moderation, and post-marking inspection. Such formal processes ensure that those employed to use online assessment environments are, if not expert, certainly highly trained and able to reflect on their skills in these formats. The same is not true of practice in HEIs. What appears to be common is the introduction of an enforced move from paper to e-assessment without specific training to re-educate the markers/assessors for working in this new environment. Such dramatic changes are likely to affect the validity and reliability of marking because interactions with a screen and mouse are different to those with paper and pen.
I’m not suggesting that all marking in HEIs is unreliable, but I do question the assumption of comparability of marking on-screen and on paper which leads me to ask: To what extent can we trust the use of the e-assessment practice in HEIs?
It is imperative to know we can trust the quality and validity of assessment outcomes in HEIs because they are incredibly high-stakes. For students, assessment outcomes represent a significant body of learning and a significant financial cost. For teaching staff, assessment outcomes underpin systems for accountability – and, like it or not, they represent a quality ‘mark’ for the institution, its staff, and its students. How academic staff feel about their ability to perform as assessors matters because resistance arises wherever electronic technology is perceived to be less efficient, or more complex, than the system it replaces. In this environment, the professional identity of academics needs protection in order to assure and continually develop high-quality assessment practice.
Dr Mary Richardson is the director of the MA in Educational Assessment at UCL Institute of Education, University College London. She leads courses in assessment and testing and supervises doctoral candidates.
Her research interests lie in educational assessment. She has recently begun work examining the use of online marking for HE assessments, with a view to developing systematic trials of marking comparability between paper and screen.
Dr Mary Richardson shared her research on academics' experiences of change in e-assessment practice in higher education at the 'Examining Excellence' conference held at BPP University on 6th December 2019.