Remote proctoring can be an effective method of safeguarding the integrity of assessments, but it’s not the only one. Framing the question as one form of proctoring versus another, rather than first determining what you need from an academic integrity perspective and then evaluating your options, risks a false comparison between two methods whose relative merits depend on context.
Deciding how to safeguard academic integrity is like choosing a research strategy. You wouldn’t start by picking a qualitative or quantitative methodology without first defining what insights you need to gain. If you focus only on comparing methods, you risk a false comparison, overlooking which approach best aligns with your research goals. In the same way, with proctoring, beginning with your academic integrity requirements allows you to evaluate the options based on fit and impact, rather than getting locked into a debate over in person versus remote.
How then can institutions navigate the complexities of deciding whether to redesign an assessment, continue to deliver it in person, or consider whether proctoring is the right answer? The purpose of this article is to help you consider the questions to ask yourself to arrive at the right answer for your context.
Key Considerations for Implementing Remote Proctoring
- How much do I know about remote proctoring capabilities versus how it was implemented during the pandemic?
- Is this really a question of assessment format?
- What are you trying to achieve from an assessment and integrity perspective?
- Are you making the best use of technology to deliver your pedagogical goals?
- How do you address privacy concerns in remote proctoring?
- When should human oversight be involved in remote proctoring?
- What are the key behaviors to flag in remote proctoring?
- What should students know about remote proctoring?
How Much Do I Know About Remote Proctoring Capabilities Versus How It Was Implemented During the Pandemic?
Remote proctoring isn’t alone in being compared to an alternative viewed through rose-tinted glasses. The traditional method of invigilation, where proctors walk around an exam hall, is not without its faults. When evaluating a new way of doing something, particularly when technology is involved, it’s all too easy to ignore the negative aspects of what currently exists and benchmark the proposed solution against a false standard.
Traditional invigilation, where students go to a venue to take an exam, is undoubtedly easy to deploy: students are familiar with it, and institutions have legacy systems and processes to operationalise it at scale. At the same time, what can an invigilator or proctor effectively monitor? Or are they there purely as a deterrent? Is it convenient for your students to descend on a single venue on a single day to take an exam? Have you deterred students from taking your programme because of the inflexibility of the assessment regime, particularly where the learning and teaching is flexible?
Now, with the ability to reflect on experience, explore options at a reasonable pace and implement thoughtfully, what should institutions consider when deciding whether to remotely proctor their students?
The imposition of something new at speed, in challenging and novel circumstances, is highly susceptible to a suboptimal outcome. In my old world as a legal academic, US Supreme Court Justice Oliver Wendell Holmes’s well-known adage that “hard cases make bad law” best described this phenomenon. It’s not a given, though: sometimes those circumstances prove to be just the push required to accelerate change already brewing organically. Remote proctoring fits into this category: a new technology used at scale during the pandemic, with contrasting opinions on its efficacy and appropriateness.
Is This Really a Question of Assessment Format?
Questions to ask yourself when considering your assessment format may include: Is this assessment an exam because that’s the best format for assessing students on these learning outcomes? Is it an exam because it was the best of the options available in the past? Is it an exam because it’s always been an exam? Does it still need to be a closed-book exam? Just because it has been for years does not mean it has to remain so.
Scrutiny of the assessment format is useful not only from a pedagogical perspective; it also helps you better articulate your academic integrity requirements. If some or all of the learning outcomes can be better assessed using an alternative format, and you have the ability to make the change, you may want to consider this instead of a closed-book assessment that requires invigilators. Alternatively, you might make a smaller change by keeping it as an exam but making it open book.
A final consideration is whether it remains a cohesive assessment, whether it could be subsumed into others, or whether parts of it can be, leaving a smaller exam. Or you might make no changes at all. This article doesn’t advocate change for the sake of change, but it counsels against keeping the status quo without interrogating what you have and the rationale for it.
What Are You Trying to Achieve from an Assessment and Integrity Perspective?
Recognising that assessment integrity is non-negotiable, there are still decisions to make about how it is achieved. Locking down and monitoring everything is perfectly possible, but at what cost? Do you need all of the data that it will generate? Are you adding barriers that prevent students from taking assessments, with no additional gain in integrity?
Remote proctoring can deliver a secure environment for students to take a digital assessment, giving you and them the benefits of typing, interacting with questions in ways paper cannot deliver, and a streamlined assessment process. It adds a further layer by providing evidence that can be reviewed during or after an exam for further investigation. That simply isn’t possible in person: an invigilator walking around a room can’t replay what they saw, nor show it to someone else. Test validity also features strongly when considering assessment design and integrity. For example, online design features can enable more students to demonstrate their skills and knowledge, but you need to be careful that additional security measures do not then curtail this ability, thereby undermining validity.
Are You Making the Best Use of Technology to Deliver Your Pedagogical Goals?
The ethos of putting pedagogy before technology is at the core of our beliefs at Inspera, and we know, yours as educators. The plea to ‘build digital’ is not to build your assessments from a menu of tech tools, but instead to free up what you want to achieve with an assessment from the current process and constraints within which you deliver it.
To take just a few examples: reading time, scrap pieces of rough paper, or writing out a table of figures that, outside of a pen-and-paper exam, would be created digitally. All of these might make perfect sense in an in-person and/or paper-driven world. But what if you weren’t constrained by working on paper or in person? Could your assessments then be designed to more authentically replicate real-world experience, using the digital capabilities that are common in academic and professional fields?
How Do You Address Privacy Concerns in Remote Proctoring?
Remote proctoring is no different from any other technology: deployed incorrectly, inappropriately, or without a proper understanding of how it operates, the effect can be damaging. At the heart of the technology is the ability to record the screen, camera and microphone of a student’s computer. That by itself may make it inappropriate for your context if you don’t have explicit permission from your students to capture and store personal data of this nature.
Where, however, you do have that permission, ensuring that you and your proctoring provider have strict access protocols and an archiving and deletion policy that matches your own allows you to decide the proportionality of capturing data. Student sentiment and understanding are also key here.
Our experience working with institutions worldwide, delivering a mixture of in-person and remote assessments, remote proctored, in-person proctored and not, summative and formative, reveals the greatest student buy-in where students are given a choice of how their exam is invigilated. For those whose circumstances or preferences make it more convenient and comfortable, remote proctoring allows them to take an exam at a suitable location of their choice.
When Should Human Oversight Be Involved in Remote Proctoring?
While remote proctoring solutions can offer valuable insights into student behavior during assessments, it’s vital to remember that the technology functions only to record behavior, not as a definitive judge. Educators and administrators, with their extensive experience of the nuances of student behavior in the context of an institution and a specific assessment, are the only ones who should judge what does or does not constitute academic misconduct.
Proctoring software can go one step further than individual flags by detecting patterns, or combinations of flags, to provide deeper insights. Once again, these serve not as a decision but as greater assistance to the human making one. Only a human can contextualize the information.
What Are the Key Behaviors to Flag in Remote Proctoring?
A great range of flags does not by itself equate to better data on the behavior of your students. The temptation to switch on every flag at the lowest tolerance because that will flag everything has the downside of the very same: it will flag everything. Unless you have the expectation that your students will sit almost statue-like during an exam, moving as few muscles as possible to type their answers, you will be inundated with flags when you review a recording.
A mature approach to proctoring is to understand what flags do individually and in combination with each other, blended with an understanding of both the normal and potentially discrepant behavior of your students. A discussion with a proctoring provider will help you understand the likely output of your exam and how different combinations will affect it. Consider, for example, that a flag for a student looking away from the screen will be repeatedly triggered if you allow students to use authorized materials during an exam. This leaves you with a choice of whether to use the flag at all or to alter its sensitivity.
For high-stakes exams, where academic integrity is paramount, flags related to potential behavioral anomalies are critical. These can indicate unauthorized assistance. However, they should be interpreted carefully within the context of the exam format, timing, and student needs. For example, students with certain needs may trigger more facial movement flags, necessitating an understanding of each learner’s background before acting on such data.
Meanwhile, if the assessment is open-book or involves collaborative learning, technical flags that signal unusual internet activity or non-permitted software usage may be more pertinent than strict behavioral flags. Such flags help ensure that students access resources within approved guidelines, without being unduly judged for natural body language.
Consider also that high flag counts can sometimes indicate a need for adjustments in exam design rather than academic misconduct. Excessive flags may suggest that instructions aren’t clear or that the environment isn’t conducive to being proctored. Ultimately, each flag should serve as a prompt for deeper investigation rather than an immediate assumption of misconduct.
What Should Students Know About Remote Proctoring?
How much of this should be revealed to students? In our view, all of it. Being open and transparent with students as to how remote proctoring technology works, what it flags and who will review recordings, increases confidence and acceptance. Removing the mystery enables students to focus on their assessment.
Putting Your Answers to These Questions into Practice
In considering the use of remote proctoring, educators must evaluate several critical facets to ensure that it is appropriate for their educational context. Thoughtful assessment design is foundational, as it determines not only what students are examined on but also how they interact with a digital platform. Understanding the purpose behind using remote proctoring, whether it’s to uphold academic integrity, support a fair testing environment, or ensure access to secure assessments across varied locations, is essential in making proctoring a supportive, rather than punitive, tool.
Human oversight and transparency are non-negotiable. By incorporating human decision-making as a cornerstone of the process, proctoring can remain sensitive to each student’s individual circumstances. Students should be well-informed about how remote proctoring works, the safeguards in place for their data, and how any flagged behaviors will be reviewed.
By balancing these elements, educators can create a proctoring environment that respects student privacy, enhances fairness, and ultimately supports an equitable academic experience.
Ishan Kolhatkar, Global Client Evangelist. Former Deputy Dean of Learning & Teaching
—
Learn how Inspera Proctoring helps educators deliver exams with the right level of integrity.