
For about a decade, I went everywhere with a red, green and blue pen. That’s right, I was ready to mark at a moment’s notice. If there was a paper, I was ready to mark it and give feedback. Which pen I needed, red, green or blue, depended on whether I was a first marker, a second marker or a moderator. I can’t write with a biro, so it took some time to find the perfect disposable rollerball pen, and when I did, I bought a box of each colour (two boxes of red, in fact). Along the way I learned to fit comments into a margin and to use the correct annotations: the circle, the square and other symbols that have begun to fade from my mind.
I’m therefore no stranger to the challenge of getting academics more engaged with digital assessment tools, online assessment platforms and technology in general. When you are time poor and used to a process, it’s easy to stick with what you know. Digital tools, for assessment and otherwise, are now mature enough that the vast majority of academics are aware of their general benefits. The pointed question is ‘Why change?’ and, more specifically, ‘Why should I change?’.
This post will explore why academic staff might be reluctant to change and how institutions can find a path towards greater buy-in and adoption.
Why Do Academics Resist Digital Assessment Tools?
Before deciding how to answer ‘why change?’, it’s important to consider the individual points of reluctance, along with the combined effect of several of them, mindful that the combination may differ between academics, subjects and faculties.
- Workload and Time Pressure – Perception that digital assessment tools add complexity and require significant setup time.
- Lack of Confidence and Training – Discomfort with technology and insufficient support for both technical and pedagogical adaptation.
- Fear of Change – Deep-rooted traditions and resistance to altering long-established assessment methods.
- Loss of Academic Autonomy or Judgement – Concerns that automation may erode expert discretion and nuance in assessment.
- Trust and Reliability in Digital Platforms – Scepticism about new tools due to past experiences or the comfort of familiar systems.
- Assumptions About Functionality – Frustration with usability or interface design based on mismatched expectations.
- Insufficient Incentives or Recognition – Lack of institutional rewards or motivation to adopt new technologies.
These challenges are explored in more detail below.
1. Workload and Time Pressure:
One of the biggest barriers is the perception (and sometimes reality) that moving assessments from pen and paper to digital, or from one digital platform to another, will increase workload – at least initially. Designing assessments in a new system, learning the interface, setting up question banks – all of this takes time. Many educators already feel time poor, so an unfamiliar platform can look like an extra burden.
2. Lack of Confidence and Training:
A lack of confidence in using new technology, or insufficient training, can make a digital platform (or a move to a new one) seem intimidating. Without appropriate support, academics are likely to stick with what they know rather than risk something going wrong. This goes beyond technical skills: the same is true for adapting, enhancing or innovating assessment from a pedagogical perspective.
3. Fear of Change:
Academia is steeped in tradition. Many educators have deeply ingrained beliefs about what ‘good assessment’ looks like in their subject. A move to digital can challenge those beliefs. Altering the delivery of an assessment that has existed in the same format for a couple of decades can be nerve-racking and bring about a fear of unknown unknowns.
4. Loss of Academic Autonomy or Judgement:
Underlying personal attitudes toward teaching and technology also matter. Concerns include that automated marking diminishes an academic’s role as the expert, or that digital tools undermine academic rigour. There can also be philosophical resistance: a sense that education should happen face-to-face, or on paper, and that digital tools are impersonal.
These beliefs won’t change overnight. Faculty may fear that digital = standardisation, templating or a loss of control over their assessments, and that system settings or an algorithm would override their nuanced decisions. This ties into a larger concern about whether digital assessment might become too ‘mechanistic’. Academics take pride in their expert judgement: an educator can gauge nuance in an essay or give partial credit for an unexpected but insightful approach to a problem.
5. Trust and Reliability in Digital Platforms:
Paper and long-standing digital platforms have an incumbency advantage. Neither is without its issues, but when compared with a new platform, there can be a tendency to downplay past problems and be more cautious about something new. All methods, analogue or digital, have their advantages and disadvantages, and all need to be carefully managed to get the most out of them. Ensuring that they are fairly evaluated is, however, key.
6. Assumptions About Functionality:
If a digital platform requires multiple logins or double entry of data, or doesn’t work in the way an academic assumed it would, that frustration can be projected onto potential future platforms.
7. Insufficient Incentives or Recognition:
Finally, we shouldn’t ignore human nature. University teachers are extremely busy, and like anyone, they respond to incentives. If technology adoption in education is seen as thankless extra work with no recognition, it’s a hard sell. Often, early adopters of innovative teaching practices do so out of personal passion. But to move the middle majority of academics, institutions may need to acknowledge and reward the effort.

Strategies to Increase Academic Engagement with Digital Assessment Tools
Encouraging academics to adopt a new platform or approach requires a multi-pronged strategy. It’s part change management, part professional development, part messaging and part support. Here are strategic and tactical methods you can employ to drive engagement and adoption:
- Provide clear vision and leadership support to align digital assessment with institutional goals and values.
- Run a pilot that suits your institution, involving both early adopters and sceptics to gather useful insights.
- Provide alternatives that meet the goal, easing the transition with flexible approaches that support busy academics.
- Invest in user-friendly design by choosing and configuring platforms that are intuitive and accessible.
- Empower academics through targeted training and support that’s timed for actual use and focused on pedagogical outcomes.
- Align digital assessment with academic values and pedagogy to show how it enhances educational quality.
- Recognise and reward engagement through institutional incentives, publishing opportunities, and performance criteria.
- Review and update policies and processes to support digital workflows and remove structural barriers.
Let’s explore each strategy in more depth.
1. Provide Clear Vision and Leadership Support
Change in universities often starts at the leadership level. Academics need to understand why the shift to digital assessment is worth their effort. A clearly articulated, transparent vision for digital assessment sets out the value to students, staff and the institution as a whole. The bigger picture answers the question ‘Why am I doing this particular task?’. A task’s effect may not be immediately apparent, or may seem trivial, but placed in the context of what the institution is trying to achieve, its need becomes clear. Emphasising how this aligns with the university’s learning, teaching and assessment strategy, its digital strategy, or both, elevates it from the implementation of technology to its real place at the heart of what institutions do. Clarifying the ‘why’ also prevents misunderstanding: digital assessment platforms provide the ability to deliver richer feedback, innovative questions and authentic assessment. It’s about showing the benefits of digital assessment tools for academics and adding capabilities and options, not replacing them. A clear strategy also provides an inflection point to consider the assessment types you use, either to validate their continuation or to reflect on alternatives; for example, the appropriate use of exams as an assessment instrument.
Achievable targets for how many assessments, or which schools or programmes, will move to a particular method or platform of delivery, supported with resources, keep the community focused on an end goal. Listening and responding flexibly to faculty concerns, be they technical, pedagogical or human, makes all the difference. Having champions of change across the institution and at all levels of seniority legitimises that change. Making modern digital assessment a recognised part of institutional strategy, something that is collectively being worked towards, makes it the norm. In Australia and New Zealand, institutions have long seen the virtues of digital assessment as part of their online and blended learning provision.
2. Run a Pilot that Suits Your Institution
Piloting new software is commonplace. The configuration of the pilot, however, can make all the difference to the velocity and success of a rollout. The temptation can be to create a very small pilot, with one eye on the amount of change you wish to make at the start. The trade-off is that moving to the next stage becomes a bigger jump than if the pilot had encompassed a greater proportion of the institution. Will the pilot provide you with sufficient data to make the next decision? Does your pilot give you the right insights into your academic and student population to roll out at scale? A programme that only contains full-time academics might not give you the right perspective on how part-time or temporary staff will interact with a platform.
One effective tactic is to pilot the digital assessment platform with willing enthusiasts before a full rollout. Identify ‘change positive’ academics or departments who are open to trying the platform in their courses. Sustain their enthusiasm with the resources to carry the message to others: success stories and proofs of concept that demonstrate to the wider institution the benefits that can be realised.
But they can’t and shouldn’t be the only people you engage in a pilot. The tech-sceptical and change-sceptical need to be part of that initial engagement. Even if they don’t use the platform at the start, understanding their concerns and fears pays dividends. Those concerns can be fed into the work of the ‘change positive’ group, either to be addressed during the pilot or to identify what needs to be tackled in stage two.
3. Provide Alternatives that Meet the Goal
If one of the goals is to get more academics to grade submissions on screen so that students have easily accessible, rich feedback, separate the goal from the route to it. Rich, accessible feedback is the goal; on-screen grading is the method. As well as training and explaining the benefits of digital assessment tools, consider alternative routes that meet the goal. Allowing academics to print papers for a period of time, so they have both paper and on-screen versions to aid the transition, can ease the journey to the goal. Making the path to change easier and gentler can make all the difference across a busy workload.
4. Invest in User-Friendly Design
If you want academics to use a digital platform, make it as easy as possible for them to do so. This involves both selecting the right platform and configuring it smartly. A digital assessment platform needs to be intuitive, providing a clean interface, clear options and minimal clicks for common tasks. Creating resources that cater both for those who want to get the job done and for those who want to innovate, two separate constituencies, widens the pool of those enabled and engaged in the platform. In the University of Bath’s initial rollout of Inspera, they consciously limited question types in the first phase to avoid overwhelming staff. As staff become comfortable, more advanced features can be introduced gradually.
5. Empower Academics Through Targeted Training and Support
Training is absolutely crucial in getting academics comfortable with a new assessment platform. A one-off workshop at launch isn’t enough. Investing in training tied to strategy and pedagogical outcomes pays dividends compared to just mechanical ‘click-by-click’ training. It’s helpful to frame training in terms of problems: ‘How to provide rich feedback’ allows you to demonstrate rubrics, in-line comments and audio clips in one platform rather than requiring three and a way of tying it all together. ‘How to ensure academic integrity’ is an opportunity to explain individual capabilities and how they work together.
Consider the timing of training: schedule it for when faculty actually need to use the platform. A session in the summer might be forgotten by the time exam season arrives. Instead, offer refreshers, on-demand clinics and accessible online resources to coincide with assessment periods. Keep the resources (videos, guides) easily accessible online, and remind staff they exist. As new features or best practices emerge, update the training materials. Continuous upskilling should be part of the culture.
When academics feel competent and supported in using the tool, their fear diminishes and they can even become advocates. Working with your vendor to ensure that the right people in the organisation have access to release notes, test environments and the time to try out new capabilities in your context has an outsized impact on your institution.
6. Align with Academic Values and Pedagogy
One major key to winning academics over is to show that digital assessment platforms can enhance (not diminish) educational quality. This means framing a platform not just as an efficiency tool for the university, but as a pedagogical asset for educators. Many academics worry that digital = shallow, or digital = standardised testing. Dispel this by highlighting how the platform can support what they value: high-quality, authentic, fair assessment.
Start by identifying what values or outcomes matter to your academic staff. Common ones are: academic integrity, authenticity, quality of feedback, supporting student learning, and inclusivity. By aligning the platform’s use with core academic values, you transform it from a top-down requirement into a tool that faculty can use to serve their educational goals.
Academic Integrity: Demonstrate how digital platforms can reduce misconduct or cheating through features like question randomisation, similarity detection, or secure browsers. If exam and content security is a concern, explaining how remote exams supported by proctoring and a locked-down browser, or in-person exams in a locked-down environment, work in practice helps educators find the right delivery option for their students and context. A screen recording of an exam provides evidence for both educator and student, from an integrity perspective and in support of any technical difficulties encountered. More proactively, fostering originality in higher education shifts the mindset from the historical notion of catching cheaters to something more integral to the learning process.
Authenticity: Digital assessment platforms enable complex tasks that couldn’t be done on paper and provide the ability to ask randomised questions, which isn’t possible with a single question paper. They also allow educators to break down summative exams into tasks that echo the real-world work their students will face on graduation.
Feedback and Assessment for Learning: One huge advantage of digital is speedier feedback and richer feedback modalities. In-line comments, explanations and cohort-level feedback, along with voice notes and annotated rubrics, can be made available on a scheduled date, removing the administration of emailing or uploading to various places. Where automatically marked questions are appropriate, instant feedback can be provided; when paired with multiple attempts, this unlocks a powerful formative capability that supports assessment for learning.
Inclusivity and Accessibility: Digital assessment platforms with in-built accessibility tools and compatibility with external tools make it possible to produce one assessment that can be used by a wider range of students. At the same time, it’s important to address the digital divide and have a strategy for ensuring parity of access to assessment.
7. Recognise and Reward Engagement
Creating an incentive to use and enhance digital assessment drives use, adoption and enthusiasm.
Recognition in Performance Reviews or Promotion Criteria: Recognising digital innovation and where appropriate, teaching innovation through the use of digital assessment tools, as part of promotion criteria, provides a tangible reason for engagement.
Publish and Present: Encourage and support academics to publish case studies or pedagogical research on their digital assessment experiences in journals and at conferences. Not only does this contribute to the scholarship of teaching and learning, it also gives faculty members recognised outputs from their innovation.
8. Address Policies and Processes
Sometimes academic reluctance isn’t just about them and the platform; it’s about the surrounding policies and admin processes. To truly embed digital assessment, institutions should review and update their policies, procedures, and support structures to align with the new mode. For example, do your exam regulations allow for online exams? Are there clear protocols for what happens if a student’s internet drops, or if a student prefers handwriting for accessibility reasons? Having well-defined policies can increase staff confidence.
Also consider logistics: if your institution still runs big in-person exams but on computers, ensure the physical setup is adequate, e.g. computer labs or a bring-your-own-device policy with adequate power and WiFi. An academic’s enthusiasm for using a platform will be linked to their natural care for their students and knowing they have been properly accommodated.
Continuously iterate and involve academics in decision making about the platform. Form an advisory group of faculty from different disciplines to gather input on how things are going and what could be improved at the policy or support level. This empowers academics to shape the environment, rather than feeling things are imposed. It could be as simple as regular feedback surveys after each exam period, or as structured as a committee that reviews the impact of digital assessment each semester and makes recommendations. When academics see their feedback leading to concrete improvements, their trust in the system grows, and so does their willingness to engage.
Success Stories from Around the Globe
Read how the University of Auckland transformed their digital assessment practice, how Newcastle University fostered creativity and innovation with digital assessment and how the University of London successfully rolled out large scale exams across the globe.
Found This Post Useful?

Check out the white paper co-authored by this post’s author, Ishan Kolhatkar. The white paper is called Digital Transformation in Higher Education Assessment and is free to download.