
Much of the discussion around generative AI in education and assessment has rushed to consider how candidates can be prevented from misusing it. As we draw breath to examine the rapid pace of change and the possibilities generative AI unlocks, let’s consider how it can have a positive impact on assessment, both for candidates and educators.

While artificial intelligence did not begin with generative AI tools, their prominence over the last year has certainly brought them to the fore when educators consider the impact of AI on assessing students. The concern that students will ‘outsource’ their assessment to AI has perhaps supplanted so-called ‘contract cheating’ and simple plagiarism as the burning issue wherever students have access to a device.

While the concern is real, where does that leave educators when considering assessment authenticity? Hands up if you’ve used generative AI. Keep your hand up if you’ve used it to help you with a real-world task rather than asking it to write you a rap about the value of student feedback. (I have, and no, I won’t be publishing the video of me rapping). If you haven’t used generative AI, then I’d encourage you to do so to see what it is and, just as importantly, isn’t capable of doing.

 

How Can AI Help Assessment as an Enthusiastic Assistant?

Authentic assessment

A parallel can be drawn with the emergence of search engines more than two decades ago. They didn’t by themselves make us smarter, but they did give us quicker access to information. The human skill lies in sifting through that information, choosing the right sources and rejecting what is irrelevant, not merely in using the tools.

Generative AI goes one step further. Yes, it still works hard and quickly: a well-crafted prompt will return results in seconds. Its accuracy can be hit and miss, depending partly on your prompt, partly on what it knows and partly on its inherent ability to process data. It’s worth remembering that it will only get better. But there is still a great deal of human skill involved in ensuring that you don’t fall into the ‘rubbish in, rubbish out’ trap.

 

The Rise in Use of Generative AI

A cursory search of the internet will show you hundreds of articles and videos explaining how to get the most out of generative AI. It’s being incorporated into work and social tools at a furious rate. Before we know it, generative AI will become second nature to all of us, and our skills will improve in generating prompts, interpreting results and honing outputs to make this enthusiastic assistant work harder and smarter. Its Q4 performance review in 2024 will look significantly different to this Q4’s. If all of us are using generative AI, perhaps without realising it as it becomes deeply embedded in applications, then authenticity in assessment should trend the same way.

 

Authentic Assessment Might Include Generative AI

At the outset, I acknowledge that some assessments should absolutely not include generative AI. A diagnostic test to determine a candidate’s knowledge, skills and understanding, unless it specifically targets their use of generative AI, should exclude the tool. However, in writing an authentic assessment, the question to consider is ‘why are you asking a candidate to summarise, analyse or synthesise the information available to them?’.

If they need to be able to do that unaided to meet a learning outcome, then your assessment needs to ensure integrity and be conducted in an environment that excludes the ability to use generative AI, because its availability, speed and temptation are likely to be too great to ignore.

That does not mean a return to paper. An assessment taken on site with a lock-down browser, like Inspera’s Exam Portal, will give you all the benefits of digital assessment, with rich question types, simple delivery and the ability to give students detailed written and audio feedback, but in an environment where they can only access their assessment.

But consider for a moment whether their skill might be better demonstrated by summarising, analysing or synthesising information that includes not just primary sources but also the product of generative AI. Can they better summarise the summary? Is it authentic and relevant to show how they can combine information generated by AI with their own research to demonstrate analysis across sources? If so, permissive, referenced use of generative AI might lead to better outcomes.

 

AI Policy Considerations and Digital Environments

Policy

Candidates and educators appreciate clarity, which is difficult to provide when rapidly improving generative AI capabilities make this a moving target. As a starting point, institutions may wish to consider three options:

• Generative AI Freely Permitted

This may be appropriate in two circumstances: either generative AI is incapable of answering the question, or you actively want candidates to use it in their research to improve their submission. A digital environment, rather than paper, is where a candidate would naturally use generative AI and other tools. While you may be doing that already, our flexible digital assessment ecosystem doesn’t constrain you to just coursework or take-home assignments. You could run a timed exam with candidates typing directly into the platform, or using real-world tools and uploading their final submission. Our generative AI detection capabilities will still allow you to check its use.

• Generative AI Permitted with Attribution

You may wish to permit generative AI but treat it as any other attributable source. Once again, the full suite of assessment types in Inspera can be used in conjunction with our similarity and AI detection solutions.

• Generative AI Prohibited

Where use of generative AI would be counterproductive to the aims of your assessment, you can prohibit the use of AI without having to return to the labour-intensive complexities of pen and paper. Inspera allows you to run your assessment in your context and then layer in levels of integrity as appropriate. This can include a lock-down browser, proctoring (from screen-only to audio/visual monitoring) and similarity and AI detection capabilities. Importantly, AI isn’t deciding what is and isn’t permitted; it is there to assist you in deciding.

 

Reviewing Your Assessment Questions and How You Generate Them

Questions

As a final thought, two further considerations arise when thinking about how generative AI could fit within your assessment practice. Both relate to the questions themselves.

First, do your questions fit with the tools available to your candidates? Should they? Do your learning outcomes need to be reviewed to consider how your assessments can be made more authentic? That won’t be the case for every outcome on every module, but a review will help identify the modules and programmes that could benefit from enhancement.

The second is to consider who writes the questions. Generative AI won’t replace carefully crafted questions from expert educators, but it can help with more run-of-the-mill tasks. Given the right prompt, it can create a set of questions far more quickly, from a simple bank of questions within a syllabus area for formative assessments to a factual matrix underpinning an educator’s carefully crafted question for students to demonstrate their mastery of a learning outcome.

As we continue to explore generative AI in the education sector, we know educators would appreciate assistance. We want to ensure that the machine helps the human to the level of sophistication required, no more, no less.

Ishan Kolhatkar is Global Client Evangelist at Inspera. Prior to working here, he was a Principal Lecturer in Law, Deputy Dean of Learning and Teaching and Director of Group Education Technology.

 

Written by Ishan Kolhatkar

August 4, 2023