Generative AI – systems that almost anyone can access to create, review and rework content – is rapidly shaping up to offer a generational leap across a wide range of industries and disciplines, and it has a growing role in education.

However, the gates are also open to potential misuse of AI in education, and this sensitive subject must be handled with care if the technology is to be adopted successfully. AI has huge potential in educational settings, but it also carries serious risks. Barrett & Pack (2023) found concerns over GenAI’s potential academic applications, with educators and scholars disagreeing on what counts as acceptable use. Their study highlights the need for evidence-based guidelines for integrating GenAI into educational contexts, so that students and educators are on the same page and adoption can succeed.

This article is intended as a guide to the different ways AI can be misused in real-world scenarios – helping educators, school leaders and policymakers understand what should be considered misuse, and helping students understand the implications of using generative AI incorrectly.

We’ve identified six broad ways in which AI could be misused across the educational sector, along with seven practical applications of AI that would be considered unethical. The distinction matters, because AI absolutely can be used in education and incorporated into assignments and papers – provided it’s done in a fair and reasonable way.

 

Defining AI Misuse in Education

It’s important to first take a step back and look at the broad ways AI could be misused in education. While many will instantly gravitate to one definition, there are several ways in which AI could do more harm than good if it is not implemented and regulated correctly.

Understandably, plagiarism is one of the most prevalent educational concerns around generative AI. The potential for students to use AI systems to create essays and papers they have not authored, plagiarizing existing works or passing off AI-generated content as their own, is rightly at the forefront of thinking about how AI can be properly managed in educational contexts.

However, educators must be careful not to fall into the trap of tackling that issue alone, as any of these other concerns could be equally damaging to a student or an institution’s reputation. Alongside plagiarism, the question of AI misuse also brings into focus other challenges for educators, such as:

  • Equity
  • Privacy Concerns
  • Algorithmic Discrimination
  • Efficacy
  • Authorship
  • Appropriate Use

 Equity

Generative AI is widely available but not universally so, particularly as many of the most advanced systems (or at least the most advanced models within a system) sit behind a paid subscription. Students from disadvantaged backgrounds may therefore not have equal access to AI tools, which could widen the gap between privileged and marginalized students. AI tools are also constantly learning and adapting – if algorithms are trained on biased data, they can perpetuate and even amplify existing social inequalities.

 

 Privacy Concerns

Educators can use AI in a number of ways to benefit students, tailoring and customizing curricular resources to best fit their needs. In doing so, however, the AI systems need data on the students, which could include a range of demographic information. A report from the Office of Educational Technology highlights this concern – educators must consider what happens to the data collected by AI, and whether its handling complies with the stringent privacy laws in place. If AI is to be used in decisions that best represent students, that use must be balanced with respect for student privacy.

 

 Algorithmic Discrimination

Linked to the prior point, any AI system that collects student data must be heavily regulated, not just over privacy concerns but also because of the potential for algorithmic discrimination. Human oversight of any AI system is crucial to catch biases within the data based on factors such as race, gender or socio-economic status. Without that oversight, biases may go undetected and produce inequitable outcomes, compounding the equity problems described above.

 

 Efficacy

At the broadest level, there are concerns about the efficacy of generative AI systems, even as they continue to grow in sophistication. The unchecked use of AI in an educational context could cause severe reputational harm if the system cannot produce accurate, effective work, so any AI system in use needs ongoing critical evaluation. It’s also crucial that AI systems are used in conjunction with non-AI methods – relying solely on AI tools, without factoring in their inherent limitations, can be counter-productive.

 

 Authorship

Related to the plagiarism issue, the use of generative AI within education raises questions around authorship and the rightful ownership of AI-generated works. Students and educators must be clear on the proper citation of any content created using AI, to ensure that work is not incorrectly credited. If institutions are not clear on this, their reputation is at risk should published papers or research projects rely on uncredited AI.

 

 Appropriate Use

Ultimately, educators and policymakers are responsible for setting guidelines and ethical frameworks governing the AI technologies that can be deployed in their institutions. By making clear what is considered appropriate use of AI, educators can show they are prepared to embrace it in ways that enhance the student experience and development, rather than hindering it or creating an unlevel playing field. These guidelines can help students use AI responsibly, treating it as a driver of innovation rather than a shortcut, and ensure that AI doesn’t exacerbate existing inequities.

 

Practical Examples of AI Misuse in Education

Looking a step deeper than these broad categories of misuse, we can identify more practical examples of how AI tools can be used in education in ways that would be considered unethical or improper.

The Artificial Intelligence Index Report 2023 by Stanford University highlights that the unethical use of AI is rising rapidly, driven both by the increased use of AI technologies and by growing awareness of the ways in which they can be misused.

In the educational field, generative AI is already causing concern for many educators because of the various ways in which the technology can cause harm:

  1. Plagiarism of written content
  2. Artificial artwork
  3. Computational AI
  4. Translation
  5. Bypassing plagiarism software
  6. Use in unsupervised assessments
  7. Programming

 

1. Plagiarism of written content

This is likely the most obvious and prevalent of the practical misuses of AI in education: it is extremely easy to copy or paraphrase existing works without citation when using AI as a research tool. Students who task an AI with writing all or part of an essay or paper will likely be committing plagiarism as, even if the work is not directly copied and pasted, it will be a rewritten version of existing content that is not properly referenced.

2. Artificial Artwork

AI is already a powerful image creator and is rapidly developing its ability to generate convincing fake video content. Creative students can use AI to generate work rather than making it themselves, which prevents them from properly showcasing their creativity and developing their own creative thinking skills.

3. Computational AI

AI can be used to solve complex mathematical problems, but students who rely on it to do this work for them either fail to properly understand the problem-solving process or grow complacent in their understanding, fall out of practice and forget how to apply the formulae themselves.

4. Translation

Machine translation is improving, but it is far from flawless. Students using AI to translate a text may be relying on an inaccurate rendering, while students who are actively studying translation fail to properly develop their own language understanding.

5. Bypassing Plagiarism Software

Plagiarism detection software does exist, and students are aware of it – so they may use AI to try to disguise content theft by asking the tool to rewrite copied material for them.

6. Use in Unsupervised Assessments

Students may be given unsupervised assessments designed to challenge them to complete a project independently – without that supervision, students could use AI to complete some or all of the task, answering questions on their behalf.

7. Programming

Certain AI tools are capable of generating programming code, and while the results are impressive (and AI certainly has a future in the IT development industry), relying on them could stymie students’ ability to learn and develop their own coding solutions.

 

Exceptions and Examples of Beneficial AI Use in Education

However, there are exceptions that highlight how AI could be used in ways that benefit students. Broadly, the educational sector is still in the early stages of exploring these, but some practical uses have already emerged:

  • Editing
  • Structure and Research Assistance
  • AI Prompting

 

 Editing

While it’s important that students can write to a high standard and are capable of proofreading their own work, the use of AI as a self-editing tool could be encouraged to improve efficiency and accuracy. The caveats are that the student is not specifically being educated in a language-based role, that the editing is not transformative, and that the work remains close to the original rather than becoming AI-generated content.

 

 Structure and Research Assistance

Students can use AI as a platform to kick-start a project, getting help identifying potential research topics and putting a structure in place for their essay or paper. Provided students are given clear guidelines on what is appropriate and where the limits are drawn, and they then use this framework to carry out the work themselves, AI in this instance could enhance the quality of a student’s own work and save them time, without giving them an unfair advantage.

 

 AI Prompting

Because AI has become so widely used, certain academic fields may embrace it more readily, and there may be educational tasks that involve designing AI prompts. In those cases, it would be entirely appropriate for a student to use AI to learn how to prompt it more effectively and get the desired results.

 

The Impact of AI on Student Learning

While AI does have a number of potential benefits, when misused it can harm students and have a negative impact on their learning and development. Students who rely on these forms of AI misuse may feel they are gaining benefits in the short term without seeing the potential long-term impact.

Without AI, students develop a core set of skills, including critical thinking, problem solving and creativity. By using an artificial intelligence tool to carry out these tasks, students miss the chance to develop those skills themselves. AI is built on algorithms, so while it can ‘learn’, it won’t develop the same innovative approaches to problems that a properly trained human mind can, limiting not just the student’s potential but also their capacity to lead change in their industry.

Students could also lose other core skills necessary for their own academic achievement. By relying on the shortcuts AI offers, they may show reduced attention span and concentration as they become used to the instantaneous, easy answers that AI can often provide. Memory formation could also be affected as students feel less pressure to commit complex subjects to memory, relying instead on AI tools to store and retrieve any information they need.

 

Preventing the Misuse of AI in Education

Educators need to make sure that students understand the negative impacts of misusing AI, alongside its benefits. Educating students on how AI misuse could hinder their intellectual development is the best way to encourage them to embrace the technology ethically and responsibly within the guidelines set.

As such, it is crucial to underline the importance of originality in students’ work, and there are several ways that educators can reinforce this:

  • Oral Presentations
  • Project-Based Learning Assessments
  • Plagiarism detection and AI detection tools

 

 Oral Presentations

Asking students to give oral presentations on their work, either standalone or alongside a written submission, ensures they can demonstrate understanding of the work and that they have committed it to memory, while also developing presentation skills that may prove essential in their careers.

 

 Project-Based Learning Assessments

Using project-based learning assignments that involve a mix of skill sets and ensure that students aren’t relying on simple essay-writing AI tools.

 

Plagiarism Detection and AI Detection Tools

Using plagiarism detection and AI detection tools to scan students’ work, although it’s vital to understand the difference between similarity indexes (which only provide simple numerical comparisons between like-for-like texts) and plagiarism, which can take other forms such as rewritten or translated work.

 

 

That last point is key. It’s vital to move beyond traditional originality scores and AI-detector percentages and to use a tool that is more holistic in its approach, one that can capture all forms of plagiarism.
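To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the kind of like-for-like comparison behind a basic similarity index – it is not how Inspera Originality or any particular detector works. A verbatim copy scores 1.0, while a paraphrase of the same passage scores close to zero even though it is still unoriginal, which is exactly why a similarity number on its own cannot capture rewritten or translated plagiarism.

```python
# Illustrative only: a naive word-trigram Jaccard similarity, the kind of
# like-for-like numerical comparison a basic similarity index performs.
# Not the method used by Inspera Originality or any specific detection tool.

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_index(a, b, n=3):
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)  # share of n-grams the two texts have in common

source = ("The industrial revolution transformed agrarian economies into "
          "centres of mass manufacturing during the nineteenth century.")
copied = source  # verbatim copy
paraphrased = ("During the 1800s, mass manufacturing hubs grew out of "
               "economies that had previously been based on farming.")

print(similarity_index(source, copied))       # 1.0 - flagged by the index
print(similarity_index(source, paraphrased))  # 0.0 - missed by the index, yet still unoriginal
```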

Setting more varied projects that require students to demonstrate their understanding of the subject matter is the best way to ensure their development across core academic skills. Then, using AI authorship detection and plagiarism detection software like Inspera Originality helps maintain academic integrity, so that any published papers or works can be assured of their originality.

 

Leveraging AI for Academic Integrity: Not All Evils Come To Harm

It’s true that AI poses many risks to academic integrity, but there is also great scope for AI to help preserve and maintain academic integrity, while also promoting fairness and inclusion within the educational sector.

We’re still in the early stages of understanding all of AI’s potential applications, but we’re already seeing some of the benefits. It can help level the playing field for students from disadvantaged backgrounds who may be unable to access many traditional learning resources in person – provided they are clear on acceptable ways to use that research. Setting clear boundaries on AI in educational assessment, and on what’s permitted, is crucial.

It can also expand the scope of research through multilingual textual analysis, giving students access to even more information from publications in a wealth of different languages.

AI proctoring is also a developing field, with the potential for online assessments to be better regulated to ensure the fairness and integrity of all testing results.

 

Final Thoughts

Provided care is given to its application, and clear and transparent guidelines are set, AI certainly has a big future in helping educators and students to innovate and push boundaries within their respective fields, without compromising on quality.

There are clear examples of how AI could be misused, spanning both broad categories of misuse and more focused real-world examples.

The challenge for educators is to be aware of these and to look at setting rules that students understand and agree to.

Looking to ban ChatGPT and other generative AI tools outright is the wrong approach. If students use artificial intelligence to help them complete assignments in the right, ethical ways, then it can be a transformative tool that could make future innovations even easier to achieve.

Written by Jo Bowden

March 11, 2024