In a research study that highlights the impact of artificial intelligence on education, a groundbreaking team at the University of Reading has discovered that AI-generated answers to exam questions tend to slip under the radar and, on top of that, often score better than the answers from actual students. This discovery has initiated a call for action from education sectors around the globe to immediately address the use of AI in academic exams.
In the study, published in PLOS ONE, the researchers conducted a blind test in which answers generated by ChatGPT were submitted for various undergraduate psychology modules. To everyone's surprise, these AI-generated answers went unnoticed 94% of the time and, on average, scored higher than submissions from actual students. The study is one of the largest and most robust of its kind, and it sets a challenge for educators trying to discern whether content submitted by their students is AI-generated or not.
The professors who led the research at Reading's School of Psychology and Clinical Language Sciences, Associate Professor Peter Scarfe and Professor Etienne Roesch, highlighted the need for immediate attention from educators worldwide to the risks AI poses. Professor Scarfe emphasized that most institutions have shifted away from traditional in-person exams to make assessments more inclusive, and that this shift demands a deeper understanding of how AI might affect the integrity of educational evaluations.
“We won’t necessarily go back fully to hand-written exams, but the global education sector will need to evolve in the face of AI,” Scarfe said. Roesch added that education sectors should reach an agreement on how students should utilize and recognize AI in their work to prevent a possible crisis of distrust in society.
The implications of this study are alarming, especially considering a recent UNESCO survey that revealed fewer than 10% of 450 surveyed schools and universities had policies regarding generative AI. This unpreparedness could pave the way for widespread academic dishonesty, as students might exploit AI to cheat without detection, securing better grades than peers who do not cheat.
Elizabeth McCrum, Pro-Vice-Chancellor for Education and Student Experience at the University of Reading, stressed the transformative power of AI in education. She advocates moving beyond outdated methods of assessment and adopting new approaches that align with workplace skills, including proficiency in using AI.
“At Reading, we have undertaken a huge program of work to consider all aspects of our teaching, including making greater use of technology to enhance student experience and boost graduate employability skills,” McCrum said. She expressed optimism that Reading’s thorough review of its curriculum positions the university well to guide students through and benefit from rapid advancements in AI.
This research serves as a wake-up call to the education sector, prompting institutions to establish rules and guidance on incorporating AI. As AI advances, the difficulty lies in maintaining academic honesty while leveraging the technology to improve teaching and assessment.
In summary, integrating AI into assessment poses a test that educators must address in order to protect educational integrity. The University of Reading's study underscores the need for a coordinated approach to this growing concern, one that ensures the conscientious and ethical application of AI in educational environments.