A University of Reading project has raised questions about the integrity of take-home assignments and coursework.
Researchers at the University of Reading tricked their own professors by secretly submitting AI-generated exam answers that went undetected and received higher grades than those of real students.
The project created fake student identities to submit unedited answers generated by GPT-4 in take-home online assessments for an undergraduate course.
Only one of the 33 entries was flagged by the university's markers, who were unaware of the project, and the rest of the AI-generated answers received higher grades, on average, than the real students' responses.
The authors' findings suggest that AI tools such as ChatGPT can now pass the "Turing test," named after the computing pioneer Alan Turing, which measures an algorithm's ability to go undetected by experienced judges.
The authors of what they described as "the largest and most robust blind study of its kind" warned that the results could have significant implications for how universities investigate students. The study examined whether human educators can detect answers generated by artificial intelligence.
According to Dr Peter Scarfe, an associate professor at Reading's School of Psychology and Clinical Language Sciences, "Our research shows it is of international importance to understand how AI will affect the integrity of educational assessments." He added, "We won't necessarily go back fully to handwritten exams, but the global education sector will need to evolve in the face of AI."
The study concluded: "Based on current trends, the ability of artificial intelligence (AI) to exhibit more abstract reasoning is going to increase and its detectability decrease, meaning the problem for academic integrity will get worse." Experts reviewing the study said it could sound the death knell for take-home exams and unsupervised coursework.
According to Prof Karen Yeung, a fellow in law, ethics, and informatics at the University of Birmingham, "The publication of this real-world quality assurance test demonstrates very clearly that the freely and openly available generative artificial intelligence tools enable students to cheat take-home examinations without difficulty to obtain better grades, yet such cheating is virtually undetectable."
The study suggested that universities could incorporate students' use of artificial intelligence into assessments. Prof Etienne Roesch, another author of the research, said, "As a sector, we need to agree on how we expect students to use and acknowledge the role of AI in their work." He added that the same applies to the wider use of AI in other areas of life, in order to avert a crisis of trust in society.
Prof Elizabeth McCrum, Reading's pro-vice-chancellor for education, said the university was moving away from take-home online exams and developing alternatives, including assessments that apply knowledge in "real-life, often workplace-related" settings.
She added, "Some assessments will support students in using AI, teaching them to use it ethically and critically, developing their AI literacy, and equipping them with the skills needed to succeed in the modern workplace. Other assessments will be completed without the use of artificial intelligence."
However, Yeung said that using artificial intelligence in university assessments risked "deskilling" students. She added, "There is a real danger the next generation will end up effectively tethered to these machines, unable to engage in serious thinking, analysis, or writing without their assistance, just as many of us can no longer navigate around unfamiliar places without the aid of Google Maps."
In the endnotes of the study, the authors suggest they may have used artificial intelligence to prepare and write the research, asking: "Would you consider that to be cheating? If you did consider it 'cheating,' but we denied using GPT-4 or any form of artificial intelligence, how would you prove that we were dishonest?"
A spokesperson for Reading confirmed that the research was “definitely done by humans.”