Special thanks to Anthony Wallis and Dr Lachlan Brown (FoAE).
On 5 April 2023, Turnitin switched on a preview of its “AI writing detection” capability.
This means that in addition to the normal similarity score, users will receive a second percentage score, which Turnitin says indicates how much of an assessment was completed using generative AI (GAI).
Detecting whether writing has been created by artificial intelligence is a difficult and complex matter. There does not seem to be a foolproof method of detection.
In the above example, Turnitin estimates that 92% of the assessment has been written by AI. You can click on the 92% to get more information about which sections are of concern. In the fine print, Turnitin tells us that it is 98% confident in this estimate. However, we don’t yet have ways of verifying the detector, so we need to be a bit careful.
Because of the complexities in this space, the Faculty would advise staff not to jump to conclusions if scores are high. As our processes currently stand, a Turnitin writing detection score alone is not enough evidence to demonstrate Student Misconduct.
Rather, we would recommend that staff take the following steps when thinking through the new Turnitin AI writing detection score.
- If there is a score of over 20%, you may wish to take a second look at the student’s submission.
- Are there other factors that seem ‘off’ which may indicate either contract cheating or work generated by GAI?
- Does it fail to match the student’s ‘normal’ writing/expression?
- Does it lack specific localised/contextual knowledge of things all students in the subject should know?
- Does it contain made-up facts or references? (Being a language model, current AI will often fill things out with fabricated information.)
- Is it a bit studious and over the top in its explanations?
- Is it a vague pass-level answer that looks okay on the surface but doesn’t engage with any depth?
- Are all the sections/paragraphs of equal length and (too) well-proportioned?
- Is it extremely logically organised but not really cognisant of basic facts?
- Does it plough on with confidence even when it’s obviously wrong?
- Does it contain phrases or structures that match the outputs created when you have placed your questions into a GAI program?
What’s a high-ish score? We’re still trying to work that one out. Very high scores of (say) over 90% may well warrant a second look (perhaps check a couple of quotes and references). But we’ve recently had cases of GAI work that scored only 10% in Turnitin.
If any of these things indicates further cause for concern, we would advise that you speak with an academic integrity officer, or make an allegation here.
Once an allegation is made, the Faculty can begin an assessment and evaluation of the matter. Part of this might involve considering the nature of the evidence, interviewing the student, and comparing work across the student’s submissions.
We do not think that staff should confront students about their Turnitin AI Writing Detection score unless they have a good pedagogical relationship. Rather, such scores should be used as one indicator among many possible indicators that may show the work is not the student’s own. If there are a few indicators, or an experienced marker has concerns, please pass the work on to an Academic Integrity Officer for further analysis.
As always, if you have further questions, comments or concerns do get in touch.
We’d also love to hear about your experiences with student work so far during session. Are there any indicators we might be missing?
Thanks also to FoAE for this information.
For more information, contact the Faculty Academic Integrity Officers: FOBJBS-Ops-SAM-AMO@csu.edu.au
NB: Students do not have a way of defending themselves against such scores unless they know precisely how the scores were created. So it may be a denial of natural justice to rely on these scores alone as evidence of misconduct.