How Can You Tell if Students Have Used AI for Their Work?
Over the past two years, the explosion of AI tools like ChatGPT, Claude, and Gemini has reshaped the way our students think about homework and assessments. Whether you teach music theory, English literature, or physics, the temptation to “let the bot do the work” is very real. And while these tools can be powerful learning aids when used transparently, they can also be misused by students to produce assignments that don’t reflect their own effort or understanding. Music teachers often ask me about this exact topic in the keynotes I have presented on generative AI, and I usually say that we need to either change the way we assess our students so that AI can’t be used in the first place, OR we need to have students clearly identify how they used AI in their work, rather than whether they used it. This post highlights tools that can detect the use of generative AI, along with the methods students use to try to make that use undetectable. I follow that up with a few ideas on how to design music assessments that avoid student use of generative AI altogether.
Signs of AI-Generated Work
AI-generated writing often has certain telltale giveaways. You often see the following signs in generative AI responses:
It just doesn’t sound like your student wrote it: The writing flows smoothly with few spelling or grammar issues, even from students who normally struggle. There are also some words that they would never use in their writing. For example, I find that the word delve is particularly common in ChatGPT responses, and in my opinion, one that few students would use. From grading thousands of graduate student assignments over the years, I’ve come to recognize overused words and phrases like plethora, in conclusion, or showcase.
Vagueness or overgeneralization: AI tends to avoid specifics unless prompted. An essay might feel “thin,” even if it sounds fluent. Without the personal connections that are typical of student writing, it reads like a manual rather than an essay.
Completely formal tone: AI often writes in a neutral, academic style that doesn’t reflect a student’s authentic voice. While students can use tools to try to make things sound less formal, unless they edit the text themselves, it can still sound pretty sterile.
Repetition of phrases or words: Large language models tend to repeat the same points more than a student typically would. I also find that tools like ChatGPT love to use bulleted lists and emojis far more than necessary.
My all-time favorite way to detect when AI has been used is when a student downloads or copies the AI-generated response directly from ChatGPT and includes the follow-up prompt at the end of the response that it generated - something like “Would you like me to include language that sounds a little less formal?” I saw that in a few of my students’ responses in the last university-level course I taught a year ago. Pretty funny (and lazy) if you ask me.
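To make the “telltale words” idea above concrete, here is a minimal sketch of a flagged-word counter. The word list is purely my own illustrative assumption (drawn from the words mentioned above, not a validated dataset), and a tool like this is nowhere near a real detector - it just shows the kind of surface-level signal teachers notice by eye:

```python
import re

# Words informally associated with AI-generated prose in this post.
# This list is illustrative only, not a vetted detection vocabulary.
SUSPECT_WORDS = {"delve", "plethora", "showcase"}

def suspect_word_counts(text: str) -> dict[str, int]:
    """Count case-insensitive occurrences of each suspect word."""
    words = re.findall(r"[a-z']+", text.lower())
    return {w: words.count(w) for w in SUSPECT_WORDS if w in words}

sample = "Let us delve into the plethora of ideas this essay will showcase."
print(suspect_word_counts(sample))
```

In practice a nonzero count proves nothing on its own; the signal teachers actually use is the mismatch between this vocabulary and a particular student’s known voice.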
AI Detection Tools
There are several tools that can help educators detect potential AI use, but I have to admit that I don’t think any of them are 100% accurate, and there are many news stories about students who have been accused of using generative AI in their work when they hadn’t. It’s a whole new world of plagiarism detection out there - so caveat emptor. Personally, I would just avoid situations where students can use AI altogether. Rather than assigning a discussion question in an online learning platform or asking for a written report, have them discuss the topic in class and create podcasts.
GPTZero
One of the first detection tools built specifically for educators. It analyzes text for “perplexity” (how predictable the language is) and “burstiness” (variation in sentence patterns). Human writing tends to be more varied; AI often scores lower on burstiness.
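A rough way to get intuition for “burstiness” is to measure how much sentence lengths vary. The sketch below is a deliberate simplification of my own (it is not GPTZero’s actual metric, which works on statistical language-model scores, not raw word counts), but it illustrates why uniform, evenly paced prose scores lower than varied human writing:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy burstiness score: the standard deviation of sentence
    lengths in words. Higher = more varied, more human-like.
    Illustrative only - not GPTZero's real algorithm."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "This is a sentence. This is a sentence. This is a sentence too."
varied = "Wow. That chord progression surprised me in ways I did not expect. Amazing."
print(burstiness(uniform) < burstiness(varied))  # the varied text scores higher
```

Real detectors combine signals like this with perplexity (how predictable each word is to a language model), which is why they can be fooled by heavy human editing - the statistics shift even when the underlying ideas came from a bot.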
Turnitin
Turnitin has integrated AI-detection into its existing plagiarism platform. Many districts already use Turnitin for originality checks, and the AI-detection layer provides a percentage likelihood that a submission was AI-written.
Copyleaks
A widely used tool that can scan documents, websites, or pasted text for AI signatures. Copyleaks claims high accuracy with GPT-4 detection and integrates with LMS platforms like Canvas and Moodle.
Originality.AI
Originally designed for publishers screening for plagiarism, this tool is increasingly popular in higher education. It provides clear reports and integrates well with large-scale workflows.
QuillBot
QuillBot is a writing and paraphrasing tool that helps users rephrase sentences, improve grammar, and enhance clarity while maintaining the original meaning. It’s commonly used by students, educators, and professionals to refine writing and generate polished text quickly. It isn’t a detector itself; it belongs in this discussion because, as described below, students often run AI output through it to make the text harder to flag.
Crossplag
This is a relatively new option that has an intuitive dashboard. It highlights suspicious sections instead of labeling an entire document, giving educators more nuanced insights.
It is important to note that students know these sites exist and actually test their responses on them before submitting. If you go to YouTube and search “How to avoid getting caught using AI for your homework,” you will find thousands of videos.
How Students Try to Avoid Detection
Students are nothing if not resourceful, and the rise of AI detection tools has only inspired new ways to sidestep them. Some will paraphrase ChatGPT output with services like QuillBot to make the text appear less “AI-like,” while others avoid detection by breaking their requests into smaller chunks, stitching paragraphs or outlines together manually, and smoothing the seams with transitions. A few attempt “voice masking” by training AI on samples of their own writing so the results mimic their personal style, making them harder to flag. Others lean heavily into revision, editing drafts until the statistical patterns—burstiness and perplexity—shift enough to confuse detectors. More inventive students even use image-to-text tricks, converting AI-generated text into an image and then back into editable writing with OCR, stripping metadata in the process.
For educators, though, focusing solely on detection tools is not the answer. A more effective approach is to design assignments that foreground the process of thinking. Requiring drafts, outlines, or reflective notes makes it much harder for AI to stand in for authentic learning. Short oral defenses—such as asking a student to explain why they chose a certain chord progression—can quickly reveal depth of understanding. Blending performance with writing is particularly powerful in music education, where demonstrating on an instrument or through software can complement written analysis in ways AI cannot replicate. Perhaps most importantly, educators should work toward normalizing transparent AI use. Instead of prohibiting it, invite students to cite when and how they used AI—such as noting that ChatGPT suggested practice strategies, but the student selected the ones that worked best. This reframing shifts the conversation from detection to integrity and creativity, placing the emphasis where it belongs: on authentic musical growth.
AI is here to stay, and our students are growing up in a world where it will be part of their professional and personal lives. The goal is not to create a surveillance culture, but to help students build integrity, self-awareness, and responsibility in their use of these tools. Thankfully, the music classroom is one of the spaces within a school where, I personally believe, it is pretty tough for students to use generative AI to complete assignments. It’s the nature of what we do. Although I’ve had a lot of fun with April Fools’ Day posts over the past few years featuring AI tools that take away the need for students to practice their instruments, the bottom line is that you won’t need to worry about your flute players using AI when they play their scales, sight-reading exercises, or repertoire for you. At least not for now.
Just as calculators didn’t eliminate the need to understand arithmetic, AI shouldn’t replace the hard work of learning. By combining detection tools, thoughtful assignment design, and honest dialogue, we can guide students toward a healthy relationship with AI—one that supports their growth rather than undermines it. After all, AI is going to be a huge part of their future. I believe it is every teacher’s job to help students navigate, learn, and think about generative AI and how it should be used responsibly in all aspects of life.