Is your candidate using AI to cheat, or are your fears about AI cheating you out of a good candidate?
Since the launch of ChatGPT, we’ve all been navigating a new world. What is this thing? How do I use it? Is the use of this fair to creators? How much carbon did I burn with that search? Why does the gymnast in this video have three arms?
More recently, we’ve started to see the effects of AI on selection processes, and it’s clear that many employers aren’t set up to handle them. Some choose to “fix” the problem with more AI tools, but that can create new biases and blind spots.
Before we dive into potential solutions, the first step is for you and any other decision-makers at your studio to decide what your approach to AI usage actually is.
Like Hayao Miyazaki, do you feel that using it violates your art, your product, and what it means to create as a human? Or do you see AI as a helpful tool that you’d like staff to incorporate to create efficiencies?
Everything else will flow from where you land on this. If it’s the latter, then excluding staff or candidates based on AI use could actually work against your goals.
If it’s the former and AI use is absolutely off-limits, then you’ll need to consider the impact of that stance. A candidate with dyslexia might use AI to check for spelling errors. Would that be considered cheating? And are you certain that none of your own team are using AI tools to vet or score candidates? Many applicant tracking systems (ATS) automatically vet candidates using their own AI models. I’ve also seen a candidate nearly eliminated for using AI to check their work, by an AI tool that was checking for use of AI.
The key takeaway: whatever your stance, make it clear to candidates. That clarity prevents unintentional cheating by your measure and ensures fairer outcomes.
So what does this mean for each stage of your selection process?
CVs and Applications
Most AI detection tools focus on the complexity or predictability of language. That’s a problem in itself: people who write in shorter, simpler sentences, such as those with English as a second language, are more likely to trigger a false positive.
Even worse, these tools are easily beaten. You can tell ChatGPT to “use more complex vocabulary” and it’ll pass most AI detectors.
If you’ve decided you’re happy for your team to use AI in their work, but you want candidates to show you who they are through their own words, say so clearly in your job adverts: state that either no LLMs should be used for the application, or that LLMs should only be used for proofreading.
When vetting these applications, you need to decide how much AI use matters to you at this stage. If you suspect AI usage and you’ve stated it’s not allowed, how sure are you that they’ve used it? Is it still worth interviewing them?
Often, there’s no way to be sure without an interview. Whether or not they can talk deeply about their experience and answer questions about their application will show you whether it’s their own work.
Interviews
AI-assisted interviews are on the rise. Tools now exist that can listen to a live interview feed and generate answers in real time. There have even been AI-generated candidates turning up to interview!
You can spot AI-assisted interviews if you know what to watch for. Look for unnatural pauses, delayed responses that sound overly polished, or candidates repeatedly glancing to one side (as if waiting for a prompt). You’ll likely see their eyes moving from left to right as they read the generated answer. Someone genuinely thinking tends to make brief eye movements, look up or down, and show micro-expressions of effort, whereas AI users don’t.
An AI-generated candidate can be caught by asking them to fetch something from out of frame; if they can’t do it, you’ll know it’s AI. You’re looking for a glitch in the matrix, or trying to create one. Be mindful, though, of genuine candidates who may have limited mobility.
Ways to counter AI in interviews:
- Keep the conversation fast-moving and hypothetical. Humans handle follow-ups more naturally, while AI tools need time to generate responses. But continue to read body language; plenty of humans need time to reflect and think!
- Focus on “real-world” experience rather than textbook facts. Questions like “Tell me about a time you…” work well. Follow up with specifics, such as, “What went wrong?” “What did you learn?” AI-generated answers tend to stay surface-level.
- Where possible, consider in-person or live video interviews. Real human interaction makes it much harder to rely on AI without being obvious.
Tests
Tests are where AI use becomes trickier to define. Some candidates may innocently use it to proofread or sense-check their work; others might fully outsource the task.
To handle this fairly:
- Be explicit in your test brief. Include a line such as: “Do not use AI tools to complete this test. We will check for AI usage.” This sets expectations and reduces accidental misuse.
- Do your homework. Feed your test question to a few AI tools to see the sort of answers they produce. This helps you recognise the “AI voice” if it shows up, though bear in mind, that tone is constantly evolving.
- Add a follow-up stage. A short debrief interview based on the test can quickly reveal whether someone truly understands their work. Ask probing questions like, “Why did you approach it this way?” or “What would you change if you had more time?” Candidates who’ve done the work themselves can discuss it naturally and critically, talking through their process.
- Use tech smartly. For written or coding tests, there are tools that can detect tab switching or differentiate between live typing and pasted text. That won’t catch everything, but it can help you spot perfect, copy-paste solutions versus organic work.
If possible, make tests live. Real-time tasks, especially coding or design exercises, make it much harder to fake competence. For longer art tests, consider using screen recording or camera setups that show what’s happening on screen.
Back to Basics
Ultimately, the best defence against AI misuse isn’t more tech, it’s human process.
- Be transparent with candidates about your stance.
- Train interviewers to accurately recognise natural vs. artificial responses.
- Focus your assessments on problem-solving, collaboration, and creativity. These are areas where human nuance shines through.
AI isn’t going away, so make sure your hiring process is set up to handle it.