Following the AI hype that took off in 2023, low-quality content became a major concern, creating the need for AI checkers. However, the results from these AI detection tools have proven more than questionable: they're unreliable and often completely ineffective.
This article explores some of the lapses associated with AI detectors and how they have become a major pain for writers.
But first, let’s talk about why I’m so passionate about this particular subject.
Why I Care About AI Checkers
Until late 2023, I had never received any feedback that my content was potentially plagiarised. However, this changed after working with one particular client for 3 months.
The job required me to create game reviews, so it was a no-brainer that I would play every single title before reviewing it. This is the only way to write from an informed perspective and do the job to the best of my abilities.
After three months of a hassle-free working relationship, I was hit with an abrupt message, which read:
“Hi Gideon,
I’m afraid I’ve had to end our contract. Your last article – AI content was detected, and this is strictly forbidden…”
False Accusations
That was a huge blow. Until that moment, I had prided myself on a clean career as a writer, free of accusations of plagiarism or anything even close to it. Besides, I didn't even know how to use generative AI tools like ChatGPT at that time.
The most absurd part of the experience? This was feedback on a review of a game released in 2023. At the time, ChatGPT's knowledge was limited to 2021. So how could I have used AI to generate a review of a game that recent? The model didn't have any information about it.
I went back and forth with the client, demanding to know why they considered my content AI-generated. They kept pointing to AI detection tools, which, for them, were all the proof they needed.
Part of a Bigger Problem
Losing that job spurred me to look into some of the failings of these AI checkers. But I didn't fully grasp the weight of the problem until I stumbled upon several LinkedIn posts from iGaming writer Dominic Field. He, along with a few other experienced writers, was vocally pushing back against the use of such tools.
I continued reading, finding more and more posts from writers and students bemoaning their painful experiences as a consequence of AI detection tools. I realised I wasn’t alone, and concluded that we have a much bigger problem on our hands.
Implications of the Unreliability of AI Detection Tools
I have already hinted at some of the problems associated with AI detectors, but the implications are far-reaching. Let's take a look at some of the most prevalent.
Flawed Assessment of Students’ Papers
With the growing use of AI among students, detection tools are now commonplace among lecturers and professors. These teachers are turning to quicker ways of telling whether a paper is human-written or not. But reports suggest this might be the wrong way to go about it.
In 2023, OpenAI pulled its own detection tool. Lama Ahmad, the company's Policy Research Director, was reported by CNN as saying, “We don’t recommend taking this tool in isolation because we know that it can be wrong and will be wrong at times – much like using AI for any kind of assessment purposes.”
As such, evaluating students with faulty, unreliable systems is a major concern. If lecturers continue to rely on these tools, more students could lose marks they genuinely earned, despite never having used AI.
Growing Distrust from Clients
Writers are the worst affected by this situation. Lately, agencies have been subjecting them to questionable checks to ensure their content is not written by AI, and it is now commonplace for writers to go to extreme lengths to prove they are not being dishonest.
I have recently heard of cases where writers record themselves working, for example, just to put their employers' minds at ease.
Others have taken work that demonstrably predates anything ChatGPT could have produced and run it through an AI checker. Invariably, the detectors claim that generative AI tools were used.
If not stopped soon, this toxic trend could further hurt writers' productivity. The process of writing and editing, as demanded by most agencies these days, is hard enough. Now we have the added stress of having to prove our content is human-written.
Loss of Jobs
As in my own experience, more and more content writers are losing their jobs because their articles have been flagged by these tools, even though the work is 100% human-written.
We have all heard that AI would take jobs. But nobody foresaw that it would happen like this: as a consequence of these unreliable tools.
AI Checkers Are Unfit for Purpose
There's an overwhelming body of evidence that AI detectors create more problems than they solve.
We simply can't trust these tools without further scrutiny, not yet anyway. Perhaps one day, but not any time soon.
In the meantime, it’s important that we continue to use human input for proper and reliable assessments.