Low-quality content
I’ve avoided using generative AI tools since the start of the latest tech hype cycle. I’ve never used ChatGPT, Copilot, Midjourney, Stable Diffusion, Claude, Cursor, or any of the other household-name generative AI tools.
There are many questionable aspects of these tools around ethics, environmental impact, privacy and security. But putting those concerns aside, it’s curious how popular these tools are when their output is so frequently generic and low-quality.
The value proposition is obvious: publish more content, and more quickly. But at what cost? Originality, thought and the human touch seem to take a backseat when generative AI is involved.
We all know that generative AI is used for article outlines, drafts and blog post images, among other things. Over time, there have been occasions where I’ve suspected its use in less obvious situations as well:
- Unlabelled AI responses to personal emails I’ve sent
- Comments and replies in communities/groups as well as AI-generated “conversation starter” questions
- Filler content in products
- Copy and services offered to clients
- Newsletter content
As a reader, I don’t think I’ve ever consciously consumed AI-generated content and felt better for the experience.
The realisation that I’ve been reading unlabelled AI material is usually accompanied by a feeling that the content is insincere, dishonest and lacking respect for the recipient/reader. That goes for articles, newsletters, emails, chatbots – the lot.
“Treat it like an intern”
The accepted wisdom around generative AI is that the output is only as good as the prompt and editing skills of the user. On the evidence of the output, I wonder how much editing and curation actually goes on when generative AI is used for content creation.
In much the same way that people already treat ChatGPT as a source of fact, it seems that a lot of AI-generated content is considered good enough to publish with only a light edit and review.
More widely, generative AI is often pitched as a productivity hack. Speeding things up isn’t typically an indicator of quality, thought or curation.
If content is AI-generated for speed, what are the chances that it’s getting anything more than a cursory review before publication?
AI slop or human slop?
Of course, it’s not always possible to say with certainty that something is AI-generated. I could be misidentifying low-quality human-written content as AI slop. But there are unambiguous signs that don’t require an AI checker to identify:
- The unmistakable style of AI images
- The waffling, says-nothing nature of AI-generated content
- Mistakes that humans are less likely to make
Without realising, I’ve almost certainly come across high-quality content that involved generative AI in its creation. For instance, I’d imagine the results are better when AI is used as an editor rather than the content creator. But I suspect this is the exception rather than the rule: from the perspective of pure anecdata, content quality seems to be broadly declining.
Ultimately, whether low-quality content is accurately detected as AI-generated or misidentified as bland human output is somewhat of a moot point. It used to be that I’d be on the lookout for SEO articles, skipping them as quickly as possible. But now my brain has been trained to look for AI-generated content, and it nopes out as soon as those spidey-senses are activated.
At the end of the day, I wonder what the point of all of this is: is our goal to produce content we can’t be bothered to write for an audience that won’t be bothered to read or engage with it? Or, worse, skip it because they assume it’s AI-generated?
Generative AI makes it easier than ever to pump out bland, generic and uninspiring content for all sorts of purposes. Now more than ever is the time to collectively push for quality, originality and creating material that’s undeniably human.