I’ve seen coaches refer to content they see on LinkedIn as AI slop, which, from my perspective, feels both judgemental and unfair. The output from AI never arrives ready to share, but not everyone knows that, and some coaches (think dyslexic or otherwise neurodivergent) have discovered in AI a way of communicating in writing that they simply couldn’t access before.
AI isn’t the issue; we are. Many of us tried ChatGPT 18 months ago, found its output less than satisfactory, and concluded that AI is rubbish.
- It gets things wrong!
- It makes things up!
- I could produce better writing!
- It uses em-dashes!
- It’s rubbish!
I’ve even heard of coaches who tried to play chess with ChatGPT, found it couldn’t play chess, and took that as evidence that LLMs are useless. This raises a question: what if the problem isn’t the tool? What if the problem is that coaches are using it inexpertly, without the foundational knowledge required to use it well in any given context, and then judging its output against a yardstick they don’t actually understand?
The Expertise Problem
An LLM is a ‘large language model’, and LLMs are very clever indeed, but they’re not clever at everything. One thing they can be very good at is writing copy, which is why coaches think they’re the answer to their marketing prayers.
Coaches I speak to have often used an LLM inexpertly – although they wouldn’t have been aware of how inexpertly they were using it – and then found its output to be less than satisfactory. That’s like someone with zero coach training choosing a coaching question from a list online, asking it in a conversation and getting a poor response, and concluding that coaching doesn’t work.
Prompting LLMs is similar to asking coaching questions in that the more sophisticated the questioning or prompting, the better the output. All AI output needs a ‘human in the loop’ to do final edits, especially when it comes to copywriting, and this is where understanding what good looks like becomes crucial. The problems start when coaches aren’t familiar with what good looks like in a given situation, when the yardstick they’re using to measure the output is inadequate.
Most coaches have no idea whatsoever what good looks like when it comes to marketing and client acquisition because it’s a counterintuitive process. We think we know (and in some instances we’re absolutely certain we do) but we don’t. So when we prompt an LLM to create marketing content and it produces something that doesn’t look like what we expected, we blame the tool, call it slop, and dismiss AI as useless.
What we’re really seeing in this scenario is our own lack of expertise reflected back at us.
Why There’s No Magic Prompt
Coaches have asked me for what they believe would be a ‘magic prompt’ that will fix all their marketing problems. They think that if I just gave them the magic words, then ChatGPT (or Claude, or Perplexity, or Gemini…) would be able to create everything they need to attract a stream of clients, forever. The thing is, there is no magic prompt.
I could, for example, provide a prompt that said ‘create a year’s worth of LinkedIn posts’ and an LLM could do this. However, without the foundational work that provides the context in which the LLM should create the output, it will produce what coaches have been calling slop.
Here’s a good analogy that coaches will understand – just because someone can ask questions doesn’t make a conversation a coaching conversation. Someone could ask an LLM for 100 great coaching questions, but without the knowledge required to hold a professional coaching conversation, the questions alone wouldn’t help that person coach effectively.
As coaches, we wouldn’t consider this an acceptable way of approaching coaching because we’ve learned how to coach and have developed the skills and knowledge required to have a proper coaching conversation. The same goes for client acquisition and marketing, except we think we know what we’re doing without acquiring the skills and knowledge.
Coaches ask an LLM to create marketing content without understanding marketing, then judge the output as poor because it looks like the kind of stuff everyone else is churning out, the stuff we refer to as slop. We blame the tool for our lack of foundational knowledge.
The Foundational Work
The difference between how an expert uses AI for marketing versus how a novice uses it comes down to the foundational work – the hours spent refining the target audience, the ‘ideal client avatar’ (ICA), and the marketing message. This is a massive and essential part of the work that must be done before anyone ever asks for output, if we want that output to be good.
Where we go wrong is that we ask an LLM to help us to, for example, create an ICA, but have no idea at all whether the output from that prompt is good or not, despite believing that we do. We use the tool from a place of ignorance and expect it to compensate for our lack of knowledge, which isn’t how tools work.
When an expert uses AI for marketing, they’ve already done the deep work of understanding their target audience at a granular level. They know the challenges, the language, the context, the shame and the desired outcomes (to list just a few of the things we need to understand), and they can provide that context to the LLM because they’ve developed it through proper learning and application. The LLM then helps with the execution of a strategy that already exists, rather than being expected to create the strategy from nothing.
When a coach who hasn’t learned client acquisition properly uses AI for marketing, they’re hoping the tool will tell them who their target audience should be, what their marketing message should say, and how they should position themselves. They’re asking the tool to do the thinking they haven’t learned how to do themselves, and the output – the ‘AI slop’ – reflects that fundamental gap.
Use Tools from Knowledge, Not Ignorance
It’s about understanding the tool and using it in context, from a place of knowledge rather than ignorance or laziness. Knowledge, as always, is power.
When we use AI tools from a place of knowledge – when we’ve done the foundational work, when we understand what good looks like, when we can evaluate output against genuine expertise – the tools become incredibly powerful. They can help us execute faster, create more efficiently, and test variations we might not have thought of.
When we use AI tools from a place of ignorance – when we’re hoping the tools will bypass the learning we need to do, when we’re asking them to think for us, when we can’t evaluate whether output is good because we don’t understand the domain ourselves – the tools become frustrating and produce slop because we’re providing nothing substantive to work with.
The tool isn’t failing; we’re using it badly.
Chess
A coach told me they tried to play chess with ChatGPT, found it couldn’t play chess, and concluded that AI is useless. The problem with that thinking is that ChatGPT isn’t a tool that plays chess, and using it for chess then concluding it’s rubbish is like using a hammer to tighten a screw and concluding hammers are useless.
The right tool for the job matters, but more importantly, understanding what job the tool is designed for matters. We are at risk of dismissing LLMs as rubbish based on using them for things they’re not designed to do, or using them without the expertise to use them well, or judging their output against a standard we don’t understand.
We might want to reconsider what we believe to be true about AI tools and their capabilities.
Slop Isn’t AI’s Fault
When we view things we see on LinkedIn as AI-generated slop, we’re often right from the perspective that what we’re looking at was AI-generated, but the problem is a user error, not an AI failure. What we’re looking at is the output from someone who asked an LLM to create content without providing proper context, without understanding what good looks like, without doing the foundational work required to give the tool something substantive to work with.
The LLM did exactly what it was asked to do – it created output based on minimal input. The result is generic, vague, unconvincing copy that sounds like it could have been written by anyone to anyone. It’s slop not because AI is incapable of better, but because the person using it didn’t know how to get better output.
When coaches see genuinely good marketing content, they often don’t recognise it as a) marketing in the first place, or b) AI-assisted, because it doesn’t look like slop. Good marketing comes from someone who knows their target audience deeply, who understands their challenges intimately and who can speak in their language convincingly. The person creating it has done all that work, and they’re using AI as a tool to help with execution rather than expecting it to do the thinking and the intellectual heavy lifting.
Interestingly, good marketing delivered by coaches is usually invisible to other coaches, because they are not the target audience of the coach who is marketing well. For examples of excellent marketing, look at the mentors from The Coaching Revolution (you can find them all on my website, on the ‘About Us’ page). They all market incredibly well, and none has a target audience of coaches.
What Needs to Change
Coaches I speak with have dismissed LLMs as rubbish based on experiments from 18 months ago, or more recent unsuccessful attempts to use them without proper knowledge and skill. We might want to reconsider that position, because not only have the tools improved dramatically, but also we were almost certainly using them badly in the first place.
The issue isn’t whether AI can help with effective and ethical client acquisition processes (for example). The issue is whether coaches have the foundational knowledge required to use AI tools effectively, in any context. Without understanding our target audience deeply, without a clear marketing message, without knowing what good marketing looks like for that specific situation, no AI tool will produce good output no matter how sophisticated the prompt.
At The Coaching Revolution, our approach to teaching coaches how to use AI tools effectively starts with the foundational work. We take coaches to a position of knowledge and therefore power. Our coaches choose their target audience; they learn what a compelling, authentic marketing message needs to say, what good looks like, and how to implement this as part of an ongoing strategy. Then they learn how to use AI tools to help execute that strategy.
The tool is powerful when used by someone who knows what they’re doing. The tool produces slop when used by someone believing it will bypass the learning they need to do. That’s not an AI problem, it’s a knowledge problem.
The Real Issue
The real issue isn’t that AI is rubbish or that the content it produces is slop. The real issue is that many of us are trying to use sophisticated tools without the foundational knowledge required to use them well, then blaming the tools for our own lack of expertise.
We wouldn’t expect to deliver excellent coaching without proper training, yet we expect to create excellent marketing with AI without understanding what client acquisition is and what good looks like in that sphere. We wouldn’t judge coaching as a discipline based on one poorly executed coaching question, yet we judge AI as useless based on poorly executed prompts.
The right tool for the job matters, but so does knowing how to use that tool. AI can be incredibly powerful for those of us who understand client acquisition and marketing, and who use it to help execute strategies we’ve developed through proper learning. AI produces slop for those of us who don’t understand marketing and are hoping the tool will think for us.
That’s not the tool’s fault. That’s a knowledge gap that needs to be filled before any tool, AI or otherwise, can be used effectively.
An Opportunity
If you’d like the opportunity for a robust conversation about this – or to just flat-out tell me why I’m wrong – why not join my next free challenge, Nail Your Niche? There’s even an option to upgrade to a VIP version, which gives you 3 x 60-minute group mentoring sessions with me for just £99 (inc VAT) – that provides us with time for a lot of robust conversations!
Are you ready to choose to learn what good looks like? Register for the challenge by clicking here.