When people talk about AI, they’re usually not talking about the same thing. The term has become an umbrella for everything from your phone’s voice assistant to hypothetical machines that don’t yet exist. Understanding the differences between these levels matters if we want to make sense of where we are now and where we might be heading.
Level 1 – Narrow AI
This is the AI that exists in our lives right now. It’s everywhere.
Narrow AI (sometimes called weak AI) is built to do specific things really well. Netflix recommends shows based on our viewing history. Google Maps predicts traffic patterns. Our phones' cameras recognise faces. These systems are excellent at their single task but can't apply that skill elsewhere. A system trained to recognise images can't suddenly understand language. A chatbot that answers customer service questions can't play chess.
This category includes machine learning techniques that have been around for decades, along with newer generative systems like ChatGPT (released in November 2022). When generative AI creates text or images based on what we ask it to do, that's still narrow AI. It can be powerful and useful, but it is fundamentally limited to what it was designed for.
Level 2 – Generative AI
We’re currently living in this phase, though it’s important to understand what generative AI actually is.
Generative AI is a subset of narrow AI. It learns patterns from data and then generates new content based on what you ask it. Feed it a prompt and it creates text, images, code, music, and so on. It doesn't understand what it's doing in the way we might understand something. It works from statistical patterns in its training data, not from genuine comprehension. Think of it as very sophisticated pattern matching that produces remarkably useful results.
The arrival of ChatGPT made Generative AI visible to millions of people, but Generative AI itself isn’t new. What changed was accessibility.
Level 3 – Artificial General Intelligence (AGI)
This is theoretical; it doesn't exist yet.
AGI would be AI with human-level intelligence that can learn and adapt across different domains without being retrained for each new task. A human can learn to play chess, then apply problem-solving skills to medicine, then switch to writing. An AGI system could do the same. It would understand context, transfer knowledge between areas, and reason through novel problems independently.
The key difference between today's narrow AI and tomorrow's AGI is that AGI wouldn't just react to prompts. It would think, reason, and learn the way humans do, across any field.
Level 4 – Artificial Super Intelligence (ASI)
This is purely speculative.
ASI would surpass human intelligence in every domain. It would solve problems humans can’t, potentially address climate change or disease at scales we can’t imagine. It might even develop something like emotions or creativity. It would also be capable of improving itself without human input, potentially at exponential speeds.
This is where the theoretical becomes unsettling. An ASI aligned with human interests could solve our biggest challenges. An ASI misaligned with our values could be catastrophic.
Leading AI researchers are genuinely concerned about this. Geoffrey Hinton, known as the "godfather of AI" for his pioneering work on neural networks, has said "it's quite conceivable that humanity is just a passing phase in the evolution of intelligence" and estimates a 10 to 20% chance that superintelligent systems could wipe out humanity within the next 20 years. Yoshua Bengio, who won the Turing Award alongside Hinton for their foundational work in deep learning, has warned that "we don't have methods to make sure that these systems will not harm people or will not turn against people. We don't know how to do that." These aren't outlier voices; these are the researchers whose work made modern AI possible.
Where We Actually Are
Every AI system we use today is narrow AI. The generative tools that feel capable and impressive are still narrow. AGI is something researchers are working towards, with no clear timeline. ASI is possibly science fiction dressed up as serious discussion.
The gap between narrow AI and AGI is vast. We don’t yet know how to build systems that generalise knowledge the way humans do, or whether current approaches will ever get us there. The gap between AGI and ASI is even larger. What matters now is understanding that today’s AI, however sophisticated, is still fundamentally limited. It’s a tool that requires human judgment, oversight, and understanding.
Understanding these four levels demystifies AI. You now know what it actually is, not just the hype around it; you understand that we're working with narrow AI; and you know that AGI is theoretical and ASI is speculative. That's all useful and interesting knowledge.
What knowing this doesn't tell us, however, is whether a piece of AI-generated content will actually help our coaching businesses, whether it will generate interest in what we do, or whether it's worth using at all. What we need to understand to make it worthwhile is something entirely different.
AI is a Tool and Tools Need Skill
Let's use a really simple example: the power drill. A power drill is useful, and it makes tasks faster and easier than doing them by hand. However, buying a drill doesn't guarantee our shelves will be straight or that our cabinets will hang evenly, because achieving those things depends on additional knowledge. We would need to understand angles, measurements, wall types, and weight distribution. A skilled tradesperson with a basic drill will always outperform an amateur with an expensive one.
AI works in the same way. Having access to ChatGPT or other AI tools doesn’t guarantee we’ll produce effective marketing or write compelling copy. That depends on what we ask it to do, how we refine our requests, and whether we understand enough about client acquisition and marketing to recognise when the output is good or not.
Understanding what AI is helps, but it isn't enough. We also need very good skills in the particular context in which we're using AI.
Let me give an example…
Let's imagine we have two coaches using ChatGPT to write marketing copy. One knows how to prompt effectively: she understands temperature settings, context windows, and how to structure requests. However, she's never studied marketing or client acquisition, so she'll get grammatically correct output that is limited by her lack of understanding of the field. She won't know whether what's been produced is poor, because she doesn't know what good looks like, and the first output from any AI tool is usually poor. She'll create content, such as articles and posts, and when she gets no interest from potential clients, she'll decide that marketing doesn't work rather than recognising that the AI output was poor because her input was poor. Garbage in, garbage out, for those of us of a particular generation.
The other coach knows less about AI but understands how client acquisition works. She has given her chosen AI tool lots of material upon which to draw when it creates content to use in her marketing. When she uses AI, she recognises immediately when the output sounds generic or misses the mark. She pushes back and iterates because she knows what good looks like. The tool works in her favour because of her knowledge of the field.
Understanding AI without domain expertise means we have a powerful tool we can't use properly. Yes, we can remove em dashes and tighten sentences, but we won't know if the output is useful in the context of client acquisition. We won't know if it speaks to anyone in particular or if it positions us well. We won't know any of that because we don't know what good looks like in client acquisition, and many of us aren't aware that we need to speak to a particular client rather than trying to speak to everyone. That's the real risk, and it's where coaches end up blaming the tool or the approach instead of recognising that we lack the foundation to judge quality.
Going forward, AI isn't going away, and the coaches who use it well for client acquisition will be the ones who possess both skills: they'll understand how AI works, and they'll understand marketing. Without both, we're just polishing something we can't evaluate.
Sources
Geoffrey Hinton quote “It’s quite conceivable that humanity is just a passing phase…”: MIT Sloan, May 2023, https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai
Geoffrey Hinton on risk estimates: Machine Intelligence Research Institute, “If I were advising governments, I would say that there’s a 10% chance these things will wipe out humanity in the next 20 years.”
Yoshua Bengio quote “We don’t have methods to make sure…”: CNBC interview, November 2024, https://www.cnbc.com/2024/11/21/will-ai-replace-humans-yoshua-bengio-warns-of-artificial-intelligence-risks.html
