AI isn’t magic. It’s maths, language and incredible amounts of processing power. But that doesn’t mean AI can read minds (yet). For now at least, you need to tell it exactly what you want to get the best results.
We’ve probably all had the following experience when first trying to get to grips with AI: you type in your request and hit enter. Instead of something dazzling, the output doesn’t hit the mark at all. So you decide to just do the task yourself.
For a lot of people, this is the gap between AI expectations and AI reality in a nutshell. We’re told that this powerful technology can do everything, but we keep getting bland, generic answers that will take a lot of time to fine-tune into something of real quality and value.
But is this a case of user error or a flaw in the system? The truth is that as powerful as AI is, it also (currently) has clear limitations. It doesn’t understand what you want – it predicts it based on the instructions and context you give it.
So, if you’re not getting the results you want from AI, the reason is probably that you aren’t asking it in the right way.
Why inputs matter more than you think
Your ‘prompt’ is all the information you give to an AI before it starts. If your prompt is vague, the result will be vague too.
Imagine you contracted someone to completely redesign the interior of your home, and the only instruction you gave them was that you “wanted the place to be redder”. You might come back to find crimson walls, maroon carpets and cherry-red furniture. Not because the designer is bad, but because your brief was. AI works the same way: if you don’t give it direction, it’ll fill in the gaps with guesswork.
That’s why strong inputs are essential.
When you get specific, add detail, and provide structure, the AI has more to work with. Think of it less like a search engine and more like briefing an eager intern.
Prompting myths that might be holding you back
Many beginner AI users fall into the same traps. Here are some of the most common misconceptions that might be influencing your outputs:
“You only need one prompt.”
Not quite. Effective prompting is iterative. A great input will get you a lot closer to the goal, but most of the time you start with something broad, see what comes back, and then tweak, expand or clarify. It’s a dialogue – not a one-and-done transaction.
“The AI should know what I mean.”
How could it? It only knows what you tell it. If you want a blog post in a certain tone, or a list in a specific format, or a summary that includes particular points, you need to spell it out.
“If it gets it wrong, the tool doesn’t work.”
Actually, a not-quite-right answer is a helpful signal. It shows where your prompt lacked clarity or context. It’s your cue to refine.
The anatomy of a good prompt
There’s no one-size-fits-all formula, but effective prompts tend to have a few things in common:
A clear instruction
What do you want the AI to do? Write? Rewrite? Summarise? Critique? Be precise.
A specific output
Don’t just say, “write a post about sustainability.” Say, “write a LinkedIn post about our sustainability goals in a confident, professional tone, using no more than 100 words.”
Relevant context
What does the AI need to know to do the task well? That might include your audience, purpose, tone, brand guidelines, or relevant background.
A defined format
Should it give you bullet points? A headline and intro? A numbered list? The more structure you include, the better.
A useful example
Showing what ‘good’ looks like can be powerful. If you’ve got a reference point or an ideal outcome in mind, include it. For example, if you don’t have a tone of voice guide, you can simply copy in text from a blog you want to emulate and tell the AI to write in that style.
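If you’re comfortable with a little code, here’s what those ingredients can look like stitched together as a single prompt. This is a rough sketch only: the OpenAI Python SDK, the model name and the post details are illustrative placeholders, and the same structure works just as well typed straight into a chat window.

```python
# A rough sketch only: the OpenAI Python SDK, the "gpt-4o" model name and the
# post details below are illustrative placeholders, not a prescribed setup.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in your environment

prompt = """
Instruction: Write a LinkedIn post about our sustainability goals.
Output: Confident, professional tone, using no more than 100 words.
Context: Audience is prospective clients who care about supply-chain emissions.
Format: A one-line hook, a short paragraph, then a call to action.
Example of the style to emulate: [paste a blog or post you like here]
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Notice that every part of the anatomy – instruction, output, context, format, example – gets its own line, so nothing is left to guesswork.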
Thinking in workflows, not just outputs
In the next few years, what will separate the AI leaders from the rest of the pack is the ability to slot AI into their processes in a way that enhances value. The aim of the game here isn’t replacing people, it’s enabling them with AI solutions deployed in a strategic way.
Instead of just asking AI to complete a task, think about how it could support a broader process. Imagine it as an intern – capable, fast, but in need of guidance.
Say you’re writing a client proposal. You could break the process into steps like:
- Summarise the brief
- Pull in relevant case studies
- Draft an outline
- Suggest a persuasive call to action
Each of those becomes a separate prompt, with its own instruction and inputs. You’re not just creating one output – you’re building a mini-workflow. This way of thinking unlocks real productivity gains. It also forces clarity, because you can’t brief AI well if you’re not sure what you want from the process.
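For the more hands-on reader, here’s a rough sketch of that proposal workflow as a chain of prompts. The ask() helper, the file names and the exact step wording are illustrative assumptions rather than a fixed recipe; the point is that each step’s output becomes the context for the next.

```python
# A rough sketch of the proposal workflow as a chain of prompts. The ask() helper,
# file names and step wording are illustrative assumptions, not a fixed recipe.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt and return the reply (any chat-style model API works here)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

brief = open("client_brief.txt").read()          # the raw brief you were sent
case_studies = open("case_studies.txt").read()   # background you'd hand an intern

# Each step's output becomes context for the next prompt.
summary = ask(f"Summarise this client brief in five bullet points:\n\n{brief}")
relevant = ask("Pick the two case studies most relevant to this summary and say why:\n\n"
               f"Summary:\n{summary}\n\nCase studies:\n{case_studies}")
outline = ask("Draft a proposal outline from this summary and these case studies:\n\n"
              f"{summary}\n\n{relevant}")
call_to_action = ask(f"Suggest a persuasive closing call to action for this outline:\n\n{outline}")
```

Each call gets one clear instruction plus the context it needs – exactly the anatomy described above, repeated step by step.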
Give it the good stuff
Another common issue is not feeding in the right supporting information. You wouldn’t ask someone to write a company bio without telling them what the company does. AI is the same. If you need the response tailored to a specific audience, say who they are and what they care about. If you need certain information to be covered, provide a list.
You can provide PDFs, Word docs and web links for AI to reference, or you can simply copy and paste notes directly into your prompt. Going back to our intern analogy, try to provide AI with the same resources you’d give a human to help them do the best job possible.
Edit, refine, repeat
‘Iterate’ is definitely the buzzword of the AI boom. And while I’m not personally a fan, this is an AI article – so I am almost legally obliged to use it. This is because iteration (or ongoing improvement) is a cornerstone of AI use.
The first response is rarely the best one. But that’s not a failure – it’s part of the process. Ask a writer or a designer how many of their projects get signed off at V1; nine times out of ten, creating something great requires revisions and tweaks. In fact, the most important thing here is having someone review the early drafts to make sure they stick to the brief. You take on that role with AI. Your expertise, oversight and quality control are absolutely crucial to making sure the outputs are good enough.
AI works best when you treat it as a collaborator. If the answer’s not quite there, ask it to revise based on new instructions. Highlight what worked and what didn’t. Ask for more detail, a different tone, or a tighter structure. The more you iterate, the closer you get to what you actually need. But just remember that AI should never be deployed unsupervised. It’s a magnifier of your expertise and skills, not a replacement for them.
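And once more for the hands-on readers: here’s a small sketch of iteration as a running conversation, where you keep the history and send your feedback as the next instruction. The SDK, model name and feedback wording are all illustrative assumptions.

```python
# A small sketch of iteration as a running conversation: keep the history, then send
# your feedback as the next message. Model name and feedback wording are illustrative.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user",
             "content": "Draft a 100-word LinkedIn post about our sustainability goals."}]

draft = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": draft.choices[0].message.content})

# Your review becomes the new instruction: what worked, what didn't, what to change.
messages.append({"role": "user", "content": (
    "Keep the opening line. The middle is too generic: mention the audience of "
    "manufacturing clients, cut the jargon, and tighten the whole thing to 80 words."
)})
revision = client.chat.completions.create(model="gpt-4o", messages=messages)
print(revision.choices[0].message.content)
```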
Better prompts, better results
Most frustrations with AI come down to poor prompting. But the good news? Prompting is a skill – and like any skill, it can be learned. The more intentional you are about what you want, the better your outputs will be. And once you start breaking down tasks, adding context, and thinking in workflows, you’ll see what AI can really do.
Prompting also shows just how important human oversight, creativity and strategic thinking are to getting the best results from AI. Without an expert in the mix providing inputs and checking outputs, you put your business at real risk of promoting inaccurate information and bland, generic content that sounds like everyone else.
Because the tool isn’t the reason your AI outputs are missing the mark. It’s what you give it to work with.
Join Colm and another member of our AI Taskforce, Elliott, for a live webinar – “Avoiding the AI Content Trap” – on Thursday 15 May 2025 at 09:30 BST. Sign up here.