Ready Fire Aim
Mental models versus large language ones.
AI is going to fundamentally change advertising in a lot of ways. We’ve talked about that before, and we’re definitely going to end up talking about it again, because it’s the background radiation of the moment. But there’s a quieter, more fundamental realignment that’s going to be necessary, and it’s anathema to the way our industry has worked forever.
Advertising is the way it is, and to some extent strategy exists, because of the high cost of failure. Unlike many other businesses, where you can fail quietly and semi-anonymously until the point when you don’t, advertising can’t truly be tested until it is out in the world[1]. And even the smallest location-based tests can spread, because advertising is fundamentally spreadable; nothing we put into the world ever fully goes away or stays put anymore.
A brand misstep, even as a small regional test, can completely shift the relationship a company has with its consumer base, and it can take years to fix. This is hard for some people to understand until it happens to them, but the short version is: in many cases it’s way safer to ship broken product than broken marketing. That’s largely because product (assuming it doesn’t harm someone or immediately make you a laughingstock) can be repaired, recalled, or exchanged. Marketing that screws up too badly isn’t forgotten; if anything, it’s more memorable than the good work you do on purpose BECAUSE it’s a mistake.
A high cost of failure has led to some behaviours that are features, but get perceived as bugs.
Agencies overthink things. We have a lot of rounds of approval and redundancies in our processes. We over-consider best practices at times. This sometimes conflicts with speed, and a massive amount of effort has been invested over the decades to make the process faster but just as reliable, because the stakes of a screw-up have gotten higher in an age of social sharing, influencer marketing, and (deep sigh) “cancel culture”.
So we tend to operate in a strict mode of Ready, Aim, Fire.
Ready as in “let’s figure out what we’re doing here”, Aim as in “let’s align on the approach, solution and idea” and Fire as in “let’s make the thing and get it out into the world”. We make sure we’ve got a good understanding of the expectations and constraints, we develop a detailed plan for what we’re going to do and how we’re going to approach it, and then we execute to the best of our abilities. This is logical, consistent, and useful. Managed risk, minimal wasted effort.
It’s also the exact opposite of the way AI makes me want to work.
My experiments in AI have gone from the regular chatbot interface to building plugins, skills and agents in Claude Code that take tasks that used to be too time-consuming to do and make them always-on background work.
Step one, for me, was figuring out how to get better, more detailed, more informative stuff out of LLMs, paired with my existing processes. Basically, making it possible to take a Ready, Aim, Fire approach with more depth, detail, and exploration than was possible before. This has, tbh, been a huge success. It’s changed the way our team works, and made it easier to deepen our engagement with a problem before delivering a strategy, before digging into a comms plan, or before briefing a team. We’re spending less time on finding the right things to think about, and more time on the thinking.
But when I started building more automated processes into my workflow, I realized the best way to work with AI tools is the opposite. I needed to start thinking Ready, Fire, Aim[2].
The “Fire” stage shifted from “make the thing and get it out into the world” to “make a lot of things, see which of them actually make sense, and use THAT to evaluate what the right choice is”.
The pace of prototyping possible when building tools or outputs with AI breaks the math, a bit. When you’re building something internally, that means endless iterations as you whittle your initial stick into a spoon. But it works for purely mental labour, too. I’m not pushing anything half-baked into the public sphere, but I am iterating at a more detailed and faster level, and sharing a different type of work in progress. For someone whose career has likely been shaped more by animated discussions about strategy documents than by any other thing, the ability to throw together 20 or 30 versions of a strategic territory, rapidly figure out what doesn’t work, and then take that knowledge to craft something I’m actually ready to stand behind is kinda great.
I am still midway through this rewiring. It’s only possible because of the ability to make the inefficient, but less uniquely human, parts of the process happen very quickly. In one recent example, I needed to consider potential definitions of a set of words. Ready-Aim-Fire Jon would have spent an afternoon with a thesaurus and built a map of all the possible definitions and how they interconnected. Ready-Fire-Aim Jon, however, used an LLM to crank out hundreds of combinations with proposed definitions attached. By reading that pile of options, I 1) encountered some unexpected possibilities, 2) realized that a lot of stuff might have worked if done differently, and 3) realized that a lot of stuff was definitely never going to work, and discarded it.
I didn’t spend the time independently thinking my way to a correct and defensible answer and then validating the answers I liked most (which is what the available timeline would have demanded, even a year ago). I spent it challenging and questioning AI attempts at a small, defined problem, to develop and refine a set of heuristics I could use to make the problem clear, tangible and solvable.
The cost of failure in advertising hasn’t gotten a single iota lower. But the job has always been to fail internally as part of the process, so that you don’t fail with what goes to market. We’re in a time where we have the ability to do that at higher definition and greater scale than before. We can see what success and failure are shaped like, before we commit to an outcome, more often than we used to.
Ready, Fire, Aim.
[1] I know many testing businesses disagree with this, but as someone who has put a LOT of work through testing and seen a LOT of focus groups: people in isolation don’t know what they think until there’s a broader cultural response to something, the same way some movies just hit different in a theatre. You just don’t know until it’s a collective experience.
[2] Just to reinforce this: I am not suggesting a spray-and-pray approach, or test-and-learn as a replacement for strategic thinking and creative craft. I’m suggesting an assisted rapid-prototyping model where you make (many) things as a way of seeing if and how they work, and use that to aid your strategic and creative process. Breadth and iteration, not indiscriminate generation.

