It’s an anxiety-inducing time right now, so why not talk about the subject that’s falling into the not-urgent-but-important quadrant for many of us: AI.
I am not interested in litigating the (many) arguments people make against the development, existence, or use of AI, from training data, to energy usage, to the moral injury AI art seems to cause a subset of people online. I just want to talk about how AI is reshaping strategy.
Part of your job as a strategist is to live in the world and be able to observe and experience while reserving judgment. AI is in the world. So, let’s observe.
We have an era-defining cognitive problem: there is too much to consume, too much to know, too much to navigate. Our brains do not exactly thrive in a world of infinite stuff to intake and interpret. And while this has been true in varying degrees in many eras, it’s particularly pronounced in our current Era Of Content. Most strategists know this challenge intimately, because we feel it every time we need to write a POV, conduct landscape research, or figure out what a broad audience thinks and feels well enough to support creative thinking.
This, to me, is what AI could actually be for. Using it to generate more stuff is an unfortunate side effect of creating a tool that can help you navigate a world of too much stuff. It’s not about creating output for you as much as it is an assistant that helps you corral and evaluate inputs, and maybe sharpen your own output. It’s a great sidekick. I don’t know if we’re ever going to reach Artificial General Intelligence, but I’m pretty sure we’re already at Artificial General Intern.
Here’s some stuff I use AI for today:
Basic web research: Perplexity takes what used to be 8+ hours of research and crams it into 1-2. You still need to check the sources (it’s not perfect), but in general it does a better job than an entry-level human on the first pass of desk research, with the same amount of coaching / correction, in 1/10th the time.
Summarizing (public) reports: Most desk research work, in my experience, involves reading 2-3 long reports and articles that aren’t helpful, for every one that ends up influencing your work. I’ll assemble a pile of potentially relevant research, get ChatGPT to summarize each PDF (via a custom GPT) so I can figure out what I want to read in detail myself, and what might correlate well with another source I have.
Comparing (public) data sets / research: I set up a custom GPT more than a year ago that will take a report, and look for contradictions in either its data, conclusions, or both. About 6 months ago I set up a new one focused on taking multiple reports, and looking for contradictions, correlations, and incongruities between them. This is maybe the biggest time saver I have right now, because it finds the weird bits, so I can focus on understanding, explaining, or researching them further. TL;DR - use the pattern recognition machine for pattern recognition.
Challenging my own work: When I have an insight, or a POV, or a conclusion, sometimes I’ll ask ChatGPT (or Claude, or whatever) to evaluate my work (scrubbed of any confidential or protected information) for clarity, originality, and consistency. To point out gaps in my argument. To review it from the perspective of a person I spend a few paragraphs describing, or even (hat tip to Mark Rukman on LinkedIn) in the persona of a historical figure.
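For the programmatically inclined, the report-comparison workflow above can also be scripted instead of run through a custom GPT. This is a minimal sketch, not my actual setup: it assumes you’ve already extracted plain text from each PDF, and the (commented-out) API call assumes the OpenAI Python SDK (v1+) with an `OPENAI_API_KEY` set. The report snippets are made-up examples.

```python
# Sketch of the "compare reports for contradictions" workflow.
# Assumes plain text has already been extracted from each report.

def build_contradiction_prompt(reports: dict[str, str]) -> str:
    """Assemble one prompt asking a model to surface contradictions,
    correlations, and incongruities across several report excerpts."""
    sections = "\n\n".join(
        f"--- {name} ---\n{text}" for name, text in reports.items()
    )
    return (
        "You will be given excerpts from several research reports. "
        "List any contradictions, correlations, or incongruities "
        "between them, citing the report names.\n\n" + sections
    )

if __name__ == "__main__":
    # Made-up example excerpts, purely for illustration.
    prompt = build_contradiction_prompt({
        "Report A": "Linear TV viewing among under-25s fell sharply this year.",
        "Report B": "Linear TV viewership among under-25s is stable.",
    })
    # Uncomment to send the prompt to a model (requires OPENAI_API_KEY):
    # from openai import OpenAI
    # client = OpenAI()
    # resp = client.chat.completions.create(
    #     model="gpt-4o",
    #     messages=[{"role": "user", "content": prompt}],
    # )
    # print(resp.choices[0].message.content)
    print(prompt)
```

The point stands either way: the value is in the assembly and cross-examination of inputs, not in having the model write your conclusions for you.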
Here are the barriers I see (real or imagined) that stop people from using AI:
Fear of becoming obsolete: Many people are worried that AI is going to make them obsolete. It might. I read encyclopedias in grade school to access the global store of knowledge, and flipped out when Microsoft Encarta suddenly made all of those volumes obsolete, only for that to be replaced by a broader amount of authoritative online publishing. Some of my skills (like absorbing a ton of data and finding connections) are going to become less valuable. Others (like getting to the unspoken rationale behind a confusing observation) still seem to have value. I’m going to focus there.
Asking it to do your whole job: If you’re getting an LLM to write for you, in my mind that’s lazy, and a little disrespectful of your intended audience. Why should they read something you didn’t bother to write? These are pretty incredible tools for handling inputs, but they’re currently fairly shoddy for crafting outputs. Which, imo, is the point. Everyone likes that post that argues AI should be doing the laundry, not creating art and poetry. I agree; use it for the drudgery parts of your work, and if you realize that was where you added the most value, maybe you need to think about that. But the output you make and put your name on? You should probably still do that yourself.
Delusions of grandeur: You are likely overestimating the quality of your own work, and underestimating the quality of AI-generated work. The average strategist’s half-day research-driven pen portrait isn’t that much more useful than what IdeaApe generates in 5 min, for example. There are definitely things a person can do that an AI cannot (currently, at least) but you should be focusing your effort there, rather than on trying to denigrate useful (if limited) tools.
The changing value equation: The biggest barrier is a mindset and an industry that have historically billed based on hours, rather than outputs. Instead of being thrilled at the ability to increase the speed to a good brief or insight (which is the number one complaint your partners inside and outside of an agency have), many of us are worried we might get the work done too fast. Instead of being excited about extending our capabilities, many of us are worried that the quirks of brain or personality that equipped us for this work are now meaningless.
Look, strategy is going to change; it always does. But the parts AI is about to take over… aren’t actually strategy, IMHO. When I ask an LLM to build a strategy, it throws a jumble of standard practices at me. A list of tactics, some relevant, some not. But it can shine at the stuff that goes AROUND strategy. The digging and comparing of research that has to come before you can find a glimmer of an insight or opportunity worth pursuing. The ever-updating tactical considerations that might be needed to get an idea ready for every platform, channel, and format. In general, the stuff people in this role, in this industry, complain about doing the most.
AI is not going to steal your job, at least not today. It’s going to make you a lot more efficient, and if you’re not careful to avoid using it to think for you, it might make you a lot less original. It doesn’t necessarily mean departments are getting smaller, but it might mean you can live up to the very high expectations clients and creative teams already have for how much a single strategist can produce and support.
However, AI might destroy the current concept of an intern or entry-level strategy job, like the ones that kicked off the careers of a lot of the people currently in senior roles. We need to figure out how to identify, train, and mentor that talent in a way that acknowledges that asking people at the beginning of their careers to do the heavy lifting in research, POV development, and at times reporting or analysis maybe wasn’t the best approach to training, or to efficiency. That said, if I’d started my career 18 years ago with these tools, instead of the ones I had then, I think I could have done much, much more, much, much sooner.