I’ve been pissing people off about Midjourney, DALL-E, and ChatGPT. Accidentally. Largely because I’ve been very unwilling to say “this is going to change everything for the better”.
I’m basically the opposite of a Luddite, though I can understand why someone might make that mistake. I’m not anti-technology; I’m just carrying the burden of having been an internet utopian in my teens and 20s, through the rise of personal web pages, blogging, and social networks.
I was the truest kind of true believer, which is why it hurt so much when it all ended up as polarization, trolls, authoritarianism, and surveillance capitalism. The evolution of the social web from Inception to Acceptance to Arab Spring to QAnon is the backdrop of my career to date.
It breeds a certain concern re: unintended consequences, is what I’m saying. But I think I accidentally reframed the whole shebang around AI this morning, when I realized what’s actually happening: everyone is learning how to write a brief.
I know that we call them “prompts” now, but the logic still holds. Each of these tools is built on natural language processing and informed by a massive set of training data, and the words you type in are interpreted as a task. You are writing a brief, whether or not that’s a skill you’ve developed to date. That also means you will, inevitably, be misinterpreted.
The first experience I’ve seen everyone have with AI art, for example, is the assumption of context. When I tried to re-create a meme I always found amusing, I ended up with a literal wolverine staring at a photo, rather than the Marvel Comics character. From there we get into issues of nuance, of clarity, of focus. The genius of ChatGPT is that you can build on prior context. You can rephrase and refine and build, and that’s okay (because the time between reviews is seconds, not days, and crucially, there’s no producer whose life you’re ruining with the re-brief).
I raise this largely because most people in marketing/communications can make it an entire career without needing to live with the guilt that comes with writing a brief poorly. Strategists are not so lucky; we get to stare this specific category of failure in the face. A red-herring example on a page, a word that could be interpreted differently, or even a turn of phrase misinterpreted as a statistic is a stick you get beaten over the head with when it wastes a creative team’s time (or worse, kills a good idea).
If everyone I may ever work with experiences even 1/100th of this frustration trying to get Midjourney to make a picture of their favourite cartoon as a Rembrandt painting, I will be thankful for the empathy it generates.
What this technological revolution has done is take a really niche experience (providing direction to an art director and/or a copywriter) and provide a faster-but-limited version of it to everyone. It’s particularly interesting how ad agencies are going wild for a text box that does a tiny, disconnected, poorly explained version of what they… already do.
It would be terrifying if not for the un-secret truth: that explaining why an idea is the right one, why the image needs to look a certain way, and why the headline requires the slightly-less-grammatically-pleasing version of the sentence is what clients actually pay agencies for, most of the time. If it were purely creative production and order-taking, we’d have gone extinct two generations back.
This is why I am really interested in discussions of high-value prompt engineering; they rest on the unspoken (unexamined?) assumption that a brief can be perfect enough that selling an idea requires no explaining why, no haggling over executional details, no personal investment. I’ve run into this perception of briefing-as-perfectible several times, and I get the appeal, but… that’s fundamentally reducing a conversation to a set of instructions. I’m not confident that would be a good thing.
I don’t doubt there will be a day when AI can write a better deck than I can. The question is: can a large language model, built on a massive amount of training data and variables, do the truly novel, truly generative work that creative agencies are notionally paid for, on purpose? Because until the answer is yes… someone will still need to figure out what the brief needs to be, write the brief, and go through the beautiful and terrible process of briefing.
(And then someone will need to be able to explain whether or not the output is good. And then someone will need to actually hunt down clients, get them on board, and interpret whatever is left unsaid.)
I’m pretty sure I signalled above that, in my experience and understanding, none of these tools has shown it can be repeatedly and reliably induced to create new ideas, so much as deliver a plausible approximation and/or explanation of existing ones.
This justifiably bodes ill for introductory writing and explainers (or news articles), and for commissioned art or illustration of existing concepts (characters, stories, or genres). Some of the best-paying and most widely accessible creative work is bringing familiar concepts to life, with vision and talent.
A huge portion of human creative output is a really well-done rehash of previous human creative output. Not just sequels, but inspirations and adaptations. For every truly great piece of creativity, a genre is spawned (I, for one, am excited for a bevy of culturally informed, nuanced stories questioning the nature of reality, inspired by Everything Everywhere All At Once).
I guess I find something really romantic about the idea that those truly original germs of an idea are uniquely human.
And until that gets proven wrong, I’ll be here figuring out strategies and writing briefs, regardless of whether they are for a person or a chatbot.