This piece is part of AI on the Move, a PAN series that explores the ongoing impacts of artificial intelligence on marketing and PR.
Across weeks and months of conversations about AI, both internally with our teams and externally with peers and clients, it became clear that we weren’t always talking about the same thing. As marketing and PR professionals, we see AI through two distinct lenses:
- Adopting and using AI tools for convenience in our work
- Amplifying the work and stories of our AI-aligned clients
On the surface, what separates these two angles is the use case for the technology. In the first, we tend to talk about the tactical implementation of AI to enhance or simplify day-to-day routines. This is the ChatGPT or Bard conversation, for instance. In the second, we talk more about AI as a solution to the problems of the modern world: healthcare, cybersecurity, supply chain.
Look deeper, and they’re also separated by the questions they answer. The first conversation revolves around what AI can do and what it should do. The second conversation answers a different question: what do we want from AI?
These are all questions worth answering. As we work to determine the future of AI, a future that, at least for now, is still in human hands, it’s useful for both perspectives to pursue a fourth question: how did we get here?
The good, the bad, the unknown
Defining the role of AI
In 1979, a large tech corporation produced an internal training resource with guidance for employees on, among other things, how to properly interact with the technology around them. One slide offered a simple rule:
“A computer can never be held accountable. Therefore, a computer must never make a management decision.”
Regardless of how it was interpreted at the time, whether as a reflection on personal responsibility or a premonition of what was still to come, it’s not hard to connect the dots as to why, more than 40 years later, the statement feels more relevant than ever. The conversation around AI covers a lot of ground, and it eventually converges on the two points the slide raised, the same two points that still drive the first conversation: what can AI do, and what should it do?
Those answers are useful, but they are not what drives innovation — technology and progress have never come from settling for what we know to be possible. If we’re to make AI an integrated component of our work and lives, we have to answer the third question: what do we want from it?
A long time coming
Origins of AI
Let’s back up for a minute. It’s important to this conversation to acknowledge two things about AI. One, it’s a broad field of work that encompasses a diverse range of opportunities. Two, it’s hardly a new concept.
In 1950, English mathematician and computer scientist Alan Turing proposed the Turing test, a way of judging whether a machine can exhibit behavior indistinguishable from a human’s. While its credibility as a test has come under scrutiny in recent years, the intent behind it has never lost its intrigue.
For 70-odd years, humans have sustained a fascination with machines that act like us. Across books, television, film, startups, venture capital and even military funding, we’ve tugged on the thread, speculating and investigating in equal measure.
Despite how long we’ve talked about it, the explosion of the current AI conversation can be pinpointed to a single moment in time: the release of ChatGPT on November 30, 2022. From that day through mid-April 2023, Google searches for AI increased by more than 400%. And it’s not all talk: venture funding in AI-related startups is more than double what it was last year.
What separated ChatGPT from any predecessor, and what at least in part explains its meteoric rise in popularity, was that it made AI visible and accessible. Suddenly, a technology that had been closer to science fiction than to reality was, literally, at our fingertips. With the viral popularity of ChatGPT came a surging interest in tools like Midjourney, a generative AI visual arts program.
But access was only part of the story. Creativity is an inherently human trait, and AI programs that could write, draw and converse weren’t just interesting — they felt human. We gave them a Turing test of our own, and they passed.
The student becomes the master
Unpacking the great AI debate
Of course, the other shoe always drops.
As quickly as AI caught fire, so too did the fear of what it might bring. Excitement about what AI could do was accompanied by questions about what it should. AI can write an 800-word article on B2B marketing trends — should it replace copywriters?
The answer depends on who you ask and what they prioritize. For our part, we say no: AI is much better utilized as a complement to our work than as a replacement for it. But you don’t have to go far to find someone willing to say, “Sure, AI is free and copywriters cost money,” or, “Any use of AI violates our integrity.”
Lately, a slew of complicating legal factors has emerged as AI continues to proliferate across industries. A class action lawsuit against Microsoft and OpenAI, at least the second of its kind, alleged misuse of personal data in AI training. Meanwhile, a federal judge ruled that art created by AI is not eligible for copyright protection, citing “human authorship” as a “bedrock requirement.”
So therein lies our dilemma. We want AI to prove to us that it can be like us, but we don’t want it to be so much like us that we’re replaced. Smart enough to work, but not to rule. Right, but without rights. Don’t trust the computer with management.
This dissonance is easier to navigate when you step back from the questions of can and should and focus instead on that third point: What do we want from AI? When computer scientists started experimenting with machine learning more than 70 years ago, it wasn’t because they dreamed of a world where no human would ever have to write or paint again.
While it can take many different forms, the answer to that question is simple. In conversations with AI-aligned clients over the years, it’s become clear that, on the whole, we want AI to improve lives. Maybe that means automating a tedious data management task. Maybe it means triaging potential threats to prevent cybersecurity breaches. Maybe it means reading anonymized databases of medical records to improve population health outcomes. We get to decide.
Old dogs and new tricks
The strategy for AI brands
It’s here that the common ground between those two perspectives on AI is most relevant.
The blessing and curse of the current moment for AI is visibility: more eyes mean more fans, but also more critics. Productivity is also a double-edged sword. Everyone wants to be efficient, but “more and faster” is not a strategy. Quantity can’t replace quality, now or ever.
Legacy brands can learn something from their contemporary counterparts about how to navigate the new landscape. Some of that education requires interpreting the current response to AI: just 28% of customers, for example, find AI-generated or AI-supported content and stories to be authentic. Even AI brands that have nothing to do with content generation still have to confront the fact that tools like ChatGPT dominate what customers understand about the technology.
At the same time, upstart AI brands taking advantage of this moment can learn something from established industry stalwarts. The race to be first has at times come at the expense of the true promise and integrity of technology. In the dot-com era, some businesses built on the back of napkins proved as fragile as their origins, collapsing as quickly as they grew. In this AI era, a new generation of startups and tech innovators must ensure they don’t follow the same path. Legacy brands that still succeed today were founded and built to address critical challenges, well before the hype. They answer the third question. If opportunistic players want to become legacy ones, they need to start answering it too.
The signal in the noise
How AI brands can tell their story
All of this might seem like indulgent pondering on a challenge that is all too tangible, but there’s a purpose to it. Our work as integrated marketing and public relations experts means we spend much of our time thinking about how to tell an effective story. For any AI-aligned brand today, whether it has been in the space for two decades or is trying to take advantage of a surging market, that challenge is surely top of mind.
Earlier, we acknowledged two important points about AI: it’s many different things, and it’s not new. The reality for brands, however, is that their message must resonate with an audience that might not recognize those truths. Accurate or not, preconceived notions about what AI is and does are what brands have to contend with.
It goes without saying that there’s no one playbook for this challenge. The legacy healthcare AI solution will strategize differently from the AI-powered cybersecurity startup. Brands that capitalize effectively on the excitement without falling into the trap of doubt will do so because they can accurately identify and communicate, with authenticity and integrity, where they fit in the wider context of the AI story.
The way it goes will come down to the strategy and skills of savvy brand marketers — or of the integrated agency they bring in to help navigate the tide. Once brands have established what they want from AI, the same understanding must be reached with the customer. That’s always been difficult enough. In this moment, in the buzz of excitement and in the static of skepticism, the job is harder than ever.