Friend or fascist? Generative AI in marketing: The good, the bad, and the ugly

25th September 2023

If you’re sick of hearing about AI, it’s likely because of one of the following things:

  1. LinkedIn grifters banging on about how they allegedly quintupled their turnover with it overnight.
  2. Constant fearmongering about how it will make all marketers redundant by 2025.
  3. You don’t know how to use it to get the results you want.

But are tools like ChatGPT actually any use to marketers today? As with most things in life, the answer is far from black and white. 

Applied smartly, generative AI can be a powerful tool for boosting efficiency and sparking creativity. But there are certain areas of marketing that it isn’t quite ready to take on, and other areas where it shouldn’t be involved at all. 

We sat down with Steven Millman, Global Head of Research and Data Science at Dynata, for the Unicorny podcast to get an expert view on the good, the bad, and the ugly of AI. Spoiler alert: sometimes it’s better to do things the old-fashioned way… 

Listen here: AI. Everything, Everywhere, All at Once

The good:

Generative AI is already helping marketers to personalise campaigns faster and smarter than ever before. 

‘The most effective uses have been in the context of targeted advertising,’ says Steven, particularly when it comes to predicting what a large population will want based on information from a small sample set. But it can also be used on an individual level. 

‘These systems will read information about you from a cookie or […] other systems that collect information used for targeting. And based on the moment when it recognizes who you are, in milliseconds it will choose to give you one ad or another.’

Some of these systems are now also able to create brand new ads based on your information, instead of pulling from a set list of ads as they did previously. And we’re not just talking about testing different types of copy. With the help of DALL-E or Adobe Firefly, every visual aspect of your advert can change instantly to reflect each individual viewer. For example, an ad could switch from showing a 60-year-old Mexican woman on a beach to a 12-year-old British boy skydiving, all without your creative team having to lift a finger. 
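To make that a little more concrete, here’s a rough Python sketch of what such a pipeline could look like: targeting attributes go in, a freshly generated visual comes out. The viewer fields, the prompt wording and the use of OpenAI’s image API are illustrative assumptions, not a description of how any particular ad platform actually works.

```python
# Hypothetical sketch: turn targeting attributes into a per-viewer ad visual.
# Assumes the OpenAI Python client and its image API; any text-to-image model could stand in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def build_ad_prompt(viewer: dict) -> str:
    """Compose an image prompt from whatever the targeting layer knows about the viewer."""
    return (
        f"Advertising photo of a {viewer['age']}-year-old {viewer['nationality']} "
        f"{viewer['description']} {viewer['activity']}, bright lifestyle photography, "
        "with space left on the right for headline copy"
    )


def generate_ad_image(viewer: dict) -> str:
    """Return the URL of a freshly generated visual tailored to this viewer."""
    response = client.images.generate(
        model="dall-e-3",
        prompt=build_ad_prompt(viewer),
        size="1024x1024",
        n=1,
    )
    return response.data[0].url


# The same campaign, two very different creatives:
print(generate_ad_image({"age": 60, "nationality": "Mexican",
                         "description": "woman", "activity": "relaxing on a beach"}))
print(generate_ad_image({"age": 12, "nationality": "British",
                         "description": "boy", "activity": "skydiving"}))
```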

But this doesn’t mean the show’s over for marketing done manually. Steven says this particular advancement is still very much in the works, and that AI is likely to remain a trusty ‘co-pilot’ for marketers, rather than replacing them altogether. After all, the abacus, the calculator, and Excel still haven’t managed to kill off the accountant.

‘There’ll certainly be some jobs that we need less of. But the co-pilot idea is that it’s not doing your job, it’s helping you do your job more effectively. And there’s lots of ways where it can do that.’

Take writing projects, for example. Let’s say you wanted to send out a survey to your customers. Solutions like ChatGPT will produce a base for you to improve upon, even if, as Steven says, the quality of the survey itself won’t be anything special. ‘But if you do something like that, it will tell you a bunch of things you might not have thought about. So it sparks your creativity, but importantly it puts stuff on the page that you can start to work from and edit.’ 
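If you’d rather script that kickstart than type into the chat window, the call involved is short. This is a minimal sketch assuming the OpenAI Python client; the model name and prompt are placeholders, and the output is only ever a first draft to edit.

```python
# Hypothetical sketch: ask a chat model for a first-draft customer survey.
# The model name and prompt below are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a market researcher drafting customer surveys."},
        {"role": "user",
         "content": ("Draft a 10-question satisfaction survey for customers of a "
                     "small online homeware brand. Mix rating scales with two "
                     "open-ended questions.")},
    ],
)

# A starting point to work from and edit, not a finished questionnaire.
print(draft.choices[0].message.content)
```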

The bad:

But it’s not all binary code sunshine and generative rainbows — particularly not when it comes to market research. Generative AI has the potential to wreak havoc with panel quality. 

People are already using AI tools to submit fake survey responses, with businesses around the world conducting this kind of fraud at scale in paid panels and loyalty programmes. Worryingly, the technology is now sophisticated enough that it’s getting hard to spot.

‘One of the historically most common ways that people look for fraud or poor engagement is reading through the open-ended questions… If there’s a nonsensical answer there, or if you get the exact same answer verbatim for 30-40 people, the chances are you’re under a bot attack,’ says Steven.
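That verbatim check is simple enough to sketch: normalise the open-ended answers and flag any that repeat suspiciously often. The toy example below illustrates the idea Steven describes, not Dynata’s method, and the threshold of 30 is an assumption lifted from his quote.

```python
# Toy sketch of a verbatim-duplicate check on open-ended survey answers.
# Not Dynata's method; the threshold is an assumption taken from the quote above.
from collections import Counter


def flag_verbatim_duplicates(answers: list[str], threshold: int = 30) -> list[str]:
    """Return any answer that appears word-for-word at least `threshold` times."""
    normalised = [answer.strip().lower() for answer in answers]
    counts = Counter(normalised)
    return [answer for answer, count in counts.items() if count >= threshold]


# Made-up example data: 35 identical answers alongside a couple of genuine ones.
answers = ["great service, would buy again."] * 35 + [
    "Delivery was slow but support sorted it out.",
    "Love the new range, especially the lamps.",
]

for suspect in flag_verbatim_duplicates(answers):
    print("Possible bot attack, identical answer repeated:", suspect)
```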

‘It took me all of 30 minutes to write a little script in which ChatGPT would give me slightly different answers to the same question posed over and over again, and in almost all cases they would not have been red flags for a coder going through the survey.’ 
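It’s easy to see why. Something along the lines of the sketch below, which simply re-asks the same question at a non-zero temperature so every ‘respondent’ comes back slightly differently worded, would sail straight past a verbatim check like the one above. The model name and question are assumptions, and this is an illustration of the weakness, not a how-to.

```python
# Hypothetical sketch of the kind of script Steven describes: the same question,
# asked repeatedly, produces differently worded answers on every call.
from openai import OpenAI

client = OpenAI()

question = "In a sentence or two, what did you think of your recent purchase?"

for _ in range(5):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=1.0,  # non-zero temperature = varied phrasing each time
        messages=[{"role": "user", "content": question}],
    )
    # None of these answers will match each other verbatim, so none get flagged.
    print(reply.choices[0].message.content)
```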

To combat this, Dynata now has 196 different ways to check surveys for fraudulent responses. But fake answers are highly likely to slip under the radar at smaller firms with lighter tech capabilities.

And that’s not where the problems with synthetic research responses end… 

The ugly:

Paying people for their opinions is expensive and time consuming, particularly at scale. But ‘jailbreaking’ AI solutions into giving representative answers for your target demographics is cheap and fast. The only problem is that the results might be horribly offensive. 

‘What [companies] are worried about is that these systems have gotten really good at doing things we really don’t want done,’ says Steven. ‘These models are dangerous to leave unattended. And if you remember what happened with Twitter’s AI bot, it became [an] anti-semitic fascist almost instantaneously by learning off what people were talking to it about.’

This isn’t because the programmers behind these tools are terrible people, but because the training set used to build them contains hundreds of billions of data elements, with every good and bad thing that people say and do represented there in some way. Giving them access to the full range of human expression, however awful, is necessary to produce realistic human-like responses. 

Most tools have guardrails in place to stop those unpleasant elements from being reflected in the answers they give. But, controversial as it might sound, these controls are a double-edged sword. If a tool has too many constraints, it becomes weak as it can’t answer questions in a way that’s actually useful, for fear of being too subjective. 

However, without strong enough controls in place, nobody but wrong’uns will want to use it. And no AI developer wants to see the KKK boasting about how they used their tool to write their most offensive placards yet. 

Even if you’re willing to risk basing your campaigns on regressive stereotypes, would you get the same answers from your real demographic if you went out and conducted that survey yourself? Almost certainly not, making AI panels a lose-lose area for now.

It’s clear that while AI is undoubtedly a useful weapon in any marketer’s arsenal, it’s not yet ready to take over the show. Deploy it where it’s helpful, but take its responses with a generous pinch of salt, and watch this space. These problems won’t be problems forever, and the agencies that start embracing AI now will be best placed to reap the rewards when these tools level up in the near future. 

Listen to AI. Everything, Everywhere, All at Once on your podcast platform.

Written By
Ornella Weston is the managing director of boutique agency Duckman Copy Ltd. Throughout her career, she’s written everything from white papers to websites, from billboards to board game instructions. Now it’s time to add op-eds to the list. The marketing world is full of flops and failures, often at great...