Zoom on

When generative AI meets marketing

There are more questions than answers about today’s hot-button issue: generative AI. Can we monitor the quality of the information these systems produce? Or will we be overwhelmed by errors and manipulation? How can we protect the rights of citizens and content producers from the misuse of information? Will we lose control of new technologies? These are the questions that some of the top AI scientists are asking, starting with Yoshua Bengio and Geoffrey Hinton.

 

The truth of the matter is that many of these broader questions have to be tackled proactively, by finding collaborative solutions, because everyone’s future is at stake. Here, though, we offer some findings and reflections for managers, marketing managers in particular.

 

According to a recent McKinsey report, the level of AI adoption in companies has more than doubled in the last five years, and what we call “generative AI” is accelerating this trend. This type of AI is based on Large Language Models (LLMs), which can generate content, conversations, images, and software in ways that are nearly indistinguishable from what we humans produce. But as a new book by MIT’s Thomas Davenport reminds us, artificial intelligence doesn’t always work. And we should also keep in mind what Luciano Floridi, philosopher of technology, recently pointed out: when LLMs make mistakes, the results are disastrous, precisely because of how they work.

 

“They do not think, reason or understand; (…) they can do statistically – that is, working on the formal structure, and not on the meaning of the texts – what we do semantically, even if in ways (ours) that neuroscience has only begun to explore.”  

 

AI systems have already proven useful in marketing for identifying room in the market for new offerings and experiences. They do this by constantly listening to consumers on social media and in customer service interactions, but also by automatically generating reports based on focus groups and sales data. A concrete example comes from Adidas, which used generative AI to design a new shoe based on customer feedback and market trends.
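The article doesn’t describe how any of these listening-and-reporting pipelines are actually built; purely as an illustration of the pattern, here is a minimal sketch that turns a batch of customer comments into a short insight report with a general-purpose chat API. The openai client, the gpt-4o-mini model name, the prompt wording and the comments are all assumptions, not anything Adidas or McKinsey describes.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment; any comparable LLM API would do

def summarize_feedback(comments: list[str], product: str) -> str:
    """Condense raw customer comments into themes and unmet needs for a product team."""
    bullet_list = "\n".join(f"- {c}" for c in comments)
    prompt = (
        f"You are a market analyst. Summarize the recurring themes in the following "
        f"customer comments about {product}, then list three unmet needs that could "
        f"inspire a new offering.\n\nComments:\n{bullet_list}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever your provider offers
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Invented comments, for illustration only:
report = summarize_feedback(
    ["The sole wears out too fast.", "Love the fit, hate the colors.", "Too expensive for daily training."],
    product="running shoes",
)
print(report)
```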

 

AI can also help create 3D models and other visualizations, and even optimize the final output, assess costs, evaluate manufacturability and, of course, gauge consumer preferences. Every concept, even pricing, can naturally be personalized for individual customers or personas. But it doesn’t stop there. AI is radically transforming how creative work is developed and planned for ad campaigns. Case in point: WPP recently announced a partnership with Nvidia to use smart systems to serve its advertising clients. Thanks to generative AI, creative production is efficient and easy to personalize. What’s more, it can also be scaled up in the campaign optimization phase, when countless versions of the creative are tested to find the best way to achieve brand objectives.

 

As Stephan Pretorius, chief technology officer at WPP, said: “We are able to (…) customize [advertising] to every environment in the world: you can create 10,000 versions within a couple of minutes.” To guarantee copyright protection for the images used as input in these processes, WPP linked its platform to Getty Images systems.  
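WPP has not published how its platform works; again purely as a sketch, the “10,000 versions” idea boils down to fanning one brief out over a grid of audiences, tones and languages. The `generate` helper below is a hypothetical stand-in for whatever text-generation model a team actually calls, and the lists are invented.

```python
from itertools import product

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text-generation model (hosted or local)."""
    raise NotImplementedError("Wire this to your provider of choice.")

AUDIENCES = ["runners", "commuters", "students"]
TONES = ["playful", "premium", "practical"]
LANGUAGES = ["English", "Spanish", "Italian"]

def copy_variants(brief: str) -> dict[tuple[str, str, str], str]:
    """Fan one creative brief out into one headline draft per audience/tone/language combination."""
    variants = {}
    for audience, tone, language in product(AUDIENCES, TONES, LANGUAGES):
        prompt = (
            f"Write a 20-word ad headline in {language}, in a {tone} tone, aimed at {audience}, "
            f"based on this brief:\n{brief}"
        )
        variants[(audience, tone, language)] = generate(prompt)
    return variants
```

Three lists of three already yield 27 versions; lengthening the lists (and adding formats, placements and image prompts) is how the count climbs into the thousands, after which the best-performing variants are kept during campaign optimization.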

 

But an issue that has yet to be systematically studied is the impact of generative AI when it’s used by consumers. Recently, Coca-Cola gave its fans access to generative AI tools and invited them to produce their own creative for the “Create Real Magic” campaign. In addition, Expedia announced plans to upgrade its travel app with a ChatGPT-based plugin that lets customers query OpenAI’s chatbot for help choosing destinations, resorts, and activities. By combining search capabilities, customized recommendations and curated shopping lists, AI can become a powerful shopping assistant.
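Expedia hasn’t published its plugin’s internals; just to illustrate the pattern of a conversational shopping assistant, here is a minimal sketch that keeps the running dialogue and sends it back to a general-purpose chat API on every turn. The client, model name and system prompt are assumptions, not Expedia’s implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

SYSTEM = (
    "You are a travel shopping assistant. Ask clarifying questions, then suggest "
    "destinations, resorts and activities that fit the traveler's budget and dates."
)

def chat(history: list[dict], user_message: str) -> tuple[str, list[dict]]:
    """One turn of the conversation: append the user message, return the reply and updated history."""
    history = history + [{"role": "user", "content": user_message}]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "system", "content": SYSTEM}] + history,
    )
    reply = response.choices[0].message.content
    return reply, history + [{"role": "assistant", "content": reply}]

# Example first turn:
reply, history = chat([], "Somewhere warm in November, family of four, modest budget?")
print(reply)
```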

 

For years now, user-generated content has been revolutionizing marketing. So we can expect that putting generative AI in the hands of all of a brand’s prospects (be they fans or haters), and not only content creators, will give further impetus to these decentralized processes for producing web content. This brings advantages, but, as we can easily predict, also risks associated with fake news and manipulation. To contend with and control this new wave of innovations, we can only hope that institutions and economic actors don’t end up getting caught off guard again. Incidentally, the CEO of Google (whose answer to ChatGPT is Bard), while acknowledging the challenges AI poses, says he’s optimistic because this time we are recognizing the possible problems from the start.

 

Various solutions have been proposed to deal with the problem of “hallucinations” (errors) in AI output. Garnering the most attention today are the neuro-symbolic approach (integrating generative AI with more traditional rule-based systems) and a sharper focus on the quality of the prompts and of the training these systems receive. As for this last point, the more LLM development is grounded in local and vertical knowledge and content, the less costly and more effective it will be.
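One common way of acting on that last point, though not necessarily what the authors have in mind, is to ground each prompt in a small curated body of local, vertical content before the model answers, so that it paraphrases vetted material rather than inventing facts. A minimal sketch follows, with a deliberately naive keyword retriever standing in for a real search or vector index and an invented mini knowledge base.

```python
# Retrieve-then-generate sketch: the point is the shape of the pipeline, not the
# retrieval method (a production system would use a proper search or vector index
# instead of keyword overlap). The knowledge base below is invented.

KNOWLEDGE_BASE = [
    "Items can be returned within 30 days with proof of purchase.",
    "Our shoes run half a size small; we recommend sizing up.",
    "The upper is made of 50% recycled polyester.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank knowledge-base entries by crude keyword overlap with the question."""
    words = set(question.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that instructs the model to answer only from the retrieved facts."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using ONLY the facts below; say you don't know if they don't cover the question.\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )

# The resulting prompt is then passed to whatever LLM the team uses:
print(grounded_prompt("Do your shoes run true to size?"))
```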

 

One of the limitations of current systems is the rigidity of the modes of expression used in their interfaces. As we’ve mentioned, we already use systems that create images, in particular for advertising creative, but the potential for integrated multi-modality has yet to be fully exploited. Indeed, we can only imagine what generative AI will do when it combines audio, images, video clips and virtual reality with text (which is what Meta is working on right now), and all this not only as output, but as input as well. Amazon, which developed its own proprietary LLMs some time ago, has already announced that Alexa will soon be able to take conversational voice commands, ChatGPT-style, and generate output on various devices enabled with these functionalities (like your TV). And in the not-so-distant future, the company plans to roll out a household robot (called Astro) with built-in generative capabilities that can perform service tasks and manipulate physical objects. Even in the immersive and hybrid realities of the various versions of the metaverse, interactions between humans and avatars (in real time, not pre-programmed) will benefit from these new conversational and creative capacities.

 

As marketing people, what conclusion can we draw today (transitory as it may be)? For companies, proactive experimentation and a collaborative spirit are the keys to this discovery phase, full of risks but also tremendous opportunities. Exceptionally fertile ground for such experimentation lies in the ability to integrate, vertically and locally, available knowledge and applications with the generative capabilities of the new AI. Against the backdrop of investments and “announcement wars,” future market dominance is also up for grabs, and the outcome is not a given: it will depend both on whether these technologies actually deliver the value they promise and on the openings and restrictions that will emerge in the public sphere and in international policy.

 

And for consumers? Areas of opportunity and assistance will open up for them too, offering varying degrees of freedom and value depending on how critical issues are addressed: the quality of information and of the decisions that follow from it, the protection of data and private content, and transparency and neutrality in the selections and recommendations that emerge. (Who decides what data and training the system works on? Who sets the model alignment objectives? What kind of competition will there be among the different informational and decisional gatekeepers?) These are not just the results of technological research; more importantly, they are the outcomes of economic, social and political processes. Ultimately, we are the ones who can and must decide on the future of AI.
