Adam Nowakowski, Creative Agency Partner CEE at Facebook, writes about creativity, data, marketing and the results that matter.
Once a year, the world of advertising goes slightly crazy. This happens around June, when it’s time to write EFFIE competition entries. The final deadline is still ahead of us, but for several weeks now marketing departments and agencies have been looking into the past for proof of success.
This may be difficult, because although marketing is described in terms of cause and effect, its everyday reality is more nuanced. What case studies present as marketing tai chi sometimes bears greater resemblance, in reality, to a saloon brawl. Objective? Sell! Message? “Buy!” Once the campaign is over, no one talks about its effects because it’s already a thing of the past; meanwhile the fight continues, so everyone needs to focus on what is ahead.
Possibly, our world is not at all crazy in June. Maybe the time we devote to writing our EFFIE entries is actually a period of sanity, while there is something wrong with how the ad industry operates throughout the rest of the year?
If so, making things right may require acting contrary to our daily instincts: start from the end, focus on the past, and draw inspiration from failures.
The idea to start from the end comes from the conviction that advertising is measured best by its results. They are the only reason for bearing its costs. Marketers pay for strategy, creative development and media plans, but it is the outcomes that they really want to purchase. Therefore, it may come as a surprise that incrementality, one of marketing’s most important concepts, is so poorly understood.
What is incremental growth? An increase that can be attributed to a specific factor and is not a mere correlation, i.e. a random coincidence. Unfortunately, in the case of advertising causality is not a given, and this is why service on the EFFIE jury requires the highest skills.
To prove the effectiveness of advertising on our platforms, Facebook applies a rigorous measurement methodology known from clinical trials. Once a campaign target group is defined, randomly selected individuals are assigned to a control group. As a result, there are two statistically equivalent groups - the test group and the control group - but only the former is exposed to the campaign ads (notably, exposure occurs in natural conditions, not within an artificial study setting). Then, representatives of both groups are asked to complete a survey. The questions concern the ad itself, knowledge about the brand and attitudes towards the product.
With the study organised this way, any variance between the groups’ responses can be confidently attributed to the ads. If brand awareness following the campaign is at 25% in the test group, and 15% in the control group, we can conclude that the advertisement generated an incremental 10 percentage points on this parameter. This way, clients know what they pay for and agencies understand what they are capable of, all thanks to results generated within an objective scientific experiment. These 10 percentage points of ‘incremental growth’ are the product of advertising, not just a random correlation (e.g. due to poorer performance by the competition or a category trend).
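The mechanics of such a study can be sketched in a few lines of code. The sketch below is purely illustrative: the function names, the 10% holdout rate and the survey counts are hypothetical, not Facebook's actual implementation. It shows the two core steps the text describes: randomly splitting an audience into test and control groups, and computing the lift in percentage points between their survey responses.

```python
import random

def assign_groups(audience, holdout_rate=0.1, seed=42):
    """Randomly split a target audience into test and control groups.

    Illustrative only - the real platform mechanism is not public here.
    Randomization is what makes the groups statistically equivalent,
    so any later difference can be attributed to ad exposure.
    """
    rng = random.Random(seed)
    test, control = [], []
    for user in audience:
        (control if rng.random() < holdout_rate else test).append(user)
    return test, control

def incremental_lift_pp(test_positive, test_total,
                        control_positive, control_total):
    """Incremental lift in percentage points between the two groups."""
    test_rate = test_positive / test_total
    control_rate = control_positive / control_total
    return (test_rate - control_rate) * 100

# Hypothetical survey counts matching the article's example:
# 25% brand awareness in the test group, 15% in the control group.
lift = incremental_lift_pp(250, 1000, 150, 1000)
print(f"Incremental lift: {lift:.1f} pp")  # prints "Incremental lift: 10.0 pp"
```

In practice one would also test whether the observed difference is statistically significant before attributing it to the campaign, but the subtraction above is the essence of incrementality: the control group supplies the counterfactual baseline.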
Starting from the end should mean that the brief for the agency not only includes communication objectives, but also details regarding how KPIs will be measured. In such a case, ads can be designed to achieve the best results in the tests.
If this sounds like a hack, rest assured it isn’t one. This is merely discipline in the execution of the adopted communication strategy. It should mark the end of all-in-one ads overloaded by tags and promotion flashes. If the goal of the campaign is to increase awareness, ads must include only elements that fulfill this goal, and be free of content that serves other purposes.
Consistent measurement of the effectiveness of activities generates a performance history which provides insights about what works and what doesn’t. Looking into the past, one can design future initiatives. That is why the past is worth scrutinising.
The insights are also valuable to agencies that seek new business and the comfort of presenting from a position of expertise.
Finally, there is the issue of failure as a source of inspiration. This works in two ways. Firstly, measurement ensures that mistakes are not repeated. Frank Sinatra sang that only fools go walking on thin ice... twice. With the experience that comes from measurement, certain things are simply out of the question right from the start. Secondly, when failures are not in vain (because they supply a pool of experience), advertisers manifest a greater eagerness to experiment.
Precise briefs, single-minded executions and bold clients set measurement within the creative toolkit... and THAT is not just slightly crazy. That’s totally mad.