
Calibrating Your Marketing Mix Model With Post-Purchase Survey Data

by Matt Bahr and Michael Taylor

 

Marketers can no longer rely on a single source of truth when it comes to marketing attribution. The majority of users have opted out of tracking since Apple’s iOS 14 release, punching a hole in the accuracy of digital tracking that every ecommerce business has felt. In tandem, as DTC brands mature and expand into offline channels (as Harry’s did while I was building the marketing science team there), it becomes impossible to rely on digital tracking alone.

Depending on who’s counting, there are some 20 attribution methods available, each with its own strengths and weaknesses. While most of us have had time to come to terms with this new reality, and are exploring multiple solutions, not much has been written about how to use multiple methods together to triangulate the ‘truth’ about what’s driving performance. What’s more, the pursuit of foolproof deterministic methods has left much of a brand’s competitive advantage on the cutting room floor; one model to rule them all surely points to a truth, but whose?

At Recast, the media attribution startup I co-founded after leaving Harry’s, we use Marketing Mix Modeling (MMM) to determine the impact each media channel has on performance. Alongside Harry’s, we’ve built models for modern consumer brands like MasterClass, Mockingbird, and Away. Google and Facebook have also gotten into the game, releasing their own open source MMM solutions.

Historically, MMM has primarily been used by larger brands, due to the cost and complexity of building a model and the assumption that you need lots of data for this type of analysis. Contrary to common belief, MMM can work just as well, if not better, for ‘small’ brands (<$50k spend per month), so long as you take advantage of modern data pipelines, machine learning algorithms, and data transformation techniques to automate the process, protect against implausible results, and remove human bias.

In the example we cover in this post, we use the results of a checkout survey (as administered by a tool like Fairing) to calibrate the model. This makes the model and the checkout survey consistent with each other and brings both closer to the ‘truth’ about what’s driving performance. You end up with results that are better than any one method arrives at on its own.

Do you believe what Facebook / Google / TikTok are telling you?

Put yourself in the shoes of a marketer or ecommerce store owner advertising across Facebook, Google and TikTok. Do you trust these ad platforms to grade their own homework? And even if you trust their intentions, do you believe they’re capable of telling a full story? Here’s the story they’re telling you:

| Channel | CPA |
| --- | --- |
| Facebook | $50.00 |
| TikTok | $55.00 |
| Google Brand | $4.50 |
| Google Non-Brand | $30.00 |

Under a last-click attribution model, most of your sales from Google come through your brand campaign, i.e. people searching for your brand term. And yet, if customers are searching for your brand name, then clearly they heard about you somewhere else – attribution credit belongs to another channel! At the other end of the funnel, you have Facebook and TikTok video ads driving hundreds of thousands of views and lots of engagement, but nowhere near as many sales as you expected. Do you turn TikTok off or cut spending on Facebook ads? As a savvy marketer you decide to get a second opinion, and add a post-purchase survey that asks “How did you hear about us?” using Fairing. Here’s what the survey tells you:

Post-Purchase Survey Results

The results confirm your suspicions: many people say they heard about you from TikTok, despite UTM-based analytics data saying they came via “direct / none”, “google / cpc”, or “google / organic”. A colleague suggests the survey might be biased towards more ‘memorable’ experiences like watching a video, and that in reality it was the more functional Google ads that really sealed the deal.

Google isn’t separated out in the survey (we wanted to keep the survey simple so completion rates remained high), but its share is roughly in line with what Google reported for brand terms, which account for most of the spend. Facebook presents a more muddled picture: the survey results show a worse CPA than Facebook itself is reporting, potentially because you run more static ads and less prominent placements in this channel than on TikTok.

Rather than bringing relief, the picture is only getting messier. Do you now believe analytics, your survey, or the ad platform reports? You need to solve this problem before setting your budgets for next quarter. What can you do?

Build an MMM in Excel with LINEST

You’ve heard about Marketing Mix Modeling as a potential solution for finding out which channels are driving incremental sales. You were told it was possible to build a simple model in an afternoon, using the LINEST formula in Excel to handle the linear regression. After an hour or so gathering the data and following MMM tutorials, you arrive at the following model:

MMM using Google Sheets’ LINEST function

Link to Spreadsheet
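If you’d rather work outside a spreadsheet, the same regression takes a few lines of R. This is a minimal sketch: the file name and column names are placeholders, not the actual spreadsheet’s.

```r
# Rough R equivalent of the LINEST regression. The CSV and column names
# are hypothetical: one row per week, with total orders and spend per channel.
mmm_data <- read.csv("mmm_data.csv")

fit <- lm(orders ~ facebook + tiktok + google_brand + google_nonbrand,
          data = mmm_data)

# Each coefficient is incremental orders per $1 of spend, so its
# reciprocal is the implied CPA for that channel.
cpa <- 1 / coef(fit)[-1]  # drop the intercept (baseline sales)
round(cpa, 2)
```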

It seems to have captured your suspicion that Facebook and TikTok were driving better results than the platforms were tracking, likely because users who opted out of tracking post-iOS 14 are invisible to them. Facebook in particular has had a huge role reversal: it’s now performing better than TikTok. Google Brand is inflated versus what Google reports, but it’s still suspiciously low. Here’s what the model is telling you:

| Channel | CPA |
| --- | --- |
| Facebook | $34 |
| TikTok | $44 |
| Google Brand | $7 |
| Google Non-Brand | -$16 |

The main result that doesn’t make sense is Google Non-Brand: the model says the channel drove negative sales, i.e. you lose one sale for every $16 spent, which is implausible. This is a relatively small channel, spending just $5,300 total, which explains why the model didn’t have enough data to get it right. However, showing (and trying to explain) an implausible result like this to executives and decision-makers can seriously harm the credibility of our model. Furthermore, this model now offers a third opinion rather than unifying the results we got from survey data, analytics, and ad platform reports. Fortunately, we can correct for this by calibrating our model.

Calibrate your MMM with Bayesian priors

We use a Bayesian model at Recast, which lets us incorporate what we’ve learned from other attribution methods and make the results of our model more plausible. For example, we can tell the model that a marketing channel can’t drive negative sales, because that scenario is implausible. Setting priors using the results of other attribution methods, like survey data, makes MMM the glue that sticks multiple attribution models together. This lets you reconcile differences of opinion and make more accurate predictions on which to base budget allocation decisions.
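To make that concrete, here’s a toy illustration in R (all numbers are made up): a prior belief about a channel’s effect that simply refuses to put any weight below zero.

```r
# A prior on a channel's effect (incremental orders per $1 of spend),
# truncated at zero so negative sales are impossible. The mean/sd are
# illustrative: 0.02 orders per $1 corresponds to a $50 CPA.
set.seed(42)
draws <- rnorm(100000, mean = 0.02, sd = 0.02)
draws <- draws[draws > 0]              # the "can't drive negative sales" constraint
quantile(1 / draws, c(0.1, 0.5, 0.9))  # implied range of plausible CPAs
```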

In our case we want to calibrate our model with the results of the survey. It’s important to note that all of our existing measurements are likely to be wrong: Facebook tends to underestimate the true number of conversions due to iOS 14, Google is likely overestimating the impact of brand keywords, and the survey overstates more ‘memorable’ channels like TikTok. The key is using priors to give the model an intelligent starting point from which to determine the most likely estimate of true performance.

We’ve put together an example script in R to show how a Bayesian marketing mix model works using our example data. It’s in a Google Colab notebook, which lets you execute the code in a virtual environment rather than having to download and set everything up on your local computer, and it’s shareable like any Google Doc.

For simplicity, we’ve translated our survey data into upper and lower bounds that we feed into the model. According to the survey, approximately 30% of orders each came via Facebook and TikTok, which translates to CPAs of $61 and $41 respectively. We suspect the survey is most generous to Facebook and TikTok, as both are engaging video ad formats, so we set the lower bound (minimum CPA) at $10 for these channels to give the model some wiggle room.
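The arithmetic behind those numbers is simple: an implied CPA is the channel’s spend divided by the orders the survey attributes to it. The spend and order totals below are hypothetical (the post doesn’t publish them), chosen only to reproduce the CPAs quoted above.

```r
# Hypothetical totals, for illustration only.
total_orders <- 1000
spend        <- c(facebook = 18300, tiktok = 12300)
share        <- c(facebook = 0.30,  tiktok = 0.30)   # survey response share

implied_cpa <- spend / (share * total_orders)        # spend per attributed order
round(implied_cpa, 0)                                # facebook ~$61, tiktok ~$41
```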

Google Non-Brand search is a problematic variable, as we want the model to correct for the spurious negative $16 CPA we got from our first MMM. So we’ve set a $20 minimum CPA for this channel, as it’s likely to be more expensive than Facebook and TikTok based on what we know. We have a strong belief that Google Brand isn’t truly incremental, though it might drive a few incremental conversions that would otherwise go to a competitor, so we put its minimum CPA at $100. We set the upper bound for every channel at $500, a level high enough to be implausible based on what we know.

ROI lower bounds
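In R, the bounds described above look like the snippet below (variable names are ours, not the notebook’s). Note the inversion when converting CPA bounds to bounds on the channel effect, measured in orders per dollar: a minimum CPA becomes a maximum effect, and vice versa.

```r
# CPA bounds per channel, as reasoned through above.
cpa_min <- c(facebook = 10,  tiktok = 10,  google_brand = 100, google_nonbrand = 20)
cpa_max <- c(facebook = 500, tiktok = 500, google_brand = 500, google_nonbrand = 500)

# Effect = orders per $1 of spend = 1 / CPA, so the bounds flip:
beta_max <- 1 / cpa_min   # most orders per dollar the model may assign
beta_min <- 1 / cpa_max   # fewest orders per dollar
round(rbind(beta_min, beta_max), 4)
```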

The model is then set up using Stan, a Bayesian modeling library, which forms the core of how we build our models at Recast. This toy example sets up a simple Bayesian model using the data and bounds we specified, in order to demonstrate how powerful Bayesian priors can be in joining multiple attribution methods together. There’s a lot more the tool can do (Recast models typically have thousands of parameters) but this should give you a good sense of its flexibility.

Priors Model
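To give a flavor of what such a model looks like, here’s a heavily condensed sketch in rstan. It’s deliberately toy-sized (no adstock, saturation, or seasonality), the variable names are ours rather than the notebook’s, and it reuses the hypothetical `mmm_data` frame from the earlier sketch; the point is just how the CPA bounds above become parameter constraints.

```r
library(rstan)

# Toy Bayesian MMM: orders are a baseline plus a bounded linear effect per
# channel. Each effect is orders per $1 of spend, so a CPA range of
# [cpa_min, cpa_max] becomes an effect range of [1/cpa_max, 1/cpa_min].
stan_code <- "
data {
  int<lower=1> N;       // number of time periods
  vector[N] fb;         // Facebook spend
  vector[N] tt;         // TikTok spend
  vector[N] gb;         // Google Brand spend
  vector[N] gnb;        // Google Non-Brand spend
  vector[N] orders;     // total orders
}
parameters {
  real<lower=0> baseline;                            // organic / base orders
  real<lower=1.0/500, upper=1.0/10>  b_fb;           // CPA in [$10, $500]
  real<lower=1.0/500, upper=1.0/10>  b_tt;           // CPA in [$10, $500]
  real<lower=1.0/500, upper=1.0/100> b_gb;           // CPA in [$100, $500]
  real<lower=1.0/500, upper=1.0/20>  b_gnb;          // CPA in [$20, $500]
  real<lower=0> sigma;
}
model {
  orders ~ normal(baseline + b_fb*fb + b_tt*tt + b_gb*gb + b_gnb*gnb, sigma);
}
"

stan_data <- list(
  N = nrow(mmm_data), orders = mmm_data$orders,
  fb = mmm_data$facebook, tt = mmm_data$tiktok,
  gb = mmm_data$google_brand, gnb = mmm_data$google_nonbrand
)

fit <- stan(model_code = stan_code, data = stan_data, chains = 4, iter = 2000)

# Posterior mean effects, converted back to CPAs for comparison.
betas <- summary(fit)$summary[c("b_fb", "b_tt", "b_gb", "b_gnb"), "mean"]
round(1 / betas, 0)
```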

Don’t worry if you don’t understand all the code; it’s something you can pass to your data science team to build a custom MMM solution on top of (or it can be handled for you if you use [Recast](https://getrecast.com/demo-request/)). The part we should be most interested in is the output: how does it compare to our simple model in GSheets?

Include Priors

In this model ‘mean’ is the equivalent of the CPAs in the GSheet model: how many dollars do we have to spend in that channel to get an order? Here’s what the Bayesian model says:

| Channel | CPA |
| --- | --- |
| Facebook | $35 |
| TikTok | $31 |
| Google Brand | $160 |
| Google Non-Brand | $39 |

You can see the results are now far more plausible: we have incorporated what the survey told us about TikTok and Facebook, and our intuition about Google Brand being mostly non-incremental. Let’s see the model results side by side:

| Channel | Platform CPA | Survey CPA | GSheet CPA | Bayesian CPA |
| --- | --- | --- | --- | --- |
| Facebook | $40 | $61 | $34 | $35 |
| TikTok | $55 | $41 | $44 | $31 |
| Google Brand | $4.50 | $9 | $7 | $160 |
| Google Non-Brand | $30 | N/A | -$16 | $39 |

By incorporating the more generous survey measurement, we’ve shifted the CPAs for TikTok and Facebook down from what the platforms are reporting, which we know suffer from undercounting due to users opting out of tracking. The Facebook CPA is right on the money compared to our first model, but the new Bayesian model has revealed a far stronger result for TikTok: this is a competitive advantage, as we can now double down on our investment in this channel while competitors mistakenly think it’s under-performing! We’ve corrected for the ‘Google Brand Effect’, so we’re no longer treating brand campaigns as fully incremental. We also have a more reasonable estimate for Google Non-Brand, though that CPA will likely come down as we scale up spend and collect more data. It’ll also be possible to feed these results in as priors to the next model we build, so that what we tell management stays consistent over time while still allowing for performance improvements.

The end result is far more realistic and flexible than relying on ad platform reports, analytics data, or survey results alone, and it’s all governed in one central model for the business. The model can be used to forecast future performance at different spend levels (sketched below), and can be updated as performance changes through optimization. Management isn’t seeing any completely implausible results from our model, so they’ll be less likely to question it. As more data comes in, uncertainty shrinks and the model continuously learns and improves. No model is right, but by incorporating everything we know about a business, we’re less likely to be wrong.
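As a sketch of that forecasting use, here’s how you’d project orders at a hypothetical budget using the toy rstan model from earlier. A real MMM includes saturation and adstock, so a linear extrapolation like this gets less trustworthy the further spend moves from historical levels.

```r
# Forecast orders at a hypothetical budget using posterior mean effects
# from the toy model above. The spend figures are made up.
post <- summary(fit)$summary[, "mean"]
new_spend <- c(fb = 20000, tt = 25000, gb = 2000, gnb = 5000)

forecast <- post["baseline"] +
  post["b_fb"]  * new_spend["fb"] +
  post["b_tt"]  * new_spend["tt"] +
  post["b_gb"]  * new_spend["gb"] +
  post["b_gnb"] * new_spend["gnb"]
round(unname(forecast))   # expected orders at this budget
```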

One thing you may have noticed is that setting the bounds on the CPA gives the analyst or data scientist a lot of power over the results of the model. If we wanted to tell a story that TikTok was really, really effective, we could set the upper bound on TikTok’s CPA at $5, and the model would dutifully spit back an answer with TikTok’s CPA at $5. As they say: with great power comes great responsibility. When building a model this way, we need to be careful not to impose our own unjustified biases on it. If we just tell the model the answer we want to see, then we aren’t getting an unbiased additional perspective!

Michael Kaminsky is the founder of Recast, which eliminates wasted marketing spend and optimizes performance across all online and offline marketing channels by accurately measuring the true incrementality of marketing in real time. Learn More
