For a while now, the tide has been turning on how we think about quantitative vs. qualitative data: the better we get at collecting and applying each, the harder it is to draw a line in the sand between them. Let’s talk for a bit about why that is.
Old and Busted: “Definition”
In simpler times, we often treated data types as a matter of definition: data is quantitative if it can be precisely and universally defined, and qualitative if it cannot.
For instance, magazine circulation numbers are, well…numbers. So that’s certainly quantitative, right? Whereas a survey asking people’s opinions about the magazine’s writing would be qualitative: the definition of “good writing” varies not only from reader to reader, but from one magazine’s audience to the next. Seems like a pretty obvious distinction—but it’s wrong.
New Hotness: “Usefulness”
What the digital revolution has taught us is that a dataset’s type is nowhere near as important as its usefulness, and that’s where the line between quantitative and qualitative starts to dissolve. You can often think of this as the level of integration: the degree to which a dataset can be communicated, compared, ported, applied, scaled, etc.
We’re riding this wave at Fairing, as the nature of our data (post-purchase customer surveys) seems qualitative at first glance, but the integrations we’ve built to tie survey results to your customer records and purchases turn that data into insights you can act on–be it reallocating ad spend for ROAS, benchmarking performance across product lines, or plotting customer LTV by referrer. And really, that has always been the appeal of data: actionability.
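To make that concrete, here’s a minimal sketch of what “plotting customer LTV by referrer” could look like once survey answers are joined to purchase records. The field names and figures are illustrative, not Fairing’s actual schema or API:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical post-purchase survey responses ("How did you hear about us?")
surveys = [
    {"order_id": 1001, "answer": "Podcast"},
    {"order_id": 1002, "answer": "Instagram"},
    {"order_id": 1003, "answer": "Podcast"},
]

# Hypothetical customer records keyed by order, with lifetime value
customers = {
    1001: {"ltv": 240.00},
    1002: {"ltv": 85.00},
    1003: {"ltv": 310.00},
}

# Join survey answers to customer records, grouping LTV by referrer
ltv_by_referrer = defaultdict(list)
for response in surveys:
    customer = customers.get(response["order_id"])
    if customer:
        ltv_by_referrer[response["answer"]].append(customer["ltv"])

for referrer, ltvs in ltv_by_referrer.items():
    print(f"{referrer}: avg LTV ${mean(ltvs):.2f} (n={len(ltvs)})")
```

The join is the whole trick: a “qualitative” survey answer becomes a dimension you can aggregate, compare, and act on.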
Perhaps we could do with some real-world examples to really drive home the point that data types aren’t as black and white as we used to assume.
Magazine Circulation
This is the poster child for archaic quantitative thinking, as it was the industry’s prevailing KPI for decades. So what was the problem? Exactly what we described above: the data point was well-defined, but that definition was virtually useless.
An advertiser would know 300,000 copies of its ad were “distributed,” but they’d have no clue how many of those copies were distributed to people who even intended to read the magazine, let alone how many times (if at all) the reader actually saw the ad. Ironically, advertisers eventually had to implement qualitative surveys to acquire any kind of relevant insight into actual magazine ad impressions.
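A back-of-the-envelope calculation shows how quickly the “hard” number dissolves. The pass-through rates below are invented for illustration, which is exactly the problem:

```python
# Illustrative arithmetic only: every rate below is a made-up assumption,
# because circulation data never told advertisers these numbers.
circulation = 300_000    # copies "distributed" (the one known quantity)
read_rate = 0.55         # share of recipients who actually open the issue
ad_page_reach = 0.40     # share of readers who reach the ad's page

estimated_impressions = circulation * read_rate * ad_page_reach
print(f"Nominal reach: {circulation:,}")
print(f"Estimated ad impressions: {estimated_impressions:,.0f}")
# Every factor after circulation is a guess, hence the survey workaround.
```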
Even today, at Fairing, we have a number of customers using us to measure the ROI of their print campaigns. Short of plastering discount codes all over their ads, asking the customer is the only tactic they have to measure ROI without relying on the publisher’s own numbers.
Location Data
There’s seemingly no question that location data is quantitative. It turns the entire world into a grid and spits out numerical coordinates based on GPS signals. But in that sense, it’s a lot like DNA evidence: location data can tell you someone’s phone was in a particular 16’x16’ square on Earth, but it can’t supply the vital context—why the person was there, what they did, or whether it was even them rather than someone else carrying their phone.
To overcome these limitations, location data firms integrate with other data sources. Some of those are similarly “quantitative”–such as purchase data–but others are purely survey-based, such as Google’s very own Opinion Rewards app. Google hones its location data by asking users whether they did, in fact, enter a presumed store, and what they did there. Think of it as the eyewitness testimony that validates the DNA evidence.
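Conceptually, that feedback loop looks something like the sketch below: inferred visits checked against survey confirmations. This is an illustration of the idea, not Google’s actual pipeline:

```python
# GPS-inferred store visits (hypothetical)
inferred_visits = [
    {"visit_id": "v1", "store": "Coffee Shop A"},
    {"visit_id": "v2", "store": "Coffee Shop A"},
    {"visit_id": "v3", "store": "Grocery B"},
]

# Survey answers to "Did you visit this store?" (hypothetical)
confirmations = {
    "v1": True,
    "v2": False,  # phone was nearby, but the user never went in
    "v3": True,
}

# How often does the "quantitative" inference survive the "qualitative" check?
confirmed = sum(1 for v in inferred_visits if confirmations.get(v["visit_id"]))
precision = confirmed / len(inferred_visits)
print(f"Inferred visits confirmed by survey: {precision:.0%}")
```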
Customer Experience Surveys
You’ve probably given one of those Happy Or Not customer experience kiosks a smack on your way out of an establishment. It seems fairly qualitative on the surface: while there are four clearly defined choices, they span a wide spectrum of experiences, and we really have no clue why a single customer would’ve tapped the “unhappy” button. For all we know, it wasn’t even the establishment’s fault.
But what Happy Or Not understood was the value of cheap scale. An in-depth customer survey could tell you exactly what the customer’s problem was, but it would take a ton of resources to run at scale—and without that scale, you’ve just got a few isolated anecdotes. Instead, this simple, limited interface logged 600 million data points within its first few years of deployment, letting multi-location businesses compare store performance, experiment with management changes, and spot customer service opportunities by time of day.
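At that volume, even a four-point scale supports the slicing described above. A toy aggregation, with invented data, might look like:

```python
from collections import defaultdict

# (store, hour_of_day, score) where score runs 1 (unhappy) to 4 (happy)
taps = [
    ("Downtown", 9, 4), ("Downtown", 9, 3), ("Downtown", 17, 1),
    ("Airport", 9, 2), ("Airport", 17, 4), ("Airport", 17, 3),
]

# Accumulate (sum, count) per store and hour
totals = defaultdict(lambda: [0, 0])
for store, hour, score in taps:
    totals[(store, hour)][0] += score
    totals[(store, hour)][1] += 1

for (store, hour), (total, count) in sorted(totals.items()):
    print(f"{store} @ {hour:02d}:00 -> avg {total / count:.2f} over {count} taps")
```

Each tap is nearly meaningless on its own; in aggregate, the pattern (say, Downtown cratering at 5pm) is what you act on.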
The data is actionable because it’s high-volume and real-time. It’s a pulse on the business, in a simplistic sense. Does that make it quantitative? Who cares? We’re beyond labels now. The best kind of data is whatever can be most effectively integrated, transported, scaled, and applied to useful decisions.