The Best Way to Add AI Tools Like ChatGPT to Your HDYHAU Attribution Survey

Enia Xhakaj, Data Scientist, Fairing
AI tools like ChatGPT and Grok are becoming major drivers of new customer acquisition. But here's what most brands are missing: these conversions are hiding in your "Other" bucket, invisible to your attribution strategy.
Within the aggregated Fairing data set of tens of millions of responses to HDYHAU attribution surveys, we've seen more than a 5x increase in LLM mentions since January. That's not just a trend—it's a blind spot in your measurement that's growing exponentially.
You run an attribution survey to understand how people discover your brand. You have the usual suspects in your channel list: "Facebook," "Friend or Family," "TV." But as AI tools become a primary source for discovery and recommendations, where do those conversions get categorized? Too often, they're lumped into the "Other" bucket, leaving you with a mountain of messy data to sort through.
Our recent analysis of over 80,000 free-form survey responses shows there's a better way. It's time to rethink your survey's response choices for AI.
The Challenge: Making Sense of the "Other" Box for AI Responses
When users don't see a relevant option in your responses list, they turn to the free-form "Other" field. For users finding you through an LLM, this results in a wide array of self-reported terms.
In our analysis, we found a huge variety of responses. Some users were specific, mentioning tools like "ChatGPT," "Grok," or "Perplexity." Others were more general, simply writing "AI" or "an AI chatbot." We also saw numerous misspellings and variations, like "chat gpt" or "chatgbt."
Importantly, many of these free-form responses revealed that users perceive these tools as recommenders. They weren't just searching; they were asking for and receiving suggestions.
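If you're sitting on a backlog of these "Other" responses, a simple keyword pass can retroactively bucket them into one AI channel. This is a minimal sketch, not Fairing's internal method; the pattern list is illustrative and covers the variants mentioned above (specific tools, generic "AI" mentions, and misspellings like "chatgbt"):

```python
import re

# Illustrative patterns for LLM mentions seen in free-form responses.
AI_PATTERNS = [
    r"chat\s*g[pb]t",   # "chatgpt", "chat gpt", and the misspelling "chatgbt"
    r"\bgrok\b",
    r"perplexity",
    r"\bai\b",          # generic mentions like "AI" or "an AI chatbot"
    r"\bllm\b",
]

def is_ai_mention(response: str) -> bool:
    """Return True if a free-form 'Other' response looks like an LLM mention."""
    text = response.lower()
    return any(re.search(pattern, text) for pattern in AI_PATTERNS)

# Example: pull AI-driven discoveries out of a raw "Other" bucket.
others = ["chatgbt", "Recommended by ChatGPT", "an AI chatbot", "saw a billboard"]
ai_bucket = [r for r in others if is_ai_mention(r)]
```

A pass like this is useful for historical cleanup, but as the next section argues, the better fix is to stop the fragmentation at the source.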
The Solution: Provide the Right Answer Choice
Instead of forcing users into the "Other" field, the solution is to provide a clear, specific response choice for them to select. Based on our analysis of user-submitted terms, we recommend adding the following option to your list of survey answers:
AI Recommendation (e.g. ChatGPT)
Why This Response Option Works
Adding this as a pre-defined choice in your survey is a powerful, low-effort change with several key benefits:
It Mirrors User Language: Our analysis found that many users naturally describe their interaction with LLMs as a recommendation or suggestion. We saw phrases like "chatgpt recommendation," "ai suggested," "recommended by chatgpt," and "grok recommended" appear frequently. Using the term "AI Recommendation" directly reflects how users already think about these tools.
Clarity and Recognition: The term "AI Recommendation" accurately describes the action. By including popular examples like ChatGPT, Grok, and Perplexity, you give users immediate context that helps them recognize this as the correct option for them.
Reduced Data Fragmentation: This single, clear option prevents the "Wild West" of free-form responses. Your data will be cleaner, more consistent, and instantly quantifiable, allowing you to spend less time cleaning and more time analyzing.
Improved User Experience: Providing relevant options makes it easier for users to complete your survey accurately. They don't have to think about how to describe the tool they used; they can simply select the option that fits.
Conclusion: Upgrade Your Answers for Better Insights
The landscape of user discovery is changing. To keep up, your survey methodology needs to evolve too. Stop letting valuable attribution data get lost and fragmented in the "Other" field. By adding "AI Recommendation (e.g. ChatGPT)" to your list of response options, you can capture this critical channel with far greater accuracy and ease. It's a small update that delivers a much clearer picture of where your audience is coming from.
Download our full LLM benchmark report where we dive into LLM growth in survey data over time.