
Insights from the Insight250 winners: The importance of participant engagement and data quality

Annie Pettit PhD CAIP FCRIC is the Chief Research Officer, North America, at E2E Research, where she helps her clients build engaging research projects that generate high-quality data. With previous experience in government, global research companies, a small start-up, and as a consultant, she has tackled a full range of research challenges.
Pravin Shekar is an Outlier Marketer, parallel marketer and raconteur. He is the CEO of Krea eKnowledge, India’s leading healthcare research panel. The author of eight books, Pravin is the recipient of the American Marketing Association’s “Emerging Leader in Marketing Research” award. He is one of the ESOMAR Representatives for India and has previously served on the Governing Council.

The Insight250 spotlights and celebrates 250 of the world’s premier leaders and innovators in market research, consumer insights and data-driven marketing. The inaugural list was revealed this April and created renewed excitement across the industry whilst strengthening the connectivity of the market research community.

With so many exceptional professionals named to the Insight250, it seems fitting to tap into their expertise and unique perspectives across an array of topics. This weekly series focuses on doing just that: drawing out the expert perspectives of many of these individuals in a series of short topical features.

This edition features two seasoned researchers who are leading experts in data-driven insights and in the approaches and methodologies that make them so effective.

With consumer attention waning and an ever-increasing onslaught of information in the market, the roles of data diversity and effective engagement are more important than ever. Pravin and Annie discuss the related challenges and opportunities facing market researchers.

Crispin: What roles do data quality and participant engagement play in innovation?

Pravin: “It is an incestuous relationship; let’s admit that first. Innovation is not all about novelty and ideation – the proof lies in effective execution. If the right participant refuses to engage and we settle for the second best, that will affect decision making, and hence innovation… and the right/permission to experiment more! With the right, relevant participants throughout the entire cycle, we get higher-quality data for decision making, thereby enabling the innovation loop. Can we change our style in a way that our respondent actually looks forward to participating in a survey?”

Annie: “If we look back at the last fifty years of market research, there’s been a lot of innovation in our data collection techniques, but not a lot of innovation to bolster participant engagement. However, we personally participate in social networks that are so fun, people willingly share their most private experiences on them – TikTok is my current fixation. This is a huge opportunity for researchers. We can’t keep hoping a traditional survey about heavy-duty carpeting can be equally rewarding just because we offer a $2 incentive. People want to play and have fun, and it’s time researchers innovated in response.”

Crispin: For years our industry has been concerned about declining response rates – should we be?

Annie: “100% yes, we should be concerned. Without high response rates, we risk basing important decisions on unrepresentative samples that lead to misguided conclusions. But the root of the problem is that researchers have not held up their end of the bargain. We promise short research, fun research, paid research, and then we write a horribly boring survey and yank the incentive away when we arbitrarily decide that someone doesn’t deserve it. We need to do a much better job of creating an experience that warrants a high response rate. We need to earn back the trust and respect that lead to high response rates.”

Pravin: “We need to be scared right now. For years, we have been parroting this fall in response rates. What are we waiting for? Some messiah to come and lead us to better research fields? When people don’t have the patience to watch a YouTube advertisement beyond five seconds, do we still expect respondents to patiently answer a 45-minute survey? Will you? What if an agency or client settles for any response (to make up the numbers), instead of a response from the right, representative respondent? That would be hazardous for our research.

If the survey is plain boring, aren’t we disrespecting the respondent? Once annoyed, would they ever want to answer another survey again? In all likelihood, they will move away from our entire industry! You might say that’s improbable, but the current response rates are a slap in the face. The audience has changed. Have we? We need to be SCARED right now. Fear is a good motivator.”

Crispin (to both): What good examples have you seen for increasing participant engagement?

Pravin: “We need to move beyond fads. If the older structure refuses to change – or worse, adopts a tokenistic approach to change – the new order will sweep it aside (Pssst: it’s already happening!).

  • Share the research report with the respondent – as an infographic, a cartoon or that one key insight. (Hey, if you gave me some advice, wouldn’t you feel good if I updated you on the progress based on it?)
  • Gamification was given the go-by, but it will be back. (Hell, yeah!)
  • Short surveys are the way ahead (now, haven’t we heard this one before?).
  • A good dose of fun (for the respondent, not just for us.)
  • An AI database/chatbot with loads of answers so we don’t need to torture the respondent again and again (I had to use the ‘in’ things like AI, ML, BC, right?).
  • Short surveys again. Pretty please!
  • Storytelling in questioning is being experimented with. (It is here to stay…. And we shall live happily ever after).

As an example: there are multiple ways to propose. Which one suits which respondent best? Why do you need a paragraph when three words would do?”

[Custom artwork highlighting this point]

Annie: “You can always tell when a researcher has put people first, and researchers and clients second. As they should. Questionnaires that do that ignore formal rules of grammar. They use short and punchy language. Entertaining and respectful phrasing. And fun activities. The world isn’t a Likert scale. It’s a dart board, a plinko game, a shopping cart, a countdown timer, a thermometer. There’s no reason a valid and reliable data collection tool has to read like War and Peace. Spice it up!”

Crispin: Is there a balance between response rates and data quality?

Annie: “Absolutely not. We must always strive for high response rates and high data quality, and one does not deny the other. If you can’t achieve both, it’s time to rethink how you build data collection tools. Think about why you love Instagram or Pinterest or TikTok, and try to incorporate those aspects into your research design. It might be the casual language, the imagery, the gamification, the diversity of thought. Whatever it may be, identify it and replicate it in your tools.”

Pravin: “A few things do not need ‘a balance.’ They only need a higher standard.

Data quality will be good when the response comes from the right sample (and not just any sample). For the response rate to be high, the interview process needs to be customized to the respondent and must be engaging. Can someone please ban 45-minute-long quant surveys? They are not liked by any demographic I have worked with. Yet they continue to survive.”

Crispin: What impact does data diversity have on data quality?

Pravin: “‘What works in the USA (or anywhere else) will not work here!’ What if we started with that hypothesis rather than assuming it will work? Wouldn’t the question answer itself then? I am writing from India, where such mistakes continue to happen. Advertisements run in Hindi in cities deep in South India, where most people neither speak nor understand the language. The feeling of the client/planner/agency is either ‘one size fits all’ or ‘if our people here understand it, others will too.’ Who will then be blamed when the product launch fails? Data collected from one region/geo/demography cannot be extrapolated to others without reason, yet it continues to happen. Without a relevant, representative sample – with weighting and any necessary over-sampling factored into the process and budget – data quality is at risk.”
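
As a rough illustration of the weighting and over-sampling Pravin describes, here is a minimal Python sketch of post-stratification weighting; the regions, sample counts and population shares are hypothetical and purely for illustration.

    from collections import Counter

    # Hypothetical achieved sample: the region of each respondent
    sample_regions = ["North"] * 600 + ["South"] * 250 + ["East"] * 100 + ["West"] * 50

    # Hypothetical population distribution the sample is meant to represent
    population_share = {"North": 0.40, "South": 0.30, "East": 0.20, "West": 0.10}

    n = len(sample_regions)
    sample_share = {region: count / n for region, count in Counter(sample_regions).items()}

    # Post-stratification weight per respondent = population share / sample share of their region
    weights = {region: population_share[region] / sample_share[region] for region in population_share}

    for region, w in weights.items():
        print(f"{region}: sample {sample_share[region]:.0%}, population {population_share[region]:.0%}, weight {w:.2f}")

    # Very large weights (here East and West, both 2.00) flag cells that were under-sampled;
    # they inflate variance, which is why deliberate over-sampling of small groups is
    # budgeted up front rather than patched with weighting after the fact.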

Annie: “Without data diversity, there is no data quality. One of my favorite books is Invisible Women by Caroline Criado Perez, which demonstrates how the failure to specifically research women, by assuming men are typical and women are atypical, has led to wrong decisions across a multitude of industries. The market research industry has done an entry-level job of accounting for data diversity. For instance, we’ll prolong fieldwork to get a 50/50 gender sample, but we won’t oversample so that we can properly segment and report on the experiences of BIPOC, people with disabilities or people who are not cisgender. If you can’t report on one of those groups, you’re bound to make decisions that are wrong for them.”
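
To put a rough number on why over-sampling matters for reporting on smaller groups, here is a minimal Python sketch showing how the margin of error depends on a sub-group’s base size rather than on the total sample; the sample sizes are hypothetical and purely for illustration.

    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        # Approximate 95% margin of error for a proportion estimated from n respondents
        return z * math.sqrt(p * (1 - p) / n)

    scenarios = [
        ("Total sample", 1000),               # overall survey looks comfortable
        ("Sub-group, natural fallout", 60),   # e.g. a small group with no boost
        ("Sub-group, over-sampled", 300),     # what a deliberate boost might deliver
    ]

    for label, n in scenarios:
        print(f"{label}: n={n}, margin of error ±{margin_of_error(n):.1%}")

    # Without the boost, the sub-group carries a margin of roughly ±13 points, too wide to
    # report on that group with any confidence; that is the gap Annie describes.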

Crispin: HOT TOPIC – How afraid are people currently of “fake news”? How, as a profession, can we provide reassurance of the robustness of our insights?

Annie: “I’m not afraid of fake news; rather, I’m disappointed by it. I’m disappointed that so many people lack sufficient numeracy and critical-thinking skills. One thing we can do is make sure we are always fully transparent about our methodologies and our process of generating conclusions from the data. We regularly hear people say they don’t want a 50-page report, but we can absolutely write a 10-page report with a 40-page appendix. Not only does it help the one person who is going to read those 40 pages; simply seeing it there tells everyone else that you care about being transparent and showing your work.”

Pravin: “If you are referring to ‘WhatsApp University’ and news from similar channels, the aim there is to titillate rather than educate; it is a dangerous kind of virus. A lot of people BELIEVE such fake news to be true.

Few people are aware of the existence of fake news and ‘deepfakes’. It is like the game of ‘whisper down the lane’ played out on a global scale: ‘he said, she said, this report, that report’, without any underlying data or proof. Such behaviour has always existed; now it is simply amplified, at a global scale. We can provide reassurance by:

  • Educating the audience about the difference between good and bad research.
    • For example, on the investment front, the Association of Mutual Funds in India has come up with a website that educates and informs the consumer (https://www.mutualfundssahihai.com/en). We need to upgrade our association sites (ESOMAR, MRSI, etc.) and point people there for awareness.
  • Encouraging people to ask the right questions, especially regarding sample size, representation and weighting.
  • Ensuring all necessary information is available for them to access – from official channels.
  • Providing a complaint/escalation channel for redressal.”

TOP TIPS

Pravin: “‘Experiment and aim for failure.’ State it out in public. Celebrate failure. That’s the only way we can find newer ways – and build a tribe that embraces the credo.”

Annie: “Innovation isn’t just technology. It’s also about casting aside the ‘human as lab rat’ model for research that meets human needs for play, connection, and meaning.”
