Take-aways from the CASRO Digital Research Conference

28 Mar 2014|Scott Porter

I just spent two days at the CASRO Digital Research Conference in San Antonio. I think we could hear a distant roar from SXSW just about 90 minutes up the road… several folks drove over to catch events there as the CASRO Conference was ending.

Top take-aways:

1- The new world of big data

  • Synthesis, storytelling, and strategy are our evolved role, and we need to step forward and embrace it

o We need to move beyond technology paralysis. Even with an overwhelming amount of new tech, we should be trying as much of it as we can and finding out where it falls short of our needs:
– What does the current technology allow us to measure? Is that what we need to be measuring to answer our questions? Do we have the right tools to synthesize, analyze, and report our findings?
o Microsegmenting/targeting is not a replacement for empathy toward consumers. Let’s not forget that we’re talking to people.
o We still need to find a way to get to causal models, despite the plethora of information
o In addition to upping our data synthesis game, we need to improve our storytelling (often aided by the right visualization)

2- Mobile

  • It’s not as much about being mobile as being cross-platform or platform agnostic

o And effects on responses aren’t really mobile/non-mobile… they stem from a wide variety of factors, such as the location where you’re taking the survey, screen size, etc.
o Adoption is still tied to generation, and we need everyone
– Only a few are truly ready for mobile, but we’re all going there anyway, and it’s happening fast
o With the high proportion of people using mobile for a large share of their interactions, we can’t ignore it.
o That doesn’t mean everything is done on mobile… see the first point. But you have to be aware of where they are, and the device they are on, and decide the best course of action for the research.
o Test and learn… researchers are already including mobile respondents as a portion of the sample and building understanding as they go

3- Survey length

  • Longer isn’t better. Focus the design on what is most important, and take advantage of modular design where possible.

o Longer surveys, plus more surveys per panelist due to panel depletion, mean less engagement and worse data
o Shorter surveys may be better suited to mobile and other non-panel recruiting
o In modular surveys, we have evidence of less fatigue in earlier modules
o Incorporating data from respondents only willing to complete some modules increases efficiency and lowers non-response bias
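The modular idea above can be sketched in code. This is a minimal illustration, not anything presented at the conference: each respondent is assigned only a subset of question modules (planned missingness), and each question's estimate pools everyone who saw it, so partial completes still contribute. The module names, questions, and ratings are hypothetical.

```python
import random

random.seed(42)

# Hypothetical modules, each holding a few questions.
MODULES = {
    "brand": ["q1", "q2"],
    "usage": ["q3", "q4"],
    "satisfaction": ["q5", "q6"],
}

def field_modular_survey(n_respondents, modules_per_person=2):
    """Each respondent answers a random subset of modules;
    unanswered questions are simply absent from their record."""
    responses = []
    for _ in range(n_respondents):
        assigned = random.sample(list(MODULES), k=modules_per_person)
        record = {}
        for module in assigned:
            for q in MODULES[module]:
                record[q] = random.randint(1, 5)  # stand-in for a 1-5 rating
        responses.append(record)
    return responses

def pooled_means(responses):
    """Estimate each question's mean from everyone who saw it,
    so respondents who completed only some modules still count."""
    totals, counts = {}, {}
    for record in responses:
        for q, value in record.items():
            totals[q] = totals.get(q, 0) + value
            counts[q] = counts.get(q, 0) + 1
    return {q: totals[q] / counts[q] for q in totals}

data = field_modular_survey(300)
print(pooled_means(data))
```

Each respondent sees a shorter survey, yet every question still gets a usable sample, which is the efficiency claim in the bullet above.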

4- Social media and surveys

  • Advances in getting insights from social have implications for the role of surveys, but social insights should still be seen as complementary to survey-based insights.
  • Social is moving toward replacing some aspects of surveys, but it is a slow process

o There are still things that are easiest just to ask people
– And people still want to be listened to
o Social media is biased—“Social media is like a big, poorly-designed opt-in survey”
– Those without an agenda are likely to remain silent on certain issues
– Everyone is listening, unlike in a confidential survey… this affects what is said
o Survey data is more detailed about the strategic questions we want to know about, since we design it that way

  • Social can make surveys better

o Finding the right items to later explore in surveys
– “snowballing” slang
– Open ended queries without brand names: “I wish they would make a …”
– The right tone in real language

  • People who are good at survey research can generally apply those skills to social media research

o If you’re good at writing questions, you’re often good at writing queries
o If you’re good at analyzing the survey data, you often have at least the start of the skills to be good at analyzing the social data
o If you’re good at modeling to compensate for bias, you can help figure out how to look at the social data
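The "modeling to compensate for bias" point can be made concrete with simple post-stratification weighting, a standard survey technique that transfers directly to skewed social data. This is a sketch under assumed numbers: the age groups, population shares, and scores below are hypothetical, and real social data would need much richer adjustment.

```python
# Known population mix we want the sample to reflect (hypothetical shares).
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical skewed sample: younger voices are over-represented,
# much as they are on many social platforms.
sample = [
    {"age": "18-34", "score": 4}, {"age": "18-34", "score": 5},
    {"age": "18-34", "score": 4}, {"age": "18-34", "score": 5},
    {"age": "35-54", "score": 3}, {"age": "35-54", "score": 3},
    {"age": "55+", "score": 2},
]

def poststratify_mean(sample, shares, group_key, value_key):
    """Weight each respondent by (population share / sample share)
    of their group, then take the weighted mean."""
    counts = {}
    for row in sample:
        counts[row[group_key]] = counts.get(row[group_key], 0) + 1
    n = len(sample)
    weighted_sum = weight_total = 0.0
    for row in sample:
        w = shares[row[group_key]] / (counts[row[group_key]] / n)
        weighted_sum += w * row[value_key]
        weight_total += w
    return weighted_sum / weight_total

raw_mean = sum(r["score"] for r in sample) / len(sample)
adj_mean = poststratify_mean(sample, population_shares, "age", "score")
print(raw_mean, adj_mean)  # the adjusted mean pulls down toward the under-heard groups
```

Here the raw mean overstates sentiment because the loudest group dominates the sample; reweighting to the population mix is exactly the kind of bias correction survey modelers already know how to do.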

  • Social data is hard to analyze (but not impossible!)

o It’s big… we end up sampling it, having humans look at it, then building rules to code it
o Only extremes come through (see the bias points above)
o There are differences between what people say and what they do (even more than with surveys… see the bias points above)

Let me know if there is anything that catches your imagination that you want to discuss. Or read all about it:


Proceedings (some of the papers are yet to be uploaded, but many are there):


Written by Scott Porter, VP Methods.
