If you’ve never attended, most of the 90-minute sessions consist of about five short presentations followed by a wrap-up. With over 60 sessions, that adds up to more than 300 presentations, plus 116 posters across three poster sessions.
The conference is fundamentally about surveys: how to improve the ability to project from survey findings, how to improve survey quality, how to improve the transparency of survey results.
One of the things that I loved about the presentations was the concern for respondents. On the market research side we talk about respect for the respondent, but we don’t conduct extensive cognitive interviews and pre-tests to make sure our questionnaires are easy to understand, nor do we always validate that our techniques are better for respondents.
Kristin Stettler of the US Census Bureau discussed the benefits of testing earlier in the questionnaire development process, while Jacki McCarthy of the USDA National Agricultural Statistics Service discussed the rigorous multi-method questionnaire testing that’s needed before sending out three million paper surveys. Randall Thomas of ICF International tested ten different ‘constant-sum’ questions (where respondents have to enter numbers that add up to a particular total) to determine which approaches are easiest and most valid. And that’s just a small selection of the respondent-oriented research.
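Thomas’s actual designs aren’t described here, but the core mechanic of a constant-sum item is easy to illustrate: whatever the question layout, the answers must be non-negative and add up to the stated total. A minimal validation check (the field names are hypothetical) might look like this:

```python
def check_constant_sum(responses, total=100, tol=0):
    """Validate a constant-sum question: all entries must be non-negative
    and their sum must equal `total` (within an optional tolerance `tol`)."""
    if any(v < 0 for v in responses.values()):
        return False
    return abs(sum(responses.values()) - total) <= tol


# Hypothetical allocation of media time, which must sum to 100
print(check_constant_sum({"tv": 50, "web": 30, "print": 20}))  # True
print(check_constant_sum({"tv": 60, "web": 50}))               # False
```

In a live web survey this kind of check would typically run as the respondent types, with a running total displayed, rather than as a rejection after submission.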
With 25% of households now having only a cellphone, the empire of random digit dialling has fallen, and there were various presentations from those vying to divvy up the land. Among them are advocates of address-based sampling (ABS), which was a topic in at least a dozen sessions. ABS works by randomly selecting addresses from a postal database with near-universal coverage of residential homes. Invitations are sent by surface mail, but different sessions discussed the trade-offs of inviting respondents to complete paper, phone or web surveys, and how best to mix the modes. Knowledge Networks discussed using information on non-responders for ABS to determine how they differ from those who do respond, based on household and county-level demographic data.
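The selection step of ABS is conceptually simple; in the crudest form it is just a random draw of addresses from the postal frame. A toy sketch (real ABS designs stratify by geography and append auxiliary data, none of which is shown here):

```python
import random

def draw_abs_sample(address_frame, n, seed=42):
    """Simple random sample of n addresses from a postal frame,
    represented here as a plain list of address records.
    A fixed seed makes the draw reproducible."""
    rng = random.Random(seed)
    return rng.sample(address_frame, n)


# Hypothetical frame of 1,000 addresses; draw 50 to invite by mail
frame = [f"{i} Main St" for i in range(1000)]
invited = draw_abs_sample(frame, 50)
print(len(invited))  # 50
```

The hard methodological questions discussed at the conference sit around this draw, not inside it: which mode to offer the sampled addresses, and how to characterise those who never respond.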
Many hope that cell phone surveys can be smoothly integrated into telephone research. This represents a fundamental shift: while landline RDD provided access to a population of households, cellphone RDD provides access to a population of individuals. Or does it? Marek Fuchs of Germany’s Darmstadt University of Technology discussed how the sharing of mobile phones has consequences for sampling and weighting. It turns out that ‘dual-frame’ telephone surveys introduce a host of challenges, which other speakers presented on, including bias from scheduling calls in the evening and methods for determining the geographic location of cellphone users. Masahiko Aida of Greenberg Quinlan Rosner Research discussed an attempt to substitute cellphone interviews with non-cellphone interviews, and Paul Lavrakas looked at data quality in cellphone surveys, finding more missing data, including refusals on sensitive questions, among those interviewed away from home than at home. A new AAPOR Task Force will soon be presenting its findings on best practices for cellphone research in the US.
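One standard way the dual-frame weighting challenge is handled (not necessarily the approach any of these speakers took) is a Hartley-style composite estimator: people reachable on only one frame are weighted by the inverse of their selection probability, while people reachable on both frames are down-weighted by a mixing factor so they aren’t double-counted. A sketch, with made-up selection probabilities:

```python
def composite_weight(frame, domain, p_landline, p_cell, lam=0.5):
    """Hartley-style composite weight for a dual-frame telephone sample.
    frame:  'landline' or 'cell' -- which frame the respondent was sampled from
    domain: 'landline-only', 'cell-only', or 'overlap' -- what they can be reached on
    lam:    share of the overlap domain credited to the landline frame
    """
    if domain == "landline-only":
        return 1.0 / p_landline
    if domain == "cell-only":
        return 1.0 / p_cell
    # overlap: split between the two frames so overlap cases count once in total
    return lam / p_landline if frame == "landline" else (1 - lam) / p_cell


# A cellphone-only respondent vs. an overlap respondent reached via cell
print(composite_weight("cell", "cell-only", 0.01, 0.02))  # 50.0
print(composite_weight("cell", "overlap", 0.01, 0.02))    # 25.0
```

The phone-sharing issue Fuchs raised adds a further layer on top of this, since the per-person selection probability then also depends on how many people answer a given handset.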
In another AAPOR initiative, outgoing president Peter Miller announced that 35 organisations are supporting a new call for transparency in survey research as part of the AAPOR Transparency Initiative. The rise of DIY online research using convenience samples has led to many poorly conducted and erroneous studies, Miller said. The AAPOR Code of Ethics has always called for transparency, but organisations that support the new initiative will pledge to go beyond what the code requires in describing methodology, sponsors, funding sources, sampling frame, question wording and so on. In the long run, this will help restore lost trust in surveys and enable organisations to compare and contrast methodologies in ways that are not possible today. Among the supporters are ABC News, the Associated Press, CBS News and the Gallup Organization.
Given how little commercial market research conferences focus on surveys, it’s as if AAPOR is a black hole that has drawn in all the survey presentations. We should definitely be inviting public opinion researchers to share more of their approaches with commercial market researchers. Many of their methods cost little, whether it is asking a question in a better fashion or pre-testing a survey with a few potential respondents. We may not be able to afford their budgets or their rigour, but we can’t afford not to learn from them.
For further info: research-live.com/comment/the-research-behind-the-research/4002751.article