Are we doing well?

Judging the quality of research has never been easy, and the growing pressure to deliver more for less has only made it harder. So how do we actually decide whether research is up to scratch? This was the question delegates grappled with at yesterday’s IJMR Forum in London, and it turned out to be more complicated than one might expect.

Denise Lievesley, who heads the School of Social Science and Public Policy at King’s College London, looked at how statisticians judge data quality, and urged those in the public sector to take a broader view. “Particularly when I’m talking to statistical audiences, quality is interpreted in a very narrow way, with regard to validity and reliability… I want to argue that quality is about a whole range of dimensions,” she said.

One of those dimensions is utility – being able to put information to some sort of use. This means data must not only be trustworthy, she said, “it must also be trusted”.

Lievesley’s views were echoed by Michael Scholar, chairman of the UK Statistics Authority, who pointed out that although the UK’s official statistics have performed well in peer reviews, surveys conducted before the authority was set up in 2008 showed that the British public had the lowest level of trust in its official statistics of any country in the OECD (Organisation for Economic Co-operation and Development) area. This, he said, illustrates that the value of statistics lies in how they are used. “The value of official statistics is in the discussions they facilitate or contribute to,” said Scholar. “Statistics that are not well presented, in the way that people want them, cannot be said to be truly fit for purpose”.

Richard Bartholomew, chief research officer in the children and families directorate of the Department for Education, defended the apparent conservatism of public-sector researchers, explaining why they tend to be less willing than their commercial counterparts to move away from staples such as random probability sampling and “the obsession with the final report”. But with cost pressures rising and a wealth of other methods taking hold in the private sector, Bartholomew admitted that he sometimes feels “a bit like King Canute”.

Ben Page, CEO of Ipsos MORI, also stood up for “basic core skills” in research, and bemoaned what he sees as their decline. “In terms of the bedrock of what we do, which is to speak to a relatively small number of people and then understand what a larger number of people do, there is a risk that the actual knowledge that underpins the theories behind what we’re doing is being dissipated,” he said. “We have to have a look at craft skills and decide whether or not their loss matters.”

Reg Baker, president and chief operating officer of US agency Market Strategies International, who chaired an AAPOR (American Association for Public Opinion Research) taskforce on online panel quality, urged research practitioners to be careful in how they use panel research – and in how they sell it. AAPOR’s report, published earlier this year, warned that panels should not be used to produce estimates of the wider population, and advised against claims of ‘representativeness’. The biggest problem, Baker said, is not the shortcomings of panels themselves, but the fact that “we’ve put it out there, sold it to our clients, as something that it’s not”. Market research’s definition of ‘error’, he suggested, should be changed to: “Work purporting to do what it does not do.”

In answer to a question on whether online research had “killed quality”, Baker said, “The smart answer would be no, clients have” – a comment he later qualified for fear it might be misconstrued. “To no small extent quality is in the eye of the user,” he said.

Alongside the discussions of data integrity and representativeness, Jeannie Arthur and Mike Hall of Verve gave their take on quality from their work using online communities for research. When concerns were raised about the impact of incentives on research quality (incentives have become particularly controversial in communities where research takes place alongside marketing activities), Arthur argued that this was a much smaller issue than the tendency of traditional research not to feed results back to respondents – or not to do anything useful at all with the information they provide. “I agree that you need to be respectful of people’s time, but particularly for quant research I feel very strongly that it’s much more around thanking people for their involvement and showing them that they are being listened to and are making a difference.”

Some in the industry clearly fear that scientific rigour is being eroded – not just because clients want research fast and cheap, but because of the growing focus on making it accessible to clients who may not have the time or inclination to wade through the details. On the other hand, few would dispute that research is worthless if it can’t be communicated and put to use.

All this plays out against a backdrop of buyers demanding more for less. As Ben Page put it, something has to give, “and too many people won’t even know when something has given”.

For further info: research-live.com/comment/are-we-doing-well?/4003957.article