Social media & journalism: a research critique

This morning I awoke to a few people in my Twitter stream retweeting a piece of research by the Society for New Communication Research about journalists’ increased reliance on social media. As someone who works every day with BBC journalists, training them to do exactly this, I was sceptical of the numbers, so I did some digging.

The report was actually published in February this year; you can read the executive summary here and see a SlideShare presentation here. The headline findings suggest social media is being taken very seriously in newsrooms:

48% are using Twitter and other microblogging sites
66% of journalists are using blogs
48% are using online video
25% are using podcasts

Overall, 90% of journalists agree that new media and communication tools and technologies are enhancing journalism to some extent.

For those of us who use social media every day and understand its value, it’s tempting to see these figures and quickly bookmark the document in Delicious, happy that they support the direction we believe journalism should be heading.

The bottom line is that while the research is valuable as a qualitative study of some journalists’ attitudes (and the quotes in the SlideShare offer some interesting insights into these), it is not a representative quantitative study, however tempting the statistics are. When you look at the methodology, you can see that only 341 people took the survey and that the survey was web-based.

Unfortunately I couldn’t find out how the survey was undertaken, and without that information it’s impossible to do a proper critique. For example, the executive summary claims a 95% confidence ratio, which doesn’t even make sense without other information, such as the margin of error and how the sample was drawn. (Apologies to the authors if I’ve missed the detailed methodological explanation, but it should have been in the executive summary.)
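To illustrate what that missing information would add, here is a rough back-of-the-envelope sketch (mine, not the report’s) in Python of the margin of error that 341 responses would imply at 95% confidence. It uses the standard formula for a survey proportion and assumes, generously, a simple random sample from a well-defined population of journalists:

```python
import math

# Back-of-the-envelope margin of error for a survey proportion.
# ASSUMPTION: a simple random sample from a well-defined population,
# which is exactly what the report does not demonstrate.

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the confidence interval for a proportion.

    n -- number of respondents
    p -- observed proportion (0.5 gives the widest, worst-case interval)
    z -- z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

n = 341  # respondents reported in the executive summary

print(f"worst case (p = 0.50): +/- {100 * margin_of_error(n):.1f} points")
print(f"headline   (p = 0.48): +/- {100 * margin_of_error(n, 0.48):.1f} points")
# Both come out at roughly +/- 5.3 percentage points.
```

Even that rough ±5 point figure only holds if the respondents were drawn randomly from a clearly defined population; with a self-selected web sample, quoting a ‘95 percent confidence ratio’ on its own tells us very little.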

I’m particularly interested in how the web survey was distributed. Was the survey sent via email to journalists? If so, how were the journalists chosen? What was the ‘population’? To produce a representative survey, you need to be clear about the total population and sample randomly from it. In a world of fluid labour markets, with increasing numbers of freelance journalists, how do you get a clear population, particularly when this survey boasts about its worldwide sample?

If it was a web survey hosted on a site where it was hoped journalists would stumble across it, you are already self-selecting journalists who spend more time on the web, and the types who are more likely to fill in web surveys. Even if it was an email with a link to the web survey, it would still be self-selecting. That is fine, but the authors should have disclosed how many journalists received the email and how many went on to complete the survey.

The reason I was sceptical about the figures, as much as I would like them to be true, is that I have spent the last six months leading training courses on social media with BBC journalists. While the journalists I have trained are mostly enthusiastic about learning more about these tools, my anecdotal evidence, from training roughly the same number of journalists as were involved in this survey, does not support these figures.

That does not mean the BBC is behind in this regard. Nor does it mean I have a self-selected sample because the people coming on the course are in some way ‘behind’; in many ways the journalists coming on the course are the ones who are ready to embrace it. But in very busy newsrooms, where resources are increasingly limited (and I recognise the BBC is in a very special position compared with newsrooms where income is not guaranteed), most journalists have not had the time to experiment with these new tools. When your day is spent desperately trying to meet deadlines, there is no time to have a play with Twitter, find relevant blogs, spend time verifying who their authors are, or battle with Facebook’s privacy settings.

Most journalists I train are aware of all the tools mentioned in this report and want to feel more confident using them, but they want time to experiment and to learn the subtleties before relying on them for newsgathering.

They might have answered the question ‘have you used Twitter in your reporting?’ with a ‘yes’ because a colleague pointed them towards Twitter when a breaking news story unfolded and they were intrigued to see what was being said, but that is different from actively using Twitter as a newsgathering tool every day. So when 48% of the journalists in this survey said they have ‘used Twitter’, how is ‘used’ defined? Does ‘used’ mean occasional or frequent lurking, or does it mean regularly engaging with sources and contributing to the community?

I want to stress wholeheartedly that this post is not meant to highlight the shortcomings of BBC journalists. I have talked to a number of people who lead training at other news organisations across the world, and I know they share my experiences.

I offer anecdotal evidence to show how wary we have to be of research that is not statistically representative. This research is worthwhile and interesting, but beware of drawing conclusions it cannot support.

Social media offers everything the authors suggest in terms of disseminating news, finding story ideas and sources, monitoring sentiment and discussion, researching individuals and organisations, keeping up on topics of interest and participating in conversations with the audience, but I would argue it is far from universally accepted in newsrooms.

There are some journalists who absolutely understand the value, engage daily, are wary of the potential pitfalls, and whose reporting is significantly improved because of it. The majority of journalists understand that social media offers a range of new tools for newsgathering and building community, but also that it should be treated with caution and its subtleties appreciated. I would say the same of this research.

8 thoughts on “Social media & journalism: a research critique”

  1. Interesting critique. As a general point, I think it is worth mentioning that determining the ‘population’ of journalists from which to sample is often a nigh-insurmountable first step in any quantitative research on journalists, particularly here in the UK. There is no central register of journalists; the union membership rate is too low to use as a proxy for the population (it has been used as a proxy in other countries, e.g. Sweden); freelancing and other forms of temporary contract are so common that using editorial staff lists from media organizations (if you can even get those, which you rarely can) will miss large parts of the population (still, this has been the preferred method for generating a population in large-scale surveys in the US and Germany); and existing estimates of basics like the total number of journalists, and how they are divided among media types, are sketchy at best, and old (2004, I think) to boot.

    Do you have any good suggestions on how to establish a clear population of UK journalists that would be representative for statistical purposes? Anyone doing research on UK journalists who wants to claim representativeness probably needs to allocate a massive part of their grant money simply to creating the basic population to sample from, using many different methods.

  2. Hi Henrik. I completely agree with you about the impossible task of putting together such a list, but without one it’s impossible to do legitimate quantitative research.

    It would be easier to do research within certain news organisations, and that would certainly have merit, but getting access to those lists will always be very difficult/impossible unless the research was commissioned by the news organisation itself.

    It would be interesting to hear what types of publicly accessible lists exist, if any.

  3. In my study I used commercially available lists of journalists compiled by marketing companies (Cision, in my case). The weakness is of course that it is hard to guarantee any form of representativeness (though one can mitigate that to some extent by weighting the sample after the survey is done, if you have access to basic demographic data about journalists from elsewhere – this information is available in some countries but not in others), but an advantage is that these companies continuously update their lists (I think Weaver et al. ran across the problem that, by the time they had compiled their sample lists based on staff lists from media organizations, a lot of journalists had left said organizations).

    Of course I am aware that the representativeness of my sample may be limited (or it may be perfect – we will never know), so I have to adjust my conclusions and analysis accordingly. Using the sample for analytical statistics (factor analysis, cluster analysis, etc.), for example, strikes me as slightly more legitimate than using it for descriptive statistics. You can at best get at trends and tendencies, and then validate with reference to other studies.

    Have you seen Thomas Hanitzsch’s Worlds of Journalisms project (www.worldsofjournalisms.org)? They use a completely different strategy: they focus on making sure the samples are comparable between countries and essentially do not care about representativeness.

    Everything you say is right, of course. But if you are engaged in cross-national comparative journalism research, the choice is often between having an imperfect sample and not doing the study at all. Until research grants massively increase (yeah, _that’s_ likely) I’d rather take the imperfect sample (and be open about its imperfections and limitations).

  4. Great comment. It’s absolutely vital that we have cross-national comparative journalism research, and it will be impossible to have a truly representative sample for all the reasons you discuss. But as long as researchers attempt to find and use comprehensive lists, that’s all we can ask for.

    My concern is people shouting about headline ‘stats’ without clearly stating the methodology and acknowledging the understandable weaknesses of the study.

    Do you have the links to your research?

    Thanks again for leaving these comments. Really useful discussion.

  5. I agree with you: market research and polling companies often make excessive claims about studies based on dodgy methodologies. They are providing a good to a market, and their results should be interpreted in that light.

    Some info on my research is available on the Reuters Institute web page, and you can also check my (now inactive) blog http://www.doctorofjournalism.com. No empirical stuff yet; I’m in the process of writing everything up.

  6. Sticking my oar in here: for me it was the claim SNCR made, that their sample size guaranteed a 95 percent confidence ratio, that was particularly misleading:

    “The survey was conducted between July 2009 and October 2009, and received responses from 341 journalists, resulting in a 95 percent confidence ratio.”

    A little more frankness on their part would have covered their backs.
