Tubestrike 4: Crowdmap’s final test

On the 6th September, London Underground staff staged the first of four walk-outs. On Monday, Londoners face the final of these planned strikes. I have been working with BBC London, encouraging the use of the Ushahidi crowdmap platform to report the effects of the last three strikes on the city. (I wrote up lessons from the first strike here.)

From the beginning we had planned to use the crowdmap for all of the strikes so we could compare, contrast and learn as we went.

As I emphasised here, the map has demonstrated very clearly that, on certain stories, collaboration is the only possible way to report events. BBC London could not resource reporters at every station checking throughout the day whether it was open or closed, at every bus stop taking pictures of overcrowding, or giving tips about unclogged roads or the locations of available Boris Bikes.

And yet, while people are looking at the map in large numbers (about 20,000 unique visitors each time), the number of people submitting tips, pictures and experiences is still small. Yes, the #tubestrike hashtag has provided some of the best material, but the map still feels locked in a social media bubble.

I try to imagine what the map would look like if it was covered in red dots, with real-time updates about the current travel situation. The network effect would take hold. The more people use it, the more useful it would become, and more people would use it.

I feel uncomfortable comparing Ushahidi deployments, as it’s inappropriate to compare commuters being inconvenienced on their way to work with the terrible situation experienced by Pakistanis during the recent floods (pakreport.org), or the situation in Moscow last summer when the fires took hold (russian-fires.ru). But in Pakistan they’ve had over 2,000 reports (a cry for help was posted this morning from someone who has lost everything and can’t get support), and the Russian fires received over 1,600 reports.

The London maps have received a handful of direct reports. We’ve been posting content which included the #tubestrike hashtag on twitter, adding verified station closure information, and posting audioboos from BBC London reporters, but we’ve received very few emails, SMS messages or reports submitted on the crowdmap itself.

When this tubestrike ends, I will be writing up a report about what has worked, what hasn’t and, mostly, whether this sort of effort can be justified from a resourcing point of view.

Personally, I think it is important that the BBC has a space where this sort of collaborative journalism can be encouraged, but if reports from the audience are minimal, should the crowdmap be resourced?

Is part of the problem the fact that this map appears to be “sponsored” by the BBC? Do people feel that the BBC must know the current situation and therefore they have little to offer? Does it feel too top-down rather than community-driven?

We’re going to give it one last push next Monday, and I’d be very grateful if you could use online spaces to encourage people to use it – facebook, message boards, and blogs, as well as offline spaces – in your places of work, down the pub, and around your breakfast table.

And finally, any feedback would be very gratefully received, both technical but also in terms of use and content. Ushahidi have been brilliant to work with, and are hungry for user testing and feedback, and I will pass on to them everything we have learned.

But overall, we want to learn from this experiment. When big events strike, is crowdmap a useful way of describing the impact?

#Tubestrike 2: This Time We’re Serious

For a broadcast organisation, planning how to cover a major piece of industrial action is never easy. Just when extra people have been placed on shift, bulletins have been altered and the cameras are in place, strikes can be called off at the last minute. The same might apply to the next 24-hour walkout of tube station staff (3/4th October), but in case it doesn’t, we’re preparing for the worst.

As many of you know, BBC London used the new cloud-based Ushahidi platform, crowdmap, to cover the first in this series of strikes on the 7th September. It was definitely an experiment, but we were very happy with how it went. I wrote a blog post with some of my reflections here, in terms of what worked and what we might want to do differently.

We have decided that we would like to use the map again if the strike goes ahead, and our decision is based on one key reason: this is a story that can’t be reported any other way.

While I was excited about using the platform, that came from my passion for social media, and my hope that the experiment would work. It didn’t feel like a journalistic imperative, and some of my colleagues weren’t quite as excited as I was.

But our experience on the day convinced everyone, purely because the BBC’s usual reliance on official sources just could not work in this situation. It was in the Union’s interest to tell the story that stations were closed and that maximum disruption had been achieved. Conversely it was in the interests of Transport for London (TfL) to tell us that as many stations as possible were open.

It wasn’t simply a case of both sides deliberately failing to tell the truth, it was more a case of massaging the truth, and protecting working staff. TfL were moving staff around during the day, opening stations and then closing them just as quickly, trying to keep the locations secret so Union pickets couldn’t be moved around ahead of them.

As a result, as the day progressed it became increasingly clear that we couldn’t rely on the information coming from either side. During the moderation process, crowdmap forces you to mark each new report as verified or unverified. At first we were verifying information from the TfL website, but we quickly stopped doing that. We found that as we posted information, Londoners almost immediately began to dispute those reports via twitter or crowdmap itself.

The point of this post is to ask that people get the word out. The original crowdmap received almost 20,000 unique visitors and obviously we were very happy with that, but we’re also aware that many of those visitors were social media types from all over the world intrigued by the combination of ‘BBC’ ‘crowdsourcing’ and ‘ushahidi’ in a tweet.

Our hope for the proposed strike next week is that the crowdmap becomes a really useful tool for people trying to get to and from work. So we’re asking for a few favours:

  1. Please get the word out – post the map on your blogs, twitter, facebook.
  2. Tell people who live in London they can email us (yourlondon@bbc.co.uk), phone us on 020 7765 1064 or send us an SMS (using 81333 and starting the message LONDON STRIKE).
  3. Show people who might not have seen it, what the map looks like and how they could use it.
  4. Tell us which categories you’d like to see on the map to make it more useful for you.

This isn’t an example of journalists playing with a new tool, or one of those interactive exercises where no-one actually pays as much attention as they should do to the material being submitted. This is our opportunity as Londoners to tell the story of the tubestrike in the only way possible – as a collective force.

The BBC can’t place reporters at every tube station or bus stop, but it can help collate the experiences of millions of Londoners on the 3rd October.

This experiment needs to move out of social media land and into the hands of Londoners to make their commute easier. Its success relies on as many people as possible posting their reports. Let’s see what a real crowdsourcing initiative can look like.

Social Media and the UK election

On Thursday I gave a short presentation at the Media140 #ozpolitics conference in Canberra. My slides are up here, but I thought it was also worth sharing some of the key points.

I wrote a couple of blog posts before the UK election about the potential role of social media, here and here, and I had meant to write a wrap-up immediately afterwards, but hey ho… best laid plans etc.

Four months on, and after watching the Australian election unfold in August/September, I’ve had time to reflect on the role of social media in these elections. Here are five conclusions:

1) We need to stop trying to compare apples with oranges. The British and Australian election systems are very different to the US one. In the US, there are more ways, and more time (significantly longer campaigns – years rather than weeks) for people to get involved. Social media helps to build community, but like any community in real life, that takes time.

During the conference, Ambassador Bleich, the current US ambassador to Australia and a campaign advisor for Obama, gave an excellent talk about the ways in which the online aspects of Obama’s campaign were deliberately designed to mimic the off-line campaigning experiences which are the bread and butter of US elections. During extremely short campaigns, where these sorts of fundraising and get-out-the-vote experiences don’t exist, it’s impossible for political parties to directly copy Obama’s techniques.

However, all political parties know there will be future elections, and serious forward planning should be taking place already.

2) We should also stop talking about social media and elections without clearly differentiating between the way it is used by

  • political parties;
  • mainstream media;
  • audience.

Again we tend to make sweeping statements about the impact of social media without clearly defining our terms.

In the UK context, while the political parties and, on the whole, the mainstream media were not using social media innovatively to connect with the electorate or the audience, the audience themselves were being very playful, collaborative and active in the social media space.

BBC research after the election showed that 63% of the audience who use facebook said they either saw or posted something to do with the election on facebook. Yes, it’s true that 57% of the British population isn’t on facebook, and these BBC figures show that 37% of British people on facebook didn’t do anything to do with the election on the social networking site, but when apathy is increasing and turnout, even during the closest election for 10 years, is 65%, I’ll take those figures.

3) We should stop seeing offline and online as separate spaces. The reason television and radio figures are higher than they’ve been for years is that more people are spending time in social media spaces and being reminded about content on television and radio. They are also able to watch television and listen to radio while tweeting or facebooking.

So while I can understand the gloating of television people that “this wasn’t a social media election, it was a television election” (they have been told ad nauseam for the last couple of years that television is being replaced by social media), that wasn’t entirely true.

Yes, the debates were on television. Yes, the debates were watched by a large number of people but the debates were also social objects. Whereas previously people might have commented to their partner on the sofa, in April, the thousands of comments on social media sites were shared globally. That gave the debates a significantly different feel and added to their success.

4) We also need to be clear about how we’re measuring ‘engagement’. If engagement requires that someone sees a post on facebook, contacts the party, and ends up delivering leaflets around their community on rainy evenings, then it’s true, social media didn’t have much of an impact.

Compare this with the following scenario: an apathetic student sees a post by a friend on facebook about using the slapometer during the debates. As a result they end up talking about it down the pub; they watch the debates together for fun to use the slapometer, and as a result have a conversation about the lack of discussion about student tuition fees during the debates.

This type of scenario occurred much more frequently, and I think it should certainly ‘count’ as significant, even if it is very difficult to measure reliably.

During my talk I put up the following table on one of my slides, and I think it’s worth repeating. There are different ways of measuring ‘engagement’ and it’s worth differentiating between offline and online spaces. Once we separate out these different types of ‘actions’, it gets easier to measure the impact of social media. When we lump everything together, it leads to sweeping generalisations and we fail to understand the subtleties in terms of influence and behaviour.

5) Finally, we shouldn’t dismiss humour as a measure of engagement. People only ‘get’ a joke if they understand the observation it is based upon. Jon Stewart’s The Daily Show works because people understand what he is poking fun at.

If an alien landed in New York and turned on Comedy Central in her hotel room, she wouldn’t be laughing, because she wouldn’t know anything about the Tea Party, or the length of time war has been waged in Afghanistan. So when people laughed at a poster of Cameron wearing a Burberry baseball cap and holding a can of Stella Artois beer, they were laughing because they knew the criticisms levelled at Cameron about his education at Eton, and the claims that he is out of touch as a result. It’s not just a fart joke.

It’s now September, and the UK election seems a long time ago. But there will be more elections, and social media will be even more influential. Next time, let’s make sure we don’t fall for simplistic characterisations of the impact of social media. Instead let’s try not to dismiss the seemingly frivolous, and remember the importance of defining key terms.

The Day After – Lessons Learned from the Crowdmap experience

Late on Monday night, I wrote a short post here in anticipation of the crowdmap I’d just set up for BBC London, which I hoped would provide a useful service the following day for the London tubestrike, 7th September 2010.

It’s now Wednesday morning, and I can write, while still feeling slightly shell-shocked from the experience, that all in all, I’m very pleased with how it went.

I want to use this post to reflect on some of the things that worked, some of the things that didn’t work as well, and some things I will do differently if the next scheduled tube strike goes ahead.

Bottom line was that lots of people saw it: 18,860 unique visitors, and 39,306 page views from 55 countries. 13,808 were from the UK, 3,863 from the US, and I can’t get over the fact that we had 2 people from Bermuda, 1 person from Uruguay, and 9 from Kenya, the home of the Ushahidi platform. The power of social media never ceases to amaze me.

We posted 202 reports yesterday. About 50 were sent directly to the map from the audience, either via the web form or the specific SMS channel we set up. The rest of the reports we took from twitter, either tweets in the #tubestrike stream or replies to the @BBCTravelalert account.

I can’t stress enough that getting the reports up wasn’t easy because of the time pressures. Every report, whether it was sent directly or not, had to be manually approved. Nothing went straight up onto the map.

Yesterday I was ably assisted by Abigail Sawyer who works for the World Service and who wanted to see how the platform worked and how it might work in a Global context, and for 2 hours during the evening rush hour by Emma Jenkinson, a producer from BBC London who was drafted in as emergency help. We also had help from Steve Phillips, the BBC London transport reporter who was audiobooing, appearing on TV, and updating twitter like a mad thing.

During the two peak times, we were monitoring the SMS console, three twitter streams (#tubestrike, “tube AND strike”, @BBCTravelalert), audioboo, emails and the BBC London facebook page.

For each report we needed to add or check:

1) a clear headline,

2) a description, which we cut and pasted if the report came from twitter,

3) the official timestamp (which frustratingly never stayed connected to the actual time, so drop-down menus had to be used each time),

4) the geo-location, by typing it in the location box and waiting for the map to find it (we soon learned that if you just put in Waterloo, it defaulted to Waterloo in Canada, so we had to write Waterloo, UK),

5) the category (tube, train, bus etc.),

6) the verification status (we only ticked the verification box if the report had been supplied by our own reporters; we realised we couldn’t even verify information from the Transport for London website, as commuters were contacting us to say the TfL information was not up to date).

Only then could we finally approve the report and put it on the map.

Phew. Quite a process.

If you had an event which wasn’t so time sensitive or fast paced, it wouldn’t have been such an issue, but at times we were mopping sweat off our brows, feeling slightly under pressure, especially as we saw so many people tweeting about us from around the world!

That was the process.
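For the programmers reading, that manual checklist amounts to a small validation pipeline. Here is a rough sketch in Python of what we were doing by hand each time; the names (`Report`, `moderate`, `disambiguate`) are my own inventions for illustration and are not part of the Crowdmap platform:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Report:
    headline: str
    description: str
    timestamp: datetime
    location: str
    category: str          # e.g. "tube", "train", "bus"
    verified: bool = False
    approved: bool = False

def disambiguate(location: str) -> str:
    """Append ', UK' so the geocoder doesn't default to e.g. Waterloo, Canada."""
    name = location.strip()
    if "," not in name:
        return f"{name}, UK"
    return name

def moderate(report: Report, from_own_reporter: bool) -> Report:
    """Run the six manual checks before a report can appear on the map."""
    report.location = disambiguate(report.location)
    # Verification was only granted to reports from the BBC's own reporters.
    report.verified = from_own_reporter
    # Approve only once every required field has been filled in.
    report.approved = all([report.headline, report.description,
                           report.location, report.category])
    return report
```

Even this toy version shows why the process didn’t scale to two people during a rush hour: every field needed a human decision before approval.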

In terms of things we learned along the way…

1) I had originally chosen Google Maps as the default mapping tool, but halfway through the morning rush hour we heard from Harry Wood, who encouraged us to use OpenStreetMap, a free editable map of the whole world, created by volunteers. It is not-for-profit and apparently started in London. We quickly changed the settings with one click and were immediately amazed at the improved quality of the map. It was much more accurate.

2) Although we needed to use the inbuilt time stamp, we also realised people needed to quickly see on the map itself (rather than having to click through) when information had been sent, so at lunchtime we started each headline with a time stamp we typed in.

3) At lunchtime, we had collected 90 reports, but realised the map was quickly going out of date. We therefore deleted all the earlier reports and started afresh, although we did manually re-enter all station closures, which we realised was the key bit of information people were looking for. One major problem, however, was that by early afternoon word had spread and I saw people tweeting – ‘good idea, shame there isn’t more information on the map’ – so I was torn between trying to make the map look impressive and it actually being useful!

Things I wished we could have done:

1) Publicised it more beforehand. This was a crowdsourcing initiative, but we didn’t talk to the crowd early enough to encourage people to take part and to show them how it might be helpful. For obvious reasons, this was very much an experiment and the BBC was slightly nervous about shouting about something that hadn’t been tried and tested. As a result, I only published my short blog post on Monday night and we started tweeting about it on Tuesday morning, but that was it. So the fact that we got the results we did is pretty amazing (I’d say modestly!)

2)    I wish we could have had more time to thank people and to let people know we’d used their information on the map. I did it a few times when I got a chance, and unsurprisingly we saw those people posting more reports.

Things I’d encourage Ushahidi to think about:

It feels churlish to make suggestions about the platform, when I think it’s amazing and I wouldn’t have the skills to make 1/100th of the site, but as someone who used it under pressure in this situation, here are a couple of suggestions:

1) It would be useful if there was a scrolling news bar at the top, so we could put out top-line information which we know everyone would see just by going to the map. Something like ‘the Circle line is suspended’ or ‘traffic on the roads is really starting to build’ was very hard to map. There’s no one spot on the Circle line (for those who don’t know, it’s an underground line which runs in a continuous circle!)

2)    It would have been great to add more information to the first speech bubble which appeared when you clicked on a dot, e.g. a photo, an audioboo, more detail etc. I don’t think everyone was always clicking through to the next page.

3)    A way to visualise the timestamp more clearly from the map would have been great, e.g. the brighter the colour, the more recent the report. It was a shame to have to delete earlier reports.

4)    A way to differentiate between good and bad news. Most of the information we were reporting was negative – tube line suspended, traffic jams etc. Sometimes we got tips or advice about how to avoid the problems, and it would have been great if we could have shown those in a different way.

Overall, we created a map, which at many points during the day was more accurate than the Transport for London website, and which was a live and updated version of what was happening out on the streets of London. And most importantly it was built by the people of London.

If more people had known about it and understood how to upload reports it would have been even richer and even more useful and accurate.

While I don’t wish another strike on anyone, I secretly hope there’s another one so we can take crowdmap for another test drive.

Tubestrike Crowdsourcing Experiment

I have been working with BBC London for the past few weeks, helping to support them with some new social media initiatives. (It should be noted they already have a strong foundation – see @BBCTravelAlert as a great example of an engaged twitter feed.) A couple of weeks ago, when the London Tube strike was announced, I thought it would be interesting to try out the new Ushahidi crowdmap platform. It was launched about a month ago and is an attempt to do for crowdsourced mapping what wordpress has done for blogging… basically make it foolproof.

I had seen TBD’s map in DC, and thought it would be a perfect way of easily visualising a lot of information about the strike. We’re now one hour into the strike and I’m writing this while checking the Transport for London website and updating the crowdmap with new information. So far the only updates we’ve posted have been based on official data. Hopefully tomorrow, people will start sending us their own reports via text, twitter, audioboo and the webform on the site.

The hope is that the map will become the host for photos, audio clips and commuter experiences rather than simply parroting the official information, already available on the Transport for London site. That will depend on whether the map takes off and people want to help us.

I hope so. Crowdmap is a great site, easy to use, and aware of all the issues of verification and information management which could make this type of journalism a minefield.

Hopefully you’ll take a look at our map even if you don’t live in London and aren’t affected by the Tube strike, and will see what a great resource this can be for anyone interested in collaborative journalism projects.

Making crowdsourcing work: some examples from the UK election

(This article was also posted today on the BBC College of Journalism blog – http://www.bbc.co.uk/journalism/blog/2010/06/what-makes-people-send-us-thei.shtml)

The idea that journalists can tap into online sources to access a kind of collective consciousness is often touted as the next – or at least the current – big thing. If this is the way forward, news organisations need to understand what makes audience members decide to interact or participate.

‘Crowdsourcing’ may sound very contemporary, but it’s simply a more sophisticated version of a process that’s been around for decades, in at least two earlier forms.

– Traditional feedback

In the beginning, interaction between audience and news organisation was a simple call-to-action, with audience members responding independently of each other.

e.g. radio call-in shows, letters to the editor

– Technologically enhanced UGC (user-generated content)

More recently, this approach has blossomed into an almost unrecognisable diversity as the audience makes use of a much wider range of technology to submit material.

e.g. emails and text messages to news organisations, pictures of breaking news events submitted via mobile phones, tweets and additions to Facebook pages.

Both the above should be distinguished from the current phase:

– Social media

Today people can comment and discuss with one another, so a feedback mechanism exists beyond the news organisation.

e.g. online dialogue between audience members, integration of social networks into news organisation output, collaborative crowd sourcing

As I’ve written about previously, I led a piece of research about UGC at the BBC, exploring the motivations of people who submit material (pictures, comments, story suggestions) to a news organisation, and examining what stops people from responding to calls-to-action. In December 2007, only 4% of the UK population had submitted anything to an online news website.

While there were a number of reasons that people gave for not submitting material – lack of time was a common one – the most interesting reasons were:

– “I don’t know enough to comment or add anything.”

– “It’s complicated. I wouldn’t know how to take a video or photo or even how I would contact a news organisation or send anything in.”

– “There would be no point contacting a news organisation with a suggestion about a potential story as they already know about it. If they don’t cover a story I’m interested in it’s because they don’t think it’s important enough, not because they don’t know about it.”

– “I would be much more likely to comment if I thought someone important was listening, such as the Prime Minister.”

I’m repeating these here as I want to stress how important they still are, although our research was completed when journalism was still very much in the second (Technologically enhanced UGC) of the above stages. At the time, social media had hardly made a dent on newsrooms.

News organisations are slowly experimenting with innovative ways to use social media to engage the audience. The imaginary quotation marks around the term ‘crowdsourcing’ make it appear to be a new and exciting element of journalism, but many crowdsourcing initiatives depend on the same behavioural characteristics that limited participation in the earlier stages of audience interaction.

So how, specifically, do those factors play out in the current context? I’d like to compare three crowdsourcing initiatives from around the time of the General Election in May 2010.

– “If I were Prime Minister” video wall (BBC News – Have Your Say)

The BBC’s election wall appeared at the beginning of the campaign and featured short videos of people talking to camera saying what they would do if they were prime minister.

The hope was that people would submit their own videos but in fact very few people did, and many films were professional vox pops filmed by BBC crews when they were on location during the campaign.

In many ways this initiative had potential and could have taken off. The main problem was the barrier to participation: most people don’t know how to film themselves, and even if they did, they would struggle to know how to submit their video to the BBC or would worry about cost. In addition, many people would feel unable to comment for 20 seconds on political policies.

– Message for the new Con/LibDem Coalition Government (Guardian Flickr Group)

The Guardian created a group on the photo-sharing site Flickr and asked people to send a message to the new Coalition Government using a photograph. In contrast to the BBC election wall, the Guardian overcame three key barriers to participation:

– Difficulty of creating and sending content

– Concern that participating was pointless

– Fear that they didn’t know enough to comment.

The project was based on photography – which is much more accessible to most people than video. The call-to-action also encouraged participation by emphasising that this wasn’t necessarily a serious project: “A greeting? A warning? Some sage (or silly) advice? An idea? A request.” And while there was no guarantee that David Cameron or Nick Clegg would look at the results, rhetorically it was set up with that intention, giving the project a purpose beyond simply contributing.

While someone might not have felt confident sending in a picture that outlined the finer points of a potential economic stimulus package, it was easy to draw “TELL THE TRUTH” in big letters on a piece of paper and hold it up to the camera.

– Commons Sense book for new MPs (BBC Radio 5 Live)

This was a project which produced a real-life object, not simply a digital one. A Commons Sense book was made and given to all new MPs as they arrived at the House of Commons for their first day.

The suggestions in the book came from audience members who had attended outside broadcasts at the party political conferences last September. The wider audience then whittled these 600 suggestions down further to select entries for the final book.

This is a powerful example of the merits of combining off- and online engagement, which so often produces the best material. And like the Guardian example, the very design of the project emphasised that suggestions would be read by people who matter, rather than soliciting audience comments for the sake of it.

All three of these initiatives had merit. This post is merely attempting to compare and in the process share what we can learn from such experiments. And experiments they are. We’re still learning what works and what doesn’t.

My point is that we already know something about what motivates and prevents people from engaging with the news – whether in a call-in radio show or an ambitious crowdsourced project online. When newsrooms are planning these initiatives, it is worth considering…

– How simple is our call to action?

– Have we made it as easy as possible for people to contribute?

– Have we made it as clear as possible how people contribute (technology needs and associated costs)?

– Have we given people a reason to contribute – beyond simply contributing (‘faux interactivity’ as one of our participants called it)? Who might be listening? Might change actually happen as a result of someone contributing?

Successful crowdsourcing projects don’t have to involve the Prime Minister listening. Two of my favourite initiatives are the BBC World Service Save our Sounds project and BBC Radio 4’s History of the World.

In both, people are asked to contribute to an online ‘archive’ that would remain for others to experience and enjoy.

To be successful it can’t be ‘interactivity for interactivity’s sake’. The audience can smell it a mile off.

Social media & journalism: a research critique

This morning I awoke to a few people in my twitter stream retweeting a piece of research by the Society for New Communication Research about the increased reliance of journalists on social media. As someone who works every day with BBC journalists training them to do this, I was sceptical of the numbers, so did some digging.

The report was actually published in February this year, and you can read the executive summary here, and see a slideshare presentation here. The headline findings suggest social media is being taken very seriously in newsrooms.

48% are using Twitter and other microblogging sites
66% of journalists are using blogs
48% are using online video
25% are using podcasts

Overall, 90% of journalists agree that new media and communication tools and technologies are enhancing journalism to some extent.

For those of us who use social media every day, and understand its value, it’s tempting to see these figures and quickly bookmark the document in delicious, happy that they support the way we believe journalism should be headed.

The bottom line is that while the research is valuable as a qualitative study of some journalists’ attitudes (and the quotes listed in the slideshare offer some interesting insights into these), this research isn’t a quantitative representative study, however tempting the statistics are. When you look at the methodology, you can see that only 341 people took the survey and the survey was web-based.

Unfortunately I couldn’t find out how the survey was undertaken, and without that information it’s impossible to do a proper critique. For example, the executive statement claims a 95% confidence ratio, which doesn’t even make sense without other information. (Apologies to the authors if I’ve missed the detailed methodological explanation, but it should have been in the Executive Summary).
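The point about the ‘95% confidence ratio’ can be made concrete: a confidence level only means something alongside a margin of error and a genuinely random sample. As a rough illustration (and assuming a simple random sample, which this survey cannot claim), the standard formula for a proportion shows what n = 341 would buy you at best:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p from a simple random
    sample of size n, at 95% confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# The reported 48% twitter figure with n = 341:
moe = margin_of_error(0.48, 341)
print(f"48% +/- {moe * 100:.1f} percentage points")
```

Even under that generous assumption the 48% figure carries a margin of error of about five percentage points, and without random sampling the confidence level itself is meaningless.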

I’m particularly interested in how the web survey was distributed. Was the survey sent via email to journalists? If so, how were the journalists chosen? What was the ‘population’? To produce a representative survey, you need to be clear about the total population and sample randomly from it. In a world of fluid labour markets with increasing numbers of freelance journalists, how do you get a clear population, particularly when this survey boasts about its worldwide sample?

If it was a web survey on a site journalists were expected to stumble upon, you’re already self-selecting journalists who spend more time on the web, and the types who are more likely to fill in web surveys. And even if it was an email with a web survey link attached, it would still be self-selecting. That is fine, but the authors should have disclosed how many journalists received the email, and how many went on to complete the survey.
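How badly self-selection can skew a result is easy to simulate. The figures below are entirely hypothetical (a made-up population where 30% use twitter, and twitter users are five times more likely to answer a web survey), but they show the mechanism:

```python
import random

random.seed(42)

# Hypothetical population of 100,000 journalists: 30% use twitter.
population = [random.random() < 0.30 for _ in range(100_000)]

# Self-selection: suppose twitter users are far more likely to
# answer a web survey (10% respond) than non-users (2%).
responses = [uses for uses in population
             if random.random() < (0.10 if uses else 0.02)]

true_rate = sum(population) / len(population)
survey_rate = sum(responses) / len(responses)
print(f"true usage:      {true_rate:.0%}")
print(f"survey estimate: {survey_rate:.0%}")
```

With these assumed response rates the survey estimate more than doubles the true figure, which is exactly why the authors needed to disclose how respondents were recruited.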

The reason I was sceptical about the figures, as much as I would like them to be true, is that I have spent the last six months leading training courses with BBC journalists about social media. While the journalists I have trained are mostly enthusiastic about learning more about these tools, my anecdotal evidence from training roughly the same number of journalists as were involved in this survey does not support these figures.

That does not mean the BBC are behind in this regard. That also does not mean I have a self-selected sample because the people coming on the course are in some way ‘behind’. In many ways the journalists coming on the course are the ones who are ready to embrace it. But in very busy newsrooms, where resources are becoming increasingly limited (and I recognise the BBC is in a very special place in comparison to other newsrooms where income is not guaranteed), most journalists have not had the time to spend experimenting with these new tools. When your day is spent desperately trying to meet deadlines, there is no time to have a play with twitter, find relevant blogs, spend time verifying who the authors are, or battle with Facebook’s privacy settings.

Most journalists I train are aware of all the tools mentioned in this report, and want to feel more confident using them, but want time to experiment, and learn the subtleties before relying on them for newsgathering.

While they might have answered the question ‘have you used twitter in your reporting’ with a ‘yes’ because a colleague pointed them in the direction of twitter when a breaking news story unfolded and they were intrigued to see what was being said, that is different to actively using twitter as a newsgathering tool every day. So when 48% of journalists from this survey said they have ‘used twitter’, how is ‘used’ defined? Does ‘used’ mean occasional or frequent lurking, or does it mean regularly engaging with sources and contributing to the community?

I want to stress wholeheartedly that this post is not meant to highlight the shortcomings of BBC journalists. I have talked to a number of other people who lead training at other news organisations across the world, and I know everyone shares my experiences.

I offer anecdotal evidence to show how wary we have to be of research that is not statistically representative. This research is worthwhile and interesting, but beware of drawing conclusions which cannot be drawn.

Social media offers everything the authors suggest in terms of disseminating news, finding story ideas and sources, monitoring sentiments and discussions, researching individuals and organisations, keeping up on topics of interest and participating in conversations with the audience, but I would argue it is far from universally accepted in newsrooms.

There are some journalists who absolutely understand the value, engage daily, are wary of the potential pitfalls, and their reporting is significantly improved because of it. The majority of journalists understand social media offers a range of new tools for newsgathering and building community, but understand it should be treated with caution, and the subtleties appreciated. I would say the same of this research.