Announcing Ocqur’s partnership with PoliticsHome

Since we began developing Ocqur at the start of the year, I’ve had an idea of what our ideal clients would look like. They’d be well versed in the art of live coverage, keen to try new things, and they’d keep a finger on the pulse of whatever topic they wrote about.

It turns out our first client does just that.

PoliticsHome is the first commercial partner of Ocqur. We’ve been consistently impressed with the liveblogs they’ve produced across stories like the Cabinet reshuffle and the Hillsborough report results. They also do a grand job of covering PMQs every week.

We’re really pleased to be working with them on developing Ocqur – they have a great team of digitally savvy journalists who know what works for them and consequently help us build a better product. Most obviously, they, along with our alpha testers, prompted us to build a local image upload feature, which is key to many of the stories they cover.

We’re really looking forward to working with PoliticsHome in the year ahead. If you’re interested in a similar partnership with us, get in touch.

Context: One thing that liveblogs can’t do at the moment

Liveblogging is still a nascent writing style. There have been several discussions about its value. You can debate for days about whether or not it’s appropriate for everything, or whether covering something through process journalism is always the right decision, but liveblogging is a form of storytelling that’s here to stay.

To me, the one issue that almost every single liveblog I’ve come across has failed to address is context.

When talking to people about Ocqur, one of the most common pieces of feedback was that readers couldn’t properly understand what had been going on in a liveblog if they joined the story or event part of the way through.

This is a problem for two reasons.

First, viewers are likely to spend several minutes trying to work out what happened and when before getting back to the latest updates, which is a poor reading experience.

The second reason is a byproduct of the first: as long as this problem keeps occurring and readers view it as an issue, it’s going to put people off the liveblog experience.

There’s a reason why things like Longform and Readability have done so well – they enshrine the simplicity that long form reading had before we were assailed by a horde of feeds, links and social networks. Right now, the frontends of many liveblogs don’t seem to treat their readers in the same way.

So how can it be done better?

News organisations like the Guardian, the Times and the New York Times have it easier than most in this respect, simply because they have access to thousands of articles, hundreds of tags and topic pages, video content, photo archives, commentary and analysis. This alone should make the task of contextualising liveblogs a lot simpler for them than for a standalone service like Cover It Live or Scribble Live.

In fact, the New York Times already seem to be halfway there. Take a look at this screenshot of their Facebook IPO liveblog.

The right column displays a graph tracking the stock price, with major shareholders listed below and a pane offering videos, interactives and documents. The NYT come closer than any other organisation to offering a full contextual experience alongside their liveblog.

So for us at Ocqur, this is potentially the toughest nut to crack. Feature requests are small fry – we can support multiple authors, we can give you more options for embedding, and we can add permalinks for individual entries.

But when it comes to context, how do we interpret that? It’s potentially a very abstract and subjective concept. One man’s article is another man’s YouTube video, and it’s very difficult to tell how everyone reads stories when they’re being played out live.

I think the solution lies, ironically, in looking at how ‘old’ media cover things like elections. Take the BBC’s 2010 general election coverage. The main coverage consisted of rolling news in the vein that we’re used to seeing from the BBC news channel and Sky News, with reporters from various counting halls around the country occasionally doing a piece to camera and reporting the local result. This is the broadcast equivalent of the liveblog, with the liveblog author taking the place of the programme producer.

Between these results, the BBC would come back to the studio, where Jeremy Paxman, David Dimbleby, Emily Maitlis et al filled viewers in on the bigger picture. “Here’s the result,” said the journalist in the counting hall, “and this is what it means,” said the presenter in the studio.

The problem with this analogy is that we don’t seem to have found our presenters yet in liveblogging. There isn’t much contextualisation of the river of information that’s flowing through a liveblog, and it’s one of our main challenges in the ongoing development of Ocqur. @socialtechno has pointed out some excellent processes on how to address this in the comments.

As mentioned in my last post, we’ll be working with our testers in the next couple of months in order to really draw out and establish what it means to have a liveblog that truly allows the reader to stay up to date as well as understand the key issues quickly.

Ocqur – Reflections on user testing and the future

If you’ve ever built a product from scratch, you’ll inevitably have come up against the dilemma of whether to polish it until you think it’s perfect before releasing it to users, or to ship a minimum viable product that ticks a few boxes and lets the users dictate the next iteration.

The latter is the approach we took with Ocqur, which is liveblogging software that I’ve been working on with Jonathan Frost and Andrew Fairbairn.

I’ve been overseeing the first round of user testing since we started building the service at the beginning of the year. It’s been really educational, and also thrilling to see it being used outside of our small circle, so I thought I’d post a few thoughts about lessons learnt and what we’re planning for the future.

Structuring feedback is really tough

Early testers of Ocqur have been giving us feedback over the testing period. Some emailed me their thoughts, others blogged or tweeted about it, but testers were also required to fill out a questionnaire I’d written.

The difficulty in providing a useful arena for feedback lies in striking a balance between serendipity and structure, so that you still get specific metrics. For example, you write a question that asks the tester “Which feature is the most important for Ocqur? A, B, or C?” What if there’s a “D” that you haven’t thought of? The tester might have “D” in mind as the most important feature, but you’re not giving them the option to suggest it.

I think I managed to strike the balance fairly well – we’ve got a workable set of percentages and figures from the questions that can be answered with a yes or no, as well as long form feedback from the more open-ended questions.

There is a gap in the market

When we set out to build Ocqur, we saw it as an opportunity to create a liveblogging system that was simple but powerful and married good design to nice functionality. A lot of the feedback we got from testers was that they were surprised and pleased with how simple the product was.

I’ve had some people ask me about comparisons to Storify, and how we differentiate Ocqur from their offering.

To ask that kind of question is to miss the point a little. Storify is a great tool – I use it frequently. But it’s not what we’re after. Publishing a Storify “as live” requires the user to constantly republish the page (which doesn’t automatically refresh for viewers) and to repeatedly notify viewers that updates have been made. It works much better as a way to collect thoughts after an event has happened.
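The technical gap here is that a liveblog frontend keeps pulling in new entries on its own, while a republished page sits still until the viewer reloads it. A minimal sketch of that incremental polling idea – the entry IDs and the `fetch_entries_after` helper are hypothetical stand-ins for a real network call, not Ocqur’s actual API:

```python
# Sketch of incremental liveblog polling: the client remembers the newest
# entry ID it has seen and only asks the server for entries after that point.

def fetch_entries_after(last_id, server_entries):
    """Stand-in for a network call: return entries newer than last_id."""
    return [e for e in server_entries if e["id"] > last_id]

def poll_once(state, server_entries):
    """One polling tick: append any new entries and advance the cursor."""
    new = fetch_entries_after(state["last_id"], server_entries)
    state["shown"].extend(new)
    if new:
        state["last_id"] = new[-1]["id"]
    return new
```

Run on a timer (or replaced by a push channel), this is what lets a reader’s page stay current without a single manual refresh.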

We think that liveblogging shouldn’t be as complicated as it has been in the past. We think the current offerings are either poor or unaffordable for the majority of bloggers, freelancers and student journalists. Luckily, at this early stage it seems like our testers felt the same.

People interpret features in different ways

The reason we decided to release to testers so early in development is that we didn’t want to spend another 10 weeks building something only to find out that no one wanted it. User input at this early stage was vital.

At the same time, it’s interesting when testers throw up something that you really didn’t think would be a big issue. For us this was being able to upload content from your desktop onto a liveblog.

I have never done this myself, having worked with pretty much all the consumer liveblogging services out there. I tend to scrape content from various web sources, and if I need to use photos from my phone in a liveblog I either post them to Twitter or share them to Dropbox.

But clearly our testers want this feature – they’ve voted overwhelmingly in its favour.

A breakdown of testers' views on the importance of desktop uploading

So now the question is, what do they use it for? Documents? Audio? Video?

Asking users to rank the importance of desktop upload may seem fairly specific, but in reality people may have all sorts of ideas about why it’s important to them and what they actually want. To that end, I’m going to chat individually to those who ranked it as very important and dig a bit deeper into why they need it.

The future

We had an overwhelming response when we put out a call for testers – over double the number of registrations that we needed for the first stage. If you’ve signed up and haven’t been contacted this time round, don’t worry – we’ll be sending out another iteration of the software in the next couple of months, and you’ll be the first to get your hands on it.

A big thank you to everyone who’s participated so far – we’re really looking forward to sharing our plans for Ocqur with you in the months ahead.

Review: Storify iPad app

Earlier this morning, Storify announced that they were releasing a free iPad app. I’ve downloaded it, and these are my first impressions.

The app works in landscape mode only. Getting to the login screen means typing in your username and password – slightly confusing for me, because I’ve always logged in via Twitter since the beta version. Having tried all the possible iterations of my Twitter password, I then had to do a password reset via email in order to get in. This might just be me being forgetful, but those of you who’ve associated your Twitter account with Storify may also hit this problem.

Anyway, once you’re in you get access to all your Storify stories in a nice gallery view. You can edit any of them from here, but I thought I’d create a short story just for this review.

The page for composing your story is similar enough, with the familiar tabs of Twitter, Facebook, Instagram, YouTube, Flickr and browser links available for you to run searches in.

The only difference from the desktop version is that there isn’t a tab for Google content, which normally pulls in web searches, news and images. I never use that tab, but it’s worth bearing in mind.

Once you tap on any of these, it’s very much like the desktop version. You can filter tweets by user, search or images, and the drag-and-drop interface makes it really easy to quickly create the story. Interestingly, the iPad app also has one feature that the desktop version doesn’t – the ability to tweet from your own account while inside the app.

Pulling content from Flickr and YouTube is similarly pain-free: once you’ve run a search, just pick up a piece of content by tapping and holding, then move it over to the desired area of your story.

I can see the iPad app being incredibly useful for a couple of reasons.

The first obvious one is conference use. iPads are already ubiquitous at conferences – they’re better for tweeting and note taking than a smartphone without being as cumbersome as a laptop.

But because the iPad app’s drag-and-drop interface is so intuitive, you’d easily be able to collect content together in the break between conference sessions. I’ve already written a few blog posts entirely in Storify, and I think this will only accelerate that trend.

The second obvious use is news coverage combined with mobile journalism. If you’re out and about covering an event with your smartphone – taking photos, shooting video, livetweeting – it’s now really easy to sling an iPad in your bag for some post-event curation in a nearby coffee shop. Again, no laptop required.

Once you’ve finished your story, you’re presented with the publish screen, which thankfully has all the functionality of the desktop app – publishing to Facebook and Twitter, and the ability to @-reply anyone who’s been quoted in your story.

Maybe the announcement wasn’t as big as some people were expecting. It wasn’t an acquisition like some were predicting, but the Storify iPad app stands on its own two feet.

It has a few bugs (it crashed several times when swiping between stories) but that’s to be expected from an app that’s just been released.

In the long run this’ll mean only good things for Storify – capturing a particularly savvy audience of content creators while they’re on the move and giving people yet another reason to ditch their laptops in favour of an iPad when they’re covering events.

Here’s my finished story that I made on my iPad in about 5 minutes: