Paying for Journalism Online

It's been some time now since the fall of Andrew Sullivan's blog. Sully himself has moved back to a mainstream publication (this time New York magazine), and the world has moved on.

Andrew had an interesting idea: continue his blog as he had done at places like The Atlantic, but on his own, letting readers see a certain number of full posts through a (leaky) paywall while giving subscribers full access for a single low yearly fee.

Alas, even Andrew Sullivan, with his huge blog following, couldn’t quite keep it going. For his own health, as much as anything else, he finally called it a day.

Does that mean his experiment failed? Does it mean that paying for online journalism just won't work? I hope that's not what people take away from his venture.

First of all, I'm just not sure people will pay to read a blog that was mostly an aggregation of content from other sites. There might – and only might – be a viable platform for paying to read content, where there's that direct link between the author and the reader.

The only truly viable platform that exclusively works like that right now is the humble book. Be it a “dead tree” version or an ebook. People, it seems, are still more than willing to pay to consume that written content.

You can say that people are still paying for newspapers and magazines too. Yes they are (though circulations are falling), but think about it: the cover price almost never pays for the content. They all have adverts running in them. Here in the UK, free papers like The Metro are actually (in an income/expenditure sense) among the most successful models right now. They make a lot of money, and don't charge their consumers for the content.

However, that simple model isn't working in the online world. Rewind a number of years. Newspapers started getting websites. Journalists, keen to have their work seen by as many people as possible, convinced their bemused editors to let them post all their articles online, and that content was available for free. But the value of the advertising (especially considering how clever those media-rich ads could be) was never really understood by the sales and editorial teams. Very quickly Google dominated that game, allowing advertisers to pay pennies for ads whose equivalent in print would cost many pounds. Google was happy with this because it works at scale: it can reach tens of millions of customers and be quite content. A modestly popular site that gets, say, 50,000 visitors a week will make a fraction of the revenue from advertising that a weekly magazine with a circulation of 50,000 would make from its ads.

So those appear to be the two main models that people concentrate on. However, I think there’s another model we dismiss at our peril.

Journalists (particularly older ones like, say, Andrew Sullivan) really dislike "native ads", sometimes called "sponsored content" or "advertorials". These are articles usually made by the in-house editorial team, but used to promote a message from an advertiser. Some associate it with BuzzFeed (which does very well, by the way) and the like. I don't see why the concept, with a different tone, couldn't work in other forms. I'm personally totally okay with that content, as are many of my fellow millennials.

I spoke to a load of people my age (and younger) about this. The response was fascinating. We often seem to be okay with advertorials, as long as they're called "sponsored content" and clearly labelled as such. We're just as likely to read it (if it sounds interesting) as we are the rest of the content. We don't like being deceived into thinking that an article is purely editorial rather than sponsored, but apart from that, I think we're okay with it. It's just more content in the mix.

Unlike the Googlefication of banner ads, sponsored content needs to be high-quality. It needs to be readable. For the consumer, it shouldn't be in-your-face and offensive (like those awful intrusive ads that block the content until you find the 'X' to close them). And for the advertiser, it appears to have much higher conversion rates than a standard ad.

It’s scalable, but can’t be automated. A computer can’t automatically write a beautiful, artistic, engaging, clever article for a client. That takes good journalists and copywriters. Therefore, it can’t be made for a few cents. You need to spend real money, and get it out there.

I think that this kind of content can help pay for the other stuff, the content that’s unshackled from the burden of commercial pressures, while making it free to the consumer. It might be online journalism’s best hope for growth.

So I remain optimistic for the future of written journalism and content creation, and I see sponsor content – be it on blogs, news sites, Medium, etc. – as being one of the most interesting and practical ways of getting us there.

Imagine your favourite sites: clean, ad-free, fully accessible and gratis, with sponsored content among the rest of the work. But paid for and sustainable.

So what do you think? Can sponsored content (done in the right way) be the digital shot in the arm this business needs?


Apple, The Feds, and The Good Fight.

On Monday 21st of March, Apple gave a keynote presentation on some of their new products. We heard about the advancements and breakthroughs already bearing fruit in the HealthKit APIs (leading to the fascinating discovery that there are likely to be several strains of type-2 diabetes, for example), plus announcements on a new iPhone, iPad Pro, and much more besides.

But many wondered if CEO Tim Cook would address the elephant in the room: the company's ongoing battle with the FBI over the phone previously owned by the San Bernardino terrorist.

Rather than shy away from the debate, Tim Cook dealt with it head-on and admirably. Since this talk, it appears the Federal government has backed down, but it doesn’t stop this from being an important issue. To his credit, Cook didn’t wallow in the message, but made his points and moved on. You can find out more in the keynote below:

For a more detailed look at his musings, check out the unedited transcript of his talk with Nancy Gibbs and Lev Grossman for Time Magazine, here.

In the interview, Cook makes many salient points. The excerpt below summarises the situation nicely:

But at the end of the day, we’re going to fight the good fight not only for our customers but for the country. We’re in this bizarre position where we’re defending the civil liberties of the country against the government. Who would have ever thought this would happen?

Cook might be bemused that it's up to a company like Apple to fight this "good fight", but it's quite typical for private businesses and individuals to be the ones that curb the over-reach of government. In fact, that's usually the default situation.

I understand perfectly the concern over our safety. But it is never good enough to give up essential liberties in order to gain some possible, abstract and temporary security. Now, it appears the FBI is backing down, and doesn't think it needs Apple to do anything to help. That's great. But this is a fight that will need to continue.

It's interesting – and anecdotal, with exceptions on both sides – that those who are technology literate (including Google, Facebook, Twitter, et al.) generally support Apple's stance, while those arguing in favour of the federal government are typically not as technology literate.

I don't know if this helps clarify things in a simpler way, but let me use an analogy to explain the situation that doesn't bring technology into the debate at all: imagine that there was a very slim chance that a now-deceased terrorist might – and only might – have left some information about future plans behind a locked door. The FBI go to the lock-maker and demand that they build a universal key that not only unlocks that one door, but could unlock every door in the country. And then the key will be duplicated thousands of times and given to thousands of federal employees.

Ask yourself: in that scenario, are we really any safer if the lock-maker complies? Or will his compliance make us all less safe?

Answer that question honestly, and whichever conclusion you reach, I think you’ll figure out what side of the debate you’re on.

The Machines Won’t Destroy Us

There have been a hell of a lot of doomsday movies and books over the years, chronicling mankind's fall at the hands of the machines we've created.

While these stories tend to take the position of the Luddite, I often find them entertaining. But it's always worth pondering their message: will the machines one day rise against us?

Advances in neuroscience are coming on in leaps and bounds. We're entering a new dawn of artificial intelligence, combined with astonishing improvements in microtechnology. These will lead to better and smarter machines, and maybe, eventually machines that are smarter than us.

It's why a number of scientists, innovators and thinkers (not least of all Stephen Hawking and Elon Musk) have expressed concerns about where this is all headed.

It's not a new worry. Every time new technology supplants old, we wonder what the tragic human cost will be. In the short term, we often see people lose jobs and so on. But once the process economists call "creative destruction" plays out, we basically do okay in the long run. Things get better, and innovation leads us to a better place.

As machines get smarter, I can't see them wanting to wipe us out. Many of the top neuroscientists in this field seem to lean to the same conclusion. While we'll make strides in creating machines that share many of our emotional traits, most of the innovation that we have seen, and will see in the future, will be in the neocortex area, dealing with logic, reasoning, and knowledge.

Machines will be able to pass on lots of knowledge, and store more information than you or I could ever possibly hope to. But that's not the same as "feeling" anger, hate, or any other emotional traits. Machines will learn from past experience, and then (just as slowly as us in many ways) discover new ideas and work out how effective they are.

Even if they have ideas about destroying us, and replicating themselves without needing us, the logical part of their neocortex-style programming will almost certainly always lead them to one reasonable conclusion: "Humans made us. They innovated and brought us into being. If the goal is to grow and advance, they are our best hope for precisely that kind of innovation and advancement."

With this in mind, I don't think we have much to worry about from the technology we're creating.

Skynet's not going to be coming for us any time soon.

 

Why Journalists Should Learn Markdown

Here in the UK, we have what is generally regarded as the toughest training for journalists in the world.

One of the requirements of the NCTJ (National Council for the Training of Journalists) is to learn Teeline Shorthand, up to 100 words a minute. If you don’t get your 100 words, you haven’t really passed.

There's quite a bit of debate about whether or not we need to learn it any more. Those of us with the ability to write in shorthand find it invaluable, and those without it don't think they're missing out. Even I have to concede that American journalists are pumping out really great work, and shorthand hasn't been a core requirement there for almost two generations.

Though I still highly recommend learning shorthand for a variety of reasons (not least because it demonstrates a dedication to the field: a way of showing that you really want to be a journalist, not just someone who wants to be on TV or radio or see their name in print), maybe there is one thing more relevant to a modern journalist that could be the new mandatory requirement: markdown.

I write in markdown all the time, and it’s really easy to use. In fact, there’s a good chance you know markdown even if you’ve never heard of it before.

Markdown is a useful tool for writing essentially raw text files, with a few little conventions that signify basic formatting. So putting some words *between single asterisks like this* signifies italics, putting words **between double asterisks like this** signifies bold, and so on.

There are also easy formats for headers, embedded photos, and hyperlinks. Once you've written in markdown, there's a ton of free or cheap apps that will convert it to full-blown HTML.
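To give a flavour of it, here's a short made-up snippet of a story written in markdown (the heading, emphasis, link and image syntax is standard markdown; the story, URL and filename are just placeholders):

```
# Council approves new cycle lane

The vote passed *narrowly* on Tuesday night, with **12 votes to 10**.

Full minutes are on the [council website](https://example.com/minutes).

![Cyclists trying out the new lane](cycle-lane.jpg)
```

Run that through any converter and it comes out the other side as clean HTML – a proper heading, emphasis, a hyperlink and an embedded image – with none of Word's hidden styling attached.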

Why is this relevant to a 21st-century journalist? Well, most journalists spend a not insignificant chunk of their time in a CMS these days. That's a Content Management System, which is a fancy term for website back-ends like WordPress and other more specialised systems. Every time you want to add a blog post or a new news story, you often write on your phone, tablet (I'm writing this post on my iPad) or computer, using a variety of applications; one of the big daddies is, of course, Microsoft Word.

Then you can paste your article into the CMS page, click save and you’re done!

Except you rarely are done. Especially if you've written in Word, the funny formatting in that app can cause all sorts of headaches. If the journalist prefers to write in a particular font, say, or a specific font size, that can carry over into the page of the website, making it look totally out of step with the site's visual design. Repairing it – and adding the hyperlinks and embedded images by hand too – is a subeditor's nightmare.

So what options does a journalist have? Well, we could write in raw text. Certainly no formatting issues there, but unfortunately it won’t allow you to add emboldened text, headers, links, etc.

So we could all use HTML – but that might take time for many journalists and creative-types to learn, and a simple mistake in the coding could end up causing more problems than copy-and-pasting Microsoft Word’s text into the CMS in the first place.

So that leaves markdown. Simple to learn, easy to apply, you can do it in any word processing app (yes, including Word), and there are even dirt-cheap apps that allow you to see the effects of your markdown as you write (like Byword, which I'm writing this in right now, using markdown).
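If you're curious what those converter and preview apps are doing behind the scenes, the answer is: not much. Here's a rough sketch of the idea using the open-source Python-Markdown package – an assumption for illustration only, not necessarily what Byword or any particular app uses:

```python
# pip install markdown
import markdown

story = """# Council approves new cycle lane

The vote passed *narrowly*, with **12 votes to 10**.

Full minutes are on the [council website](https://example.com/minutes)."""

# Convert the markdown source into the HTML a CMS expects
html = markdown.markdown(story)
print(html)
# <h1>Council approves new cycle lane</h1>
# <p>The vote passed <em>narrowly</em>, with <strong>12 votes to 10</strong>.</p>
# <p>Full minutes are on the <a href="https://example.com/minutes">council website</a>.</p>
```

The markdown source stays perfectly readable as plain text, and the HTML that ends up in the CMS is predictable every single time – no stray fonts or sizes sneaking in.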

It's 2014. Journalists are all having to get to grips with content management systems now. For me, markdown makes putting things online so much easier, allowing you to concentrate on your content and worry less about formatting issues.

The Future of Android and iOS

Right now, Android has an 80% share in the world smartphone market, compared to iOS' 15%. However, if only one of those were to still be around (in some form) in ten years, I'd bet on iOS. Here's why:

There’s no doubt that Android is winning the market share war at the moment. But Apple has never really been a company that cares about market share, and that seems to have served them pretty well over the years.

But the real war to win – for sustainability if nothing else – is the financial one, and here, Android is failing in a way that might eventually seal its doom.

The great thing about the free market: around half the Fortune 500 companies that were on that list when I was born, around 30 years ago, are not there now. Either they vanished into oblivion, or they no longer find their names there because they're owned by other, larger companies. And about half of the Fortune 500 companies today didn't even exist thirty years ago. As Bob Dylan once told us in "The Times They Are A-Changin'": "The first one now will later be last."

It's with this fluid market dynamic in mind that I look at Android today.

The market share of Google's mobile OS is very similar to the share the Symbian platform held in the mid-noughties, before a certain Californian fruit company unveiled the iPhone and changed everything.

Symbian, not unlike Android, could be used on all sorts of phones. It became the most widely-licensed mobile platform in history. Nokia (the ones really pushing it), Motorola, Ericsson, Samsung and most of the rest all used it. It went from the number one smartphone OS in 2005 to as dead as Dillinger in 2009.

Android, as even the name suggests, was the child of Symbian. Developed by Google, it was originally designed to be the "new, improved" successor to the world's most popular OS. But once iOS (or iPhone OS, as it was initially known) was shown to the world, Google went back to the drawing board, making Android a touch-screen-only experience.

Desperate to find an OS that could rival Apple’s offering, dozens of manufacturers dived into Android, including Korean giant Samsung. In 2009, 80% of Samsung’s smartphones were using Windows Mobile. 20% Symbian. They announced around a third of their phones in 2010 would be Android. With Google’s open-source OS, they could add a “makeover” (in their case, Touchwiz) and differentiate themselves in the market. In short, as we’ve seen, Samsung’s plan was to be another Apple. They wanted their OS, apps, and phones to replicate that “feel” that Apple have, without the R&D. Just cram in some extra features, and take market share away from the new upstart, iOS.

Here lies the problem. Because Android is free to add to any smartphone (even Apple could build Android phones if they were so inclined – you don't even need to ask Google's permission), lots of cheap phones are made with a version of it running. Many of these cheap smartphones – particularly the ones in the Far East – aren't even used as smartphones. People just make calls and send and receive texts.

The only really premium smartphone company running Android on their phones is still Samsung, and it's only there that Google makes any serious money. Google gets paid when Android users use its search and other Google products. This revenue, like most of Google's, comes primarily from advertising. Their other revenue source is the 30% commission from the Google Play store, just like Apple takes from its App Store.

The best estimate is that 80% of Google's search/app revenue comes from Samsung smartphone users, and 90% of all Google Play sales come from them too. Google may have many companies using its OS, but most of those users don't even use their devices as smartphones – just like with Android's parent, Symbian. Despite Android coming out later than iOS, it actually represents the continuation of the Symbian (and Java Mobile) status quo. iOS was – and in many respects remains – the plucky little upstart. It just so happens to be the upstart that also makes most of the money. Even Google makes more money from iOS users than it does from those using Android phones.

And it gets worse for Google. This year, Samsung are releasing smartphones running the Tizen operating system: another open system, not massively different from Android, but one that Samsung pretty much controls. The version they've got running in beta is so similar to their Touchwiz Android phones that most users won't be able to tell them apart. Many say that Samsung is tired of having to wait for Google to update its OS – and Samsung even has to add Knox, its own security integration, because it can't rely on Google to provide anything secure enough.

The transition for Samsung from Android to Tizen will be slow – and potentially unsuccessful – but once it fully moves over (if customers are happy with it), then Google is left with a bunch of also-ran companies like Motorola, who don’t make them any money at all. I can see Google choosing to walk away from it then, especially if they continue to make money from their potentially game-changing Chromebooks. After all, why waste money in the sphere that’s dying, when you can spend more creative energy in the bit of the business that’s really working well?

Then Samsung can duke it out in the premium market with Apple. But they still have a big fight on their hands. In 2013, 150 million iPhones were sold, while Samsung only sold 100 million – that’s from their entire lineup of premium Galaxy S and Note phones. Plus, with Tizen slowly coming into the market during the great transition, Apple can use that time to point out their other great strength relative to the rest of the market: a lack of fragmentation, which is good for end-users and developers.

In the same time that it took iOS 7 to go from a 0% to 90% install base for devices currently in use, Android’s KitKat has gone from 0% to just under 5%. So iOS developers can build using the latest and greatest APIs, knowing full-well that the vast majority of their customers can make use of them. Plus – or maybe in part because of this – iOS users are more likely to spend money on (or in) apps. The latest figures suggest that even though iOS is 15% of the smartphone market, it’s 74% of revenue, with Android at 20% and “other” at 6%.

No one really knows what the future holds for the smartphone market. Maybe Tizen will be a failure and Samsung will stick with Android. Maybe the whole world will suddenly decide not to bother with iOS any more. Maybe Microsoft has something game-changing up its sleeve.

But, judging by the state of the market today, I’d bet on the Californian fruit company.

In the Land of the Free, Disruption is King

There's a very human fear of technology ruining our lives.

“It’ll replace us,” we say, when a new-fangled bit of technology comes along. But is there any real truth behind that anxiety?

It’s our instinctive reaction. But that doesn’t make it the right one.

I can still recall a (serious) news report from when I was young, commenting – with furrowed brows – that, shock horror, some children as young as 8 "had access to the internet."

That was it. It wasn’t that they were doing something wrong, or even that they were unsupervised. It was the fear that they were using new technology. I posted a great video here a while ago featuring the founder of Wired Magazine, who says that in the early/mid 90s, mainstream newspapers were still writing articles about the web as “The Internet – Threat or Menace?” and, though that precise type of hysteria about the web has died down, general fear-peddling about new advances hasn’t ceased at all.

The truth is, technological change is part of an observed phenomenon I've written about before: "creative destruction".

In my great-grandparents' day, there used to be someone who'd ride down the road on a horse and cart, selling blocks of ice so that local housewives throughout the city could preserve meat and other things for longer. Eventually a bunch of clever American and Japanese people came up with an electrical device called a refrigerator, which has become a feature of pretty much every house everywhere.

Now the ice-block salesman’s job has gone. Indeed, so has the job of the people who made the ice. But seriously, are we worse off because of it? Furthermore, are they? Did those people who found themselves out of work lose out long term?

In the short term it must have been hard, but surely with their expertise in the freezing business, they were well-placed to sell and support those buying the new appliances. And today: well, those people are long gone, and their descendants are just like the rest of us – with better jobs, better (inflation-adjusted) wages, more buying power, higher standards of living, and even more leisure time than their great-grandparents could have dreamed of.

The smart and thoughtful Roger Bootle has raised some interesting points related to robots and artificial intelligence in the workforce. His article in The Telegraph, though too negative for my money, is nonetheless well worth a read.

We can't begin to imagine the hardship of living in an early hunter-gatherer society, where life expectancy was around 25 and things were, to put it mildly, pretty hard. The agricultural revolution must have rendered thousands of hunter-gatherer "jobs" redundant. But everything got better. The industrial revolution must have rendered millions of agricultural jobs redundant. But everything got better.

This technological revolution is, yes, rendering many jobs redundant. But the wisest among us shouldn't worry too much about it. I've been made redundant. It's an awful feeling. But for those of us who do find ourselves out of work, the stats show that it virtually always leads to better things. And the world gets more efficient, and standards of living around the globe continue to rise.

In the land of the free, disruption is king. Long may it last.

Writing Apps


Are you a writer?

What software do you use to write? It’s a question I’m asked a lot.

The truth – cliched and as obvious as it is – is that there’s not one single application that is the writer’s tool. Everyone writes differently, so everyone will find different tools work better for them.

Here’s some I use:

When I'm sitting at a computer, just trying to get some ideas down, I'm a big fan of OmmWriter. It's a distraction-free writing tool that provides a soundscape, inspiring backgrounds and encouraging clicking sounds as you type. There are a few options, but it's basically a text editor, and all the tools get out of the way when you start writing.

When I have a general idea of, say, a novel I want to write, I use one of a number of iPad apps to "block out" the story. One I'm currently using quite a bit is Index Card, because of its integration with Scrivener (which I'll talk more about in a moment). But Scrivener is supposedly releasing an iPad app in its own right that I'm looking forward to, so that might become my go-to app for planning a story in the future.

And that’s because when it comes to putting a novel or long-form written piece together, for my money, Scrivener is currently the best app I can find. I use the Mac version, so I can’t say anything about the Windows version (which I hear has fewer features) but it’s fast and nimble. Whether you are working on an empty new document, or one with thousands of documents, images, notes, and chapters making up a 250,000-word masterpiece, Scrivener remains incredibly responsive.

It’s cheap for what it is, and is feature-rich. But the real beauty of Scrivener is that you only need to learn the features you need, and you can discard the rest, or use them when you find a use for them. Not learning everything (and there is a lot to learn) doesn’t hamper your ability to get a lot out of this remarkable and well-thought-out app.

But that's just me; your mileage may vary.

I say try out everything you can, and you'll find a workflow that works. This "software experimentation" requires time and patience, but it's worth investing that time, as it could save you hours (or possibly months) later, when you find yourself knee-deep in an epic project and only then realise you'd rather work in a different way. Get it right for you from the start, and you'll avoid lots of headaches later. And I speak as someone who has made that mistake far too many times.

If you're really inspired, a simple notepad or a text editor will do the trick. But using some of these tools allows the difficulty and mechanics of writing to get out of the way, leaving you with your ideas and the tale you want to tell.

Happy writing and good luck!