E-books vs. e-readers

I was happy to see Barnes & Noble reporting that digital books are outselling physical ones on BN.com, and Amazon announcing that the Kindle was its best-selling product ever, because I hope this means much more e-reader content will become available.

I’m always surprised and disappointed when a book I want to buy isn’t Kindle-ized.   Then I’ll usually just forget about it; too bad, one sale lost.

Nook has 2 million titles
...but often not the ones I want

If e-reader use is exploding, as B&N and Amazon want everyone to believe, then why isn’t every book, magazine and newspaper available in an e-reader form?  Why are B&N and Amazon being so coy about releasing the type of numbers that would help publishers justify the investment?

Both B&N and Amazon have been trumpeting the number of devices sold, but this metric is meaningless, as industry watchers such as John Paczkowski at All Things Digital and Seth Fiegerman at MainStreet.com have pointed out.  It’s really mysterious (or is it?) why B&N and Amazon haven’t been releasing information that would give a complete picture of the number of people who are using e-readers, the type of content they’re paying for, and the amount of content they’re buying.

Here are a few of the metrics I’d want to monitor to determine whether the audience is there to justify making e-readers a more significant part of an overall digital strategy.

Increased e-book sales will come either from current e-book buyers purchasing more, or from new buyers getting devices.  Or both.  Thus, I’d start with these two actionable key performance indicators, both ratios (not counts):

  1. Number of e-books sold per current registered e-book buyer
  2. Number of e-books sold per new registered e-book buyer
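As a sketch, here’s how those two ratios might be computed from a raw sales log. The schema (`buyer_id`, `is_new`) and the data are invented for illustration; this isn’t any retailer’s actual format.

```python
# Hypothetical sales log: one row per e-book sold. Schema is invented.
sales = [
    {"buyer_id": "a1", "is_new": True},   # new registered buyer
    {"buyer_id": "a1", "is_new": True},
    {"buyer_id": "b2", "is_new": False},  # current (pre-existing) buyer
    {"buyer_id": "c3", "is_new": False},
    {"buyer_id": "c3", "is_new": False},
]

def books_per_buyer(rows):
    """Ratio of e-books sold to distinct registered buyers."""
    buyers = {r["buyer_id"] for r in rows}
    return len(rows) / len(buyers) if buyers else 0.0

current = [r for r in sales if not r["is_new"]]
new = [r for r in sales if r["is_new"]]

print(books_per_buyer(current))  # KPI 1: per current registered buyer
print(books_per_buyer(new))      # KPI 2: per new registered buyer
```

Because these are ratios, they stay comparable as the registered base grows, which is exactly why raw counts mislead.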

I’d focus on increasing the number of e-books bought by new e-book users, especially those who got an e-reader as a gift and thus didn’t necessarily choose to become e-book readers themselves.  (If you’re the analytics-driven Amazon, you know this number because you’ve asked that question in the buying process.)  E-readers seemed to be a popular Christmas gift in 2010; B&N sold more than 1 million e-books on Christmas day alone, according to MainStreet.com.

Without knowing more than this one measly number, I’m not convinced e-books are booming.   Think about it.  You get a Nook for Christmas, you try it out and buy a book while the generous gift giver is right there, smiling at you and saying “Isn’t it great?”  Then you go on to the next present or Christmas dinner or talking to your cousin or whatever.

But how many new e-reader owners continued to buy e-books past the first one? (Bounce rate – one of the greatest metrics of all time.) If they only bought one, was it because the experience was confusing?  Did they just not like the e-book experience?  Did they not find the content they wanted?  Were they dismayed to find their favorite magazine or newspaper only puts part of its content in an e-reader format?  (WHY do publishers do this?  Oh yeah, cannibalization. Yeah, I’m going to go out and buy that print thing right now.)

If there’s a significant drop in e-book sales from new users then you can dig deep into data that will indicate the specific actions you should take, like improving the buying process, offering incentives to one-e-book buyers in exchange for info on why they don’t buy more, and adding the content people are willing to pay for.
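A minimal sketch of that “one-and-done” analysis, with invented purchase data: the share of new e-reader owners who bought exactly one e-book is the bounce rate in question.

```python
from collections import Counter

# Invented data: each entry is one e-book purchase by a new e-reader owner.
purchases = ["gift1", "gift2", "gift2", "gift3", "gift4", "gift4", "gift4"]

books_per_owner = Counter(purchases)
one_and_done = sum(1 for n in books_per_owner.values() if n == 1)
bounce_rate = one_and_done / len(books_per_owner)

print(f"{bounce_rate:.0%} of new owners stopped after one e-book")
# prints: 50% of new owners stopped after one e-book
```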

Because e-books are sold through registered accounts, the purchase process collects a treasure trove of demographic and behavioral audience data, data that gives all kinds of actionable insights about what kind of content is worth offering in e-reader form: number of e-books sold by type (book, magazine, newspaper, etc.), category/topic, fiction/nonfiction, author, and new/old.

One million e-books sold in a day?  Tantalizing.  Let’s see more data.

The content’s there but the data often isn’t

Neil Heyside’s Oct. 18 story on PBS MediaShift about how newspapers should analyze their content by source type – staff-produced, wire/syndicated or free from citizen journalists – got me thinking about other ways content should be analyzed to craft audience-driven hyper-local and paid-content strategies.


Most news sites have navigation that mimics traditional media products – News, Sports, Business, Opinion, and Entertainment.  However, those broad titles don’t work well with digital content, because people consume news and information in bits and pieces rather than in nicely packaged 56-page print products and 30-minute TV programs.


Each chunk, each piece of content – story, photo, video, audio, whatever – should be tagged or classified with a geographic area and a niche topic so a news org can determine how much content it has for each of its highest priority audience segments – and how much traffic each type of content is getting.

By geographic area I mean hyper-local.   East Cyberton, not Cyberton.  Maybe even more hyper – East Cyberton north of Main Street, for example, or wherever there’s a distinct audience segment that has different characteristics and thus different news needs and news consumption patterns.


Similarly, news orgs need hyper-topic codes, especially for hyper-local topics.  The Cyberton community orchestra – not Classical Music, Music, or Entertainment.  If a news org is looking at web traffic data for “Music” it should know whether that traffic is for rock music or classical, and whether the content was about a local, regional, national or international group.


Oh, and there’s one more aspect to this hyper-coding.  Content should be coded across the site.  Ruthlessly.  For example, to really understand whether it needs to add or cut coverage in East Cyberton, a news org needs to add up those East Cyberton stories in Local News, plus those East Cyberton Main St. retail development stories in Business, and those editorials and op-eds in Opinion about how ineffective the East Cyberton neighborhood council is, and….
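That cross-section roll-up can be sketched as below, assuming each piece of content carries a hyper-local geo tag. The section names follow the East Cyberton example, and the pageview numbers are invented.

```python
from collections import defaultdict

# Invented inventory: every item tagged with its section AND a hyper-local geo code.
content = [
    {"section": "Local News", "geo": "East Cyberton", "pageviews": 1200},
    {"section": "Business",   "geo": "East Cyberton", "pageviews": 340},
    {"section": "Opinion",    "geo": "East Cyberton", "pageviews": 95},
    {"section": "Local News", "geo": "West Cyberton", "pageviews": 800},
]

totals = defaultdict(lambda: {"items": 0, "pageviews": 0})
for item in content:
    totals[item["geo"]]["items"] += 1
    totals[item["geo"]]["pageviews"] += item["pageviews"]

# East Cyberton's totals, regardless of which section ran each piece
print(totals["East Cyberton"])
```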


Sometimes these hyper-codes are in content management systems but not in web analytics systems like Omniture or Google Analytics.  Knowing what you’ve got is great – but knowing how much traffic each hyper-coded chunk of content is getting is equally important, if not more so.


Having the hyper-codes, and thus the data, only makes a difference if a news org is willing to take a hard, nontraditional look at itself.  The data may suggest it needs to radically change what it covers and the way it allocates its news resources so it can produce “relevant, non-commodity local news that differentiates” it, as Neil Heyside’s PBS MediaShift story points out.


Heyside’s study of four U.K. newspaper chains has some interesting ideas about how a news org can cut costs but still maintain quality by changing the ways it uses staff-produced, wire, and free, citizen journalist content.  The news orgs in the study “had already undergone extensive staff reductions.  In the conventional sense, all the costs had been wrung out.  But newspapers have to change the way they think in order to survive.  If you’ve wrung out all the costs you can from the existing content creation model, then it’s time to change the model itself.”


If a news org doesn’t know, in incredibly painful detail, what type of content it has and how much traffic each type is getting, then it’s not arming itself with everything it can to mitigate the risks of making radical changes such as investing what it takes to succeed in hyperlocal news and in setting up pay walls.  Both are pretty scary, and it’s going to take a lot of bold experimentation – and data – to get it right.




Total unique visitors and paid content

Because it’s easy to gather and it looks like circulation and readership, the number of monthly unique visitors continues to be a key indicator of online success for news orgs.  This is really dangerous, especially if used to develop news business models.

The total number of monthly UVs just doesn’t give any information about how engaged audiences are.  Let’s say you have 100 million monthly uniques, as paidContent.org reports the new Steve Brill Journalism Online venture is aiming for.

This number doesn’t tell you whether those 100 million visitors visited once or 10 times, or whether they went to one page or to 20.
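A toy illustration of the point: two sites with the same unique-visitor count but wildly different engagement. All the numbers are made up.

```python
def engagement(visitors, visits, pageviews):
    """Two basic engagement ratios that a raw unique-visitor count hides."""
    return {
        "visits_per_visitor": visits / visitors,
        "pages_per_visit": pageviews / visits,
    }

# Same 100 uniques, very different audiences (invented numbers)
print(engagement(visitors=100, visits=110, pageviews=130))   # drive-by
print(engagement(visitors=100, visits=900, pageviews=4500))  # loyal
```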

You really need to know the level of engagement to sell online advertising.  And, you really need to know how engaged people are if your business model depends on paid subscribers or content.

According to paidContent.org, Journalism Online is counting on about 10 percent of its news affiliates’ audiences to pay for content.  Sounds like a realistic, reasonable number, right?

No, it’s faulty business logic.  Simply assuming a small percentage of any total audience will do anything is really dangerous, something savvy entrepreneurs know or learn in Marketing 101.  “There are 100 million people living in this area of the U.S.  If I build a better mousetrap that costs $1, and if only 1 percent of those 100 million buy my mousetrap, I’ll have a million dollars!”

First, not all 100 million care about trapping mice.  Others won’t pay even $1 for a trap.  Still others don’t live near a store where it would be sold, and wouldn’t order it online or any other way.
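Those objections can be sketched as a funnel. Every rate below is invented purely to show how the filters compound before any conversion rate applies:

```python
population = 100_000_000
cares_about_mice = 0.20   # hypothetical: share with a mouse problem at all
would_pay = 0.50          # hypothetical: share willing to pay even $1
can_buy = 0.60            # hypothetical: share who can reach a store or will order
conversion = 0.01         # the optimistic "only 1 percent buy" rate

buyers = population * cares_about_mice * would_pay * can_buy * conversion
revenue = buyers * 1.00   # $1 mousetrap

print(f"${revenue:,.0f}")  # roughly $60,000, not the imagined $1,000,000
```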

Estimating audiences is an art and a science.  Estimating the audiences for paid content involves more art than science, but I hope news orgs will start with understanding what online audiences want.  It doesn’t do much good to set these types of numbers based on what news orgs desperately need to meet their revenue goals.