Yesterday, Dear Author blogger Sunita raised the idea that self-published genre fiction is creating a market for lemons: an environment where readers have no easy way to distinguish good books from bad. If true, she argues, this would be a very bad thing: if readers can't tell good from bad, many will simply quit reading altogether, turning to other media instead.
The argument goes like this: on Amazon, the chief ways to judge whether a book might be good are a) price and b) reviews. Yet both are highly flawed. In other markets, a higher price is usually an indicator of better quality. But with ebooks, you'll often find a New York-published bestseller priced exactly the same as a completely unknown self-published title. Thus price tells us nothing about whether a book is likely to be any good.
Reviews are no better. As evidence, Sunita points out that bestselling genre fiction typically has higher ratings than literary classics like The Great Gatsby. Self-published bestsellers have even higher ratings than the classics within their genres. Hugh Howey's Wool, for instance, has better ratings than works like Ender's Game, Cryptonomicon, or Neuromancer. Since Sunita can't believe Wool might actually be better, Amazon's reviews clearly aren't useful for helping readers find good books, either.
It seems to me the discrepancy in ratings is evidence of a much simpler possibility: there is no problem at all. The system is working perfectly.
If the reviews are better on popular, bestselling genre fiction than on the classics, maybe what that means is that genre fans enjoy genre fiction more than the general populace enjoys the classics. Classics which, incidentally, are largely recommended through word of mouth and trusted sources like reviewers and critics, which Sunita says are the best ways to discover new writers. Yet reviews are better on self-published bestsellers, whose initial popularity is generated almost entirely through Amazon's recommendation system. Wouldn't that mean Amazon's recommendation system is better than word of mouth or "trusted sources"?
Well, no. Not for her, anyway. Because she's making two big mistakes. The first is assuming her consumer habits are commonplace. I.e., the way she uses reviews doesn't work well for her, therefore reviews must not be working for any customers. Yet the number of people participating in the review system indicates that's far from universal.
The second mistake is one she actually approaches in the article–and then immediately dismisses: “It’s entirely possible that readers of the Ward and Howey books were more satisfied with their reading experience than readers of the Tartt, Gibson, etc. … I have more trouble with the idea that the Ward and Howey books are better books.”
What is the difference between a “better” book and a book that readers are more satisfied with?
I think that, to many if not most readers, those are two ways of saying the same thing. For Sunita, however, there is clearly a distinction. That's because she only seems to recognize one kind of quality: a book's artistic or literary quality. What she's leaving out is a book's commercial or entertainment quality. These aren't mutually exclusive. I like both. My personal favorite books are the ones that combine literary flair with strong, active plots (including many of the SF titles Sunita listed).
But I think it is beyond clear that most readers care far more about being entertained than being arted at.
Since more people are reading for entertainment than literature, Amazon’s reviews reflect those interests. Since Sunita values the opposite, it’s no wonder the system doesn’t work for her.
You know who it does seem to be working for? The readers. Who choose genre fiction 70% of the time they enter the Kindle store. And who, within those genres, choose self-published fiction as much as 50% of the time. And who leave higher ratings for both genre fiction and self-published titles.
If we’re lobbing lemons into the market, they must taste pretty god damn good.
ETA: Some cool stuff in the comments, particularly from Courtney Milan, who says smart things about the Amazon review system and the way indies interact with it. (At this point, I feel like using “smart” in conjunction with Courtney is getting redundant.) Some interesting replies from Sunita, too.
I'll say this: it's weird and somewhat counterintuitive that indie books average higher ratings than trad-published titles. (The main reason for this, as Courtney mentions, is probably that we push more actively for them.) Obviously, that could have implications for reader purchasing behavior. But even so, that would only matter if the books weren't actually all that good, right? Which ought to result in more negative reviews, which would then balance things out. I'm still confused by the thrust of Sunita's post, and think its conclusions are overstated.