May 18, 2012

Everybody’s an Expert, or, 1,972 Amazon Reviewers Can’t Be Wrong


Alison Flood reports in the Guardian that a Harvard Business School study finds that “Amazon reviews are just as likely to give an accurate summary of a book’s quality as those of professional newspapers.” The study, “What Makes a Critic Tick? Connected Authors and the Determinants of Book Reviews” by Loretti I. Dobrescu, Michael Luca, and Alberto Motta, asks “what are the factors that influence reviewers?” Unsurprisingly, “when a book’s author also writes for a media outlet, that outlet is 25% more likely to review the book relative to other media outlets.” And the review is more likely to be favorable. But the study also suggests that reviewers, and the books reviewed, are determined above all by the perceived tastes of that outlet’s audience: “The New York Times has a very different audience than the Village Voice.” And all of them, it seems, have very different audiences than some of the bestselling books in the country.

“Professional critics are less favorable to first time authors and more favorable to authors who have garnered other attention in the press,” the study concludes. “This implies that one drawback of expert reviews is that they may be slower to learn about new and unknown books.” I don’t see how that follows (although it is certainly true that many of the books appearing on the bestseller lists, especially the fiction lists, are not reviewed by any of the 40 major media outlets sampled — newspapers, magazines, and three websites: Bookslut, Slate, and Salon).

“Newspapers and magazines are the primary outlet for expert book reviews,” the authors say — an obsolescent claim, with other implications for this study, which considers only the 100 highest ranked non-fiction books on Amazon between 2004 and 2007. A lot has changed in the book reviewing world in five years.

The authors acknowledge another, more significant problem at the center of their study: “the definition of expert is a fuzzy one.” But their solution — “Since we are interested in understanding the role of bias in expert reviews, we want to restrict the list of reviewers to recognized experts. To do this, we use Metacritic’s list of media outlets to restrict the sample to 40 media outlets. We rely on Metacritic because it is a well-known aggregator of expert reviews” — is dizzyingly circular. The only distinguishing criterion seems to be that “professional reviewers may have a different sense of what it means to be a ‘quality’ book.” On the other hand, “the quality of consumer reviews may be called into question because of the selection of reviews, since we do not know who is leaving the review.” Consumer reviews, we are told, represent “popular appeal,” not “professional judgment” — here the authors are quoting from an article by M.B. Holbrook in the Journal of Consumer Research, “Popular Appeal Versus Expert Judgments of Motion Pictures,” which may not be entirely analogous. Nevertheless, the study claims, “On average, consumers and experts have correlated preferences.”

Given that the study asserts that the professionals prefer books that have already won prizes; that they “reward books written by authors who have received media attention (measured by the number of mentions of the authors in the New York Times outside of the book review section)” — in other words, praise most that which has already been praised by somebody else; and that the professionals are many times more likely to review a colleague, it begins to sound like the problem is that we do “know who is leaving the review.” The most dispiriting statistics are compiled in the study’s Table 1. Of the top 100 rated non-fiction books on Amazon, the authors write,

To highlight the popularity of these books, consider the fact that more than 85,000 books are published each year in the United States alone, while each of these [40] media outlets reviews less than 0.5% of them. By contrast, the same media outlets review roughly 50% of these [top 100] books. Hence, there is at least some agreement about which books are best.

Below, I’ve made a comparison of sorts between reviews from the 10 “Mainstream US” media outlets on Metacritic’s list and Amazon reviews for Heaven is for Real, by Todd Burpo (with Sarah Palin’s Going Rogue ghostwriter Lynn Vincent), which during its 77 weeks on the New York Times paperback non-fiction list spent more than a year in the number one spot (it has dropped to number 2).

First, the media elite:

The New York Times took note of Heaven is for Real in three pieces, none of them reviews.

Washington Post: reviewed — “trashed” is not inaccurate — by Susan Jacoby in her online “Spirited Atheist” column.

Wall Street Journal: on its list of “best-selling” books; no review.

Los Angeles Times: online interview with Todd Burpo; no review.

Chicago Sun-Times: no review.

USA Today: feature; no review.

Boston Globe: no review.

San Francisco Chronicle: no review.

Houston Chronicle: feature; no review.

On Amazon, by contrast, Heaven is for Real has been reviewed 2,841 times; 1,972 of those are 5-star reviews, for an average rating of “4.3 out of 5 stars.”


Dan O'Connor is the Managing Editor of Melville House.