I took the Locus Magazine list of SF/F novels of 2011 and pulled from Goodreads the ratings and most common “shelves” for each book. Then, for each shelf (which usually represents a genre or sub-genre), I totaled up the stars given to the books on that shelf and divided by the number of books. The result was a list of ratings given to each sub-genre. I don’t think that list constitutes actual Goodreads data anymore, but rather an original synopsis, so here it is, along with the number of books contributing to each score in parentheses:
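The aggregation step above is simple to sketch. Here’s a minimal version, with a hypothetical handful of books standing in for the real Locus/Goodreads data (the ratings and shelf names below are made up for illustration):

```python
from collections import defaultdict

# Hypothetical sample: each book has an average star rating and a set of shelves.
books = [
    {"rating": 4.1, "shelves": {"fiction", "fantasy"}},
    {"rating": 3.8, "shelves": {"fiction", "science-fiction"}},
    {"rating": 4.3, "shelves": {"fantasy", "young-adult"}},
]

# Total the ratings given to the books on each shelf, and count the books.
totals = defaultdict(float)
counts = defaultdict(int)
for book in books:
    for shelf in book["shelves"]:
        totals[shelf] += book["rating"]
        counts[shelf] += 1

# Per-shelf score: total stars divided by number of books on that shelf.
shelf_scores = {shelf: totals[shelf] / counts[shelf] for shelf in totals}
```

The `counts` dictionary also supplies the “number of books contributing to that score” shown in parentheses.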
Taking the “fiction” shelf’s score as the zero point of fiction ratings, here are the offsets of each shelf’s score from it, perhaps showing biases for and against particular genres:
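Computing those offsets is a one-liner once you have the per-shelf scores (again with made-up numbers, not the real data):

```python
# Hypothetical per-shelf scores; the "fiction" shelf is the baseline.
shelf_scores = {"fiction": 3.95, "fantasy": 4.20, "science-fiction": 3.80}

# Each shelf's offset is its score minus the "fiction" baseline.
baseline = shelf_scores["fiction"]
offsets = {shelf: score - baseline for shelf, score in shelf_scores.items()}
```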
But I suspect some genres are more sequel-prone than others. Taking the series position for each title as a sort of pseudo-shelf we can rate, as above, here’s the value of having a particular series position:
And here are the offsets from ‘not-a-sequel,’ showing a clear and understandable selection bias in favor of sequels (i.e. people who don’t like a series tend to stop reading it, leaving only those who liked it to plunge on into later volumes):
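The series-position analysis is the same aggregation with position as the pseudo-shelf. A sketch, again on invented sample data, with standalones treated as ‘not-a-sequel’:

```python
from collections import defaultdict

# Hypothetical sample: each book's rating and series position (None = standalone).
books = [
    {"rating": 3.9, "position": None},
    {"rating": 3.7, "position": None},
    {"rating": 4.2, "position": 2},
    {"rating": 4.4, "position": 3},
]

# Treat each series position as a pseudo-shelf and average its ratings.
totals, counts = defaultdict(float), defaultdict(int)
for book in books:
    key = book["position"] if book["position"] is not None else "not-a-sequel"
    totals[key] += book["rating"]
    counts[key] += 1
position_scores = {k: totals[k] / counts[k] for k in totals}

# Offsets from the 'not-a-sequel' baseline expose the sequel selection bias.
baseline = position_scores["not-a-sequel"]
position_offsets = {k: s - baseline for k, s in position_scores.items()}
```

In this toy sample the later volumes come out ahead of the standalones, which is the selection-bias pattern described above.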
Naively applying the offsets to each book’s rating, in hopes of producing an unbiased score, pretty obviously failed to do so. While the results did look vastly more plausible overall from my personal point of view, they allowed numerous ‘niche-interest’ books to stand high in the rankings, even though you would only read such a book if you really liked its particularly contentious topic or form. A Bayesian recalculation would presumably help with that, but I’ll leave it for another time.
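The post doesn’t spell out exactly how the offsets were applied, but one plausible reading is to subtract the mean offset of a book’s shelves from its rating. A sketch under that assumption, with invented offsets:

```python
# Hypothetical genre offsets (relative to the "fiction" baseline) and one book.
offsets = {"fiction": 0.0, "fantasy": 0.25, "poetry": -0.40}
book = {"rating": 4.5, "shelves": ["fantasy", "poetry"]}

# Naive adjustment: subtract the average offset of the book's shelves,
# treating each shelf's bias as an additive correction.
mean_offset = sum(offsets[s] for s in book["shelves"]) / len(book["shelves"])
adjusted = book["rating"] - mean_offset
```

This is exactly the naive step critiqued above: a niche book on a harshly-rated shelf gets a large upward correction whether or not it deserves one, which is why a Bayesian treatment (shrinking small-sample shelves toward the overall mean) would be the natural next step.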