Recently, Jean-Philippe Magué told me about the newly introduced text stats on Amazon. A good summary by Gabe Habash on the news blog of Publishers Weekly describes the perspectives and the potential interest of this new feature: Book Lies: Readability is Impossible to Measure. The stats seem to have been available since last summer. I decided to contribute to the discussion on Amazon's text readability statistics: to what extent are they reliable and useful?


Gabe Habash compares several well-known books and concludes that sentence length is the determining factor in the readability measures used by Amazon. In fact, the readability formulas (Fog Index, Flesch Index and Flesch-Kincaid Index; for an explanation see Amazon's text readability help) are centered on word length and sentence length, which is convenient but by no means always appropriate.
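To see how heavily these formulas lean on sentence length and syllable counts, here is a minimal sketch of the Flesch-Kincaid grade level in Python. The formula itself is the standard published one; the syllable counter is a naive vowel-group heuristic of my own, standing in for whatever Amazon actually uses:

```python
import re

def count_syllables(word):
    """Naive heuristic: count runs of vowels, drop a silent final 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1 and not word.endswith(("le", "ee")):
        count -= 1  # 'make' -> 1 syllable, but keep 'table', 'see'
    return max(count, 1)

def flesch_kincaid_grade(text):
    """0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Nothing in the formula looks at vocabulary, syntax or meaning: two texts with the same average sentence length and syllable count get the same score, however different they read.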

There is another metric named 'word complexity', which Amazon defines as follows: 'A word is considered "complex" if it has three or more syllables' (source: complexity help). I wonder what happens in the case of proper nouns like (again…) Schwarzenegger. There are cases where syllable recognition is not that easy for an algorithm that was programmed and tested to perform well on English words. The frequency of a proper noun is also interesting per se, because one can expect well-known personalities to be identified much more quickly by the reader. In fact, named-entity recognition is a key to readability assessment. Besides, not every name is familiar to every kind of reader, which is another reason to use reader profiles.
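The stated rule is easy to reproduce, and doing so makes the proper-noun problem concrete. Below is a sketch using the same kind of naive vowel-group syllable counter as above (Amazon's actual syllabifier is not documented, so this is only an approximation of the rule):

```python
import re

def naive_syllables(word):
    """Count vowel groups as a rough, English-centric syllable proxy."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def is_complex(word):
    """Amazon's stated rule: 'complex' means three or more syllables."""
    return naive_syllables(word) >= 3

# A household name gets flagged exactly like a rare technical term:
print(is_complex("Schwarzenegger"))  # True (4 vowel groups)
print(is_complex("banana"))          # True
print(is_complex("cat"))             # False
```

The rule has no notion of familiarity: 'Schwarzenegger' counts as complex regardless of how instantly most readers recognize it, which is the point about named entities and reader profiles.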

The comparison function is interesting, but only if the categories fit your needs. The example Amazon gives is only half convincing, since the 'Children's Books > Ages 4-8' category may cover books whose content varies a lot. I found no explanation of how the categories were made and how relevant they might be, apart from the fact that the complexity of a text is an evolving rather than a fixed variable. A book may be easy at the beginning and become more and more complex.


'It means that we can get a general idea of how difficult a book is through these numbers, but not really enough to call it reliably accurate,' writes Gabe Habash.

I would add that the Amazon text stats give only a very general idea of text difficulty. The formulas are robust: they will work on all kinds of texts. They are efficient, since the indexes were developed with particular attention to teaching levels and to children. But these strengths come with two main disadvantages. On the one hand, the generality of the process is also its main weakness. On the other hand, the indexes are based on a scale that fits the US school system, which won't satisfy everyone. The profile issue is not addressed: you won't be able to tell whether a book contains sentence types or vocabulary you don't like or don't understand well.