A note on Computational Models of Psycholinguistics

I would like to offer a clear synthesis of the scientific traditions and of the ways language features have been dealt with as a whole. In a chapter entitled ‘Computational Models of Psycholinguistics’, published in the Cambridge Handbook of Psycholinguistics, Nick Chater and Morten H. Christiansen distinguish three main traditions in psycholinguistic language modeling:

  • a symbolic (Chomskyan) tradition
  • connectionist psycholinguistics
  • probabilistic models

They state that the Chomskyan approach (like nativist theories of language in general) until recently far outweighed any other, laying the ground for cognitive science:

“Chomsky’s arguments concerning the formal and computational properties of human language were one of the strongest and most influential lines of argument behind the development of the field of cognitive science, in opposition to behaviorism.” (p. 477)

The Symbolic Tradition

They describe the derivational theory of complexity (the hypothesis that the number and complexity of transformations correlate with processing time and difficulty) as proving ‘a poor computational model when compared with empirical data’ (p. 479). Later work on generative grammar considered the relationship between linguistic theory and processing to be indirect, which is how they explain that the Chomskyan tradition progressively disengaged from work on computational modeling …

more ...

Review of the readability checker DeLite

Continuing a series of reviews on readability assessment, I would like to describe a tool which is close to what I intend to do. It is named DeLite and is presented as a ‘readability checker’. It was developed at the IICS research center of the FernUniversität Hagen.

From my point of view, its main drawback is that it has not been made publicly available: it is based on software one has to buy, and I did not manage to find even a demo version, although its designers claim to have been publicly (i.e. EU-)funded. Thus, my description is based on what they report in the articles quoted below.

Fundamentals

The article by Glöckner et al. (2006) offers a description of the fundamentals of the software, as well as an interesting summary of research on readability. They depict the ‘classical’ pattern used to come to a readability formula:

  • ‘select elements in a text that are related to readability’,
  • then ‘correlate element occurrences with text readability (measured by established comprehension tests)’,
  • and finally ‘combine the variables into a regression equation’ (p. 32).
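The three steps above can be sketched in a few lines of Python. The texts, features and comprehension scores below are invented purely for illustration; a real study would use validated comprehension tests and far larger samples.

```python
# A minimal sketch of the classical three-step readability-formula procedure.
# All data here is made up for illustration only.
import numpy as np

def features(text):
    """Step 1: select elements assumed to relate to readability."""
    sentences = [s for s in text.split(".") if s.strip()]
    words = text.split()
    avg_sentence_len = len(words) / len(sentences)
    avg_word_len = sum(len(w.strip(".,")) for w in words) / len(words)
    return [avg_sentence_len, avg_word_len]

# Step 2: pair feature values with readability measured elsewhere
# (in principle, by established comprehension tests -- invented here).
texts = ["The cat sat. The dog ran. It was fun.",
         "Notwithstanding preliminary considerations, the committee deliberated extensively.",
         "She reads books. They are short. She likes them."]
scores = [90.0, 30.0, 85.0]  # hypothetical comprehension scores

# Step 3: combine the variables into a regression equation.
X = np.array([features(t) + [1.0] for t in texts])  # features plus intercept
coeffs, *_ = np.linalg.lstsq(X, np.array(scores), rcond=None)

def predicted_readability(text):
    """Apply the fitted regression equation to a new text."""
    return float(np.array(features(text) + [1.0]) @ coeffs)
```

With only as many texts as coefficients, the regression fits the toy data exactly; the point is merely to show how the three steps chain together.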

This is the approach that led to a preponderance of criteria like word and sentence length, because they …

more ...

On global vs. local visualization of readability

It is not only a matter of scale: the perspective one chooses is crucial when it comes to visualizing how difficult a text is. Two main options can be taken into consideration:

  • An overview, in the form of a summary, which makes it possible to compare a series of phenomena for the whole text.
  • A visualization which takes the course of the text into account, as well as the possible evolution of parameters.

I already dealt with the first type of visualization on this blog when I discussed Amazon’s text stats. To sum up, their simplicity is also their main problem: they are easy to read and provide users with a first glimpse of a book, but the kind of information they deliver is not always reliable.

Sooner or later, one has to deal with multidimensional representations, as the number of monitored phenomena keeps increasing. That is where real reflection on finding a visualization that is both faithful and clear becomes necessary. I would like to introduce two examples of recent research that I find relevant to this issue.

An approach inspired by computer science

The first one is taken from an article by Oelke et al. (2010 …

more ...

“Gerolinguistics” and text comprehension

The field of “gerolinguistics” is becoming more and more important. The word was first coined by G. Cohen in 1979 and it has been regularly used ever since.

How do older people read? How do they perform when trying to understand difficult sentences? That was the question I had in mind when I recently decided to read a few papers about linguistic abilities and aging. As I work on different reader profiles, I thought it would be an interesting starting point.

The fact is that I did not find what I was looking for, but I was not disappointed, since the assumptions I had made on this matter were proved wrong by recent research. Here is what I learned.

Interindividual variability increases with age

First of all, it is difficult to build a specific profile for ‘older people’, as this is a vast category which is merely a subclass of ‘readers’, and which, like it, contains a great deal of individual variation. Very old people (and not necessarily merely old people) do have more difficulty reading, but this can be caused by very different factors. Most of all, age by itself is not a useful predictor:

Many aspects of language comprehension remain …

more ...

Microsoft to analyze social networks to determine comprehension level

I recently read that Microsoft was planning to analyze several social networks in order to learn more about users, so that its search engine could deliver more appropriate results. See this article on geekwire.com: Microsoft idea: Analyze social networks posts to deduce mood, interests, education.

Among the variables under consideration, the ‘sophistication and education level’ of the posts is mentioned. This is highly interesting, because it implies a double readability assessment, on the reader’s side and on the search engine’s side. More precisely, this could be framed as a classification task.

Here is an extract of a patent describing how this is supposed to work.

[0117] In addition to skewing the search results to the user’s inferred interests, the user-following engine 112 may further tailor the search results to a user’s comprehension level. For example, an intelligent processing module 156 may be directed to discerning the sophistication and education level of the posts of a user 102. Based on that inference, the customization engine may vary the sophistication level of the customized search result 510. The user-following engine 112 is able to make determinations about comprehension level several ways, including from a user’s …

more ...

Amazon’s readability statistics by example

I already mentioned Amazon’s text stats in a post where I tried to explain why they were far from being useful in every situation: A note on Amazon’s text readability stats, published last December.

I found an example which shows particularly well why you cannot rely on these statistics when it comes to getting a precise picture of a text’s readability. Here are the screenshots of the text statistics describing two different books (click on them to display a larger view):

Comparison of two books on Amazon

The two books look quite similar, except for the length of the second one, which seems to contain significantly more words and sentences.

The first book (on the left) is Pippi Longstocking, by Astrid Lindgren, whereas the second is The Sound and the Fury, by William Faulkner… The writing styles could not be more different; however, the text statistics make them appear quite close to each other.
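As an illustration of the kind of length-based formula behind such statistics, here is a sketch of the Flesch-Kincaid grade level, which relies only on sentence length and syllables per word (approximated below with a crude vowel-group heuristic). Any two texts with similar word and sentence lengths receive similar grades, regardless of style.

```python
# Sketch of the Flesch-Kincaid grade level: a purely length-based measure.
# The syllable counter is a rough vowel-group heuristic, not a dictionary lookup.
import re

def count_syllables(word):
    # Count maximal runs of vowels as syllables; floor at 1.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

A terse, syntactically simple sentence and a convoluted one of the same length in words and syllables would come out with the same grade, which is exactly the blind spot the two book examples expose.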

The criteria used by Amazon are too simplistic, even if they usually perform acceptably on all kinds of texts. The readability formulas that output the first series of results only take the length of words and sentences into account, and their scale is designed for the US school system. In …

more ...

Canadian research on readability in the ‘90s

I would like to write a word about the beginnings of computer-aided readability assessment research in Canada during the ’90s, as they show interesting ways of thinking about and measuring the complexity of texts.

Sato-Calibrage

Daoust, Laroche and Ouellet (1997) start from research on readability as it prevailed in the United States: they aim at assigning a level to texts by linking them to a school grade. They assume that the discourse of school institutions is coherent and can be examined as a whole. Among other things, their criteria involve lexical, dictionary-based data and grammatical observations, such as the number of proper nouns, relative pronouns, and finite verb forms.

Several variables measure comparable aspects of text complexity, and since the authors wish to avoid redundancy, they use factor analysis and multiple regression to group the variables and try to explain why a text targets a given school grade. They managed to narrow the observations down to thirty variables whose impact on readability assessment is known. This is an interesting approach. The fact that they chose to keep about thirty variables in their study shows that readability formulas lack …

more ...

Word lists, word frequency and contextual diversity

How does one build an efficient word list? What are the limits of word frequency measures? These issues are relevant to readability.

First, a word about the context: word lists are used to identify difficulties and to try to improve teaching material, whereas word frequency is used in psycholinguistics to measure cognitive processing. Thus, this topic spans education science, psycholinguistics, and corpus linguistics.
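A toy example makes the distinction in the post title concrete: raw frequency counts every occurrence of a word, while contextual diversity counts the number of distinct documents it appears in. The mini-corpus below is invented for illustration.

```python
# Raw word frequency vs. contextual diversity on an invented mini-corpus:
# a word can be frequent overall yet confined to very few contexts.
from collections import Counter

corpus = [
    "the molecule binds the molecule to the molecule receptor",  # one domain text
    "the cat sat on the mat",
    "the dog chased the cat",
]

frequency = Counter()   # total occurrences across the corpus
diversity = Counter()   # number of documents containing the word
for doc in corpus:
    tokens = doc.split()
    frequency.update(tokens)
    diversity.update(set(tokens))  # each document counts at most once per word

# 'molecule' occurs three times but in a single document, while 'cat'
# occurs only twice yet is spread over two documents.
```

On a realistic corpus, a list built from contextual diversity can thus rank words quite differently from one built from raw frequency, which is precisely why the two measures are worth separating.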

Coxhead’s Academic Word List

The Academic Word List by Averil Coxhead is a good example of this approach. She finds that students are not generally familiar with academic vocabulary, giving the following examples: substitute, underlie, establish and inherent (p. 214). According to her, these kinds of words are “supportive” but not “central” (these adjectives could be good examples as well).

She starts from principles of corpus linguistics and states that, since “a register such as academic texts encompasses a variety of subregisters”, one has to balance the corpus.

Coxhead’s methodology is interesting. As one can see, she has probably read the works of Douglas Biber or John Sinclair, to name just a few. (AWL stands for Academic Word List.)

« To establish whether the AWL maintains high coverage over academic texts other than those in …

more ...

Interview with children’s books author Sabine Ludwig

Last week I had the chance to talk about complexity and readability with an experienced children’s book author, Sabine Ludwig (see also her page on the German Wikipedia). She has published around 30 books so far, as well as a dozen books translated from English into German. Some of them have won awards. The most successful one, Die schrecklichsten Mütter der Welt, had sold about 65,000 copies by the end of 2011 (although a few booksellers at first thought it was unsuited to children). I was able to record the interview so that I could take extensive notes afterward, which I am going to summarize.

Sabine Ludwig writes in an intuitive way, which means that she does not pay attention to the complexity of the sentences she creates. She tries to see the world through a child’s eyes, and she claims that the (inevitable) adaptation of both content and style takes place this way. She does not shorten sentences, nor does she avoid particular words. In fact, she does not want to be perfectly clear and readable for children. She does not consider that a reasonable goal, because children can progressively learn words from a …

more ...

Tendencies in research on readability

In a recent article about a readability checker prototype for Italian, Felice Dell’Orletta, Simonetta Montemagni, and Giulia Venturi provide a good overview of current research on readability. Starting from the end of the article, I must say the bibliography is quite up to date, and the authors offer an extensive review of the criteria used by other researchers.

Tendencies in research

First of all, there is a growing tendency towards statistical language models. Language models are used, for example, by Thomas François (2009), who considers them a more efficient replacement for the vocabulary lists used in readability formulas.

Secondly, readability assessment at the lexical or syntactic level has been explored, but factors at a higher level still need to be taken into account. It has been acknowledged since the 1980s that the structure of texts and the development of discourse play a major role in making a text more complex. Still, it is harder to focus on discourse features than on syntactic ones.

« Over the last ten years, work on readability deployed sophisticated NLP techniques, such as syntactic parsing and statistical language modeling, to capture more complex linguistic features and used statistical machine learning to build readability assessment tools. […] Yet …

more ...