I would like to sum up, as a clear synthesis and state of the art, the scientific traditions and their ways of dealing with language features as a whole. In a chapter entitled ‘Computational Models of Psycholinguistics’, published in The Cambridge Handbook of Computational Psychology, Nick Chater and Morten H. Christiansen distinguish three main traditions in psycholinguistic language modeling:

  • a symbolic (Chomskyan) tradition
  • connectionist psycholinguistics
  • probabilistic models

They state that the Chomskyan approach (as well as nativist theories of language in general) has until recently far outweighed every other one, laying the groundwork for cognitive science:

“Chomsky’s arguments concerning the formal and computational properties of human language were one of the strongest and most influential lines of argument behind the development of the field of cognitive science, in opposition to behaviorism.” (p. 477)

The Symbolic Tradition

They describe the derivational theory of complexity (the hypothesis that the number and complexity of transformations correlate with processing time and difficulty) as having proved ‘a poor computational model when compared with empirical data’ (p. 479). Later work in generative grammar treated the relationship between linguistic theory and processing as indirect, which is how they explain the Chomskyan tradition’s progressive disengagement from computational modeling (p. 479). Nonetheless, work on cognitive models has continued; they place the work of Matthew W. Crocker and Edward Gibson in this category.

One of the main issues is how to deal with the pervasive local ambiguity of human language: in a garden-path sentence such as ‘The horse raced past the barn fell’, the parser initially commits to ‘raced’ as the main verb and must reanalyze once ‘fell’ arrives.

Connectionist Psycholinguistics

In this (more recent) approach, neurons are viewed as entities mapping and transmitting real-valued inputs to each other. The model corresponding to this hypothesis is a connectionist net. According to the authors, “‘soft’ regularities in language [are] more naturally captured by connectionist rather than rule-based method” (p. 481).
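
To make this concrete, here is a minimal sketch (my own illustration, not taken from the chapter) of the kind of model classically used in connectionist psycholinguistics: a simple recurrent network in the style of Elman (1990), in which real-valued activations flow between units and the output is a graded prediction of the next word. The toy vocabulary, layer sizes, and untrained weights are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with one-hot word encodings (an assumption for this sketch)
vocab = ["the", "cat", "dog", "chases", "sleeps"]
V = len(vocab)
H = 8  # number of hidden units

# Randomly initialized weights: input-to-hidden, context (hidden-to-hidden),
# and hidden-to-output; a real model would learn these from exposure data
W_xh = rng.normal(scale=0.5, size=(H, V))
W_hh = rng.normal(scale=0.5, size=(H, H))
W_hy = rng.normal(scale=0.5, size=(V, H))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(word_index, h_prev):
    """One time step: combine the current word with the previous hidden
    (context) state and output a graded distribution over next words."""
    x = np.zeros(V)
    x[word_index] = 1.0
    h = sigmoid(W_xh @ x + W_hh @ h_prev)  # real-valued activations
    y = softmax(W_hy @ h)                  # next-word probabilities
    return h, y

h = np.zeros(H)
for w in ["the", "cat"]:
    h, y = step(vocab.index(w), h)

# With untrained weights the prediction is near-uniform; training would
# sharpen it toward the 'soft' regularities of the input corpus
print({w: round(float(p), 3) for w, p in zip(vocab, y)})
```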

The framework of Optimality Theory is said to take inspiration from both traditions.

Probabilistic Models

According to Chater and Christiansen, “Memory or instance-based views are currently widely used across many fields of cognitive science”. The fundamental divide in neural network architectures (i.e. between connectionist and probabilistic models) is whether the input is processed purely unidirectionally or also with top-down feedback (p. 483). As far as I know, there are indeed recent trends toward probabilistic models of reading comprehension, in which the reader’s experience is seen as exposure to statistical regularities covering phenomena at the word or multi-word level, like collocations, but also sentence-level features, like subject-object inversion.
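
As a toy illustration of this statistical view (my own sketch, not drawn from the chapter), the snippet below computes bigram surprisal, a quantity commonly used to link a reader’s statistical experience to predicted processing difficulty. The miniature corpus and the add-alpha smoothing constant are assumptions made for the example.

```python
import math
from collections import Counter

# A miniature corpus standing in for the reader's linguistic experience
corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the mouse feared the cat .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def surprisal(prev, word, alpha=1.0):
    """Surprisal -log2 P(word | prev) with add-alpha smoothing;
    higher surprisal is commonly linked to longer reading times."""
    p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * len(unigrams))
    return -math.log2(p)

# A frequent continuation is less surprising than an unseen one
print(surprisal("the", "cat"))     # low: "the cat" is common here
print(surprisal("the", "feared"))  # high: never follows "the" here
```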

Application to Sentence Processing

The authors agree that sentence processing has traditionally been thought of in symbolic terms, but the shift toward statistical approaches has also changed how sentence structure itself is conceived. Indeed, according to the authors, there have recently been attempts to capture the statistical regularities between words: “‘Lexicalized grammars’, which carry information about what material co-occurs with specific words, substantially improve computational parsing performance (Charniak, 1997; Collins, 2003)” (p. 489); a toy sketch of this idea follows the quotations below. Chater and Christiansen stand for a hybrid approach:

“There is a variety of overlapping ways in which rule-based and probabilistic factors may interact.”
“The project of building deeper models of human language processing and acquisition involves paying attention to both rules and to graded/probabilistic structure in language.” (p. 498)
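
Returning to the ‘lexicalized grammars’ mentioned above, here is an invented illustration (the probabilities are made up for the example, not treebank estimates) of the core idea: conditioning a rule’s probability on the head word, with back-off to an unlexicalized estimate when the head is unseen.

```python
# Unlexicalized: a single distribution over expansions of VP
p_vp = {"VP -> V NP": 0.6, "VP -> V NP PP": 0.4}

# Lexicalized: one distribution per head verb, capturing that "put"
# strongly prefers a PP complement while "see" rarely takes one
p_vp_head = {
    "put": {"VP -> V NP": 0.05, "VP -> V NP PP": 0.95},
    "see": {"VP -> V NP": 0.85, "VP -> V NP PP": 0.15},
}

def rule_prob(rule, head=None):
    """Use head-specific co-occurrence statistics when available,
    backing off to the unlexicalized estimate otherwise."""
    if head in p_vp_head:
        return p_vp_head[head][rule]
    return p_vp[rule]

print(rule_prob("VP -> V NP PP", head="put"))    # 0.95
print(rule_prob("VP -> V NP PP", head="write"))  # backs off to 0.4
```

This only conveys the flavor of the approach; real lexicalized parsers such as Collins’s condition on much richer head-word and dependency statistics.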

N. Chater and M. H. Christiansen, “Computational Models of Psycholinguistics”, in The Cambridge Handbook of Computational Psychology, R. Sun (ed.), Cambridge University Press, 2008.