Yesterday I attended the first day of a workshop organized by Salikoko Mufwene and held at the ENS Lyon. This “Workshop on Complexity in Language: Developmental and Evolutionary Perspectives” runs for two days: HTML version of the program.
Here is my personal report on what I heard during the first day and on what I found interesting.
Complexity and complexity science
Tom Schoenemann talked about the increasing richness, subtlety and complexity of hominin conceptual understanding, which in turn created the need for syntax and grammar. Over the course of hominin history, brain areas appear to have become less directly connected, processing information more independently. What he calls “conceptual complexity” is based on the idea of “grounded cognition” developed by Lawrence W. Barsalou.
Barbara L. Davis described complexity science as a different paradigm. Indeed, most of the debate took place on an abstract level, with many different (and not really compatible) notions of language and complexity. William Croft, for instance, said that the whole context of language needed to be taken into account, and presented a theory that addresses, among other things, communicative conventions, joint attention and the notion of common ground.
Open questions and objections
A kind of “reality problem” lurked in the background, as questions about the nature of language and the possibility of modeling or simulating it were asked at the end of the talks. The models of Lucia Loureiro-Porto, for example, were called into question when a debate arose about the reasons for speaking a given language.
Salikoko Mufwene intervened several times: to ask whether it was so obvious that language always evolved from simple to complex, to denounce an algorithmic conception of language that ignores the materiality of physical expression, and to point out that it is difficult to find a “natural laboratory” or significant parameters for linguistic modeling. In my opinion, Albert Bastardas-Boada’s talk was the closest to Mufwene’s conception of language.
Last, Fermin Moscoso del Prado explained his method for measuring complexity. Surprisingly, this talk was not interesting in the way I had expected. I already knew his approach (which, apart from the words “measure” and “complexity”, has nothing to do with mine, since it is based not on linguistic features but on the algorithmic compressibility of regularities), but I think I can learn a lot from the objections raised by the audience.
Luc Steels simply noted that these calculations entirely discard the issue of meaning. Another participant pointed out that there can only be estimates of algorithmic or effective complexity, and that it was not clear that the charts shown really measured what they were supposed to measure. Bart de Boer questioned the validity of the corpus, since the whole American-English Corpus contains many different idiolects, which are not constant.
However, even if it was not clear what the method and the charts prove, it was the only presentation to include experiments rather than nearly untestable theories. Ironically, it was also by far the most criticized talk. Still, the objections that were raised provide good insight into the requirements one should bear in mind when talking about complexity measures.
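To make the compressibility idea more concrete: a common, crude proxy for this kind of estimate is a text's compression ratio, on the assumption that a standard compressor exploits the same regularities an algorithmic-complexity measure would. This minimal Python sketch uses zlib; it is my own illustration, not Moscoso del Prado's actual method:

```python
import zlib

def compression_complexity(text: str) -> float:
    """Crude complexity proxy: compressed size divided by original size.
    A text with many regularities compresses well (low ratio); a text
    with few exploitable regularities compresses poorly (high ratio)."""
    data = text.encode("utf-8")
    return len(zlib.compress(data, 9)) / len(data)

# A highly repetitive text should score lower than a varied one.
repetitive = "the cat sat on the mat. " * 100
varied = ("Luc Steels objected that such calculations discard meaning; "
          "Bart de Boer questioned the validity of the corpus.")
print(compression_complexity(repetitive) < compression_complexity(varied))
```

The sketch also illustrates why the objection about “estimates” bites: any such ratio depends on the compressor and the sample, not on language-internal features alone, so it can only approximate the underlying algorithmic complexity.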
(to be continued… see next post)