Blind reason, Leibniz and the age of cybernetics

I would like to introduce an article resulting from a talk I recently gave. I had the chance to speak at a conference for young researchers in philosophy held at the Université Paris-Est Créteil. The overall theme was the criticism of reason (ratio) and rationalism during the 20th century. To illustrate this criticism, I tackled the idea of blind reason in an article entitled ‘La Raison aveugle ? L’époque cybernétique et ses dispositifs’, which I have made available online (PDF file).

Brief summary

In a late interview, Martin Heidegger states that philosophy is bound to be replaced by cybernetics. Starting from this contestable point of view, I try to describe the value of the cybernetics paradigm for philosophy of technology.

I have already mentioned the work of Gilbert Hottois on this blog (see the philosophy of technology category). In this article, I shed light on the relationship between what Hottois calls ‘operative techno-logy’ (in a functional sense) and the origins of this notion, which he traces back to the calculability of signs in Leibniz, who describes this particular, combinatorial way of gaining knowledge as ‘blind’ (cognitio caeca vel symbolica).

On one hand, that which ...

more ...

Bibliography and links updates

As I am trying to put my notes in order before the end of the year, I have changed a series of references, most notably in the bibliography and links sections.

Bibliography

I just updated the bibliography, using new categories. I divided the references into two main sections:

Corpus Linguistics, Complexity and Readability Assessment

Background

Links

First of all, I updated the links section using the W3C Link Validator. It is very useful, as it points out dead links and moved pages.
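
In case you want to script such a check yourself, here is a minimal sketch of what a link checker does: it flags dead links and moved pages in a list of URLs. This is not the W3C Link Validator itself, just a rough, hypothetical equivalent written with the Python requests library.

    import requests

    # Hypothetical list of URLs to check; replace with the links from your own pages.
    urls = [
        "http://validator.w3.org/checklink",
        "http://example.com/this-page-does-not-exist",
    ]

    for url in urls:
        try:
            # Do not follow redirects, so that moved pages (3xx) are reported as such.
            r = requests.head(url, allow_redirects=False, timeout=10)
            if r.status_code >= 400:
                print(f"DEAD  {url} ({r.status_code})")
            elif 300 <= r.status_code < 400:
                print(f"MOVED {url} -> {r.headers.get('Location')}")
            else:
                print(f"OK    {url}")
        except requests.RequestException as e:
            print(f"ERROR {url} ({e})")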

Resources for German

This is a new subsection:

Other links

I added a subsection to the links about LaTeX: LaTeX for Humanities (and Linguists).

I also added new tools and new Perl links.

more ...

Philosophy of technology, how things started: a typology

In my previous post, I presented a few references. I went on reading books and articles on this topic, and I am now able to sort them into several kinds of approaches.

This is mostly thanks to these books in French on philosophy of technology:

  • G. Simondon, L’invention dans les techniques : cours et conférences, Paris: Seuil, 2005.
  • G. Hottois, Philosophies des sciences, philosophies des techniques, Paris: Odile Jacob, 2004.
  • J. Goffi, La philosophie de la technique, Paris: Presses Universitaires de France, 1988.
  • G. Hottois, Le signe et la technique : la philosophie à l’épreuve de la technique, Paris: Aubier, 1984.

In his second lesson at the Collège de France (Philosophies des sciences, philosophies des techniques, pp. 94-118), Gilbert Hottois tries to provide a state of the art of the philosophy of technology: he describes several traditions and backgrounds. Here is how things started:

  1. A German origin of the reflection on technology (Ernst Kapp, Friedrich Dessauer), where technology is mostly analyzed by engineers, who shed new light on the topic and try to think of it as a system. The VDI (Verein Deutscher Ingenieure) continues this tradition. From 1956 onwards, this association has organized a series of meetings entitled Man and Technology, which notably sees the question ...
more ...

Philosophy of technology: a few resources

As I once studied philosophy (back in the classes préparatoires), I like to keep in touch with this kind of reflection. Moreover, in this research field where everything is moving very fast, it is a way to find a few continuities and to ground the particular questions raised by the analysis of language in a more conceptual framework.

Here is a list of texts available on the Internet (some of them only partly) that seem important to me. Some are written in English, others in French or German, as I chose the original versions.

It does not claim to be complete! Other references may follow.

  • Denis Diderot wrote the article Art in the Encyclopédie. It is a state of the art introducing the word and its different meanings (which at the time included arts, techniques and technology). Diderot speaks in favor of the techniques developed by craftsmen and gives an account of the ideas of the time about liberal arts, theory and usage.
    The whole text was made available by the ARTFL Encyclopédie Project.

    Les Artisans se sont crus méprisables, parce qu’on les a méprisés; apprenons-leur à mieux penser d’eux-mêmes: c’est le ...
    (Artisans have thought themselves contemptible because they have been despised; let us teach them to think better of themselves: it is the ...)

more ...

Resource links update

I recently updated the blogroll section, and I would also like to share a few links:

As I will be teaching LaTeX soon, the LaTeX links section of the blog has expanded.

Last but not least, here is an e-book, Mining of Massive Datasets, by A. Rajaraman and J. D. Ullman. It grew out of classes taught at Stanford and is now free to use (available chapter by chapter or as a whole); it is very up to date and informative on this hot topic. It also seems to be a good introduction. That said, I cannot really review it, since I am not an expert in this research field.

Here is the reference:

A. Rajaraman and J. D. Ullman, Mining of Massive Datasets, Stanford, Palo Alto, CA: e-book, 2010.

more ...

Three series of recorded lectures

Here is my selection of introductory courses given by well-known specialists in Computer Science or Natural Language Processing and recorded so that they can be followed at home.

1. Artificial Intelligence | Natural Language Processing, Christopher D. Manning, Stanford University.
More than 20 hours, 18 lectures.
Introduction to the key topics of NLP, summary of existing models.
Lecture 12: Dan Jurafsky as a guest lecturer.
Requires the Silverlight plugin (no comment). Transcripts available.

2. Bits, Harry R. Lewis, Harvard University.
A general overview of information as quantity and quantitative methods.
A very comprehensive course (data theories, internet protocols, encryption, copyright issues, laws…), cut into small pieces so you can pick a focused topic.
Several formats available, links to blog posts.

3. Search Engines: Technology, Society, and Business, various lecturers, UC Berkeley.
Fall 2007, 13 lectures.
Overview of the topic.
Requires iTunes (no comment).

more ...

On Text Linguistics

Talking about text complexity in my last post, I did not realize how important it is to take the framework of text linguistics into account. This branch of linguistics is well known in Germany but is not really treated as a topic in its own right elsewhere. Most of the time, no distinction is made between text linguistics and discourse analysis, although the background is not necessarily the same.

Last week I saw a presentation by Jean-Michel Adam, who describes himself as the “last of the Mohicans” still using this framework in French research. He drew a comprehensive picture of its origins and developments, which I am going to try to sum up.

This field started to become popular in the ’70s with books by Eugenio Coseriu and Harald Weinrich (in Germany), František Daneš (and the Functional Sentence Perspective framework) or M. A. K. Halliday, who was much more widely read in English-speaking countries. Text linguistics is not a grammatical description of language, nor is it bound to a particular language. It is a science of texts, a theory that comes on top of several levels of analysis such as semantics or structure analysis. It makes it possible to distinguish several classes of texts at a global ...

more ...

Commented bibliography on readability assessment

I have selected a few papers on readability published in recent years, all available online (for instance via a specialized search engine, see my previous post):

  1. First of all, the paper I reviewed last week, a very up-to-date article: L. Feng, M. Jansche, M. Huenerfauth, and N. Elhadad, “A Comparison of Features for Automatic Readability Assessment”, in Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), Poster Volume, 2010, pp. 276-284.
  2. The seminal paper to which Feng et al. often refer, as they too combine several approaches, especially statistical language models, support vector machines and more traditional criteria. It has a comprehensive bibliography. S. E. Schwarm and M. Ostendorf, “Reading level assessment using support vector machines and statistical language models”, in Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, 2005, pp. 523-530.
  3. A complementary approach, also a combination of features, this time mainly lexical and grammatical ones, with a focus on the latter, as the authors use parse trees and subtrees (i.e. “relative frequencies of partial syntactic derivations”) at three different levels. I found this convincing. It compares three statistical models: Linear Regression, Proportional Odds Model and Multi-class Logistic Regression. M. Heilman, K. Collins-Thompson, and M. Eskenazi, “An analysis of statistical models and features for reading difficulty ...
more ...

Comparison of Features for Automatic Readability Assessment: review

I read an interesting article, “featuring” an up-to-date comparison of what is being done in the field of readability assessment:

“A Comparison of Features for Automatic Readability Assessment”, Lijun Feng, Martin Jansche, Matt Huenerfauth, Noémie Elhadad, 23rd International Conference on Computational Linguistics (COLING 2010), Poster Volume, pp. 276-284.

I am interested in the features they use. Let’s summarize; here is a quick review:

Corpus and tools

  • Corpus: a sample from the Weekly Reader
  • OpenNLP to extract named entities and resolve co-references
  • the Weka learning toolkit for machine learning

Features

  • Four subsets of discourse features:
    1. entity-density features
    2. lexical-chain features (chains rely on semantic relations, as they are automatically detected)
    3. co-reference inference features (a research novelty)
    4. entity grid features (transition patterns according to the grammatical roles of the words)
  • Language Modeling Features, i.e. train language models
  • Parsed Syntactic Features, such as parse tree height
  • POS-based Features
  • Shallow Features, i.e. traditional readability metrics (a small sketch follows this list)
  • Other features, mainly “perplexity features” according to Schwarm and Ostendorf (2005), see below
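
To give a concrete idea of the shallow features, here is a minimal sketch (my own illustration, not the authors’ code) computing a few traditional readability metrics: average sentence length, average syllables per word, and the Flesch-Kincaid grade level. The sentence splitter and the syllable counter are deliberately naive heuristics.

    import re

    def sentences(text):
        """Very naive sentence splitter (periods, question and exclamation marks)."""
        return [s for s in re.split(r'[.!?]+', text) if s.strip()]

    def words(text):
        """Lowercased alphabetic tokens."""
        return re.findall(r"[a-z']+", text.lower())

    def count_syllables(word):
        """Rough heuristic: count vowel groups, with a minimum of one."""
        return max(1, len(re.findall(r'[aeiouy]+', word)))

    def shallow_features(text):
        """Return a few traditional readability metrics for a text."""
        sents = sentences(text)
        toks = words(text)
        n_syll = sum(count_syllables(w) for w in toks)
        avg_sent_len = len(toks) / len(sents)
        avg_syll = n_syll / len(toks)
        # Flesch-Kincaid grade level (standard formula)
        fk_grade = 0.39 * avg_sent_len + 11.8 * avg_syll - 15.59
        return {
            'avg_sentence_length': avg_sent_len,
            'avg_syllables_per_word': avg_syll,
            'flesch_kincaid_grade': fk_grade,
        }

    sample = ("The cat sat on the mat. It was a sunny day. "
              "Nevertheless, the meteorological circumstances deteriorated considerably.")
    print(shallow_features(sample))

Such metrics are exactly the kind of baseline against which the richer discourse and syntactic features of the paper are compared.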

Results

  • Combining discourse features does not significantly improve accuracy; discourse features do not seem to be very useful.
  • Language models trained with information gain outperform those trained ...
more ...

A short bibliography on Latent Semantic Analysis and Indexing

To go a bit further than my previous post, here are a few references that I recently found to be interesting.

For a definition and/or other short bibliographies, see Wikipedia or, for a change, Scholarpedia, with an article “curated” by T. K. Landauer and S. T. Dumais.
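
Since the basic technique is easy to demonstrate, here is a minimal sketch of LSA on a toy corpus (my own illustration, not code taken from any of the references below): build a term-document count matrix, truncate its SVD, and compare documents in the reduced space.

    import numpy as np

    # Toy corpus; a real application would use a much larger, weighted (e.g. tf-idf) matrix.
    docs = [
        "human computer interaction",
        "user interface for computer systems",
        "graph theory and trees",
        "trees and graph minors",
    ]

    # Term-document count matrix (terms as rows, documents as columns).
    vocab = sorted({w for d in docs for w in d.split()})
    A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

    # Latent Semantic Analysis: truncated SVD of the term-document matrix.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2
    doc_vectors = (np.diag(s[:k]) @ Vt[:k, :]).T  # documents in the k-dimensional latent space

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Similar documents should end up closer to each other in the latent space.
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            print(f"sim(doc{i}, doc{j}) = {cosine(doc_vectors[i], doc_vectors[j]):.2f}")

In practice one would rather use a library implementation (for instance scikit-learn’s TruncatedSVD or gensim) than raw SVD, but the principle stays the same.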

U. Mortensen, Einführung in die Korrespondenzanalyse, Universität Münster, 2009.

G. Gorrell and B. Webb, “Generalized Hebbian Algorithm for Incremental Latent Semantic Analysis,” in Ninth European Conference on Speech Communication and Technology, 2005.

P. Cibois, Les méthodes d’analyse d’enquêtes, Que sais-je ?, 2004.

B. Pincombe, Comparison of Human and Latent Semantic Analysis (LSA) Judgements of Pairwise Document Similarities for a News Corpus, Australian Department of Defence, 2004.

M. W. Berry, S. T. Dumais, and G. W. O’Brien, “Using Linear Algebra for Intelligent Information Retrieval,” SIAM Review, vol. 37, iss. 4, pp. 573-595, 1995.

S. Dumais, Enhancing performance in latent semantic indexing (LSI) retrieval, Bellcore, 1992.

S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman, “Indexing by latent semantic analysis”, Journal of the American society for information science, vol. 41, iss. 6, pp. 391-407, 1990.

G. Salton, A. Wong, and C. S. Yang, “A vector ...

more ...