Using a rule-based tokenizer for German

Tokenization is a text segmentation process that divides written text into meaningful units. This post introduces two rule-based methods to perform tokenization for German, English, and other languages.
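To give a rough idea of the principle, here is a minimal sketch of one possible rule-based approach, built on an ordered set of regular-expression rules. It is not one of the methods described in the post, and the abbreviation list is purely illustrative.

```python
import re

# A few German abbreviations that should not be split at their periods
# (purely illustrative, not an exhaustive list).
ABBREVIATIONS = ["z.B.", "d.h.", "u.a.", "bzw.", "Dr.", "Nr."]

# Rules tried in order: known abbreviations, then words (incl. umlauts and ß),
# then numbers with an optional decimal comma, then any other single character.
TOKEN_RULE = re.compile(
    "|".join(re.escape(a) for a in ABBREVIATIONS)
    + r"|[A-Za-zÄÖÜäöüß]+"
    + r"|\d+(?:,\d+)?"
    + r"|\S"
)

def tokenize(text):
    """Apply the rules left to right and return the matched tokens."""
    return TOKEN_RULE.findall(text)

print(tokenize("Das kostet z.B. 3,50 Euro, d.h. nicht viel."))
# ['Das', 'kostet', 'z.B.', '3,50', 'Euro', ',', 'd.h.', 'nicht', 'viel', '.']
```

Because the alternatives are tried in order, the abbreviation rules take precedence over the generic word and punctuation rules, which is the essence of the rule-based idea.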

more ...

A one-pass valency-oriented chunker for German

At the LTC'13 conference I recently introduced a tool I developed to support fast text analysis of web corpora: a one-pass valency-oriented chunker for German.

Motivation

“It turns out that topological fields together with chunked phrases provide a solid basis for a robust analysis of German sentence structure.” E. W. Hinrichs, “Finite-State Parsing of German”, in Inquiries into Words, Constraints and Contexts, A. Arppe et al. (eds.), Stanford: CSLI Publications, pp. 35–44, 2005.

Abstract

Non-finite-state parsers provide fine-grained information, but they are computationally demanding, so it can be interesting to see how far a …

more ...

Review of the readability checker DeLite

Continuing a series of reviews on readability assessment, I would like to describe a tool that is close to what I intend to do. It is named DeLite and is billed as a ‘readability checker’. It was developed at the IICS research center of the FernUniversität Hagen.

From my point of view, its most salient feature is that it has not been made publicly available: it is based on software one has to buy, and I did not manage to find even a demo version, although the project claims to have been publicly (i.e. EU-)funded. Thus, my description is based …

more ...

Two open-source corpus-builders for German and French

Introduction

I have already described on this blog how to build a basic specialized crawler, and I have also written about crawling a newspaper website to build a corpus. As I continued working on this topic, I decided to release a few useful scripts under an open-source license.

The crawlers are not mere link-harvesters; they are designed to be used as corpus-builders (see the sketch below). Since one cannot republish anything but quotations of the texts, the purpose is to enable others to build their own version of the corpora. As the newspapers are updated quite often, it is not feasible to create exact duplicates …
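As a rough illustration of the corpus-builder idea, here is a minimal, standard-library-only sketch. The start URL, the article-link pattern, and all names are placeholders; this is not taken from the released scripts.

```python
"""Minimal sketch of a corpus-building crawler (standard library only).

The point is that each user rebuilds the corpus locally instead of
redistributing the copyrighted texts themselves.
"""
import re
import urllib.request
from html.parser import HTMLParser

START_URL = "http://www.example.com/politik/"   # placeholder index page
ARTICLE_PATTERN = re.compile(r"/artikel/\d+")   # placeholder URL pattern

class LinkCollector(HTMLParser):
    """Collect the href values of all <a> elements on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def fetch(url):
    """Download a page and decode it as UTF-8 (undecodable bytes are replaced)."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")

def harvest_article_links(index_url):
    """Return the article URLs found on an index page."""
    collector = LinkCollector()
    collector.feed(fetch(index_url))
    return [link for link in collector.links if ARTICLE_PATTERN.search(link)]

if __name__ == "__main__":
    for link in harvest_article_links(START_URL):
        # A real corpus-builder would now download each article, strip the
        # boilerplate and store the text with its metadata (date, URL, title).
        print(link)
```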

more ...

2nd release of the German Political Speeches Corpus

Last Monday, I released an updated version of both corpus and visualization tool on the occasion of the DGfS-CL Poster-Session in Frankfurt, where I presented a poster (in German).

The first version was made available last summer and mentioned on this blog; cf. this post: Introducing the German Political Speeches Corpus and Visualization Tool.

For stability, the resource is available at this permanent redirect: http://purl.org/corpus/german-speeches

Description

In case you don’t remember it or have never heard of it, here is a brief description:

The resource presented here consists of speeches by the last German …

more ...

Bibliography and links updates

As I try to put my notes in order before the end of the year, I have updated a series of references, most notably in the bibliography and in the links sections.

Bibliography

I just updated the bibliography, using new categories. I divided the references into two main sections:

Corpus Linguistics, Complexity and Readability Assessment

Background

Links

First of all, I updated the links section …

more ...

Quick review of the Falko Project

The Falko Project is an error-annotated corpus of German as a foreign language, maintained by the Humboldt-Universität zu Berlin, which has made it publicly accessible.

A new search engine was recently made available, effectively replacing the old CQP interface. This tool, named ANNIS2, can handle complex queries on the corpus.

Corpus

There are several subcorpora, and apparently more to come. The texts were written by advanced learners of German. They most notably include summaries (with the original texts and a comparable corpus of summaries written by native speakers) and essays collected at different locations (with the same type of comparable …

more ...