Selected Publications

We present a simple neural network for word alignment that builds source and target word-window representations to compute alignment scores for sentence pairs. To enable unsupervised training, we use an aggregation operation that summarizes the alignment scores for a given target word. A soft-margin objective increases scores for true target words while decreasing scores for target words that are not present. Compared to the popular Fast Align model, our approach improves alignment accuracy by 7 AER points on English-Czech, 6 AER points on Romanian-English, and 1.7 AER points on English-French.
In WMT, 2016
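
The following is a minimal sketch of the window-scoring, aggregation, and soft-margin ideas described in the abstract above, written in PyTorch. The window size, embedding dimension, the use of max as the aggregation operation, and the unit margins are assumptions made for illustration; they are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WindowAlignmentScorer(nn.Module):
    """Scores every (source position, target position) pair from word-window
    contexts. Illustrative sketch only; sizes and non-linearities are assumptions."""

    def __init__(self, src_vocab, tgt_vocab, dim=64, window=3):
        super().__init__()
        self.window = window
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        # Each word's window of embeddings is projected into a shared space.
        self.src_proj = nn.Linear(window * dim, dim)
        self.tgt_proj = nn.Linear(window * dim, dim)

    def _windows(self, emb):
        # emb: (seq_len, dim) -> (seq_len, window * dim), zero-padded at the edges.
        pad = self.window // 2
        padded = F.pad(emb, (0, 0, pad, pad))
        return torch.cat([padded[i:i + emb.size(0)] for i in range(self.window)], dim=1)

    def forward(self, src_ids, tgt_ids):
        s = torch.tanh(self.src_proj(self._windows(self.src_emb(src_ids))))
        t = torch.tanh(self.tgt_proj(self._windows(self.tgt_emb(tgt_ids))))
        return s @ t.t()  # alignment score matrix of shape (src_len, tgt_len)

def soft_margin_loss(true_scores, negative_scores):
    """Aggregate scores over source positions (here with max), then push words
    that occur in the target sentence above sampled words that do not."""
    pos = true_scores.max(dim=0).values       # one aggregated score per true target word
    neg = negative_scores.max(dim=0).values   # aggregated scores for sampled absent words
    return F.softplus(1.0 - pos).mean() + F.softplus(1.0 + neg).mean()
```

In this sketch, negative_scores would come from scoring the same source sentence against target words sampled from the vocabulary that do not appear in the reference translation, which is what makes the training unsupervised.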

This paper introduces a greedy parser based on neural networks, which leverages a new compositional sub-tree representation. The greedy parser and the composition procedure are trained jointly and depend tightly on each other. The composition procedure outputs a vector representation that summarizes a sub-tree both syntactically (parsing tags) and semantically (words). Composition and tagging are performed over continuous (word or tag) representations using recurrent neural networks. We reach F1 performance on par with well-known existing parsers, while having the advantage of speed thanks to the greedy nature of the parser. We provide a fully functional implementation of the method described in this paper.
In ICLR, 2015
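
As a companion to the description above, here is a small illustrative sketch (in PyTorch) of greedy parsing driven by a learned sub-tree composition. The binary pairwise-merge strategy, the dimensions, the scoring head, and the fixed tag are simplifications assumed for the example; the paper's actual architecture composes over word and tag representations with recurrent networks.

```python
import torch
import torch.nn as nn

class GreedyComposer(nn.Module):
    """Greedy bottom-up parsing with learned sub-tree composition.
    Illustrative simplification; not the architecture described in the paper."""

    def __init__(self, n_tags, dim=64):
        super().__init__()
        self.tag_emb = nn.Embedding(n_tags, dim)
        # Combines two child vectors and a tag vector into one parent vector.
        self.compose = nn.Sequential(nn.Linear(3 * dim, dim), nn.Tanh())
        # Scores how plausible it is to merge two adjacent nodes.
        self.score = nn.Linear(2 * dim, 1)

    def parse(self, leaves):
        """leaves: list of (dim,) word vectors; returns a single root vector."""
        nodes = list(leaves)
        while len(nodes) > 1:
            # Score every adjacent pair and greedily merge the best one.
            pairs = torch.stack([torch.cat([nodes[i], nodes[i + 1]])
                                 for i in range(len(nodes) - 1)])
            best = int(self.score(pairs).squeeze(-1).argmax())
            # A real model would predict the tag; the sketch always uses tag 0.
            tag = self.tag_emb(torch.tensor(0))
            parent = self.compose(torch.cat([nodes[best], nodes[best + 1], tag]))
            nodes[best:best + 2] = [parent]
        return nodes[0]

# Example: compose five random 64-dimensional "word" vectors into a root vector.
root = GreedyComposer(n_tags=20, dim=64).parse(list(torch.randn(5, 64)))
```

The greedy loop is what makes the approach fast: each merge is a local decision, so parsing a sentence requires only a linear number of composition steps rather than a search over all trees.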

Teaching

I am an instructor for the following courses at TELECOM Nancy:

  • Data Mining and Knowledge Extraction, MSc degree
  • Database, Bachelor's degree

Download

A fully functional implementation of the syntactic parser described in Legrand and Collobert (2015) can be downloaded here.

If you use this parser in your work, please cite the following paper:

  • Joël Legrand and Ronan Collobert, Joint RNN-Based Greedy Parsing and Word Composition, Proceedings of the International Conference on Learning Representations (ICLR), 2015