ELSNET-list archive

Category:   E-CFP
Subject:   Cross-Framework, Cross Domain Parser Evaluation
From:  
Email:   oe_(on)_ifi.uio.no
Date received:   28 Apr 2008
Deadline:   05 May 2008
Start date:   23 Aug 2008

FINAL CALL FOR PAPERS

22nd International Conference on Computational Linguistics
Workshop on Cross-Framework and Cross-Domain Parser Evaluation
August 23, 2008 --- Manchester, UK

http://lingo.stanford.edu/events/08/pe/
http://www-tsujii.is.s.u-tokyo.ac.jp/pe08-st/

Background and Motivation
-------------------------

Broad-coverage parsing has come to a point where distinct approaches can offer (seemingly) comparable performance: statistical parsers acquired from the PTB; data-driven dependency parsers; `deep' parsers trained off enriched treebanks (in linguistic frameworks like CCG, HPSG, or LFG); and hybrid `deep' parsers employing hand-built grammars in, for example, HPSG, LFG, or LTAG.

Evaluation against trees in the WSJ section of the Penn Treebank (PTB) has helped advance parsing research over the course of the past decade. Despite some scepticism, the crisp and, over time, stable task of maximizing ParsEval metrics over PTB trees has served as a dominant benchmark. However, modern treebank parsers still restrict themselves to only a subset of PTB annotation; there is reason to worry about the idiosyncrasies of this particular corpus; it remains unknown how much the ParsEval metric (or any intrinsic evaluation) can inform NLP application developers; and PTB-style analyses leave a lot to be desired in terms of linguistic information.

This workshop aims to bring together developers of broad-coverage parsers who are interested in questions of target representations and in cross-framework and cross-domain evaluation and benchmarking. From informal discussions that the co-organizers had among themselves and with colleagues, it seems evident that there is comparatively broad awareness of current issues in parser evaluation, and a lively interest in a detailed exchange of experience (and beliefs). Specifically, the organizers hope to attract representatives from diverse parsing approaches and frameworks, ranging from `traditional' treebank parsing, through data-driven dependency parsing, to parsing in specific linguistic frameworks. Quite likely for the first time in the history of these approaches, there now exist large, broad-coverage parsing systems representing diverse traditions that can be applied to running text, often producing comparable representations. In our view, these recent developments present a new opportunity for re-energizing parser evaluation research.

Call for Papers
---------------

The workshop organizers invite papers on all aspects of parser evaluation, qualitative and quantitative, including but not limited to:

+ in-depth or contrastive evaluation of parsing systems;
+ methodology, test data, and technology for parser evaluation;
+ reflections on existing standards and evaluation metrics (see the short illustration below);
+ correlations between intrinsic and extrinsic parser evaluation;
+ proposals for new target representations or success measures.

Given the general theme of this workshop, submissions that discuss aspects of cross-framework, cross-domain, or cross-linguistic parser evaluation are especially welcome. In order to create a joint focus and in-depth discussion, there will also be a `lightweight' shared task, described in the next section.

Please see the workshop web pages for detailed submission information:

  http://lingo.stanford.edu/events/08/pe/
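As a point of reference for the bracketing-based evaluation mentioned above, the short sketch below (in Python) illustrates the core of the ParsEval measure: precision, recall, and F1 over labelled constituent spans. The toy gold and test bracketings are invented for illustration only, and the sketch is not a substitute for standard tools such as evalb.

    # Minimal sketch of ParsEval-style labelled bracketing scores.
    # Constituents are (label, start, end) spans over word positions.

    def bracket_scores(gold, test):
        matched = len(gold & test)
        precision = matched / len(test) if test else 0.0
        recall = matched / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Hypothetical spans for a three-word sentence (positions 0..3).
    gold = {("NP", 0, 2), ("VP", 2, 3), ("S", 0, 3)}
    test = {("NP", 0, 2), ("VP", 1, 3), ("S", 0, 3)}

    print("P=%.2f R=%.2f F1=%.2f" % bracket_scores(gold, test))
    # P=0.67 R=0.67 F1=0.67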
Lightweight Shared Task
-----------------------

One of the workshop goals is to establish an improved shared knowledge among participants of the strengths and weaknesses of extant annotation and evaluation schemes. In order to create a joint focus and in-depth discussion, there is a `lightweight' shared task. For a selection of 34 sentences (of which ten are considered obligatory), gold-standard annotations are provided in several formats (including PTB, GR, and PropBank) that are potentially useful for parser evaluation. We invite contributors to scrutinize the existing representations contrastively, identify perceived deficiencies, and sketch what can be done to address these. As an optional component, participants in the shared task are welcome to include `native', framework-specific output representations and actual results for a parsing system of their choice (be it their own or not) in the contrastive study. In either case, submissions to the shared task should aim to reflect on the nature of the different representations, highlight which additional distinctions are made in each scheme, and discuss why these are useful (for some task) or unmotivated (in general).

The parser evaluation shared task is not a competition or a bake-off, but a collaborative effort intended to provide a focus for discussion of current issues in parser evaluation. Authors submitting to the workshop can decide whether they want to participate in the shared task, submit a more general paper on parser evaluation, or both. Depending on the volume and distribution of accepted papers, we anticipate that the presentation and discussion of shared task results may account for about half of the available time at the workshop.

Please see the shared task web pages for submission information:

  http://www-tsujii.is.s.u-tokyo.ac.jp/pe08-st/
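To make the idea of representation-sensitive scoring a little more concrete, the sketch below scores one hypothetical parser output against hypothetical gold grammatical-relation (GR) triples, once with and once without relation labels. The sentence, relation names, and triples are invented for illustration and are not taken from the shared-task data, nor is plain triple F1 a prescribed shared-task metric.

    # How the granularity of the target representation affects scores:
    # the same parser output is scored with and without GR labels.

    def f1(gold, test):
        matched = len(gold & test)
        p = matched / len(test) if test else 0.0
        r = matched / len(gold) if gold else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0

    # "Kim gave Sandy a book" -- invented gold GRs vs. parser output.
    gold = {("subj", "gave", "Kim"), ("iobj", "gave", "Sandy"),
            ("dobj", "gave", "book"), ("det", "book", "a")}
    test = {("subj", "gave", "Kim"), ("dobj", "gave", "Sandy"),
            ("dobj", "gave", "book"), ("det", "book", "a")}

    labelled = f1(gold, test)
    unlabelled = f1({(h, d) for _, h, d in gold},
                    {(h, d) for _, h, d in test})
    print("labelled F1 = %.2f, unlabelled F1 = %.2f" % (labelled, unlabelled))
    # labelled F1 = 0.75, unlabelled F1 = 1.00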
Important Dates
---------------

+ Initial Call for Papers:       March 1
+ Shared Task Release:           March 22
+ Paper Submission Deadline:     May 5
+ Notification of Acceptance:    June 6
+ Camera-Ready Papers Deadline:  July 1
+ One-Day Workshop:              August 23

Workshop Organizers and Programme Committee
-------------------------------------------

The workshop aims to appeal to a wide range of researchers across frameworks, and hence has a relatively large and diverse group of organizers. The co-organizers will jointly make all decisions regarding the workshop form and programme, and it is expected that most of the co-organizers will participate in the actual workshop.

+ Johan Bos, University of Rome `La Sapienza' (Italy)
+ Edward Briscoe, University of Cambridge (UK)
+ Aoife Cahill, University of Stuttgart (Germany)
+ John Carroll, University of Sussex (UK)
+ Stephen Clark, Oxford University (UK)
+ Ann Copestake, University of Cambridge (UK)
+ Dan Flickinger, Stanford University (USA)
+ Josef van Genabith, Dublin City University (Ireland)
+ Julia Hockenmaier, University of Illinois at Urbana-Champaign (USA)
+ Aravind Joshi, University of Pennsylvania (USA)
+ Ronald Kaplan, Powerset, Inc. (USA)
+ Tracy Holloway King, PARC (USA)
+ Sandra Kuebler, Indiana University (USA)
+ Dekang Lin, Google Inc. (USA)
+ Jan Tore Lønning, University of Oslo (Norway)
+ Christopher Manning, Stanford University (USA)
+ Yusuke Miyao, University of Tokyo (Japan)
+ Joakim Nivre, Växjö and Uppsala Universities (Sweden)
+ Stephan Oepen, University of Oslo (Norway) and CSLI Stanford (USA)
+ Kenji Sagae, University of Tokyo (Japan)
+ Nianwen Xue, University of Colorado (USA)
+ Yi Zhang, DFKI GmbH and Saarland University (Germany)

Please see the workshop web pages for additional contact information:

  http://lingo.stanford.edu/events/08/pe/
  http://www-tsujii.is.s.u-tokyo.ac.jp/pe08-st/

_______________________________________________
Elsnet-list mailing list
Elsnet-list_(at)_elsnet.org
http://mailman.elsnet.org/mailman/listinfo/elsnet-list
 
