Category: E-CFP
Subject: Word Sense Disambiguation, Special Issue
From: Mark Stevenson
Email: M.Stevenson_(on)_dcs.shef.ac.uk
Date received: 17 Jan 2003
Deadline: 01 Oct 2003
Call for Papers:
Journal of Computer Speech and Language
Special Issue on WORD SENSE DISAMBIGUATION
Judita Preiss, Judita.Preiss_(on)_cl.cam.ac.uk
Mark Stevenson, M.Stevenson_(on)_dcs.shef.ac.uk
The process of automatically determining the meanings of words, word sense
disambiguation (WSD), is an important stage in language understanding. It
has been shown to be useful for many natural language processing
applications including machine translation, information retrieval (mono-
and cross-lingual), corpus analysis, summarization and document
classification. The usefulness of WSD has been acknowledged since the
1950's, and the field has recently enjoyed a resurgence of interest,
including the creation of SENSEVAL, an evaluation exercise allowing a
basic precision/recall comparison of participating systems, which has
been run twice to date. The current availability of large corpora and
powerful computing resources has made the exploration of machine learning
and statistical methods possible. This is in contrast to the majority of
early approaches, which relied on hand-crafted disambiguation rules.
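The basic precision/recall comparison mentioned above can be sketched as
follows. This is an illustrative example only, not part of the call: the
function name, data, and sense labels are invented, and the scoring shown
is the simple exact-match case (a system may decline to tag some
instances, which is why precision and recall can differ).

```python
def wsd_precision_recall(gold, predicted):
    """Score a WSD system against a gold standard.

    gold:      {instance_id: correct_sense}
    predicted: {instance_id: predicted_sense or None}
               (None means the system declined to answer)
    """
    # Only instances the system actually attempted count against precision.
    attempted = {i: s for i, s in predicted.items() if s is not None}
    correct = sum(1 for i, s in attempted.items() if gold.get(i) == s)
    precision = correct / len(attempted) if attempted else 0.0
    # Recall is measured over all gold-standard instances.
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

# Toy data: four occurrences of "bank", one left untagged by the system.
gold = {"d1": "bank%river", "d2": "bank%money",
        "d3": "bank%money", "d4": "bank%river"}
pred = {"d1": "bank%river", "d2": "bank%money",
        "d3": "bank%river", "d4": None}

p, r = wsd_precision_recall(gold, pred)  # p = 2/3, r = 2/4
```

Under this scheme a cautious system can trade recall for precision by
answering only when confident, which is one reason the choice of metric
matters for comparing systems.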
This special issue of Computer Speech and Language, due for
publication in 2004, is intended to describe the current state of the
art in word sense disambiguation. Papers are invited on all aspects of
WSD research, and especially on:
* Combinations of methods and knowledge sources.
Which methods or knowledge sources complement each other, and which provide
similar disambiguation information? How should they be combined? Do
disambiguation results justify the extra cost of producing systems which
combine multiple techniques or use multiple knowledge sources? Can any
method or knowledge source be determined to be better or worse than the
others?
* Evaluation of WSD systems.
Which metrics are most informative and would new ones be useful? Can WSD
be evaluated in terms of the effect it has on another language processing
task, for example parsing? Can evaluations using different data sets
(corpora and lexical resources) be compared? Can the cost of producing
evaluation data be reduced through the use of automatic methods?
* Sense distinctions and sense inventories.
How do these affect WSD? How does the granularity of the lexicon affect
the difficulty of the WSD task? Are some types of sense distinction
difficult to distinguish in text? What can be gained from combining
inventories and how can this be done?
* The effect of WSD on applications.
To what extent does WSD help applications such as machine translation or
text retrieval? What kind of disambiguation is most useful for these
applications? What is the effect when the disambiguation algorithm makes
errors?
* Minimising the need for hand-tagged data.
Hand-tagged text is expensive and difficult to obtain, while un-tagged text
is plentiful and, effectively, limitless. What techniques can be used to
make use of un-tagged text? Would weakly/semi-supervised learning
algorithms be useful? What use can be made of parallel text? Can un-tagged
text be made as useful as disambiguated text?
Initial Submission Date: 1 October 2003
All submissions will be subject to the normal peer review process for the
journal.
Submissions in electronic form (PDF) are strongly preferred and must
conform to the Computer Speech and Language specifications, which are
available from the journal.
Any initial queries should be addressed to the guest editors.