2003
Conference article (restricted access)

Automatic coding of open-ended surveys using text categorization techniques

Giorgetti D, Sebastiani F, Prodanof I

Automatic coding 

Open-ended questions do not limit respondents' answers in terms of linguistic form and semantic content, but they bring about severe problems in terms of cost and speed, since their coding requires trained professionals to manually identify and tag meaningful text segments. To overcome these problems, a few automatic approaches have been proposed in the past, some based on matching the answer with textual descriptions of the codes, others based on manually building rules that check the answer for the presence or absence of code-revealing words. While the former approach is scarcely effective, the major drawback of the latter is that the rules need to be developed manually, and before the actual observation of text data. We propose a new approach, inspired by work in information retrieval (IR), that overcomes these drawbacks. In this approach, survey coding is viewed as a task of multiclass text categorization (MTC) and is tackled through techniques originally developed in the field of supervised machine learning. In MTC, each text belonging to a given corpus has to be classified into exactly one of a set of predefined categories. In the supervised machine learning approach to MTC, a set of categorization rules is built automatically by learning the characteristics that a text should have in order to be classified under a given category. Such characteristics are automatically learnt from a set of training examples, i.e. a set of texts whose category is known. For survey coding, we equate the set of codes with the categories, and the collected answers to a given question with the texts. Giorgetti and Sebastiani have carried out automatic coding experiments with two different supervised learning techniques, one based on a naïve Bayesian method and the other based on multiclass support vector machines. Experiments have been run on a corpus of social surveys carried out by the National Opinion Research Center at the University of Chicago (NORC). These experiments show that our methods outperform, in terms of accuracy, previous automated methods tested on the same corpus.
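
The supervised approach outlined in the abstract lends itself to a brief illustration. The sketch below is not the authors' implementation (the paper's experiments used their own naïve Bayesian and multiclass SVM learners on NORC survey data): it assumes a present-day scikit-learn pipeline, and the answer texts, codes, and classifier choices are hypothetical stand-ins shown only to make the training-examples-to-codes idea concrete.

	# Illustrative sketch only: hypothetical data and a scikit-learn pipeline,
	# not the system evaluated in the paper.
	from sklearn.feature_extraction.text import TfidfVectorizer
	from sklearn.naive_bayes import MultinomialNB
	from sklearn.pipeline import make_pipeline
	from sklearn.svm import LinearSVC

	# Training examples: open-ended answers whose code (category) is already known.
	train_answers = [
	    "I could not find a job in my town",      # hypothetical answer texts
	    "I moved to be closer to my family",
	    "There was no work available locally",
	]
	train_codes = ["ECONOMIC", "FAMILY", "ECONOMIC"]  # exactly one code per answer

	# Two learners in the spirit of the paper: a naive Bayes classifier and a linear SVM.
	nb_coder = make_pipeline(TfidfVectorizer(), MultinomialNB())
	svm_coder = make_pipeline(TfidfVectorizer(), LinearSVC())

	nb_coder.fit(train_answers, train_codes)
	svm_coder.fit(train_answers, train_codes)

	# Coding a new, unseen answer: each classifier assigns exactly one code.
	new_answer = ["jobs were scarce where I lived"]
	print(nb_coder.predict(new_answer))
	print(svm_coder.predict(new_answer))

In practice, the training set would be a large batch of already-coded answers to the same survey question, and coder accuracy would be measured against held-out manually coded answers, as the abstract describes for the NORC corpus.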



BibTeX entry
@inproceedings{oai:it.cnr:prodotti:91138,
	title = {Automatic coding of open-ended surveys using text categorization techniques},
	author = {Giorgetti D and Sebastiani F and Prodanof I},
	year = {2003}
}