Debole F, Sebastiani F
Supervised term weighting; Text categorization; Text classification; Support vector machines; Supervised learning; Term weighting
The construction of a text classifier usually involves (i) a phase of \emph{term selection}, in which the most relevant terms for the classification task are identified, (ii) a phase of \emph{term weighting}, in which document weights for the selected terms are computed, and (iii) a phase of \emph{classifier learning}, in which a classifier is generated from the weighted representations of the training documents. This process involves an activity of \emph{supervised learning}, in which information on the membership of training documents in categories is used. Traditionally, supervised learning enters only phases (i) and (iii). In this paper we propose instead that learning from the training data should also affect phase (ii), i.e., that information on the membership of training documents in categories be used to determine term weights. We call this idea \emph{supervised term weighting} (STW). As an example of STW, we propose a number of ``supervised variants'' of $tfidf$ weighting, obtained by replacing the $idf$ function with the function used in phase (i) for term selection. STW thus assigns the highest weights to the terms whose distribution differs most between the positive and the negative examples of the categories of interest. We present experimental results obtained on the standard \textsf{Reuters-21578} benchmark with three classifier learning methods (Rocchio, $k$-NN, and support vector machines), three term selection functions (information gain, chi-square, and gain ratio), and both local and global term selection and weighting.
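The following is a minimal sketch of the supervised term weighting idea described in the abstract, not the authors' implementation: the $idf$ factor of $tfidf$ is replaced by a chi-square score computed from the category labels of the training documents, so that terms distributed most differently across positive and negative examples receive the highest weights. All function and variable names (chi_square, stw_weights, and the toy data) are illustrative assumptions.

\begin{verbatim}
# Sketch of supervised term weighting (STW): replace the idf factor of
# tfidf with a chi-square score computed from training-category labels.
# Names and the toy data are illustrative, not taken from the paper.
import math
from collections import Counter

def chi_square(docs, labels, term, category):
    """Chi-square association between a term and a category,
    computed from the 2x2 term/category contingency table."""
    A = B = C = D = 0
    for doc, label in zip(docs, labels):
        has_term = term in doc
        in_cat = (label == category)
        if has_term and in_cat:
            A += 1
        elif has_term:
            B += 1
        elif in_cat:
            C += 1
        else:
            D += 1
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return N * (A * D - C * B) ** 2 / denom if denom else 0.0

def stw_weights(doc, docs, labels, category):
    """tf * chi-square weights for one document, cosine-normalized."""
    tf = Counter(doc)
    w = {t: tf[t] * chi_square(docs, labels, t, category) for t in tf}
    norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
    return {t: v / norm for t, v in w.items()}

# Toy usage: documents as token lists, one label per document.
train_docs = [["cheap", "wheat", "export"], ["wheat", "price"], ["stock", "market"]]
train_labels = ["wheat", "wheat", "other"]
print(stw_weights(train_docs[0], train_docs, train_labels, "wheat"))
\end{verbatim}

Weights computed this way are local (one chi-square score per term/category pair); the abstract also mentions a global variant, in which a single score per term, aggregated over all categories, is used instead.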
@misc{oai:it.cnr:prodotti:160620,
  title  = {Supervised term weighting for automated text categorization},
  author = {Debole, Franca and Sebastiani, Fabrizio},
  year   = {2002}
}