2017
Conference article

Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and beyond

Berry D. M., Cleland-Huang J., Ferrari A., Maalej W., Mylopoulos J., Zowghi D.

Keywords: App review analysis, Ambiguity finding, Precision, False positives, Information retrieval, Requirements specification defect finding, Natural language processing, Recall, Abstraction finding, Tracing, False negatives

Context and Motivation: Natural language processing (NLP) has been used since the 1980s to construct tools for performing natural language (NL) requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms for use in implementing these NL RE tools.
Problem: Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without adapting them to the requirements of the RE context in which the NL RE tool is used.
Principal Ideas: This panel discusses the problem and considers the evaluation of tools for a number of NL RE tasks in a number of contexts.
Contribution: The discussion is aimed at helping the RE field begin to consistently evaluate each of its tools according to the requirements of the tool's task.
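
The recall-vs-precision tension the panel debates can be made concrete with the standard IR metrics. The sketch below uses hypothetical counts for two imaginary tools on the same defect-finding task: which tool "wins" depends on the evaluation context, here modeled by the beta weight in the F-measure (beta > 1 favors recall, as is often argued for RE tasks where a missed defect is costlier than a false alarm).

```python
# Standard IR metrics; tp/fp/fn counts below are hypothetical.

def precision(tp, fp):
    # Fraction of tool-reported items that are genuine.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of genuine items that the tool reports.
    return tp / (tp + fn)

def f_beta(p, r, beta=1.0):
    # Weighted harmonic mean; beta > 1 weights recall over precision.
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# Tool A: high precision, modest recall. Tool B: the reverse.
p_a, r_a = precision(tp=45, fp=5), recall(tp=45, fn=45)    # 0.9, 0.5
p_b, r_b = precision(tp=70, fp=70), recall(tp=70, fn=10)   # 0.5, 0.875

# Balanced F1 narrowly prefers Tool A...
print(f_beta(p_a, r_a, beta=1) > f_beta(p_b, r_b, beta=1))  # → True
# ...but a recall-weighted F2, as in many RE contexts, prefers Tool B.
print(f_beta(p_b, r_b, beta=2) > f_beta(p_a, r_a, beta=2))  # → True
```

The example shows why the panel argues evaluation must be context-dependent: the same raw counts yield opposite tool rankings under different, equally standard, summary metrics.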

Source: 2017 IEEE 25th International Requirements Engineering Conference (RE), pp. 570–573


BibTeX entry
@inproceedings{oai:it.cnr:prodotti:382477,
	title = {Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and beyond},
	author = {Berry D. M. and Cleland-Huang J. and Ferrari A. and Maalej W. and Mylopoulos J. and Zowghi D.},
	doi = {10.1109/re.2017.64},
	booktitle = {2017 IEEE 25th International Requirements Engineering Conference (RE)},
	pages = {570--573},
	year = {2017}
}