Found 66 result(s)

2016 Conference object Unknown

Empowering requirements elicitation interviews with vocal and biofeedback analysis
Ferrari A., Spoletini P., Brock C., Shahwar R.
Interviews with stakeholders are the most commonly used elicitation technique, as they are considered one of the most effective ways to transfer knowledge between requirements analysts and customers. During these interviews, ambiguity is a major obstacle for knowledge transfer, as it can lead to incorrectly understood needs and domain aspects and may ultimately result in poorly defined requirements. To address this issue, previous work focused on how ambiguity is perceived on the analyst side, i.e., when the analyst perceives an expression of the customer as ambiguous. However, this work did not consider how ambiguity can affect customers, i.e., when questions from the analyst are perceived as ambiguous. Since customers are not in general trained to cope with ambiguity, it is important to provide analysts with techniques that can help them to identify these situations. To support the analysts in this task, we propose to explore the relation between a perceived ambiguity on the customer side, and changes in the voice and bio parameters of that customer. To realize our idea, we plan to (1) study how changes in the voice and bio parameters can be correlated to the levels of stress, confusion, and uncertainty of an interviewee and, ultimately, to ambiguity and (2) investigate the application of modern voice analyzers and wristbands in the context of customer-analyst interviews. To show the feasibility of the idea, in this paper we present the result of our first step in this direction: an overview of different voice analyzers and wristbands that can collect bio parameters and their application in similar contexts. Moreover, we propose a plan to carry out our research.
Source: 2016 IEEE 24th International Requirements Engineering Conference, pp. 371–376, Beijing, China, 12-16 September 2016
DOI: 10.1109/RE.2016.56

See at: DOI Resolver | ieeexplore.ieee.org | CNR People


2011 Report Unknown

CBTC preliminary report
Ferrari A.
CBTC (Communications-Based Train Control) systems are modern railway signaling systems used in urban railway lines for light rail (e.g., tramways), heavy rail (e.g., metro) and APM (Automated People Mover, e.g., airport metros). Sometimes, they can also be deployed on commuter lines (lines serving suburban areas, e.g., S-Bahn). This document summarizes the main features that a generic CBTC system shall support, based on the current IEEE standards and on the currently analyzed implementations.

See at: CNR People


2018 Article Unknown

Using machine learning to predict soil bulk density on the basis of visual parameters: Tools for in-field and post-field evaluation
Bondi G., Creamer R., Ferrari A., Fenton O., Wall D.
Soil structure is a key factor that supports all soil functions. Extracting intact soil cores and horizon specific samples for determination of soil physical parameters (e.g. bulk density (Bd) or particle size distribution) is a common practice for assessing indicators of soil structure. However, these are often difficult to measure, since they require expensive and time consuming laboratory analyses. Our aim was to provide tools, through the use of machine learning techniques, to estimate the value of Bd based solely on soil visual assessment, observed by operators directly in the field. The first tool was a decision tree model, derived through a decision tree learning algorithm, which allows discrimination among three Bd ranges. The second tool was a linear equation model, derived through a linear regression algorithm, which predicts the numerical value of soil Bd. These tools were validated on a dataset of 471 soil horizons, belonging to 201 soil profile pits surveyed in Ireland. Overall, the decision tree model showed an accuracy of ~ 60%, while the linear equation model has a correlation coefficient of about 0.65 compared to the measured Bd values. For both models, the most relevant property affecting soil structural quality appears to be the humic characteristics of the soil, followed by soil porosity and pedogenic formation. The two tools are parsimonious and can be used by soil surveyors and analysts who need to have an approximate in-situ estimate of the structural quality for various soil functional applications.
Source: Geoderma (Amst.) 318 (2018): 137–147. doi:10.1016/j.geoderma.2017.11.035
DOI: 10.1016/j.geoderma.2017.11.035

See at: DOI Resolver | CNR People | www.sciencedirect.com
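
The entry above describes two tools: a decision tree that discriminates three bulk-density (Bd) ranges and a linear equation that predicts the numeric Bd value. A minimal sketch of that setup with scikit-learn, using synthetic placeholder features (humic score, porosity, depth) rather than the survey data used in the paper:

```python
# Illustrative sketch only: synthetic features and Bd values, not the Irish survey data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical visual-assessment features: humic score, porosity score, horizon depth.
X = rng.uniform(0, 1, size=(200, 3))
bd = 1.6 - 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.05, 200)  # synthetic Bd values

# Tool 1: a decision tree that discriminates three Bd ranges (low / medium / high).
bd_class = np.digitize(bd, bins=[1.2, 1.4])
tree = DecisionTreeClassifier(max_depth=3).fit(X, bd_class)

# Tool 2: a linear equation that predicts the numeric Bd value.
linear = LinearRegression().fit(X, bd)

sample = [[0.8, 0.6, 0.3]]  # one hypothetical horizon
print("predicted Bd range:", tree.predict(sample)[0])
print("predicted Bd value:", round(float(linear.predict(sample)[0]), 2))
```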


2017 Conference object Unknown

Requirements Elicitation: A Look at the Future Through the Lenses of the Past
Spoletini P., Ferrari A.
Requirements elicitation is the initial step of the requirements engineering process and aims at gathering all the relevant requirements through the direct or indirect interactions between requirements analysts and stakeholders. Even if the requirements elicitation problem is not new and has been approached many times over the years, it is still considered one of the most challenging steps of the requirements engineering process. In the proposed presentation, we aim at analyzing the journey of the research on requirements elicitation through the 25 years of the Requirements Engineering conference not only by considering the different proposed approaches and their evolution, but also by evaluating the role of requirements elicitation in the conference. Moreover, we will present the lessons learnt during this analysis and will use them as a starting point to present the current trends and outline possible future directions.
Source: Requirements Engineering Conference (RE), 2017 IEEE 25th International, pp. 476–477, 04
DOI: 10.1109/RE.2017.35

See at: doi.org | DOI Resolver | CNR People


2011 Report Unknown

Evaluation of the IBM Rhapsody tool for modeling automatic train protection (ATP) systems: the restrictive signal confirmation (RSC) button
Ferrari A., Illiashenko O., Parfenov S.
The current document reports the evaluation of the IBM Rational Rhapsody tool for the modeling of Automatic Train Protection (ATP) systems software. The focus of the activity is on the Restrictive Signal Confirmation (RSC) button, a typical control component that, though not used in every ATP system, is considered a good representative of the expected functionality of ATP software.

See at: CNR People


2017 Conference object Unknown

Panel: Context-Dependent Evaluation of Tools for NL RE Tasks: Recall vs. Precision, and beyond
Berry D. M., Cleland-Huang J., Ferrari A., Maalej W., Mylopoulos J., Zowghi D.
Context and Motivation Natural language processing has been used since the 1980s to construct tools for performing natural language (NL) requirements engineering (RE) tasks. The RE field has often adopted information retrieval (IR) algorithms for use in implementing these NL RE tools. Problem Traditionally, the methods for evaluating an NL RE tool have been inherited from the IR field without adapting them to the requirements of the RE context in which the NL RE tool is used. Principal Ideas This panel discusses the problem and considers the evaluation of tools for a number of NL RE tasks in a number of contexts. Contribution The discussion is aimed at helping the RE field begin to consistently evaluate each of its tools according to the requirements of the tool's task.
Source: Requirements Engineering Conference (RE), 2017 IEEE 25th International, pp. 570–573, 04
DOI: 10.1109/RE.2017.64

See at: DOI Resolver | CNR People | www.scopus.com
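
The panel above revolves around recall and precision as inherited from information retrieval. A small worked example of the two metrics, with an invented gold set and tool output:

```python
# Invented example: five gold-standard items and four items returned by a hypothetical NL RE tool.
gold = {"R1", "R2", "R3", "R4", "R5"}   # items a human analyst marked as relevant
retrieved = {"R1", "R2", "R6", "R7"}    # items the tool returned

true_positives = gold & retrieved
precision = len(true_positives) / len(retrieved)  # 2/4 = 0.50
recall = len(true_positives) / len(gold)          # 2/5 = 0.40

print(f"precision={precision:.2f}, recall={recall:.2f}")
```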


2015 Conference object Open Access

Automated service selection using natural language processing
Bano M., Ferrari A., Zowghi D., Gervasi V., Gnesi S.
With the huge number of services that are available online, requirements analysts face a paradox of choice (i.e., choice overload) when they have to select the most suitable service that satisfies a set of customer requirements. Both service descriptions and requirements are often expressed in natural language (NL), and natural language processing (NLP) tools that can match requirements and service descriptions, while filtering out irrelevant options, might alleviate the problem of choice overload faced by analysts. In this paper, we propose an NLP approach based on Knowledge Graphs that automates the process of service selection by ranking the service descriptions depending on their NL similarity with the requirements. To evaluate the approach, we have performed an experiment with 28 customer requirements and 91 service descriptions, previously ranked by a human assessor. We selected the top-15 services, which were ranked with the proposed approach, and found 53% similar results with respect to the top-15 services of the manual ranking. The same task, performed with the traditional cosine similarity ranking, produces only 13% similar results. The outcomes of our experiment are promising, and new insights have also emerged for further improvement of the proposed technique.
Source: APRES 2015 - Requirements Engineering in the Big Data Era. Second Asia Pacific Symposium, pp. 3–17, Wuhan, China, 18-20 October 2015
DOI: 10.1007/978-3-662-48634-4_1

See at: PUblication MAnagement Open Access | DOI Resolver | link.springer.com | CNR People
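
The entry above compares a knowledge-graph ranking against the traditional cosine similarity ranking. A minimal sketch of that cosine baseline only (the knowledge-graph approach itself is not reproduced here), with invented requirement and service texts:

```python
# Baseline illustration: rank service descriptions by TF-IDF cosine similarity with the requirements.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = "store customer invoices and send payment reminders by email"
services = [
    "cloud storage service for documents and invoices",
    "email marketing platform with scheduled campaigns",
    "image hosting with social sharing features",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([requirements] + services)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Rank services from most to least similar to the requirements.
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(rank, round(float(scores[idx]), 3), services[idx])
```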


2012 Conference object Unknown

Product line engineering applied to CBTC systems development.
Ferrari A., Spagnolo G. O., Martelli G., Menabeni S.
Communications-based Train Control (CBTC) systems are the new frontier of automated train control and operation. Currently developed CBTC platforms are actually very complex systems including several functionalities, and every installed system, developed by a different company, varies in extent, scope, number, and even names of the implemented functionalities. International standards have emerged, but they remain at a quite abstract level, mostly setting terminology. This paper reports intermediate results in an effort aimed at defining a global model of CBTC, by mixing semi-formal modelling and product line engineering. The effort has been based on an in-depth market analysis, not limited to particular aspects but considering as far as possible the whole picture. The adopted methodology is discussed and a preliminary model is presented.
Source: Leveraging Applications of Formal Methods, Verification and Validation. Applications and Case Studies. 5th International Symposium, pp. 216–229, Heraklion, Crete, 15-18 October 2012
DOI: 10.1007/978-3-642-34032-1_22

See at: DOI Resolver | link.springer.com | CNR People


2014 Conference object Unknown

Context transformations for goal models
Spoletini P., Ferrari A., Gnesi S.
This paper proposes a technique to support the requirements engineer in transforming existing models into new models to address the customer's needs. In particular, we identify a set of possible categories of context change that indicate in which direction the original model needs to evolve. Furthermore, we associate a transformation to each category, and we formalise it in terms of graph grammars. Our results are a generalisation of an experimental evaluation based on 10 models retrieved from the literature and 25 scenarios of context change. This work represents a step forward in the formalisation of requirements models since it provides the foundations of a tool to support the automatic transformation of models, and employs graph grammars to provide a formal layer to the approach.
Source: MoDRE 2014 - IEEE 4th International Model-Driven Requirements Engineering Workshop, pp. 17–26, Karlskrona, Sweden, 25 August 2014
DOI: 10.1109/MoDRE.2014.6890822
Project(s): LEARN PAD via OpenAIRE

See at: DOI Resolver | ieeexplore.ieee.org | CNR People
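
The entry above associates a transformation with each category of context change and formalises it with graph grammars. The sketch below conveys only the intuition, as a hand-written rewrite rule over a networkx graph; it is not the paper's graph-grammar formalisation, and the goal model and rule are invented:

```python
import networkx as nx

# A tiny goal model: goals refined into subgoals (edges point from goal to refinement).
model = nx.DiGraph()
model.add_edge("Deliver order", "Ship by courier")

def apply_context_change(g: nx.DiGraph, obsolete: str, replacement: str) -> nx.DiGraph:
    """Rewrite rule: when a means becomes unavailable in the new context,
    redirect every refinement edge from the obsolete node to the replacement."""
    new_g = g.copy()
    for parent in list(new_g.predecessors(obsolete)):
        new_g.remove_edge(parent, obsolete)
        new_g.add_edge(parent, replacement)
    new_g.remove_node(obsolete)
    return new_g

# Context change: courier shipping is no longer viable, switch to pickup points.
updated = apply_context_change(model, "Ship by courier", "Deliver to pickup point")
print(list(updated.edges()))
```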


2017 Conference object Unknown

Interview Review: Detecting Latent Ambiguities to Improve the Requirements Elicitation Process
Ferrari A., Spoletini P., Donati B., Zowghi D., Gnesi S.
In requirements elicitation interviews, ambiguities identified by analysts can help to disclose the tacit knowledge of customers. Indeed, ambiguities might reveal implicit or hard to express information that needs to be elicited. The perception of ambiguity might depend on the subject who is acting as analyst, and different analysts might identify different ambiguities in the same interview. Based on this intuition, we propose to investigate the difference between ambiguities explicitly revealed by an analyst during a requirements elicitation interview, and ambiguities annotated by a reviewer who listens to the interview recording, with the objective of defining a method for interview review. We performed an exploratory study in which two subjects listened to a set of customer-analyst interviews. Only in 26% of the cases did the ambiguities revealed by the analysts match the ambiguities found by the reviewers. In 46% of the cases, ambiguities were found by the reviewers but were not detected by the analysts. Based on these preliminary findings, we are currently performing a controlled experiment with students of two universities, which will be followed by a real-world case study with companies. This paper discusses the current results, together with our research plan.
Source: Requirements Engineering Conference (RE), 2017 IEEE 25th International, pp. 400–405, 04
DOI: 10.1109/RE.2017.15

See at: DOI Resolver | CNR People


2017 Conference object Unknown

Using Argumentation to Explain Ambiguity in Requirements Elicitation Interviews
Elrakaiby Y., Ferrari A., Spoletini P., Gnesi S., Nuseibeh B.
The requirements elicitation process often starts with an interview between a customer and a requirements analyst. During these interviews, ambiguities in the dialogic discourse may reveal the presence of tacit knowledge that needs to be made explicit. It is therefore important to understand the nature of ambiguities in interviews and to provide analysts with cognitive tools to identify and alleviate ambiguities. Ambiguities perceived by analysts are sometimes triggered by specific categories of terms used by the customer such as pronouns, quantifiers, and vague or under-specified terms. However, many of the ambiguities that arise in practice cannot be rooted in single terms. Rather, entire fragments of speech and their relation to the mental state of the analyst need to be considered. In this paper, we show that particular types of ambiguities can be characterised by means of argumentation theory. Argumentation is the study of how conclusions can be reached through logical reasoning. In an argumentation theory, statements are represented as arguments, and conflict relations among statements are represented as attacks. Based on a set of ambiguous fragments extracted from interviews, we define a model of the mental state of the analyst during an interview and translate it into an argumentation theory. Then, we show that many of the ambiguities can be characterized in terms of 'attacks' on arguments. The main novelty of this work is in addressing the problem of explaining fragment-level ambiguities in requirements elicitation interviews through the formal modeling of the analyst's mental model using argumentation theory. Our contribution provides a data-grounded, theoretical basis to have a more complete understanding of the ambiguity phenomenon, and lays the foundations to design intelligent computer-based agents that are able to automatically identify ambiguities.
Source: Requirements Engineering Conference (RE) 2017, pp. 51–60, 4-8 September 2017
DOI: 10.1109/RE.2017.27

See at: DOI Resolver | CNR People | www.scopus.com
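
The entry above models the analyst's mental state as an argumentation theory in which statements are arguments and conflicts are attacks. A minimal sketch of a Dung-style framework and its grounded extension (the arguments that survive all attacks); the arguments below are invented stand-ins for interview statements, not the paper's model:

```python
arguments = {"A", "B", "C"}
attacks = {("B", "A"),   # B attacks A (a customer statement contradicts an assumption)
           ("C", "B")}   # C attacks B (a later clarification defeats the contradiction)

def grounded_extension(args, atts):
    attackers = {a: {x for (x, y) in atts if y == a} for a in args}
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in args - accepted:
            # a is acceptable if every attacker of a is itself attacked by an accepted argument
            if all(any((d, b) in atts for d in accepted) for b in attackers[a]):
                accepted.add(a)
                changed = True
    return accepted

print(grounded_extension(arguments, attacks))  # {'A', 'C'}: C is unattacked, C defeats B, so A is reinstated
```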


2016 Article Restricted

Ambiguity and tacit knowledge in requirements elicitation interviews
Ferrari A., Spoletini P., Gnesi S.
Interviews are the most common and effective means to perform requirements elicitation and support knowledge transfer between a customer and a requirements analyst. Ambiguity in communication is often perceived as a major obstacle for knowledge transfer, which could lead to unclear and incomplete requirements documents. In this paper, we analyze the role of ambiguity in requirements elicitation interviews, when requirements are still tacit ideas to be surfaced. To study the phenomenon, we performed a set of 34 customer-analyst interviews. This experience was used as a baseline to define a framework to categorize ambiguity. The framework presents the notion of ambiguity as a class of four main sub-phenomena, namely unclarity, multiple understanding, incorrect disambiguation and correct disambiguation. We present examples of ambiguities from our interviews to illustrate the different categories, and we highlight the pragmatic components that determine the occurrence of ambiguity. Along the study, we discovered a peculiar relation between ambiguity and tacit knowledge in interviews. Tacit knowledge is the knowledge that a customer has but does not pass to the analyst for any reason. From our experience, we have discovered that, rather than an obstacle, the occurrence of an ambiguity is often a resource for discovering tacit knowledge. Again, examples are presented from our interviews to support this vision.
Source: Requirements engineering (Lond., Print) (2016). doi:10.1007/s00766-016-0249-3
DOI: 10.1007/s00766-016-0249-3
Project(s): LEARN PAD via OpenAIRE

See at: PUblication MAnagement Restricted | DOI Resolver | link.springer.com | CNR People


2016 Conference object Unknown

Ensuring action: identifying unclear actor specifications in textual business process descriptions
Sanne U., Witschel H. F., Ferrari A., Gnesi S.
In many organisations, business process (BP) descriptions are available in the form of written procedures, or operational manuals. These documents are expressed in informal natural language, which is inherently open to different interpretations. Hence, the content of these documents might be incorrectly interpreted by those who have to put the process into practice. It is therefore important to identify language defects in written BP descriptions, to ensure that BPs are properly carried out. Among the potential defects, one of the most relevant for BPs is the absence of clear actors in action-related sentences. Indeed, an unclear actor might lead to a missing responsibility, and, in turn, to activities that are never performed. This paper aims at identifying unclear actors in BP descriptions expressed in natural language. To this end, we define an algorithm named ABIDE, which leverages rule-based natural language processing (NLP) techniques. We evaluate the algorithm on a manually annotated data-set of 20 real-world BP descriptions (1,029 sentences). ABIDE achieves a recall of 87%, and a precision of 56%. We consider these results promising. Improvements of the algorithm are also discussed in the paper.
Source: International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, pp. 140–147, Porto, Portugal, 9-11 November 2016
DOI: 10.5220/0006040301400147

See at: DOI Resolver | CNR People | www.scitepress.org
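
The entry above describes a rule-based NLP algorithm (ABIDE) that flags action sentences without a clear actor. The sketch below shows one plausible rule of this kind, not the actual ABIDE rules; it assumes spaCy with the en_core_web_sm model installed:

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def unclear_actor(sentence: str) -> bool:
    """Flag a sentence whose main verb has no active subject and no passive agent."""
    doc = nlp(sentence)
    for token in doc:
        if token.dep_ == "ROOT" and token.pos_ == "VERB":
            deps = {child.dep_ for child in token.children}
            has_active_subject = "nsubj" in deps
            has_passive_agent = "nsubjpass" in deps and "agent" in deps
            return not (has_active_subject or has_passive_agent)
    return False

for s in ["The clerk archives the invoice.",        # clear actor
          "The invoice is archived every month."]:  # passive without an agent
    print(s, "->", "unclear actor" if unclear_actor(s) else "ok")
```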


2017 Conference object Unknown

Detecting domain-specific ambiguities: An NLP approach based on Wikipedia crawling and word embeddings
Ferrari A., Donati B., Gnesi S.
In the software process, unresolved natural language (NL) ambiguities in the early requirements phases may cause problems in later stages of development. Although methods exist to detect domain-independent ambiguities, ambiguities are also influenced by the domain-specific background of the stakeholders involved in the requirements process. In this paper, we aim to estimate the degree of ambiguity of typical computer science words (e.g., system, database, interface) when used in different application domains. To this end, we apply a natural language processing (NLP) approach based on Wikipedia crawling and word embeddings, a novel technique to represent the meaning of words through compact numerical vectors. Our preliminary experiments, performed on five different domains, show promising results. The approach allows an estimate of the variation of meaning of the computer science words when used in different domains. Further validation of the method will indicate the words that need to be carefully defined in advance by the requirements analyst to avoid misunderstandings when editing documents and dealing with experts in the considered domains.
Source: Requirements Engineering Conference Workshops (REW), 2017 IEEE 25th International, pp. 393–399, 04
DOI: 10.1109/REW.2017.20

See at: DOI Resolver | CNR People | www.scopus.com
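
The entry above estimates how much the meaning of a computer-science word shifts across domains using word embeddings trained on domain corpora. A minimal sketch of that idea, comparing the nearest neighbours of a word in two per-domain models; the two "corpora" are tiny invented examples rather than Wikipedia crawls, and gensim 4 is assumed:

```python
from gensim.models import Word2Vec

medical_corpus = [
    "the patient interface of the monitoring device shows vital signs".split(),
    "nurses use the interface to record patient data in the ward system".split(),
]
cs_corpus = [
    "the user interface communicates with the database through a rest api".split(),
    "the interface layer validates requests before the system stores data".split(),
]

def neighbours(corpus, word, topn=3):
    model = Word2Vec(corpus, vector_size=50, min_count=1, epochs=200, seed=0)
    return {w for w, _ in model.wv.most_similar(word, topn=topn)}

med = neighbours(medical_corpus, "interface")
cs = neighbours(cs_corpus, "interface")
# Low overlap suggests the word shifts meaning across domains and deserves an explicit definition.
print("overlap:", med & cs)
```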


2012 Conference object Unknown

Lessons learnt from the adoption of formal model-based development.
Ferrari A., Fantechi A., Gnesi S.
This paper reviews the experience of introducing formal model-based design and code generation by means of the Simulink/Stateflow platform in the development process of a railway signalling manufacturer. This company operates in a standard-regulated framework, for which the adoption of commercial, non-qualified tools as part of the development activities poses hurdles from the verification and certification point of view. In this regard, three incremental intermediate goals have been defined, namely (1) identification of a safe-subset of the modelling language, (2) evidence of the behavioural conformance between the generated code and the modelled specification, and (3) integration of the modelling and code generation technologies within the process that is recommended by the regulations. These three issues have been addressed by progressively tuning the usage of the technologies across different projects. This paper summarizes the lessons learnt from this experience. In particular, it shows that formal modelling and code generation are actually powerful means to enhance product safety and cost effectiveness. Nevertheless, their adoption is not a straightforward step, and incremental adjustments and refinements are required in order to establish a formal model-based process.
Source: NASA Formal Methods Symposium. 4th International Symposium, pp. 24–36, Norfolk, VA, USA, 3-5 April 2012
DOI: 10.1007/978-3-642-28891-3_5

See at: DOI Resolver | link.springer.com | CNR People


2013 Article Unknown

The Metro Rio case study
Ferrari A., Fantechi A., Magnani G., Grasso D., Tempestini M.
This paper reports on the Simulink/Stateflow based development of the on-board equipment of the Metrô Rio Automatic Train Protection system. Particular focus is given to the strategies followed to address formal weaknesses and certification issues of the adopted tool-suite. On the development side, constraints on the Simulink/Stateflow semantics have been introduced and design practices have been adopted to gradually achieve a formal model of the system. On the verification side, a two-phase approach based on model-based testing and abstract interpretation has been followed to enforce functional correctness and runtime error freedom. Formal verification has been experimented as a side activity of the project. Quantitative results are presented to assess the overall strategy: the effort required by the design activities is balanced by the effectiveness of the verification tasks enabled by model-based development and automatic code generation.
Source: Science of computer programming (Print) 78 (2013): 828. doi:10.1016/j.scico.2012.04.003
DOI: 10.1016/j.scico.2012.04.003

See at: DOI Resolver | CNR People | www.sciencedirect.com


2016 Conference object Unknown

Formal methods and safety certification: Challenges in the railways domain
Fantechi A., Ferrari A., Gnesi S.
The railway signalling sector has historically been a source of success stories about the adoption of formal methods in the certification of software safety of computer-based control equipment.
Source: Leveraging Applications of Formal Methods, Verification and Validation: Discussion, Dissemination, Applications. 7th International Symposium, pp. 261–265, Corfu, Greece, 10-14 October 2016
DOI: 10.1007/978-3-319-47169-3_18

See at: DOI Resolver | CNR People | www.scopus.com


2014 Article Unknown

From commercial documents to system requirements: an approach for the engineering of novel CBTC solutions
Ferrari A., Spagnolo G. O., Menabeni S., Martelli G.
Communications-based train control (CBTC) systems are the new frontier of automated train control and operation. Currently developed CBTC platforms are actually very complex systems including several functionalities, and every installed system, developed by a different company, varies in extent, scope, number, and even names of the implemented functionalities. International standards have emerged, but they remain at a quite abstract level, mostly setting terminology. This paper presents the results of an experience in defining a global model of CBTC, by mixing semi-formal modelling and product line engineering. The effort has been based on an in-depth market analysis, not limited to particular aspects but considering as far as possible the whole picture. The paper also describes a methodology to derive novel CBTC products from the global model, and to define system requirements for the individual CBTC components. To this end, the proposed methodology employs scenario-based requirements elicitation aided with rapid prototyping. To enhance the quality of the requirements, these are written in a constrained natural language (CNL), and evaluated with natural language processing (NLP) techniques. The final goal is to go toward a formal representation of the requirements for CBTC systems. The overall approach is discussed, and the current experience with the implementation of the method is presented. In particular, we show how the presented methodology has been used in practice to derive a novel CBTC architecture.
Source: International journal on software tools for technology transfer (Print) (2014): 1–21. doi:10.1007/s10009-013-0298-6
DOI: 10.1007/s10009-013-0298-6

See at: DOI Resolver | link.springer.com | CNR People
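
The entry above mentions writing requirements in a constrained natural language (CNL) and evaluating them with NLP techniques. The sketch below illustrates only the general idea of a CNL conformance check; the template and the requirements are invented for illustration, not the CNL or the checks actually used for the CBTC requirements:

```python
import re

# Hypothetical template: "The <component> shall <action>."
CNL_PATTERN = re.compile(r"^The [A-Za-z ]+ shall [a-z].*\.$")

requirements = [
    "The onboard unit shall compute the braking curve.",
    "Zone controller sends movement authority maybe every cycle",  # violates the template
]

for req in requirements:
    status = "ok" if CNL_PATTERN.match(req) else "violates CNL template"
    print(f"{status}: {req}")
```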


2016 Other Unknown

User interface for content analysis component
Spagnolo G. O., Ferrari A.
This is the Content Analysis component of the LearnPAd platform. It implements automated procedures to verify that the textual content describing the tasks of a Business Process (e.g., documents created in the Collaborative Workspace) is consistent with the Business Process model itself, to automatically identify ambiguous sentences and vague terms in natural language requirements, and to estimate quantitative indexes concerning the linguistic quality of the contents.
Project(s): LEARN PAD via OpenAIRE

See at: github.com | CNR People
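
The entry above mentions detecting vague terms and computing linguistic-quality indexes for task descriptions. A minimal sketch of one such check, using a small invented lexicon and an invented quality index rather than the ones in the LearnPAd component:

```python
# Hypothetical vague-term lexicon; the real component's rules are not reproduced here.
VAGUE_TERMS = {"appropriate", "adequate", "as soon as possible", "efficient", "user-friendly", "some"}

def find_vague_terms(text: str):
    lowered = text.lower()
    return sorted(term for term in VAGUE_TERMS if term in lowered)

description = "The clerk sends an adequate reply to the customer as soon as possible."
hits = find_vague_terms(description)
print("vague terms:", hits)
# Toy quality index: fraction of words not implicated in a vague expression.
print("linguistic-quality score:", round(1 - len(hits) / max(len(description.split()), 1), 2))
```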


2012 Conference object Unknown

A clustering-based approach for discovering flaws in requirements specifications
Ferrari A., Gnesi S., Tolomei G.
In this paper, we present the application of a clustering algorithm to exploit lexical and syntactic relationships occurring between natural language requirements. Our experiments conducted on a real-world data set highlight a correlation between clustering outliers, i.e., requirements that are marked as "noisy" by the clustering algorithm, and requirements presenting "flaws". Those flaws may refer to an incomplete explanation of the behavioral aspects, which the requirement is supposed to provide. Furthermore, flaws may also be caused by the usage of inconsistent terminology in the requirement specification. We evaluate the ability of our proposed algorithm to effectively discover such kind of flawed requirements. Evaluation is performed by measuring the accuracy of the algorithm in detecting a set of flaws in our testing data set, which have been previously manually identified by a human assessor.
Source: 27th Annual ACM Symposium on Applied Computing, pp. 1043–1050, Riva del Garda, Trento, Italy, 26-30 March 2012
DOI: 10.1145/2245276.2231939

See at: dl.acm.org | DOI Resolver | CNR People
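
The entry above treats clustering outliers as candidate flawed requirements. A minimal sketch of that idea with TF-IDF vectors and DBSCAN as a stand-in clustering algorithm (not necessarily the one used in the paper); the requirements are invented and the parameters would need tuning on a real specification:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import DBSCAN

requirements = [
    "The system shall display the train speed on the operator console.",
    "The system shall display the train position on the operator console.",
    "The system shall display the train identifier on the operator console.",
    "Speed info gets shown somewhere when needed.",  # inconsistent terminology
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(requirements).toarray()
labels = DBSCAN(eps=0.5, min_samples=2, metric="cosine").fit_predict(vectors)

# Points labelled -1 are unclustered ("noisy") and flagged as candidate flawed requirements.
for req, label in zip(requirements, labels):
    flag = "candidate flaw (outlier)" if label == -1 else f"cluster {label}"
    print(f"{flag}: {req}")
```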