2006
Journal article
Restricted
Joint structured/non structured parallelism exploitation through data flow
Dazzi P, Danelutto M
Structured parallel programming promises to raise the level of abstraction perceived by programmers when implementing parallel applications. At the same time, however, it restricts the freedom of programmers to implement arbitrary parallelism exploitation patterns. In this work we discuss a data flow implementation methodology for skeleton-based structured parallel programming environments that easily integrates arbitrary, user-defined parallelism exploitation patterns while preserving most of the benefits typical of structured parallel programming models. An illustrative sketch follows this entry.
See at:
CNR IRIS | CNR IRIS
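A minimal sketch of the macro data-flow idea the abstract builds on, assuming hypothetical names (MdfInstruction, MdfDemo) rather than anything from the paper's actual implementation: an instruction fires once all of its input tokens are available, so both predefined skeletons and arbitrary user-defined patterns can be encoded as graphs of such instructions.

    // Hypothetical sketch: a macro data-flow instruction fires when all its
    // input tokens have arrived; a real system would dispatch fireable
    // instructions to remote interpreters rather than run them in place.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;

    class MdfInstruction {
        final int arity;                                  // number of input tokens expected
        final Function<List<Object>, Object> body;        // the "macro" computation
        final List<Object> inputs = new ArrayList<>();

        MdfInstruction(int arity, Function<List<Object>, Object> body) {
            this.arity = arity;
            this.body = body;
        }

        boolean fireable() { return inputs.size() == arity; }
        Object fire()      { return body.apply(inputs); }
    }

    public class MdfDemo {
        public static void main(String[] args) {
            // a two-node, user-defined pattern: square, then increment
            MdfInstruction square = new MdfInstruction(1, in -> (Integer) in.get(0) * (Integer) in.get(0));
            MdfInstruction inc    = new MdfInstruction(1, in -> (Integer) in.get(0) + 1);

            square.inputs.add(5);                         // an input token arrives
            Object token = square.fireable() ? square.fire() : null;
            inc.inputs.add(token);                        // the token flows along a graph edge
            System.out.println(inc.fire());               // prints 26
        }
    }

Because classical skeletons reduce to the same kind of graph, user-defined patterns of this sort integrate with them naturally, which is the point the abstract makes.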
2007
Journal article
Open Access
Muskel: a skeleton library supporting skeleton set expandability
Dazzi P, Danelutto M, Aldinucci M
Programming models based on algorithmic skeletons promise to raise the level of abstraction perceived by programmers when implementing parallel applications, while guaranteeing good performance figures. At the same time, however, they restrict the freedom of programmers to implement arbitrary parallelism exploitation patterns. In fact, efficiency is achieved by restricting the parallelism exploitation patterns provided to the programmer to the useful ones for which efficient implementations, as well as useful and efficient compositions, are known. In this work we introduce muskel, a full Java library targeting workstation clusters, networks and grids, and providing programmers with a skeleton-based parallel programming environment. muskel is implemented exploiting (macro) data flow technology, rather than the more usual skeleton technology relying on the use of implementation templates. Using data flow, muskel easily and efficiently implements both classical, predefined skeletons and user-defined parallelism exploitation patterns. This provides a means to overcome some of the problems that Cole identified in his skeleton "manifesto" as the issues impairing skeleton success in the parallel programming arena. We discuss in detail how user-defined skeletons are supported by exploiting the data flow implementation, present experimental results, and discuss extensions supporting the further characterization of skeletons with non-functional properties, such as security, through the use of Aspect Oriented Programming and annotations. An illustrative sketch follows this entry.
Source: Scalable Computing: Practice and Experience, vol. 8, issue 4, pp. 325-341
See at:
CNR IRIS | ISTI Repository | www.scpe.org | CNR IRIS
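A minimal, hypothetical sketch of the style of skeleton composition such a library offers; the names (Skeleton, Seq, Pipeline, Farm) are our own illustrative assumptions and do not reproduce the actual muskel API, and the farm here uses a local parallel stream where muskel would dispatch macro data-flow instructions to remote interpreters.

    // Illustrative only: hypothetical skeleton classes, not the muskel API.
    import java.util.List;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    interface Skeleton<I, O> { O compute(I in); }

    class Seq<I, O> implements Skeleton<I, O> {
        private final Function<I, O> f;
        Seq(Function<I, O> f) { this.f = f; }
        public O compute(I in) { return f.apply(in); }
    }

    class Pipeline<I, M, O> implements Skeleton<I, O> {
        private final Skeleton<I, M> s1;
        private final Skeleton<M, O> s2;
        Pipeline(Skeleton<I, M> s1, Skeleton<M, O> s2) { this.s1 = s1; this.s2 = s2; }
        public O compute(I in) { return s2.compute(s1.compute(in)); }
    }

    class Farm<I, O> {
        private final Skeleton<I, O> worker;
        Farm(Skeleton<I, O> worker) { this.worker = worker; }
        // stand-in evaluation: a parallel stream instead of remote data-flow interpreters
        List<O> computeAll(List<I> tasks) {
            return tasks.parallelStream().map(worker::compute).collect(Collectors.toList());
        }
    }

    public class SkeletonDemo {
        public static void main(String[] args) {
            Skeleton<Integer, Integer> stage1 = new Seq<>(x -> x * x);
            Skeleton<Integer, Integer> stage2 = new Seq<>(x -> x + 1);
            Skeleton<Integer, Integer> pipe   = new Pipeline<>(stage1, stage2);
            Farm<Integer, Integer> farm       = new Farm<>(pipe);
            System.out.println(farm.computeAll(List.of(1, 2, 3, 4)));   // [2, 5, 10, 17]
        }
    }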
2006
Conference article
Restricted
A Java/Jini framework supporting stream parallel computations
Dazzi P, Danelutto M
JJPF (the Java/Jini Parallel Framework) is a framework that can run stream parallel applications on several parallel/distributed architectures. JJPF is, in essence, a distributed execution server. It uses Jini to recruit the computational resources needed to compute parallel applications. Parallel applications can be run on JJPF provided they exploit parallelism according to an arbitrary nesting of task farm and pipeline skeletons/patterns. JJPF achieves almost perfect, fully automatic load balancing in the execution of such applications. It also transparently handles any number of node and network faults. Scalability and efficiency results are shown on workstation networks, both with a synthetic (embarrassingly parallel) image processing application and with a real (not embarrassingly parallel) page ranking application. An illustrative sketch follows this entry.
See at:
CNR IRIS | CNR IRIS
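A minimal sketch of why a task farm over a shared task queue balances load automatically, assuming a local thread pool as a stand-in for the remote nodes JJPF recruits via Jini; discovery and fault handling are not shown.

    // Stand-in for JJPF's distributed execution server: workers pull independent
    // stream items from a shared pool, so faster nodes naturally process more.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class FarmDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService workers = Executors.newFixedThreadPool(4);   // local stand-ins for remote nodes
            List<Future<Integer>> results = new ArrayList<>();
            for (int task = 1; task <= 16; task++) {
                final int t = task;
                results.add(workers.submit(() -> t * t));                // each stream item is independent
            }
            for (Future<Integer> r : results) System.out.print(r.get() + " ");
            workers.shutdown();
        }
    }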
2007
Conference article
Restricted
New perspectives in autonomic design patterns for stream-classification-systems
Dazzi P, Nidito F, Pasquali M
Nowadays, systems are growing in size and are becoming more and more complex. Such complexity creates a need for mechanisms enabling system self-management, freeing administrators from low-level management tasks while delivering an optimized system. Autonomic systems sense their operating environment and automatically take action to change the environment or their own behavior, with minimal human effort. They exhibit the following properties: self-configuring, self-healing, self-optimizing and self-protecting. Current autonomic systems are ad hoc solutions: each system is designed and implemented from scratch, i.e. there are no standard (or well-established) methodologies that autonomic system designers and/or programmers can exploit to drive their work. In this paper, we propose a design pattern that can be easily exploited by stream-classification-system designers to achieve autonomicity with minimal effort. The pattern is described using a Java-like notation for the classes and interfaces, and a simple UML class diagram is provided. An illustrative sketch follows this entry.
See at:
CNR IRIS | CNR IRIS | portal.acm.org
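A hypothetical sketch of the kind of interfaces such a design pattern might expose; the names (Sensor, Actuator, AutonomicManager) and the one-line scaling rule are our own assumptions, not the pattern defined in the paper, and only the self-optimizing property is illustrated.

    // Hypothetical interfaces: sense the environment, act on it, and close the loop.
    interface Sensor   { double observedThroughput(); }
    interface Actuator { void scale(int deltaWorkers); }

    class AutonomicManager implements Runnable {
        private final Sensor sensor;
        private final Actuator actuator;
        private final double targetThroughput;

        AutonomicManager(Sensor s, Actuator a, double target) {
            this.sensor = s; this.actuator = a; this.targetThroughput = target;
        }

        public void run() {   // one self-optimizing control step
            actuator.scale(sensor.observedThroughput() < targetThroughput ? +1 : -1);
        }
    }

    public class AutonomicDemo {
        public static void main(String[] args) {
            AutonomicManager mgr = new AutonomicManager(
                () -> 80.0,                                              // stub sensor: 80 items/s observed
                delta -> System.out.println("scaling workers by " + delta),
                100.0);                                                  // target throughput
            mgr.run();                                                   // prints "scaling workers by 1"
        }
    }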
2007
Conference article
Restricted
Workflows on top of a macro data flow interpreter exploiting aspects
Danelutto M, Dazzi P
We describe how aspect oriented programming techniques can be exploited to support the development of workflow-based grid applications. In particular, we use aspects to adapt simple Java workflow code to be executed on top of muskel, our experimental, macro data flow based skeleton programming environment. Aspects are used to extract
See at:
CNR IRIS | CNR IRIS
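An illustrative, plain-Java stand-in for the cross-cutting interception that the paper obtains with aspect oriented programming: a dynamic proxy intercepts calls made by ordinary workflow code, which is where an aspect could, for instance, redirect the work to a macro data-flow interpreter. Names are hypothetical and this is not the paper's mechanism.

    // Illustrative only: a dynamic proxy plays the role an aspect would play.
    import java.lang.reflect.Proxy;

    interface WorkflowStage { int process(int input); }

    public class InterceptDemo {
        public static void main(String[] args) {
            WorkflowStage plain = x -> x + 1;                            // ordinary "sequential" workflow code
            WorkflowStage intercepted = (WorkflowStage) Proxy.newProxyInstance(
                WorkflowStage.class.getClassLoader(),
                new Class<?>[] { WorkflowStage.class },
                (proxy, method, callArgs) -> {
                    // an aspect could build and submit a macro data-flow instruction here
                    System.out.println("intercepted call to " + method.getName());
                    return method.invoke(plain, callArgs);
                });
            System.out.println(intercepted.process(41));                 // interception note, then 42
        }
    }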
2008
Conference article
Restricted
From ORC models to distributed Grid Java code
Dazzi P, Aldinucci M, Danelutto M, Kilpatrick P
We present O2J, a Java library that allows the implementation of Orc programs on distributed architectures including grids and clusters/networks of workstations. With minimal programming effort the grid programmer may implement Orc programs, as he/she is not required to write any low-level code relating to distributed orchestration of the computation, but only that required to implement Orc expressions. Using the prototype O2J implementation, grid application developers can reason about abstract grid orchestration code described in Orc. Once the required orchestration has been determined and its properties analysed, a grid application prototype can be simply, efficiently and quickly implemented by taking the Orc code, rewriting it into the corresponding Java/O2J syntax and finally providing the functional code implementing the sites and processes involved. The proposed modus operandi brings a Model Driven Engineering approach to grid application development. An illustrative sketch follows this entry.
See at:
CNR IRIS | CNR IRIS
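A hypothetical illustration, not the O2J API, of the orchestration flavour involved: standard CompletableFutures are used to mimic Orc's parallel and sequential composition, with site names borrowed from the classic Orc examples purely for illustration.

    // Mimicking Orc combinators with standard Java futures (illustration only).
    import java.util.concurrent.CompletableFuture;

    public class OrcFlavourDemo {
        static CompletableFuture<String> site(String name) {             // an Orc "site" call
            return CompletableFuture.supplyAsync(() -> name + " answered");
        }

        public static void main(String[] args) {
            // parallel composition: call two sites, take whichever publishes first
            CompletableFuture<Object> parallel =
                CompletableFuture.anyOf(site("CNN"), site("BBC"));
            // sequential composition: pipe the published value into another site
            CompletableFuture<String> sequential =
                parallel.thenCompose(v -> site("email(" + v + ")"));
            System.out.println(sequential.join());
        }
    }

O2J's point, per the abstract, is that the programmer writes the Orc-level expressions and the site code, while this kind of low-level plumbing is handled by the library.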
2008
Contribution to book
Restricted
From ORC models to distributed Grid Java code
Dazzi P, Aldinucci M, Danelutto M, Kilpatrick P
We present O2J, a Java library that allows the implementation of Orc programs on distributed architectures including grids and clusters/networks of workstations. With minimal programming effort the grid programmer may implement Orc programs, as he/she is not required to write any low-level code relating to distributed orchestration of the computation, but only that required to implement Orc expressions. Using the prototype O2J implementation, grid application developers can reason about abstract grid orchestration code described in Orc. Once the required orchestration has been determined and its properties analysed, a grid application prototype can be simply, efficiently and quickly implemented by taking the Orc code, rewriting it into the corresponding Java/O2J syntax and finally providing the functional code implementing the sites and processes involved. The proposed modus operandi brings a Model Driven Engineering approach to grid application development.
See at:
CNR IRIS | CNR IRIS | link.springer.com
2008
Other
Restricted
Tools and models for high level parallel and Grid programming
Dazzi P
When algorithmic skeletons were first introduced by Cole in the late 1980s (50), the idea had an almost immediate success. The skeletal approach has proved to be effective when application algorithms can be expressed in terms of skeleton compositions. However, despite both their effectiveness and the progress made in skeletal system design and implementation, algorithmic skeletons remain absent from mainstream practice. Cole and other researchers, respectively in (51) and (19), focused on this problem. They identified the issues affecting skeletal systems and stated a set of principles that have to be tackled in order to make such systems more effective and to take skeletal programming into the parallel mainstream. In this thesis we propose tools and models addressing some of the issues affecting skeletal programming environments. We describe three novel approaches aimed at enhancing skeleton-based systems from different angles. First, we present a model we conceived that allows the customization of algorithmic skeletons by exploiting the macro data-flow abstraction. Then we present two results concerning the exploitation of metaprogramming techniques for the run-time generation and optimization of macro data-flow graphs. In particular, we show how to generate and optimize macro data-flow graphs according both to programmer-provided non-functional requirements and to execution platform features. The last result we present is Behavioural Skeletons, an approach aimed at addressing the limitations of skeletal programming environments when used for the development of component-based Grid applications. We validated all the approaches by conducting several tests, performed using a set of tools we developed.
See at:
CNR IRIS | CNR IRIS
2006
Conference article
Restricted
Joint structured/unstructured parallelism exploitation in muskel
Danelutto M, Dazzi P
Structured parallel programming promises to raise the level of abstraction perceived by programmers when implementing parallel applications. At the same time, however, it restricts the freedom of programmers to implement arbitrary parallelism exploitation patterns. In this work we discuss a data flow implementation methodology for skeleton-based structured parallel programming environments that easily integrates arbitrary, user-defined parallelism exploitation patterns while preserving most of the benefits typical of structured parallel programming models. © Springer-Verlag Berlin Heidelberg 2006.
See at:
CNR IRIS | CNR IRIS | link.springer.com
2018
Conference article
Restricted
Spinstreams: A static optimization tool for data stream processing applications
Mencagli G, Dazzi P, Tonci N
The ubiquity of data streams in different fields of computing has led to the emergence of Stream Processing Systems (SPSs) used to program applications that extract insights from unbounded sequences of data items. Streaming applications demand various kinds of optimizations. Most of them are aimed at increasing throughput and reducing processing latency, and need cost models used to analyze steady-state performance by capturing complex aspects like backpressure and bottleneck detection. In those systems, the tendency is to support dynamic optimizations of running applications which, although they incur a substantial run-time overhead, are unavoidable in case of unpredictable workloads. As an orthogonal direction, this paper proposes SpinStreams, a static optimization tool able to leverage cost models that programmers can use to detect and understand the inefficiencies of an initial application design. SpinStreams suggests optimizations for restructuring applications by generating code to be run on the SPS. We present the theory behind our optimizations, which cover more general classes of application structures than the ones studied in the literature so far. Then, we assess the accuracy of our models in Akka, an actor-based streaming framework providing a Java and Scala API. An illustrative sketch follows this entry.
See at:
dl.acm.org | CNR IRIS | CNR IRIS
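A deliberately simplified sketch of the steady-state reasoning a cost model enables; the operator names and service times are invented, and the single rule shown (the operator with the highest per-item service time bounds throughput and is the bottleneck candidate) only stands in for the more general SpinStreams models.

    // Toy bottleneck detection over invented per-item service times.
    import java.util.Map;

    public class BottleneckDemo {
        public static void main(String[] args) {
            Map<String, Double> serviceTimeMs = Map.of(
                "source", 0.2, "parse", 1.5, "classify", 4.0, "sink", 0.3);
            String bottleneck = serviceTimeMs.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
            double maxThroughput = 1000.0 / serviceTimeMs.get(bottleneck);   // items per second
            System.out.println("bottleneck: " + bottleneck
                + ", max throughput ~" + maxThroughput + " items/s");
        }
    }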
2018
Other
Open Access
BASMATI - D7.4 Communication plan and activities
Dazzi P, Tserpes K
Dissemination and communication are a key pillar for maximizing the impact of the project. Both, indeed, concern the same activities directed at different target audiences. These activities, together with training, standardization and exploitation, form the strategy for ensuring the sustainability of project results.
Within this deliverable, the communication plan is presented first, followed by a summary of all the activities performed, and ending with a set of Key Performance Indicators (KPIs) used to measure the results of the performed activities and to serve as a reference for the future.
The basic strategy can be considered a means of creating awareness around the project, letting different stakeholders know about BASMATI, its research goals and results, as well as attracting and engaging them with the final objective of creating a community of interest around the project.
Project(s): BASMATI
See at:
CNR IRIS | ISTI Repository | CNR IRIS
2017
Contribution to book
Restricted
Integrated Cloud Broker System and Its Experimental Evaluation
Youn Ch, Chen M, Dazzi P
In a distributed computing environment, there are a large number of similar or equivalent resources provided by different service providers. These resources may provide the same functionality, but optimize different Quality of Service (QoS) metrics. These computing resources are managed and sold by many different service providers [1]. Service providers offer the necessary information about their services, such as the service capability, the utility measuring methods and the charging policies, which will later be referred to as the "resource policy" in this book. Each resource policy is a tuple of two components, namely (capability, price). We model the resource capability as a set of QoS metrics that includes the CPU type, the memory size, and the storage/hard disk size. An illustrative sketch follows this entry.
Project(s): BASMATI
See at:
CNR IRIS | CNR IRIS | link.springer.com
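A hypothetical sketch of the (capability, price) tuple described above; the field names, providers and figures are illustrative assumptions rather than anything from the book, and the broker rule shown (cheapest among functionally equivalent offers) is intentionally simplistic.

    // Hypothetical resource policy records and a trivial broker choice.
    import java.util.Comparator;
    import java.util.List;

    record Capability(String cpuType, int memoryGb, int storageGb) { }
    record ResourcePolicy(String provider, Capability capability, double pricePerHour) { }

    public class BrokerDemo {
        public static void main(String[] args) {
            List<ResourcePolicy> offers = List.of(
                new ResourcePolicy("providerA", new Capability("x86_64", 16, 500), 0.40),
                new ResourcePolicy("providerB", new Capability("x86_64", 16, 500), 0.32));
            // same functionality, different price: pick the cheapest adequate offer
            ResourcePolicy best = offers.stream()
                .min(Comparator.comparingDouble(ResourcePolicy::pricePerHour)).get();
            System.out.println("selected: " + best.provider());
        }
    }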