Abstract:Component-based modelling is used as the basis of a number of approaches, including Enterprise Architecture and System Architecture Design. Service Oriented Architecture (SOA) is a popular component-based approach, but it has been criticised as not being sufficiently flexible. A more flexible alternative is Event Driven Architecture (EDA), which can support Complex Event Processing. Dynamic reconfiguration of component behaviour is attractive because it allows an architecture to be extended and modified in situ without being taken off-line, updated and redeployed. This article shows how higher-order functions and reflection can support dynamic reconfiguration and how this approach is integrated with EDA. The approach is defined as patterns expressed in a component modelling language called LEAP and validated through a case study.
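The core idea of the abstract above, reconfiguring a component's behaviour at runtime by treating that behaviour as a first-class function, can be sketched in a few lines. This is a minimal illustrative sketch only; the class and method names are assumptions, not the actual LEAP modelling-language API.

```python
class Component:
    """Sketch of dynamic reconfiguration via higher-order functions.

    Illustrative only: `Component`, `set_handler` and `handle` are
    assumed names, not constructs from the LEAP language.
    """

    def __init__(self, handler):
        # behaviour is stored as a first-class function value
        self._handler = handler

    def set_handler(self, new_handler):
        # swap the behaviour in situ: no redeployment, no downtime
        self._handler = new_handler

    def handle(self, event):
        return self._handler(event)


# a component initially echoes events in upper case
comp = Component(lambda e: e.upper())
result_before = comp.handle("ping")   # behaviour 1

# reconfigure the running component with a new behaviour
comp.set_handler(lambda e: e[::-1])
result_after = comp.handle("ping")    # behaviour 2, same component instance
```

Because the handler is just a value, a reflective runtime can inspect, replace, or compose handlers while events continue to flow, which is the property the article exploits for in-situ architectural change.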
Abstract:Media content now constitutes the majority of Internet traffic and will continue to increase rapidly. Various innovative media applications, services and devices have emerged, and people are consuming ever more media content: we are witnessing a media revolution. Sustaining this tremendous media consumption, however, requires powerful media processing, which in turn demands vast computing resources. Meanwhile, cloud computing has emerged as a flourishing technology, and cloud platforms have become fundamental facilities providing various services, great computing power, massive storage and bandwidth at modest cost. Integrating cloud computing with media processing is therefore a natural choice for both, giving rise to the media cloud. In this paper we present a comprehensive overview of recent media cloud research. We first discuss the challenges of the media cloud, and then summarize its architecture, its processing, and its storage, delivery and resource management mechanisms. Based on this analysis, we propose a new architecture for the media cloud. We conclude with suggestions on building media clouds and with several future research topics.
Abstract:Service selection has been widely investigated by the SOA research community as an effective adaptation mechanism that allows a service broker, offering a composite service, to bind at runtime each task of the composite service to a corresponding concrete implementation, selecting it from a set of candidates which differ from one another in terms of QoS parameters. In this paper we present a load-aware per-request approach to service selection which aims to combine the relative benefits of the well-known per-request and per-flow approaches. Our service selection policy represents the core methodology of the Plan phase of a self-adaptive service-oriented system based on the MAPE-K reference loop. Since the service broker operates in a variable and uncertain environment where the QoS levels negotiated with the service providers can fluctuate, it requires some mechanism to enforce the QoS constraints agreed with its users. To this end, we also propose an algorithm for the Analyze phase of MAPE-K which is based on the adaptive CUSUM algorithm and determines whether a change in the QoS level requires a service selection replanning. We present experimental results obtained with a prototype implementation of a service broker. Our results show that the proposed load-aware approach is superior to the traditional per-request one: it combines the ability to sustain large volumes of service requests, as in the per-flow approach, while at the same time offering a finer, customizable service selection, as in the per-request approach. Furthermore, the results show that the adaptive CUSUM algorithm can quickly detect changes in the execution environment and trigger a new optimization plan before the system performance degrades.
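The CUSUM family of change detectors mentioned above accumulates deviations of observed QoS samples from an in-control mean and raises an alarm when the cumulative sum crosses a decision threshold. The following is a minimal sketch of a one-sided (upward-shift) CUSUM test, not the paper's adaptive variant; all parameter names are illustrative assumptions.

```python
def cusum_detect(samples, target_mean, slack, threshold):
    """One-sided CUSUM: signal an upward shift in a QoS metric
    (e.g., response time) as soon as it is detected.

    target_mean -- expected in-control mean of the metric
    slack       -- allowance k (tolerated deviation per sample)
    threshold   -- decision interval h; alarm when the statistic exceeds it

    Returns the index of the sample at which the change is signalled,
    or None if no change is detected.
    """
    s = 0.0
    for i, x in enumerate(samples):
        # accumulate only deviations above the allowance; clamp at zero
        s = max(0.0, s + (x - target_mean - slack))
        if s > threshold:
            return i
    return None


# 20 in-control samples around 1.0, then a shift to 2.0
qos_trace = [1.0] * 20 + [2.0] * 10
alarm_at = cusum_detect(qos_trace, target_mean=1.0, slack=0.25, threshold=2.0)
```

In a MAPE-K setting, such an alarm from the Analyze phase would be what triggers the Plan phase to recompute the service selection before QoS violations accumulate. An adaptive CUSUM additionally tunes `slack` and `threshold` online as the environment drifts.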
Abstract:Computing Clouds are typically characterized as large-scale systems that exhibit dynamic behavior due to variance in workload. However, how exactly these characteristics affect the dependability of Cloud systems remains unclear. Furthermore, provisioning reliable service within a Cloud federation, which involves the orchestration of multiple Clouds to provision service, remains an unsolved problem. This is especially true when considering the threat of Byzantine faults. Recently, the feasibility of Byzantine Fault-Tolerance within single-Cloud and federated Cloud environments has been debated. This paper investigates Cloud reliability and the applicability of Byzantine Fault-Tolerance in Cloud computing, and introduces a Byzantine fault-tolerance framework that enables the deployment of applications across multiple Cloud administrations. An implementation of this framework has enabled in-depth experiments comparing the reliability of Cloud applications hosted in a federated Cloud to that of a single Cloud.
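The classical arithmetic behind Byzantine Fault-Tolerance helps explain why a federation of independent Clouds is attractive: tolerating f arbitrary (Byzantine) faults requires at least 3f + 1 replicas, and a result can be trusted once f + 1 identical replies arrive, since at least one must come from a correct replica. The sketch below illustrates only this standard bound and voting rule, not the specific framework of the paper.

```python
from collections import Counter


def bft_replicas(f):
    """Minimum number of replicas needed to tolerate f Byzantine faults
    (the classical 3f + 1 lower bound)."""
    return 3 * f + 1


def accept(replies, f):
    """Accept a reply value once at least f + 1 identical replies have
    arrived from distinct replicas (e.g., distinct Clouds in a federation);
    with at most f faulty replicas, one of them is guaranteed correct.
    Returns the accepted value, or None if no value has enough votes yet."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= f + 1 else None


# tolerating one Byzantine Cloud requires four replicas
needed = bft_replicas(1)

# two matching replies out of three suffice when f = 1
agreed = accept(["ok", "ok", "corrupted"], f=1)
```

Placing the 3f + 1 replicas in separate Cloud administrations avoids correlated failures of a single provider, which is the motivation for the federated deployment the paper evaluates.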
Abstract:By their very nature, services are accessible only as black boxes through their published interfaces. It is a well-known issue that the lack of implementation details may reduce service testability. In previous work, we proposed testable services as a solution to provide third-party services with structural coverage information after a test session, yet without revealing their internal details. However, integrators do not have enough information to improve their test set when they get a low coverage measure, because they do not know which test requirements have not been covered. This paper proposes an approach in which testable services are provided along with test metadata that may help integrators achieve higher coverage. The approach is illustrated on a case study of a real system that uses orchestrations and testable services. A formal experiment designed to compare the proposed solution with a functional approach is also presented. The results show evidence that subjects using the testable service approach augmented with metadata can achieve better coverage than subjects using only a functional approach.
Abstract:The entity-based approach for operations modeling was published for the first time three decades ago. Specifically, the notion of entities as the main subjects of processes and the entity life-cycle as a technique for dynamic modeling of operations were introduced independently by K. Robinson in 1979, C. Rosenquist in 1982 and M. Jackson in 1983. This modeling work emerged in clear contrast with the static entity-relationship modeling found in the database tradition. These three pioneering contributions, and other substantial research done in the realm of information engineering, structured systems analysis and the social sciences in the 1980s and 1990s, have established an important foundation for business operations modeling. On the other hand, Business Process Management (BPM) has continued to receive great attention from practitioners and scholars. Being one of the main hinges between theory and practice of business operations, BPM enjoys contributions from several domains of research such as economics, social sciences, engineering and computing. In spite of its steady growth, the industry side of BPM seems to have evolved somewhat unaware of related progress in the above sister disciplines. Specifically, recent claims on the need to integrate information and activities in process modeling, and some rediscoveries of core ideas from entity-based dynamic modeling, offer examples of this disconnection. These and other findings suggest that the BPM field may not yet have fully benefited from the work done in the tradition of the structured analysis, information engineering and process theory schools. Furthermore, the possibility of using entity life-cycles for modeling operations addressed by Case Management is an important byproduct. The entity-based life cycle offers a conceptual framework to integrate different types of enterprise operations whose modeling has not yet been reconciled in the BPM tradition.
This paper presents an in-depth, multidisciplinary review of the state of the art in entity life-cycle modeling. The focus of this review is exclusively on modeling concepts and methodology, while tools, programming models and other aspects of entity life-cycle implementation will be addressed in companion papers. This review should also help pave the way for more holistic approaches to business process modeling.
Abstract:To provide an effective service-oriented solution for a given business problem, it is necessary to explore all available options for providing the required functionality while ensuring flawless data transfer within the composed services. Existing service composition approaches fall short of this ideal, as functional requirements and data mediation are not considered in a unified framework. We propose a service composition framework that addresses both of these aspects by integrating existing techniques in formal methods, service-oriented computing and data mediation. Our framework guarantees the correct interaction of services in a composition by verifying certain behavioral constraints and resolving data mismatches at the semantic, syntactic and structural levels, in a unified manner. A tableau-based algorithm is used to generate and explore compositions in a goal-directed fashion, proving or disproving the existence of a service choreographer. Data models used to detect and resolve data mismatches are generated from WSDL documents and regular expressions. We also apply our framework to examples adapted from the service composition literature, providing strong evidence that the approach can be effectively applied in practical settings.
Abstract:Standardized business documents are a prerequisite for successful information exchange in electronic business transactions. The United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) provides a conceptual modeling approach, called Core Components, used by Business Partners (BPs) for defining business document models (BDMs). BDMs are essential for defining service interfaces in service-oriented systems. However, in such a highly dynamic environment with ever-changing market demands, BPs are confronted with the need to revise their BDMs, resulting in a multitude of different versions. BPs may dictate the use of new versions of BDMs, but small- and medium-sized enterprises (SMEs) may not always adopt new BDM versions due to the cost and effort involved, inhibiting automated electronic information exchange. In this article, we propose a framework including (i) a classification of the impact of changes in BDMs, (ii) evolution templates for the automated transformation of business documents between different BDM versions, and (iii) mitigation strategies for evolutions where fully automated and semantics-preserving transformations are not feasible. Having such a framework at hand provides SMEs with a low-cost and lightweight approach for dealing with evolving market requirements and hence evolving business documents. Finally, we analyze the evolution of UN/CEFACT's Cross Industry Invoice, which has been mandated for electronic invoicing within the European Union, and present a critical discussion of the evolution templates defined.