Abstract:DCM'09 was the fifth installment of the DCM series of workshops on Developments in Computational Models. The workshop was held on 11 July 2009 as a satellite event of the international conference ICALP in Rhodes, Greece. The DCM series, which is still ongoing, aims at exploring the variety of computational models.
Abstract:In this paper, we develop a method for constructing quantum algorithms that compute Boolean functions by quantum ordered read-once branching programs (quantum OBDDs). Our method is based on the fingerprinting technique and on representing Boolean functions by their characteristic polynomials. We use circuit notation for branching programs to present the resulting algorithms. For several known functions our approach yields optimal QOBDDs; namely, we consider the functions MODm, EQn, Palindromen, and PERMn (testing whether a given Boolean matrix is a permutation matrix). We also propose a generalization of our method and apply it to the Boolean variant of the Hidden Subgroup Problem.
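The fingerprinting idea behind such algorithms can be illustrated with a classical simulation of a single-qubit fingerprint for MODm. This is a much-simplified sketch, not the paper's construction: the actual technique uses several rotation coefficients to bound the error, and the function name below is ours.

```python
import math

def mod_m_fingerprint(bits, m):
    """Single-qubit fingerprint for MOD_m (illustrative sketch only).

    Reading each input bit x_i, the qubit is rotated by the angle
    pi * x_i / m; after all bits its angle is pi * s / m, where
    s = sum(bits).  Measuring |0> then succeeds with probability
    cos^2(pi * s / m), which equals 1 exactly when s == 0 (mod m).
    """
    angle = sum(bits) * math.pi / m
    return math.cos(angle) ** 2  # probability of accepting

# s = 3 == 0 (mod 3): accepted with certainty.
# s = 2: accepted only with probability cos^2(2*pi/3) = 0.25.
```

A single fingerprint has one-sided error on inputs with s not divisible by m; repeating with several independently chosen coefficients, as in the fingerprinting technique proper, drives this error down.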
Abstract:The calculus of looping sequences is a formalism for describing the evolution of biological systems by means of term rewriting rules. Here we enrich this calculus with a type discipline that preserves biological properties arising from the requirement of certain elements and the repellency of others. In particular, the type system guarantees the soundness of the application of reduction rules with respect to the elements which are required (all requirements must be satisfied) and to the elements which are excluded (two elements which repel each other cannot occur in the same compartment). As an example, we model the possible interactions (and compatibility) of different blood types with different antigens. The type system does not allow transfusion with incompatible blood types.
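Outside the calculus itself, the blood-transfusion compatibility that such a type system enforces can be sketched as a plain runtime check. The ABO/Rh rules below are the standard clinical ones; the function name and encoding are ours, and the typed calculus rules out the incompatible combinations statically rather than detecting them dynamically as this sketch does.

```python
# Standard ABO donor-recipient compatibility rules.
COMPATIBLE_ABO = {
    "O": {"O"},                   # O recipients accept only O
    "A": {"A", "O"},
    "B": {"B", "O"},
    "AB": {"A", "B", "AB", "O"},  # AB is the universal recipient
}

def transfusion_ok(recipient, donor):
    """Return True iff `donor` blood (e.g. "O-") may be given to `recipient`."""
    r_abo, r_rh = recipient[:-1], recipient[-1]
    d_abo, d_rh = donor[:-1], donor[-1]
    abo_ok = d_abo in COMPATIBLE_ABO[r_abo]
    # Rh- donors fit every recipient; Rh+ donors fit only Rh+ recipients.
    rh_ok = d_rh == "-" or r_rh == "+"
    return abo_ok and rh_ok
```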
Abstract:In this paper, we propose an abstract interpretation-based framework for reducing the state space of stochastic semantics for protein-protein interaction networks. Our approach consists in quotienting the state space of networks. Interestingly, we do not apply the widely used strong lumpability criterion, which imposes that two equivalent states behave similarly with respect to the quotient, but a weaker version of it. More precisely, our framework detects and proves invariants about the dynamics of the system: the quotient of the state space is such that the probability of being in a given state, conditioned on that state belonging to a given equivalence class, is an invariant of the semantics. We then introduce an individual-based stochastic semantics (where each agent is identified by a unique identifier) for the programs of a rule-based language (namely Kappa), and we use our abstraction framework to derive a sound population-based semantics and a sound fragments-based semantics, which give the distribution of the traces for, respectively, the number of instances of molecular species and the number of instances of partially defined molecular species. These partially defined species are chosen automatically thanks to a dependency analysis which is also described in the paper.
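In symbols, the invariant described above can be stated as follows (our notation, not the paper's): for each equivalence class $C$ of the quotient and each state $q \in C$,

```latex
\Pr\left[\, X_t = q \;\middle|\; X_t \in C \,\right] \;=\; \pi_C(q)
\qquad \text{for all times } t,
```

where $\pi_C$ is a fixed probability distribution over $C$ that does not depend on $t$. Strong lumpability would instead require the aggregate transition rates out of every state of $C$ to agree, a strictly stronger condition.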
Abstract:This paper studies some issues related to autopoietic automata, a model of evolving interactive systems, where the automata produce other automata of the same kind. It is shown how they relate to interactive Turing machines. All results by Jiri Wiedermann on nondeterministic autopoietic automata are extended to deterministic computations. In particular, nondeterminism is not needed for a single autopoietic automaton to generate all autopoietic automata.
Abstract:There are both benefits and drawbacks to cultural diversity. It can lead to friction and exacerbate differences. However, as with biological diversity, cultural diversity is valuable in times of upheaval; if a previously effective solution no longer works, it is good to have alternatives available. What factors give rise to cultural diversity? This paper describes a preliminary investigation of this question using a computational model of cultural evolution. The model is composed of neural-network-based agents that evolve fitter ideas for actions by (1) inventing new ideas through modification of existing ones, and (2) imitating neighbors' ideas. Numerical simulations indicate that the diversity of ideas in a population is positively correlated with both the ratio of creators to imitators in the population and the rate at which creators create. This is the case for both the minimum and the peak diversity of actions over the duration of a run.
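The creator/imitator dynamic can be caricatured with a far smaller model than the paper's neural-network agents: each agent holds a single real-valued "idea", creators mutate theirs (keeping improvements), and imitators copy a fitter neighbour. All the specifics below (the fitness peak, mutation scale, ring topology, and variance as a diversity proxy) are our illustrative choices, not the paper's.

```python
import random

def simulate(n_agents=50, creator_frac=0.5, steps=100, mut_rate=0.2, seed=0):
    """Toy creator/imitator model; returns idea variance as a diversity proxy."""
    rng = random.Random(seed)
    ideas = [rng.random() for _ in range(n_agents)]
    roles = ["creator" if rng.random() < creator_frac else "imitator"
             for _ in range(n_agents)]
    target = 0.7  # hypothetical fitness peak
    fitness = lambda x: -(x - target) ** 2
    for _ in range(steps):
        new = ideas[:]
        for i in range(n_agents):
            if roles[i] == "creator":
                # Invent: mutate the current idea, keep only improvements.
                if rng.random() < mut_rate:
                    cand = ideas[i] + rng.gauss(0, 0.05)
                    if fitness(cand) > fitness(ideas[i]):
                        new[i] = cand
            else:
                # Imitate: copy the ring neighbour's idea if it is fitter.
                j = (i + 1) % n_agents
                if fitness(ideas[j]) > fitness(ideas[i]):
                    new[i] = ideas[j]
        ideas = new
    mean = sum(ideas) / n_agents
    return sum((x - mean) ** 2 for x in ideas) / n_agents
```

With all imitators the population collapses onto a single idea (variance effectively zero), while with all creators the spread stays positive, mirroring in miniature the reported correlation between the creator proportion and the diversity of ideas.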
Abstract:There have been several different approaches to investigating computation over the real numbers. Among these, computable analysis seems to be the most amenable to physical realization and the most widely accepted model, one that can also act as a unifying framework. Computable analysis was introduced by A. Turing, A. Grzegorczyk, and D. Lacombe. A representation-based approach to the field was then developed by C. Kreitz and K. Weihrauch. Any typical representation is based on using the rationals, a countable dense subset of the reals with finite representations, to approximate the real numbers. The purpose of this article is to investigate the transition phenomena between rational computation and the completion of such computation (in some given representation) that induces a computability concept over the reals. This gives deeper insights into the nature of real computation (and, generally, computation over infinite objects) and how it conceptually differs from the corresponding foundational notion of discrete computation. We have studied both the computability and the complexity-theoretic transition phenomena. The main conclusion is the finding of a conceptual gap between rational and real computation, manifested, for instance, by the existence of computable rational functions whose extensions to the reals are not computable, and vice versa. This gap can be attributed to two main reasons: (1) continuity and smoothness of real functions play necessary roles in their computability and complexity-theoretic properties, whereas the situation is the contrary in rational computation, and (2) real computation is approximate and hence robust, whereas rational computation is exact and rigid. Despite these negative results, if we relax our framework to include relative computation, then we can bridge the rational-real computation gap relative to Σ2 oracles of the arithmetical hierarchy. We have shown that Σ2 is a tight bound for the rational-real computational equivalence.
That is, if a continuous function over the rationals is computable, then its extension to the reals is computable relative to a Σ2 oracle, and vice versa. This result can also be considered an extension of Shoenfield's Limit Lemma from classical recursion theory to the computable analysis context.
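For reference, the classical statement being extended is Shoenfield's Limit Lemma from recursion theory: a total function $f$ on the naturals is computable from the halting set $\emptyset'$ if and only if it is a pointwise limit of a computable function,

```latex
f \le_T \emptyset'
\;\iff\;
\exists\, g \text{ computable such that } f(x) = \lim_{s \to \infty} g(x, s) \text{ for all } x.
```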
Abstract:We introduce a natural language interface for building stochastic π-calculus models of biological systems. In this language, complex constructs describing biochemical events are built from basic primitives of association, dissociation, and transformation. The language thus allows us to model biochemical systems modularly by describing their dynamics in a narrative-style language, while making amendments, refinements, and extensions to the models easy. We give a formal semantics for this language and a translation algorithm into the stochastic π-calculus that delivers this semantics. We demonstrate the language on a model of Fcγ receptor phosphorylation during phagocytosis. We provide a tool implementation of the translation into a stochastic π-calculus language, Microsoft Research's SPiM, which can be used for simulation and analysis.
Abstract:In recent years, a considerable portion of the computer science community has focused its attention on studying the biochemistry of the living cell. Efforts to understand such complicated reaction environments have stimulated a range of activities, from systems biology techniques, through network analysis (motif identification), to the development of languages and simulations for low-level biochemical processes. The approaches that do not use computer simulation techniques frequently employ mean field equations or, equivalently, classical chemical kinetics. The central quantity of interest is the concentration of reactants, and the (mean field) equations describe the time evolution of this quantity. Such equations are used to address various issues, among them stability, robustness, and sensitivity analysis. Rarely is the validity of the mean field equations questioned. This paper discusses various situations in which mean field equations fail and should not be used. These equations can be derived from the more general theory of diffusion-controlled reactions by assuming that the reactants mix well.
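One minimal illustration of such a failure (our toy example, not taken from the paper): for a dimerization reaction 2A → ∅, the stochastic propensity is proportional to n(n−1) pairs while the mean-field rate uses n², so at low copy numbers the two descriptions disagree. A Gillespie-style simulation placed next to an Euler integration of the mean-field ODE:

```python
import random

def gillespie_dimerization(n0=10, k=1.0, t_end=5.0, seed=0):
    """Gillespie SSA for 2A -> 0, starting with n0 molecules of A.
    Propensity: k * n * (n - 1), proportional to the number of A pairs."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while n >= 2:
        a = k * n * (n - 1)
        t += rng.expovariate(a)  # waiting time to the next reaction
        if t > t_end:
            break
        n -= 2
    return n

def mean_field(n0=10, k=1.0, t_end=5.0, steps=10000):
    """Euler integration of the mean-field ODE  dn/dt = -2 k n^2,
    which replaces n*(n-1) by n^2 -- harmless for large n, wrong for small n."""
    n, dt = float(n0), t_end / steps
    for _ in range(steps):
        n += -2.0 * k * n * n * dt
    return n
```

As n0 grows, n(n−1)/n² tends to 1 and the mean-field description becomes accurate. Note that this sketch captures only the low-copy-number discrepancy; the diffusion-controlled (imperfect-mixing) effects the abstract refers to are a further, independent source of deviation.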