Approximation
results for the optimum cost chromatic partition problem
by Klaus Jansen
Abstract:
In this paper, we study the optimum cost chromatic partition (OCCP)
problem for several graph classes. The OCCP problem is the problem of
coloring the vertices of a graph such that adjacent vertices get
different colors and the total coloring cost is minimized.
We prove several approximation results for the OCCP problem
restricted to bipartite, chordal, comparability, interval,
permutation, split and unimodular graphs. We prove that there
exists no polynomial approximation algorithm with ratio
O(|V|^{0.5-\epsilon}) for the OCCP problem restricted to
bipartite and interval graphs, unless P = NP.
Furthermore, we propose approximation algorithms with ratio
O(|V|^{0.5}) for bipartite, interval and unimodular graphs. Finally,
we prove that there exists no polynomial approximation algorithm with ratio O(|V|^{1-\epsilon}) for the OCCP problem restricted to split,
chordal, permutation and comparability graphs, unless P=NP.
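To make the objective concrete, here is a small brute-force sketch of the OCCP objective (an illustration of my own, not the paper's algorithm; the problem is NP-hard in general, so exhaustive search only works on tiny instances):

```python
from itertools import product

def occp_brute_force(n, edges, cost):
    # cost[v][c] = cost of giving vertex v color c; return the minimum
    # total cost over all proper colorings (exhaustive search).
    k = len(cost[0])                       # number of available colors
    best = None
    for coloring in product(range(k), repeat=n):
        if any(coloring[u] == coloring[v] for u, v in edges):
            continue                       # adjacent vertices share a color
        total = sum(cost[v][coloring[v]] for v in range(n))
        if best is None or total < best:
            best = total
    return best

# A path on 3 vertices, 2 colors: the cheapest proper coloring
# is (0, 1, 0) with total cost 1 + 2 + 4 = 7.
print(occp_brute_force(3, [(0, 1), (1, 2)], [[1, 5], [2, 2], [4, 1]]))
```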
Efficiency of Asynchronous Systems and Read Arcs in Petri Nets
by Walter Vogler
Abstract:
Two solutions to the MUTEX-problem are compared
w.r.t. their temporal efficiency. For this, a formerly developed
efficiency-testing for asynchronous systems is adapted to nets with
so-called read arcs. The close relation between efficiency-testing and
fair behaviour is pointed out, and it is shown that read arcs are
necessary for any solution to the MUTEX-problem.
On the concentration of the height of binary search trees
by John Michael Robson
Abstract:
The expectation of the absolute value of
the difference between the heights of two random binary search trees
of n nodes is less than 6.25 for infinitely many n.
Given a plausible assumption, this expectation is less than 4.96 for all but a
finite
number of values of n.
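The quantity in question can be estimated empirically; below is a small Monte-Carlo sketch (my own construction, not from the paper) that samples the height of a random binary search tree via the standard recursive decomposition:

```python
import random

def random_bst_height(n, rng):
    # Height of a random binary search tree on n keys: the root is a
    # uniformly random key, splitting the rest into subtrees of sizes
    # i and n-1-i (height 0 for the empty tree).
    if n == 0:
        return 0
    i = rng.randrange(n)                   # size of the left subtree
    return 1 + max(random_bst_height(i, rng),
                   random_bst_height(n - 1 - i, rng))

# Estimate E|H_n - H'_n| for pairs of independent random trees of n nodes.
rng = random.Random(0)
n, pairs = 200, 150
gaps = [abs(random_bst_height(n, rng) - random_bst_height(n, rng))
        for _ in range(pairs)]
print(sum(gaps) / pairs)    # a small constant, in line with the abstract
```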
Recursive Computational Depth
by James Lathrop, Jack Lutz
Abstract:
In the 1980's, Bennett introduced computational depth as a formal
measure of the amount of computational history that is evident in an
object's structure. In particular, Bennett identified the classes of
weakly deep and strongly deep sequences, and showed that the halting
problem is strongly deep. Juedes, Lathrop, and Lutz subsequently
extended this result by defining the class of weakly useful sequences,
and proving that every weakly useful sequence is strongly deep.
The present paper investigates refinements of Bennett's notions of
weak and strong depth, called recursively weak depth (introduced
by Fenner, Lutz and Mayordomo) and recursively strong depth
(introduced here). It is argued that these refinements naturally
capture Bennett's idea that deep objects are those which ``contain
internal evidence of a nontrivial causal history.'' The fundamental
properties of recursive computational depth are developed, and it is
shown that the recursively weakly (respectively, strongly) deep
sequences form a proper subclass of the class of weakly (respectively,
strongly) deep sequences. The above-mentioned theorem of Juedes,
Lathrop, and Lutz is then strengthened by proving that every weakly
useful sequence is recursively strongly deep. It follows from these
results that not every strongly deep sequence is weakly useful,
thereby answering a question posed by Juedes.
An Algebra-Based Method to Associate Rewards with EMPA Terms
by Marco Bernardo
Abstract:
We present a simple method to associate rewards with EMPA terms in
order to make the specification and the computation of performance
measures easier. The basic idea behind this method is to specify rewards
inside actions of EMPA terms, so it substantially differs from methods
based on modal logic.
The main motivations of this method are its ease of use as well as the
possibility of defining a notion of equivalence which relates terms
having the same reward, thus allowing for simplification without
altering the performance index. We prove that such an equivalence is a
congruence finer than the strong extended Markovian bisimulation
equivalence, and we present its axiomatization.
The Word Matching Problem Is Undecidable For Finite Special String-Rewriting Systems That Are Confluent
by Paliath Narendran, Friedrich Otto
Abstract:
We present a finite, special, and confluent string-rewriting system
for which the word matching problem is undecidable.
Since the word matching problem is the non-symmetric restriction
of the word unification problem, this presents a non-trivial
improvement of the recent result that for this type of string-rewriting
systems, the word unification problem is undecidable (Otto 1995).
In fact, we show that our undecidability result remains valid even when
we only consider very restricted instances of the word matching problem.
The name discipline of uniform receptiveness
by Davide Sangiorgi
Abstract:
In a process calculus, we say that a name x is {\em uniformly
receptive} for a process P if: (1) at any time P is capable of
offering an input at x, at least as long as there are processes that
could send messages at x; (2) the input offer at x is functional,
that is, all messages received by P at x are applied to the same
continuation. In the \pi-calculus this discipline is employed, for
instance, when modeling functions, objects, higher-order
communications, and remote procedure calls. We formulate the discipline
of uniform receptiveness by means of a type system, and then we study
its impact on behavioural equivalences and process reasoning. We
develop some theory and proof techniques for uniform receptiveness,
and illustrate their usefulness on some non-trivial examples.
Independent sets in asteroidal triple-free graphs
by Hajo Broersma, Ton Kloks, Dieter Kratsch, Haiko Mueller
Abstract:
An asteroidal triple is a set of three vertices such that there is a
path between any pair of them avoiding the closed neighborhood of
the third. A graph is called AT-free if it does not have an
asteroidal triple. We show that there is an O(n^4) time algorithm
to compute the maximum weight of an independent set for AT-free
graphs. Furthermore we obtain O(n^4) time algorithms to solve the
independent dominating set and the
independent perfect dominating set problem on AT-free graphs.
We also show how to adapt these algorithms such that they solve the
corresponding problem for graphs with bounded asteroidal number in
polynomial time. Finally we observe that the problems clique
and partition into cliques remain NP-complete when restricted
to AT-free graphs.
Keywords: algorithms, graphs, asteroidal triple, independent set
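The defining condition can be tested directly from the definition; the following sketch (my own, and far less efficient than the paper's algorithms) checks a graph for an asteroidal triple:

```python
from itertools import combinations

def has_asteroidal_triple(adj):
    # adj: vertex -> set of neighbours.  An asteroidal triple is three
    # vertices such that each pair is joined by a path avoiding the
    # closed neighbourhood of the third.
    def connected_avoiding(u, v, w):
        forbidden = adj[w] | {w}           # closed neighbourhood of w
        if u in forbidden or v in forbidden:
            return False
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            if x == v:
                return True
            for y in adj[x]:
                if y not in seen and y not in forbidden:
                    seen.add(y)
                    stack.append(y)
        return False

    return any(connected_avoiding(a, b, c) and
               connected_avoiding(a, c, b) and
               connected_avoiding(b, c, a)
               for a, b, c in combinations(adj, 3))

# The 6-cycle contains an asteroidal triple (vertices 0, 2, 4);
# paths (and all interval graphs) are AT-free.
c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(has_asteroidal_triple(c6))   # True
```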
Computation Paths Logic: An Expressive, yet Elementary, Process Logic
by David Harel, Eli Singerman
Abstract:
A new process logic is defined, called computation paths logic
(CPL), which treats formulas and programs essentially alike. CPL
is a pathwise extension of PDL in the spirit of the logic R
of Harel and Peleg. It enjoys most of the advantages of previous
process logics, yet is decidable in elementary time. We also
offer extensions for modeling asynchronous/synchronous
concurrency and infinite computations.
The Minimum Color Sum of Bipartite Graphs
by Amotz Bar-Noy, Guy Kortsarz
Abstract:
The problem of minimum color sum of a graph is to color the
vertices of the graph such that the sum (average) of all assigned
colors is minimum. Recently, it was shown that finding a
polynomial approximation to this problem in general graphs is NP-hard.
In the same paper, a 9/8-approximation algorithm was presented
for bipartite graphs. The hardness question for this problem
on bipartite graphs was left open. In this paper we show that
the minimum color sum problem for bipartite graphs admits no
polynomial approximation scheme, unless P=NP. The proof is by
L-reducing this problem to the problem of finding the maximum
independent set in a graph whose maximum degree is four. This
result indicates clearly that the minimum color sum problem
is much harder than the traditional coloring problem which is
trivially solvable in bipartite graphs. As for the approximation
ratio, we make a further step towards finding the precise
threshold. We present a polynomial 10/9-approximation algorithm. Our
algorithm uses a flow procedure in addition to the maximum
independent set procedure used in previous results.
KEYWORDS: Sum Coloring, Approximation Algorithms, L-Reduction, Bipartite Graphs.
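A toy brute-force computation (my own illustration, not from the paper) shows how the minimum color sum can differ from what an ordinary 2-coloring gives, e.g. on the star K_{1,3}:

```python
from itertools import product

def min_color_sum(n, edges):
    # Proper coloring with colors 1, 2, ..., n minimizing the sum of
    # assigned colors (n colors always suffice for the optimum).
    best = None
    for col in product(range(1, n + 1), repeat=n):
        if all(col[u] != col[v] for u, v in edges):
            s = sum(col)
            if best is None or s < best:
                best = s
    return best

# Star K_{1,3}: giving the centre color 2 and the leaves color 1
# yields sum 5, beating the 2-coloring centre=1, leaves=2 (sum 7).
print(min_color_sum(4, [(0, 1), (0, 2), (0, 3)]))
```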
Results on Resource-Bounded Measure
by Harry Buhrman, Stephen Fenner, Lance Fortnow
Abstract:
We construct an oracle relative to which NP has p-measure 0 but
D has measure 1 in EXP. This gives a strong relativized negative
answer to a question posed by Lutz. Secondly, we give
strong evidence that BPP is small. We show that BPP has
p-measure 0 unless EXP = MA and the polynomial-time hierarchy
collapses. This contrasts with the work of Regan et al.,
where it is shown that P/poly does
not have p-measure 0 unless exponentially strong pseudorandom
generators do not exist.
Recognizability Equals Definability for Partial k-Paths
by Valentine Kabanets
Abstract:
If a graph property is expressible in a counting monadic second-order
logic (CMS), then the family of partial k-trees that satisfy this
property can be recognized by a finite tree automaton. Courcelle
conjectured that the converse implication also holds. The cases of
k<=3 have been proved.
We show that every recognizable family of partial k-paths is definable
in CMS for any k. Thus, we establish that the expressive power of
finite automata equals that of logic on the class of partial k-paths.
A by-product of our solution is the formula in a monadic second-order
logic that defines the class of all partial k-paths for every k. This
means that the obstruction set of each such class is computable.
Termination of Constraint Logic Programs
by Salvatore Ruggieri
Abstract:
In the general case, the lifting of termination proof methods
from logic programming up to the CLP Scheme
is not a trivial task, since in many real CLP systems
declarativity is lost due to an incorrect operational semantics.
In this paper, we introduce a method for proving
universal termination of constraint logic programs
by strictly extending the approach
of Apt and Pedreschi [AP90].
Taking into account a generic constraint domain instead of the
standard Herbrand universe, acceptable (CLP) programs are defined.
We prove correctness and completeness of the method w.r.t. the
leftmost selection rule for the class of ideal constraint
systems, including CLP({\cal R}_{lin}), CLP({\cal RT}),
and CLP({\cal FT}), among others.
However, many real systems are not ideal.
We specifically design sufficient conditions
for termination of CLP({\cal R}) programs, by pointing out
how the underlying insights can be extended to other systems.
A Complete and Efficiently Computable Topological Classification of D-dimensional Linear Cellular Automata over Z_m.
by Giovanni Manzini, Luciano Margara
Abstract:
We study the dynamical behavior of D-dimensional linear cellular automata
over Z_m. We provide easy-to-check necessary and sufficient conditions
for a D-dimensional linear cellular automaton over Z_m to be sensitive to
initial conditions, expansive, strongly transitive, and equicontinuous.
This paper completes the work initiated in [3] where transitivity
and ergodicity were proved to be equivalent and exactly characterized.
As a consequence of our results, we have a complete and efficiently computable
topological classification of D-dimensional linear cellular automata over
Z_m according to the most important dynamical properties studied in the
theory of discrete time dynamical systems.
Enumerative sequences of leaves in rational trees
by Frederique Bassino, Marie-Pierre Beal, Dominique Perrin
Abstract:
We prove that any N-rational sequence of nonnegative
integers satisfying the Kraft strict inequality for
an integer k is the enumerative sequence of leaves
by height of a rational k-ary tree. Particular cases
of this result had been previously proved. We give
some partial results in the equality case.
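For reference, the Kraft quantity in question can be sketched in code (my own illustration of the condition, not the paper's construction):

```python
from fractions import Fraction

def kraft_sum(leaf_counts, k):
    # leaf_counts[h] = number of leaves at height h (root at height 0).
    # A sequence is realizable by a k-ary tree only if the sum is at
    # most 1; the abstract concerns the strict-inequality case.
    return sum(Fraction(l, k**h) for h, l in enumerate(leaf_counts))

# Complete binary tree of height 2: four leaves at height 2 (equality).
print(kraft_sum([0, 0, 4], 2))
# One leaf at height 1 and one at height 2: 1/2 + 1/4 < 1 (strict).
print(kraft_sum([0, 1, 1], 2))
```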
A completion algorithm for codes with bounded synchronization delay
by Veronique Bruyere
Abstract:
We show that any rational code with bounded synchronization delay
is included in a rational maximal code with bounded
synchronization delay.
KEYWORDS: automata, formal languages, variable-length codes, combinatorics on words
On modular properties of higher order extensional lambda calculi
by Roberto Di Cosmo, Neil Ghani
Abstract:
We prove that confluence and strong normalisation are
both modular properties for the addition of algebraic term
rewriting systems to Girard's F^{\omega} equipped with either
\beta-equality or \beta\eta-equality. The key innovation is the
use of \eta-expansions over the more traditional
\eta-contractions.
We then discuss the difficulties encountered in generalising these
results to type theories with dependent types. Here confluence
remains modular, but results concerning strong normalisation await
further basic research into the use of \eta-expansions in
dependent type theory.
Worst-case hardness suffices for derandomization: a new method for hardness-randomness trade-offs
by Alexander Andreev, Andrea Clementi, Jose Rolim
Abstract:
Up to now, the known derandomization methods have been derived
assuming average-case hardness conditions. In this paper, we instead
present the first worst-case hardness conditions sufficient to
obtain P=BPP.
Our conditions refer to the worst-case circuit complexity
of Boolean operators computable in time exponential in the input
size. Such results are achieved by a new method that departs
significantly from the usual known methods based on
pseudo-random generators. Indeed, we prove new particular
bounds on the circuit complexity of partial Boolean functions
which are then used to derive efficient constructions of hitting set
generators. As recently proved, such generators can derandomize
any BPP-algorithm.
Our method also gives a worst-case hardness condition for the circuit
complexity of Boolean operators computable in NC (with respect to
their output size) to obtain an efficient reduction from any two-sided
error NC algorithm into a zero-error NC algorithm, thus obtaining
ZNC=BPNC.
The expressibility of languages and relations by word equations
by Juhani Karhumaeki, Wojciech Plandowski, Filippo Mignosi
Abstract:
We prove theorems which allow one to show that certain properties of words
are not expressible as components of solutions of word equations.
In particular, primitiveness and equal length are such
properties, as is being a word over a proper subalphabet.
Constructive Linear Time Algorithms for Branchwidth
by Hans Bodlaender, Dimitrios Thilikos
Abstract:
Let {\cal G}_{k} be the class of graphs
with branchwidth at most k.
In this paper we prove that one can construct, for any k,
a linear time algorithm that checks if a graph belongs to {\cal G}_{k}
and, if so, outputs a branch decomposition of minimum width.
Moreover, we find the obstruction set
for {\cal G}_{3} and, for the same class, we give a safe and complete set
of reduction rules.
Our results lead to a practical linear time algorithm
that checks if a graph has branchwidth
\leq 3 and, if so, outputs a branch decomposition of minimum width.
Model Checking the Full Modal Mu-Calculus for Infinite Sequential Processes
by Olaf Burkart, Bernhard Steffen
Abstract:
In this paper we develop a new elementary algorithm for model-checking
infinite sequential processes, including \emph{context-free processes},
\emph{pushdown processes}, and \emph{regular graphs}, that decides the full
modal mu-calculus. Whereas the actual model checking algorithm results
straightforwardly from considering conditional semantics together with
backtracking caused by alternation, the corresponding correctness proof
requires a stronger framework, which uses \emph{dynamic environments}
modelled by finite-state automata.
Finite loops recognize exactly the regular open languages
by Martin Beaudry, Francois Lemieux, Denis Therien
Abstract:
In this paper, we characterize exactly the class of languages
that are recognizable by finite loops, i.e. by cancellative
binary algebras with an identity.
This turns out to be the well-studied class of regular open
languages. Our proof technique is interesting in itself:
we generalize the operation of block product of monoids, which
is so useful in the associative case, to the situation where the
left factor in the product is non-associative.
Solving Trace Equations Using Lexicographical Normal Forms
by Volker Diekert, Yuri Matiyasevich, Anca Muscholl
Abstract:
Very recently, Matiyasevich showed that the question whether
an equation over a
trace monoid has a solution or not is decidable. In his proof this
question is reduced to the solvability of word equations with
constraints, by induction on the
size of the commutation relation. In the present paper we give
another proof
of this result using lexicographical normal forms. Our method is a
direct reduction of a trace equation system to a word equation
system with regular
constraints. We use for this a new result on lexicographical normal
forms.
Randomization and nondeterminism are incomparable for ordered read-once branching programs
by Farid Ablayev
Abstract:
In [2] we exhibited a simple boolean function f_n in
n variables such that:
1) f_{n} can be computed by polynomial size randomized ordered
read-once branching program with one sided small error;
2) any nondeterministic ordered read-once branching program that
computes f_n has exponential size.
In this paper we present a simple boolean function g_n in n
variables such that:
1) g_n can be computed by polynomial size nondeterministic
ordered read-once branching program;
2) any two-sided error randomized ordered read-once branching
program that computes g_n has exponential size.
This means that BPP and NP are incomparable in the context of
ordered read-once branching programs.
On confluence in the pi-calculus
by Anna Philippou, David Walker
Abstract:
An account of the basic theory of confluence in the
pi-calculus is presented, useful techniques for showing
confluence of complex mobile systems are given, and the
utility of some of the theory presented is illustrated
via an analysis of a distributed algorithm.
Maintaining Minimum Spanning Trees in Dynamic Graphs
by Monika Rauch Henzinger, Valerie King
Abstract:
We present the first fully dynamic algorithm for
maintaining a minimum spanning tree in time o(\sqrt n) per operation.
To be precise, the algorithm uses O(n^{1/3} \log^2 n)
amortized time per update operation. The algorithm is
fairly simple and deterministic. We also present the first
fully dynamic deterministic algorithm for maintaining connectivity
(and, as a consequence, bipartiteness) in amortized time
O(n^{1/3} \log n) per update, with O(1) worst-case time per query.
A Semantically Sound Actor Translation
by Ian Mason, Carolyn Talcott
Abstract:
In this paper we present two actor languages: a high level User Language and
a low level Kernel Language. The user language is object-oriented. Actors
qua objects have methods and synchronization constraints which allow an
object to control when a method can be invoked. The kernel language is an
extension of a simple functional language with primitives for actor
computation. Each of the languages is given a simple operational semantics
via a labelled transition system, and an abstract interaction semantics
derived from the computations of the transition system. The main result
presented here is a translation from the user language to the kernel
language and a proof that this mapping preserves the interaction semantics.
This demonstrates that the basic actor primitives of the kernel are as
expressive as those of the user language, although perhaps less convenient
from a programmer's point of view. The proof itself is of interest since it
demonstrates a methodology based on abstract actor structures for proving
correctness of transformations and translations of actor languages and more
generally of concurrent object languages.
Improving Spanning Trees by Upgrading Nodes
by Sven Krumke, Madhav Marathe, Hartmut Noltemeier, Ramamurthy Ravi, SS Ravi, Ravi Sundaram, Hans-Christoph Wirth
Abstract:
We study budget constrained optimal network upgrading
problems. Such problems aim at finding optimal strategies for
improving a network under some cost measure subject to certain
budget constraints. A general problem in this setting is the
following. We are given an edge weighted graph G=(V,E) where nodes
represent processors and edges represent bidirectional communication
links. The processor at a node v can be upgraded at a cost
of c(v). Such an upgrade reduces the delay of each link
emanating from v. The goal is to find a minimum cost set of nodes
to be upgraded so that the resulting network has the best
performance with respect to some measure. We consider the problem
under two measures, namely, the weight of a minimum spanning tree
and the bottleneck weight of a minimum bottleneck spanning tree. We
present approximation and hardness results for the problem. Our
results are tight to within constant factors. We also show that
these approximation algorithms can be used to construct good
approximation algorithms for the dual versions of the problems where
there is a budget constraint on the upgrading cost and the
objectives are minimum weight spanning tree and minimum bottleneck
weight spanning tree respectively.
An improved master theorem for divide-and-conquer recurrences
by Salvador Roura
Abstract:
We present a new master theorem
for the study of divide-and-conquer recursive definitions,
which improves the old one in several aspects.
In particular, it provides more information,
frees us from technicalities like floors and ceilings,
and covers a wider set of toll functions and weight distributions,
stochastic recurrences included.
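A typical instance covered by such a theorem, checked numerically (an example of my own choosing, not from the paper): T(n) = 2T(n/2) + n, which the classical master theorem resolves to Theta(n log n).

```python
import math

def T(n, toll):
    # Exact evaluation of T(n) = 2 T(floor(n/2)) + toll(n), T(1) = 1.
    if n <= 1:
        return 1
    return 2 * T(n // 2, toll) + toll(n)

# With a linear toll, T(n) = Theta(n log n): for powers of two,
# T(n) = n log2 n + n, so the ratio below tends to 1.
for n in (2**10, 2**14, 2**18):
    print(n, T(n, lambda m: m) / (n * math.log2(n)))
```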
Upper bound on communication complexity of private information retrieval
by Andris Ambainis
Abstract:
We construct a scheme for private information retrieval
with k databases and communication complexity O(n^{1/(2k-1)}).
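For contrast, the trivial information-theoretic baseline with two non-colluding servers uses communication linear in n, which schemes such as the one in the abstract improve to O(n^{1/(2k-1)}). A toy sketch of that baseline (naming is my own):

```python
import secrets

def pir_two_server(db, i):
    # Two non-colluding servers hold the same n-bit database.  The user
    # sends a random subset S to server 1 and S xor {i} to server 2;
    # each server returns the parity of its queried subset, and xoring
    # the two answers recovers bit i without revealing i to either.
    n = len(db)
    s1 = [secrets.randbelow(2) for _ in range(n)]   # query to server 1
    s2 = s1[:]
    s2[i] ^= 1                                      # query to server 2
    a1 = sum(b & q for b, q in zip(db, s1)) % 2     # server 1's answer
    a2 = sum(b & q for b, q in zip(db, s2)) % 2     # server 2's answer
    return a1 ^ a2                                  # equals db[i]

db = [1, 0, 1, 1, 0, 0, 1, 0]
print([pir_two_server(db, i) for i in range(len(db))])   # recovers db
```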
Axiomatizations for the Perpetual Loop
by Wan Fokkink
Abstract:
Milner proposed an axiomatization for the Kleene star
in basic process algebra, in the presence of deadlock
and empty process, modulo bisimulation equivalence. In this paper,
Milner's axioms are adapted to no-exit iteration x*delta, which
executes x infinitely many times in a row, and
it is shown that this axiomatization is complete for
no-exit iteration in basic process algebra with deadlock and
empty process, modulo bisimulation.
Star-free picture expressions are strictly weaker than first-order logic
by Thomas Wilke
Abstract:
We exhibit a first-order definable picture language which we prove
is *not* expressible by any star-free picture expression, i.e.,
it is not star-free. Thus first-order logic over pictures is
strictly more powerful than star-free picture expressions are. This
is in sharp contrast with the situation with words: the well-known
McNaughton-Papert theorem states that a word language is expressible
by a first-order formula if and only it is expressible by a
star-free (word) expression.
The main ingredients of the non-expressibility result are a
Fraisse-style algebraic characterization of star-freeness for
picture languages and combinatorics on words.
Colouring Paths in Directed Symmetric Trees with Applications to WDM Routing
by Luisa Gargano, Pavol Hell, Stephane Perennes
Abstract:
Let T be a symmetric directed tree, i.e., an undirected tree with each
edge viewed as two opposite arcs. We prove that the minimum number of
colours needed to colour the set of all directed paths in T, so that
no two paths of the same colour use the same arc of T, is equal to
the maximum number of paths passing through an arc of T.
This result is applied to solve the all-to-all communication problem
in wavelength-division multiplexing (WDM) routing in
all-optical networks; that is, we give an efficient algorithm to
optimally assign wavelengths to all the paths of a tree network.
Moreover, we study conditions for a given set S of paths to be
coloured efficiently with the minimum possible number of
colours/wavelengths. It is known that the problem of colouring
a general subset of all possible paths in a symmetric directed tree
is an NP-hard problem.
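The quantity to which the theorem equates the number of colours, the maximum arc load, can be computed directly; a small sketch for the all-to-all instance (my own illustration, not the paper's algorithm):

```python
import itertools

def max_arc_load(n, tree_edges):
    # Maximum number of directed paths through any arc when every
    # ordered pair of nodes of the tree is joined by its unique path.
    adj = {v: [] for v in range(n)}
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    def path(u, v):
        # BFS parent pointers recover the unique tree path from u to v.
        parent = {u: None}
        queue = [u]
        for x in queue:
            for y in adj[x]:
                if y not in parent:
                    parent[y] = x
                    queue.append(y)
        p = [v]
        while parent[p[-1]] is not None:
            p.append(parent[p[-1]])
        return p[::-1]

    load = {}
    for u, v in itertools.permutations(range(n), 2):
        p = path(u, v)
        for a, b in zip(p, p[1:]):
            load[(a, b)] = load.get((a, b), 0) + 1
    return max(load.values())

# A star with 3 leaves: each arc (leaf, centre) carries the 3 paths
# from that leaf to the other nodes, so 3 wavelengths are needed.
print(max_arc_load(4, [(0, 1), (0, 2), (0, 3)]))
```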
A Primal-Dual Approach to Approximation of Node-Deletion Problems for Matroidal Properties
by Toshihiro Fujito
Abstract:
This paper is concerned with the polynomial time
approximability of node-deletion problems for hereditary properties.
It has been known that when such a graph property has only a finite
number of minimal forbidden graphs the corresponding node-deletion
problem can be efficiently approximated to within some constant factor
of the optimum. On the other hand when there exist infinitely many
minimal forbidden graphs no constant factor approximation has been
known except for the case of the Feedback Vertex Set (FVS) problem in
undirected graphs.
We will focus on such properties that are derived from matroids
definable on the edge set of any graph. It will be shown first that
all the node-deletion problems for such properties can be uniformly
formulated by a simple but non-standard form of integer
programming. The primal-dual approximation algorithm for all such
problems is then presented based on this formulation and the dual of
its linear programming relaxation. The performance ratios implied by
this approach will be analyzed and given in terms of the combinatorial
structures underlying the problems of interest.
It will be shown next, as an application of the current primal-dual
approach, that the FVS problem is not the sole exceptional case; i.e.,
there exist other graph (hereditary) properties with an infinite
number of minimal forbidden graphs, for which the node-deletion
problems are efficiently approximable to within a factor of 2, the
best constant factor known for either Vertex Cover or FVS. In fact,
we will show, there are infinitely many of them. Such properties are
derived from the notion of matroidal families of graphs and relaxing
the definitions for them. The infinite sequence of these properties
will be constructed with those for both VC and FVS problems at its
basis, thus providing a proper generalization of them. It will
also be indicated that our formulation for these node-deletion problems
introduces the integrality gap of at most 2 unlike the more natural
"covering" formulations for them.
Bisimulation for probabilistic transition systems: a coalgebraic approach
by Erik de Vink, Jan Rutten
Abstract:
The notion of bisimulation as proposed by Larsen and Skou for
discrete probabilistic transition systems is shown to coincide
with a coalgebraic definition in the sense of Aczel and Mendler
in terms of a set functor, which associates to a set its collection
of simple probability distributions. This coalgebraic formulation
makes it possible to generalize the concepts of discrete probabilistic
transition system and probabilistic bisimulation to a continuous
setting involving Borel probability measures. A functor M_1 is
introduced that yields for a metric space its collection of Borel
probability measures. Under reasonable conditions, this functor
exactly captures generalized probabilistic bisimilarity. Application
of the final coalgebra paradigm to M_1 then yields an internally fully
abstract semantical domain with respect to probabilistic bisimulation,
which is therefore well-suited for the interpretation of probabilistic
specification and stochastic programming concepts.
Some bounds on the computational power of Piecewise Constant Derivative systems
by Olivier Bournez
Abstract:
We study the computational power of Piecewise
Constant Derivative (PCD) systems. PCD systems are
dynamical systems defined by a piecewise constant
differential equation. They can be seen as
computational machines working on a continuous space
with a continuous time. We show that the computation
times of these machines can be measured either as a discrete value, called
``the discrete time'', or as a continuous value, called
``the continuous time''. We prove that PCD systems are equivalent to Turing machines
and to linear machines in finite discrete time. We prove that the languages recognized
by PCD systems in dimension d in finite continuous time
are precisely the
languages of the (d-2)^{th} level of the arithmetical
hierarchy. Hence we have a precise characterization of the
computational power of PCD systems, and we solve a
problem left open in [Asarin and Maler, 95].
On recognizable and rational formal power series
in partially commuting variables
by Manfred Droste, Paul Gastin
Abstract:
Kleene's theorem on the coincidence of regular and rational
languages in free monoids has been generalized by Sch\"utzenberger
to a description of the recognizable formal power series in
non-commuting variables over arbitrary semi-rings, and by
Ochma\'nski to a characterization of the recognizable languages in
trace monoids. Here we will describe the recognizable formal
power series over arbitrary semi-rings and in partially commuting
variables, i.e. over trace monoids. We prove that the
recognizable series are certain rational power series, which can
be constructed from the polynomials by using the operations sum,
product and a restricted star which is applied only to series for
which the elements in the support all have the same connected
alphabet. The converse is true if the underlying semi-ring is
commutative. It is shown that these assumptions are necessary.
This provides a joint generalization of both Sch\"utzenberger's
and Ochma\'nski's theorems.
On a conjecture of J. Shallit
by Julien Cassaigne
Abstract:
We solve a conjecture of J. Shallit related to the automaticity
function of a unary language, or equivalently to the first occurrence
function in a symbolic sequence. The answer is negative: the
conjecture is false, but it can be corrected by changing the constant
involved. The proof is based on a study of paths in the Rauzy graphs
associated to the sequence.
Symbolic Model Checking for Probabilistic Processes
by Christel Baier, Ed Clarke, Vassili Hartonas-Garmhausen, Marta Kwiatkowska, Mark Ryan
Abstract:
We introduce a symbolic model checking procedure for Probabilistic
Computation Tree Logic (PCTL) and its generalization PCTL^* over
labelled Markov chains as models. Model checking for probabilistic
logics typically involves solving linear equation systems as means
to ascertain the probability of a given (non-probabilistic) formula
holding in a state. Our algorithm is based on the idea of
representing the matrices used in the linear equation systems in
terms of an appropriate generalization of Binary Decision Diagrams
(BDDs) called Multi-Terminal Binary Decision Diagrams (MTBDDs)
introduced in Clarke et al [CFGMYZ93], and combining that
with BDD-based model checking. To enable precise probability
calculation for path formulas we use the standard construction of
the Rabin automaton (via Vardi \& Wolper construction of the Buchi
automaton followed by Safra's determinization) for an LTL formula
and, as suggested by de Alfaro [Alf96], construct the product
of thus obtained automaton with the Markov chain model. Expressing
these standard constructions as BDDs avoids explicit state space
construction, thus yielding an efficient model checking procedure.
On-line Routing in All-Optical Networks
by Yair Bartal, Stefano Leonardi
Abstract:
The paper deals with on-line routing in WDM optical networks.
A sequence of communication requests arrives over time, each is
a pair of nodes to be connected by a path.
The problem is to assign a wavelength and a path to each pair, so that
no two paths sharing a link are assigned the same wavelength.
The goal is to minimize the number of wavelengths necessary to establish all
connections.
We consider trees, trees of rings, and meshes topologies, previously
studied in the off-line case. We give on-line algorithms with logarithmic
competitive ratio for all these topologies. We give a matching lower
bound for meshes, and an almost matching lower bound for trees.
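As a concrete illustration of the on-line setting (a simplified sketch, not one of the paper's algorithms), consider the easiest topology, a line of links: each request occupies an interval of links, and the natural on-line heuristic is first-fit, which gives each arriving request the smallest wavelength unused on every link of its path. The function name and representation below are hypothetical.

```python
def first_fit_wavelengths(requests):
    """requests: list of (lo, hi) intervals on a line, using links lo..hi-1.
    On-line first-fit: assign each arriving request the smallest wavelength
    not in use on any link of its path."""
    link_use = {}                 # link index -> set of wavelengths in use
    assignment = []
    for lo, hi in requests:
        links = range(lo, hi)
        used = set().union(*(link_use.get(l, set()) for l in links))
        w = 0
        while w in used:          # smallest free wavelength
            w += 1
        assignment.append(w)
        for l in links:
            link_use.setdefault(l, set()).add(w)
    return assignment
```

First-fit is competitive on simple topologies but, as the lower bounds in the paper suggest, no on-line strategy can match the off-line optimum in general.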
Minimizing Diameters of Dynamic Trees
by Stephen Alstrup, Jacob Holm, Kristian de Lichtenberg, Mikkel Thorup
Abstract:
In this paper we consider an on-line problem related to minimizing
the diameter of a dynamic tree T. A new edge f is presented, and our
task is to delete the edge e of the induced cycle so as to minimize
the diameter of the resulting tree. Starting with a tree with n
nodes, we show how each such best swap can be found in worst-case
O(log^2 n) time. The problem was raised by Italiano and Ramaswami in
ICALP'94, where they gave an algorithm which uses O(n) time to
process each new edge.
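For intuition, the problem can be solved naively by trying every edge of the induced cycle and recomputing the diameter each time, which is far slower than the O(log^2 n) bound above but makes the task precise. A rough Python sketch (all names hypothetical):

```python
from collections import deque

def bfs_farthest(adj, src):
    """Return the vertex farthest from src and its distance (BFS)."""
    dist, q, far = {src: 0}, deque([src]), src
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far]:
                    far = v
                q.append(v)
    return far, dist[far]

def diameter(adj):
    """Tree diameter via the classical double-BFS trick."""
    a, _ = bfs_farthest(adj, next(iter(adj)))
    _, d = bfs_farthest(adj, a)
    return d

def tree_path(adj, s, t):
    """Vertices on the unique s-t path in a tree."""
    parent, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    path = [t]
    while path[-1] != s:
        path.append(parent[path[-1]])
    return path[::-1]

def best_swap(adj, f):
    """Add edge f = (u, v), then delete the cycle edge minimizing the
    diameter of the resulting tree.  Brute force: one diameter
    computation per candidate edge on the induced cycle."""
    u, v = f
    cycle = tree_path(adj, u, v)           # the induced cycle minus f
    adj[u].add(v); adj[v].add(u)           # insert the new edge f
    best = None
    for a, b in zip(cycle, cycle[1:]):     # try deleting each cycle edge
        adj[a].discard(b); adj[b].discard(a)
        d = diameter(adj)
        if best is None or d < best[0]:
            best = (d, (a, b))
        adj[a].add(b); adj[b].add(a)       # restore before the next trial
    d, (a, b) = best
    adj[a].discard(b); adj[b].discard(a)   # commit the best swap
    return (a, b), d
```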
Checking Properties of Polynomials
by Bruno Codenotti, Funda Ergun, Peter S. Gemmell, S. Ravi Kumar
Abstract:
In this paper we show how to construct efficient checkers for
programs that claim to compute properties of polynomials.
The properties we consider are roots, norms, and other
analytic/algebraic functions of polynomials.
We show how to check a program \Pi that computes a
specific root (e.g., the largest) or a subset of roots
of a given polynomial p which is available as a black box.
The checkers, in addition to never computing the root(s)
themselves, strive to minimize both the running time
(preferably o(deg^2 (p))) and the number of black box
evaluations of p (preferably o(deg (p))).
We obtain deterministic checkers when a separation bound
between the roots is known and probabilistic checkers
when the roots can be arbitrarily close.
We then extend the checkers to handle the situations when
the program \Pi returns an approximation to the root and
when the evaluation of the polynomial p is approximate.
Our results translate into efficient checkers for
matrix spectra computations both in the exact and approximate
settings, operating in the library model of [BLR93].
Next we show that the usual characterization
of norms using the triangle inequality
is not suited for self-testing in the exact case, but
surprisingly, could be used in the approximate case.
Our results are complementary to most of the existing
results on testing polynomials.
The testers in the latter have the goal of determining
whether a program computes a polynomial of given degree,
whereas we are interested in checking the properties of a
given polynomial. The nature of these properties
entails the use of techniques that are new to checking from
several disciplines like numerical and complex analysis,
geometry, and in particular, geometry of polynomials.
Exact Analysis of Dodgson Elections: Lewis Carroll's 1876 Voting System is Complete for Parallel Access to NP
by Edith Hemaspaandra, Lane Hemaspaandra, Joerg Rothe
Abstract:
In 1876, Lewis Carroll proposed a voting system in which the
winner is the candidate who with the fewest changes in voters'
preferences becomes a Condorcet winner---a candidate who beats
all other candidates in pairwise majority-rule elections.
Bartholdi, Tovey, and Trick provided a
lower bound---NP-hardness---on the computational complexity of
determining the election winner in Carroll's system. We provide a
stronger lower bound and an upper bound that matches our lower bound.
In particular, determining the winner in Carroll's system is complete
for parallel access to NP, i.e., it is complete for the class
Theta_2, of which it is thus the most natural complete problem
known.
It follows that determining the winner in Carroll's elections
is not NP-complete unless the polynomial hierarchy collapses.
We note that, in contrast with our Theta_2 completeness result,
for each fixed integer k there is a polynomial-time algorithm
to compute the election winner, in Carroll's system, for elections
having at most k candidates.
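The easy ingredient of Carroll's system, testing whether a candidate is already a Condorcet winner, is a simple polynomial-time pairwise-majority computation; the Theta_2-complete part is minimizing the number of preference changes needed. A sketch of the Condorcet test (hypothetical names):

```python
def condorcet_winner(prefs):
    """prefs: list of ballots, each a ranking (best candidate first).
    Returns the Condorcet winner, i.e. the candidate who beats every
    other candidate in a pairwise majority contest, or None if no such
    candidate exists (ties count as non-wins)."""
    candidates = prefs[0]
    def beats(a, b):
        # a beats b if a strict majority of voters rank a above b
        wins = sum(1 for ballot in prefs
                   if ballot.index(a) < ballot.index(b))
        return wins > len(prefs) / 2
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None
```

A three-ballot cycle A > B > C > A shows that no Condorcet winner need exist, which is exactly what makes Carroll's "fewest changes" score nontrivial.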
The Equivalence Problem for Deterministic
Pushdown Automata is Decidable
by Geraud Senizergues
Abstract:
The equivalence problem for deterministic pushdown automata is shown to be decidable.
We exhibit a complete formal system for deducing equivalent pairs of
deterministic rational series on the alphabet associated with a dpda M.
Distributed Processes and Location Failures
by James Riely, Matthew Hennessy
Abstract:
Site failure is an essential aspect of distributed systems;
nonetheless its effect on programming language semantics remains
poorly understood. To model such systems, we define a process
calculus in which processes are run at distributed locations. The
language provides operators to kill locations, to test the status
(dead or alive) of locations, and to spawn processes at remote
locations. Using a variation of bisimulation, we provide alternative
characterizations of strong and weak barbed congruence for this
language, based on an operational semantics that uses configurations
to record the status of locations. We then derive a second, symbolic
characterization in which configurations are replaced by logical
formulae. In the strong case the formulae come from a standard
propositional logic, while in the weak case a temporal logic with past
time modalities is required. The symbolic characterization
establishes that, in principle, barbed congruence for such languages
can be checked efficiently using existing techniques.
Efficient Array Partitioning
by Sanjeev Khanna, S. Muthukrishnan, Steven Skiena
Abstract:
We consider the problem of partitioning an array of n items
into p intervals so that the maximum weight
of the intervals is minimized. The best previously known running time
for this problem is O(np). In this paper, we
present two improved algorithms for this problem:
one runs in time O(n+p^2(\log n)^2) and the other runs in
time O(n\log n). The former is optimal whenever p \leq \sqrt{n}/\log n,
and the latter is near-optimal for arbitrary p.
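For comparison, the one-dimensional problem already admits a simple baseline: binary-search the answer and greedily test feasibility. This runs in O(n log(sum of weights)) time, so it is slower than the algorithms above, but it pins down the problem. A sketch with hypothetical names:

```python
def can_partition(weights, p, limit):
    """Greedy feasibility test: can `weights` be cut into at most p
    consecutive intervals, each of total weight <= limit?"""
    intervals, current = 1, 0
    for w in weights:
        if w > limit:                 # a single item already exceeds limit
            return False
        if current + w > limit:       # start a new interval
            intervals += 1
            current = w
        else:
            current += w
    return intervals <= p

def min_max_interval(weights, p):
    """Binary search over the answer; the greedy check is exact because
    feasibility is monotone in the limit."""
    lo, hi = max(weights), sum(weights)
    while lo < hi:
        mid = (lo + hi) // 2
        if can_partition(weights, p, mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```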
We consider the natural generalization of this partitioning
to two dimensions, where an n \times n array of items is to be
partitioned into p^2 blocks by partitioning the rows and columns
into p intervals each and considering the blocks induced
by this partition. The problem is to find that partition
which minimizes the maximum weight among the resulting blocks.
This problem is known to be NP-complete. Here we provide a
simple, polynomial-time algorithm that determines a solution
at most O(1) times the optimum; the previously best approximation
ratio was O(\sqrt{p}).
Both the results above are proved for the case when the weight of
an interval or block is the sum of the elements in it. These problems
arise in load balancing for parallel machines and data partitioning
in parallel languages. Applications in motion estimation by block
matching in video and image compression give rise to the dual problem,
that of minimizing the number of dividers p so that the maximum
weight of a block is at most \delta. We give an O(\log n)
approximation algorithm for this problem.
All our results for two dimensional array partitioning extend
to any higher fixed dimension.
Dynamic algorithms for graphs of bounded treewidth
by Torben Hagerup
Abstract:
The formalism of monadic second-order (MS) logic has been
very successful in unifying a large number of
algorithms on graphs of bounded treewidth.
We extend the elegant framework of MS
logic from static problems to dynamic problems, in
which queries about MS properties of a graph of
bounded treewidth are interspersed with updates
of vertex and edge labels.
This allows us to unify and occasionally strengthen
a number of scattered previous results obtained in
an ad-hoc manner and to enable solutions to a
wide range of additional problems to be
derived automatically.
As an auxiliary result of independent interest, we dynamize
data structures of Chazelle and of Alon and Schieber for
answering queries about sums of labels along paths in a
tree with vertices labeled by elements of a semigroup.
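The queries in question can be answered naively, without Chazelle's or Alon and Schieber's structures, by walking from both endpoints up to their lowest common ancestor and combining labels with the semigroup operation; label updates are then trivial, but each query costs time proportional to the path length. A sketch (hypothetical names; the combination order shown suits commutative operations):

```python
from collections import deque

def preprocess(adj, root):
    """BFS to record parent pointers and depths for the naive walker."""
    parent, depth = {root: None}, {root: 0}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v], depth[v] = u, depth[u] + 1
                q.append(v)
    return parent, depth

def path_sum(u, v, parent, depth, labels, op=lambda a, b: a + b):
    """Combine the labels on the u-v path with semigroup operation `op`.
    Naive O(path length) per query; updating a label is O(1)."""
    total = None
    def absorb(x):
        nonlocal total
        total = x if total is None else op(total, x)
    while depth[u] > depth[v]:          # lift the deeper endpoint
        absorb(labels[u]); u = parent[u]
    while depth[v] > depth[u]:
        absorb(labels[v]); v = parent[v]
    while u != v:                       # lift both until they meet
        absorb(labels[u]); absorb(labels[v])
        u, v = parent[u], parent[v]
    absorb(labels[u])                   # include the meeting vertex (LCA)
    return total
```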
Constructing Big Trees from Short Sequences
by Peter Erdos, Michael Steel, Laszlo Szekely, Tandy Warnow
Abstract:
The construction of evolutionary trees is a fundamental problem in
biology. All current polynomial time methods (to our knowledge) are
unlikely to recover the topology of the true tree from sequences of
realistic lengths (bounded, perhaps, by 10,000 nucleotides) for large
sets of widely divergent sequences, even if such methods are known to be
able to reconstruct the correct topology given long enough sequences.
Therefore, all current polynomial time methods may be incapable of
addressing difficult and important data sets such as the {\em Tree of
Life}. We address this problem and present a new polynomial time
algorithm for reconstructing evolutionary trees called the {\em Short
Quartets Method}. Our method computes topologies of subtrees induced by
``short'' quartets, which avoid the problematic influence of pairs of
widely divergent sequences. We reconstruct a tree on the entire set of
sequences from these inferred subtrees when they are compatible. We
describe a simple implementation of this method based upon applying the
four-point method to derived distances, and study its convergence rate
for the Cavender-Farris model of evolution. We show that for each fixed
range of the mutation probabilities associated to the model tree, our
method has a convergence rate that is at worst polynomial, while it is
polylogarithmic for typical trees. In STOC '96, Farach and Kannan
studied a method for inferring Cavender-Farris trees, and have recently
extended their analysis to include the convergence rate for the topology
estimation; their analysis shows that their method may require
superpolynomial length sequences to obtain accurate topologies with high
probability. This indicates that our method will produce
the correct topology from shorter sequences than can be guaranteed for
the Farach-Kannan method. We also compare our method to an exact
algorithm for the L_{\infty}-nearest tree, and show that on any input
for which the exact algorithm can be guaranteed to be correct, so can
ours. We use the probabilistic method and show that our method requires
only polylogarithmic length sequences for random trees, for each fixed
range of the mutation rates. We extend our method to the reconstruction
of more general r-state model trees.
On Characterization of Escrow Encryption Schemes
by Yair Frankel, Moti Yung
Abstract:
Designing escrow encryption schemes is an area of much recent
interest. Due to its importance to governments and business
organizations it may become crucial to the design and deployment of
cryptographic systems. However, the basic design issues,
characterizations and difficulties of escrow systems are not fully
understood or specified at the moment. Escrow encryption differs from
a typical encryption system since: (1) there are two potential
receivers (the actual receiver and law enforcement which may
selectively open encrypted messages for crime prevention); (2) there
is, therefore, a need for ``on-line assurance (verification) of
'compliance' with the specification'' to guarantee that a potential
law-enforcement receiver will indeed get the message upon escrow; and
(3) there is a need to assure compliance of law enforcement (e.g., to
avoid framing of honest users).
Without a theoretical underpinning of the concept, we are not well
equipped to design solid escrow schemes or to understand where the
difficulties lie. The goal of this paper is to characterize one aspect of
escrow encryption schemes, namely their requirements of
``compliance''. We model public-key escrow systems formally, based on
available requirements. We then show that basic escrow schemes with
certain necessary requirements are related (in fact, equivalent) to
schemes with certain security levels that have been extensively
investigated by theoretical cryptography.
We show that the combination of on-line compliance assurance by the
sender and two different receivers (intended receiver and potential
law-enforcement) in the context of escrow encryption system is
equivalent to a ``chosen ciphertext secure public-key system'' (i.e.,
one secure against an adversary who has access to a decryption oracle
before trying to decipher a target ciphertext). If we further add
measures that will force law enforcement to comply as well -- namely,
if we require messages to be associated with a ``context'' (which may
include the sender, the receiver and a time unit), as well as opening
of messages to law enforcement only within a given (authorized)
context, then we obtain a system equivalent to what is known as
``non-malleable encryption schemes''. The reductions of the known
secure schemes to their escrow-related schemes are quite direct and
reduce the ``security parameter'' by at most a linear factor, showing
a strong proximity between the systems. The characterization also
leads us to design schemes that exploit or emanate from the
equivalence.
key words: Escrow encryption system models, compliance and compliance
verification, law enforcement, secure public-key schemes, chosen
ciphertext security, non-malleable security, characterization of
cryptosystems.
Tilings and quasiperiodicity
by Bruno Durand
Abstract:
Quasiperiodic tilings are those tilings in which finite patterns appear
regularly in the plane. This property is a generalization of
periodicity; it was introduced to represent quasicrystals and it
is also motivated by the study of quasiperiodic words. We prove that
if a tile set can tile the plane, then it can tile the plane
quasiperiodically ---a surprising result that does not hold for
periodicity. In order to compare the regularity of quasiperiodic
tilings, we introduce and study a quasiperiodicity function and
prove that it is of the form x \mapsto x +c if and only if the
considered tiling is periodic. Finally, we prove that if a tile set
can be used to form a quasiperiodic tiling which is \emph{not}
periodic, then it can form an uncountable number of tilings.
On Explicit Substitution and Names
by Eike Ritter, Valeria de Paiva
Abstract:
Calculi with explicit substitutions have found widespread acceptance
as a basis for abstract machines for functional languages.
In this paper we investigate the relations between variants with de
Bruijn-numbers, with variable names, with reduction based on raw
expressions and calculi with equational judgements.
We show the equivalence between these variants, which is crucial
in establishing the correspondence between the semantics of the
calculus and its implementations.
Efficient Parallel Graph Algorithms For Coarse Grained Multicomputers and BSP
by Edson Caceres, Frank Dehne, Afonso Ferreira, Paola Flocchini, Ingo Rieping, Allesandro Roncato, Nicola Santoro, Siang Song
Abstract:
In this paper, we present deterministic parallel algorithms for the
coarse grained multicomputer (CGM) and BSP which solve the following
well known graph problems:
(1) list ranking,
(2) Euler tour construction,
(3) computing the connected components and spanning forest,
(4) lowest common ancestor preprocessing,
(5) tree contraction and expression tree evaluation,
(6) computing an ear decomposition or open ear decomposition,
(7) 2-edge connectivity and biconnectivity (testing and component computation), and
(8) chordal graph recognition (finding a perfect elimination ordering).
The algorithms for Problems 1-7 require O(log p) communication rounds
and linear sequential work per round.
Our results for Problems 1 and 2 hold for arbitrary ratios n/p, i.e. they
are fully scalable, and for Problems 3-7 it is assumed that n/p >=
p^\epsilon, \epsilon > 0, which is true for all commercially available
multiprocessors.
We view the algorithms presented as an important step towards the final goal of
O(1) communication rounds. Note that the number of communication rounds obtained
in this paper is independent of n and grows only very slowly with respect to p.
Hence, for most practical purposes, the number of communication rounds can be
considered as constant.
The result for Problem 1 is a considerable improvement over those previously
reported. The algorithms for Problems 2-7 are the first practically relevant parallel
algorithms for these problems for commercially available coarse grained parallel
machines.
An abstract data type for real numbers
by Pietro Di Gianantonio
Abstract:
We present a calculus having real numbers as a basic data type.
The calculus is defined by its denotational semantics. We prove the
universality of the calculus. We show that the definition of an
operational semantics is problematic. We discuss this problem and
present a possible solution.
keywords: real number computability, domain theory,
denotational and operational semantics, abstract data types
Molecular Computing, Bounded Nondeterminism, and Efficient Recursion
by Richard Beigel, Bin Fu
Abstract:
The maximum number of strands used is an important measure of a
molecular algorithm's complexity. This measure is also called the
space used by the algorithm. We show that every NP problem that
can be solved with b(n) bits of nondeterminism can be solved by
molecular computation in a polynomial number of steps, with four test
tubes, in space 2^{b(n)}. In addition, we identify a large class of
recursive algorithms that can be implemented using bounded
nondeterminism. This yields improved molecular algorithms for
important problems like 3-SAT, independent set, and 3-colorability.
The Geometry of Orthogonal Reduction Spaces
by Zurab Khasidashvili, John Glauert
Abstract:
We investigate mutual dependencies of subexpressions of a
computable expression, in orthogonal rewrite systems, and identify
conditions under which they can be computed independently and concurrently. To this
end, we introduce concepts familiar from ordinary Euclidean Geometry
(such as basis, projection, distance, etc.) for reduction spaces. We
show how a basis at an expression can be constructed so that any
reduction starting from that expression can be decomposed as the sum
of its projections on the axes of the basis. To make the concepts
computationally more relevant, we relativize them w.r.t. stable sets
S of results (such as the set of normal forms, head-normal forms, and
weak-head-normal forms, in the Lambda Calculus), and show that an
optimal concurrent computation of an expression w.r.t. S consists of
optimal computations of its S-independent subexpressions. All
these results are obtained for Stable Deterministic Residual
Structures, which are Abstract Reduction Systems with an axiomatized
residual relation, and model all orthogonal rewrite systems.
Discrete-time control for rectangular hybrid automata
by Thomas A. Henzinger, Peter W. Kopke
Abstract:
The main contribution of this paper is the realization that the discrete-time
control of continuous-time systems not only is often a more adequate model of
reality than continuous-time control, but is amenable to algorithmic methods
for a much wider class of systems.
While continuous-time controllers can be synthesized automatically only for
timed automata, we show that discrete-time, or sampling, controllers can be
synthesized automatically for all rectangular hybrid automata.
Rectangular hybrid automata model digital control programs of analog plant
environments.
We study rectangular hybrid automata where the plant state evolves
continuously in real-numbered time, but the controller samples the plant
state and changes the control state discretely, only at the integer points in
time.
We prove that rectangular hybrid automata have finite bisimilarity quotients
when all control transitions happen at integer times, even if the constraints
on the derivatives of the variables vary between control states.
This is in sharp contrast with the conventional model where control
transitions may happen at any real time, and already the reachability
problem is undecidable.
Based on the finite bisimilarity quotients, we give an exponential algorithm
for the symbolic sampling-controller synthesis of rectangular automata.
We show our algorithm to be optimal by proving the problem to be
EXPTIME-hard.
We also show that rectangular automata form a maximal class of systems for
which the sampling-controller synthesis problem can be solved
algorithmically.
The Theory of Vaccines
by Massimo Marchiori
Abstract:
Despite the major role that modularity occupies in computer science, all
the known results on modular analysis only treat particular problems, and
there is no general unifying theory. In this paper we provide such a
general theory of modularity. First, we study the space of the criteria for
modularity (the so-called modularity space), and give results on its
complexity. Then, we introduce the notion of vaccine and show how it can
be used to completely analyze the modularity space. It is also shown how
vaccines can be effectively used to solve a variety of other modularity
problems, providing the best solutions. As an application, we successfully
apply the theory to the study of modularity for term rewriting, giving for
the first time optimality results, and completely solving the modularity
problem for the major properties of rewriting.
KEYWORDS: term rewriting, modularity, optimality, program verification,
program development.
Refining and Compressing Abstract Domains
by Roberto Giacobazzi, Francesco Ranzato
Abstract:
In the context of Cousot and Cousot's abstract interpretation theory,
we present a general framework to define, study and handle operators
modifying abstract domains. In particular, we introduce the notions
of operators of refinement and compression of abstract domains: A
refinement enhances the precision of an abstract domain; a compression
operator (compressor) can exist relative to a given refinement,
and it simplifies in the best possible way a domain for that refinement.
The adequacy of our notions is shown by the fact that most of the
existing operators on abstract domains fall within our framework (e.g.,
the well-known refinement of reduced product and the corresponding
compressor of complementation). A precise relationship of adjunction
between refinements and compressors is also given, justifying why
compressors can be understood as inverses of refinements.
Constrained Bipartite Edge Coloring with Applications to Wavelength Routing
by Christos Kaklamanis, Pino Persiano, Thomas Erlebach,
Klaus Jansen
Abstract:
Motivated by the problem of efficient routing in all-optical
networks, we study a constrained version of
the bipartite edge coloring problem. We show that if the
edges adjacent to a pair of opposite vertices
of an L-regular bipartite graph are already colored with
\alpha L different colors, then the rest of the edges
can be colored using at most (1+\alpha/2)L colors.
We also show that this bound is tight by constructing instances
in which (1+\alpha/2)L colors are indeed necessary.
We also obtain tight bounds on the number of colors
that each pair of opposite vertices can see.
Using the above results, we obtain a polynomial time greedy algorithm that
assigns proper wavelengths to a set of requests of maximum load L
per directed fiber link on a directed fiber tree using at most
5/3L wavelengths. This improves previous results of [ragh,focs,esa,eric].
We also obtain that no greedy algorithm can in general use less than
5/3L wavelengths for a set of requests of load L in a directed fiber tree,
and thus that our algorithm is optimal in the class of greedy algorithms
which includes the algorithms presented in [ragh,focs,esa,eric].
Periodic and Non-periodic Min-Max Equations
by Uwe Schwiegelshohn, Lothar Thiele
Abstract:
In this paper we address min-max equations for periodic and
non-periodic problems. In the non-periodic case a simple algorithm is
presented to determine whether a graph has a potential satisfying the
min-max equations. This method can also be used to solve a more
general min-max problem on periodic graphs. Also some results
regarding the uniqueness of solutions in the periodic case are
given. Finally, we address a more general quasi periodic problem and
provide an algorithm for its solution.
Bisimulation equivalence is decidable for one-counter processes
by Petr Jancar
Abstract:
It is shown that bisimulation equivalence is decidable
for the processes generated by (nondeterministic)
pushdown automata where the pushdown in effect behaves like
a counter. Also regularity, i.e. bisimulation
equivalence with some finite-state process, is shown to
be decidable for the mentioned processes.
Game Theoretic Analysis of Call-by-value Computation
by Kohei Honda, Nobuko Yoshida
Abstract:
We present a general semantic universe of call-by-value computation
based on elements of game semantics, and validate its appropriateness
as a semantic universe by the full abstraction result for call-by-value
PCF, a generic typed programming language with call-by-value
evaluation. The key idea is to consider the distinction between
call-by-name and call-by-value as that of the structure of information
flow, which determines the basic form of games. In this way the
call-by-name computation and call-by-value computation arise as two
independent instances of sequential functional computation with
distinct algebraic structures. We elucidate the type structures of the
universe following the standard categorical framework developed in the
context of domain theory. Mutual relationship between the presented
category of games and the corresponding call-by-name universe is also
established.
A Complete Characterization of the Path Layout Construction Problem for ATM Networks with Given Hop Count and Load
by Tamar Eilam, Michele Flammini, Shmuel Zaks
Abstract:
We investigate the time complexity of deciding the existence of
layouts of virtual paths in high-speed networks that enable a connection
from one vertex to all others, with maximum hop count h
and maximum edge load l, for a stretch factor of one.
We prove that the problem of determining the existence of such layouts is
NP-complete for all given values of h and l except for the
cases h=2, l=1 and h=1, any l, for which we give polynomial-time layout constructions.
Extensions for cases of a stretch factor greater than one are also
discussed.
Randomness-Efficient Non-Interactive Zero-Knowledge
by Alfredo De Santis, Giovanni Di Crescenzo, Giuseppe Persiano
Abstract:
The model of Non-Interactive Zero-Knowledge makes it possible to obtain minimal
interaction between prover and verifier in a zero-knowledge proof if
a public random string is available to both parties. In this paper
we investigate upper bounds for the length of the random string
for proving one and many statements, obtaining the following results:
\begin{itemize}
\item We show how to prove in non-interactive perfect zero-knowledge
any polynomial number of statements using a random string of fixed length,
that is, not depending on the number of statements.
Previously, such a result was known only in the case of computational
zero-knowledge, and for all languages in NP.
First we give a protocol for the language of quadratic non-residuosity,
and then a protocol for a general class of languages, which we call
Simulator-Rankable languages and which we show includes all
known languages having a non-interactive perfect zero-knowledge proof system.
\item Under the quadratic residuosity assumption, we show how to prove
any NP statement in non-interactive zero-knowledge on a random string of length
O(nk), where n is the size of the statement and k is the security
parameter, which improves the previous best construction by a factor of O(k).
\end{itemize}
On the Dynamics of Sharing Graphs
by Andrea Asperti, Cosimo Laneve
Abstract:
It has recently been conjectured by several authors
that the number of fan-annihilations in Lamping's optimal
reduction algorithm can provide a reasonable lower bound
to the ``intrinsic'' computational complexity of lambda-terms.
In this paper, we show how to describe fan annihilations in
terms of statics, giving a precise and simple description of
these operations as suitable paths in the initial term. As a
corollary, we bring more evidence to the above mentioned
conjecture, proving that the complexity of a typical environment
machine (Krivine machine) cannot be better than the number
of fan-annihilations in Lamping's algorithm.
Symbolic Reachability Analysis of FIFO Channel Systems with Nonregular Sets of Configurations
by Ahmed Bouajjani, Peter Habermehl
Abstract:
We address the verification problem of FIFO-channel systems.
We apply the symbolic analysis principle to these systems.
We represent their sets of states (configurations)
using structures called CQDD's combining finite-state automata with
constraints on the number of occurrences of symbols.
These constraints are expressed in Presburger arithmetic.
We show that CQDD's allow forward and backward
reachability analysis of systems with nonregular sets of configurations.
Moreover, we prove that CQDD's allow computing the exact
effect of the repeated execution of any fixed cycle
in the transition graph of a system.
We use this fact to define a generic reachability
analysis semi-algorithm parametrized by a set of cycles \Theta.
Given a set of configurations,
this semi-algorithm performs a least fixpoint calculation
to construct the set of its successors (or predecessors).
At each step, this calculation is accelerated
by considering the cycles in \Theta as additional ``meta-transitions''
in the transition graph, generalizing the approach adopted in [BG96].
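The acceleration idea in this abstract can be illustrated on an explicit finite graph. The sketch below is only a toy: real CQDD-based analysis manipulates symbolic (possibly infinite) sets of FIFO-channel configurations, whereas here the states, transitions, and function names are our own hypothetical choices. It shows the least-fixpoint successor computation, with chosen cycles applied as extra "meta-transitions" at each step.

```python
# Illustrative sketch only (not the paper's CQDD machinery): a least-fixpoint
# successor computation over an explicit finite transition graph, accelerated
# by treating chosen cycles as extra "meta-transitions".

def successors(configs, transitions, meta_transitions=()):
    """Least fixpoint of the successor relation; cycle meta-transitions
    are applied at each step to speed up convergence."""
    reached = set(configs)
    frontier = set(configs)
    while frontier:
        step = set()
        for c in frontier:
            step.update(transitions.get(c, ()))
            for cycle in meta_transitions:   # apply a whole cycle in one step
                step.update(cycle.get(c, ()))
        frontier = step - reached
        reached |= frontier
    return reached

# Tiny transition graph a -> b -> c, with a cycle back to a summarised
# as a meta-transition mapping c directly to a.
T = {"a": {"b"}, "b": {"c"}}
cycle = {"c": {"a"}}
print(sorted(successors({"a"}, T, [cycle])))  # ['a', 'b', 'c']
```

On an infinite symbolic state space the meta-transitions are what turn a diverging fixpoint iteration into a (semi-)terminating one; on this finite toy they merely shortcut the computation.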
Efficient Splitting and Merging Algorithms for
Order Decomposable Problems
by Roberto Grossi, Giuseppe F. Italiano
Abstract:
Let S be a set whose items are sorted with respect to d>1 total
orders \prec_1, \ldots, \prec_d, and which is subject to dynamic
operations, such as insertions of a single item, deletions of a single
item, split and concatenate operations performed according to any
chosen order \prec_i (1 \leq i \leq d). This generalizes to
dimension d>1 the notion of concatenable data structures, such as
2--3--trees, which support splits and concatenates under a single
total order. The main contribution of this paper is a general and
novel technique for solving order decomposable problems on S, which
yields new and efficient concatenable data structures for dimension
d>1. By using our technique we maintain S with the following time
bounds: O(\log p) for the insertion or the deletion of a single
item, where p is the number of items currently in S;
O(p^{1-1/d}) for splits and concatenates along any order;
O(p^{1-1/d}) plus an output sensitive cost for rectangular range
queries. The space required is O(p).
We provide several applications of our technique. First, we present
new multidimensional data structures implementing two--dimensional
priority queues, two--dimensional search trees, and concatenable
interval trees: these data structures allow us to improve many
previously known results on decomposable problems under split and
concatenate operations, such as membership query, minimum--weight
item, range query, and convex hulls. Second, and perhaps a bit
surprisingly, we reduce some dynamic graph problems to order
decomposable problems. Finally, we show how to make our technique for
decomposable problems suitable for efficient external memory
implementation.
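The operations this abstract supports can be pictured with a deliberately naive toy: a set of items carrying d = 2 keys, splittable along either order. This sketch costs O(p) per operation and so does not achieve the paper's O(log p) updates and O(p^{1-1/d}) splits; the class and method names are ours, not the authors'.

```python
# Toy illustration of a set maintained under d = 2 total orders, supporting
# a split along either chosen order.  Naive O(p) version; the paper's data
# structures achieve O(log p) updates and O(p^{1-1/d}) splits/concatenates.

class BiOrderedSet:
    def __init__(self, items=()):
        self.items = set(items)          # each item is a (key1, key2) pair

    def insert(self, item):
        self.items.add(item)

    def split(self, i, pivot):
        """Split along order i (0 or 1): items whose i-th key is <= pivot
        stay in this set; the rest are returned as a new set."""
        hi = {x for x in self.items if x[i] > pivot}
        self.items -= hi
        return BiOrderedSet(hi)

s = BiOrderedSet({(1, 9), (2, 5), (3, 1)})
t = s.split(1, 4)                        # split along the second order at 4
print(sorted(s.items), sorted(t.items))  # [(3, 1)] [(1, 9), (2, 5)]
```

The point of the example is only the interface: a single set answering split requests under several incompatible total orders, which is exactly what single-order concatenable structures such as 2--3--trees cannot do directly.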
Basic Observables for Processes
by Michele Boreale, Rocco De Nicola, Rosario Pugliese
Abstract:
A general framework is proposed that can be used to define
behavioural preorders over processes by considering the context
closures with respect to basic observables. These provide
information about the initial communication capabilities of processes
and about their possibility of engaging in an infinite internal
chattering. After presenting the general framework, we concentrate
on a CCS-like language and show that some of the observables-based
precongruences do correspond to behavioural preorders long studied
in the literature. In particular, we exhibit alternative
(observables-based) characterizations for the must preorder of
De Nicola and Hennessy and for the fair must preorder introduced by
Brinksma, Rensink and Vogler and by Cleaveland and Natarajan.
The coincidence proofs shed light on the differences between must and
fair must preorder and on the role played in their definition by tests
for internal chattering.
The expressive power of unique total stable model semantics
by Francesco Buccafurri, Sergio Greco, Domenico Sacca
Abstract:
Total stable (T-stable) models give a simple, yet powerful semantics to logic
programming with negation. A peculiarity of T-stable models is
their multiplicity, and whether this is an
advantage or a disadvantage has been a matter of long discussion. In this paper we
determine the precise expressive power of T-stable models when uniqueness is
required, i.e., a query gives an answer if and only if there exists a unique
T-stable model. Our main result is that DATALOG queries with unique T-stable model
semantics express exactly all decision problems with unique solutions.
Next, we analyze the expressive power of subclasses of DATALOG queries for which
there exists at most one T-stable model. A surprising result
is that any practical language for such queries has the same power as
DATALOG with stratified negation.
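For readers unfamiliar with the benchmark semantics mentioned at the end, stratified negation evaluates a program stratum by stratum, completing each least fixpoint before any rule negates its predicates. A minimal sketch under an example program of our own choosing:

```python
# Sketch of stratified-negation evaluation for a hypothetical two-stratum
# program (names and data are ours, not the paper's):
#   stratum 1:  reach(X) :- edge(a, X).   reach(Y) :- reach(X), edge(X, Y).
#   stratum 2:  unreach(X) :- node(X), not reach(X).

edges = {("a", "b"), ("b", "c")}
nodes = {"a", "b", "c", "d"}

# Stratum 1: least fixpoint of reach, computed to completion first.
reach = set()
changed = True
while changed:
    new = {y for (x, y) in edges if x == "a" or x in reach}
    changed = not new <= reach
    reach |= new

# Stratum 2: negation is applied only to the finished lower stratum.
unreach = {n for n in nodes if n not in reach}
print(sorted(reach), sorted(unreach))  # ['b', 'c'] ['a', 'd']
```

Because each stratum is closed before its negation is consulted, such a program has a single canonical model, which is why the class of uniquely T-stable queries can plausibly be compared against it.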
Monadic simultaneous rigid E-unification and related problems.
by Yuri Gurevich, Andrei Voronkov
Abstract:
We study the monadic case of a decision problem known as simultaneous
rigid E-unification. We show its equivalence to an extension of word
equations. We prove decidability and complexity results for special
cases of this problem.
Labelled Reductions, Runtime Errors, and Operational Subsumption
by Laurent Dami
Abstract:
We propose a general framework for the operational semantics of
errors. Errors are represented explicitly by a specific constant \err
and reduction rules are completed in such a way that all ``wrong'' or
``stuck'' terms reduce to \err. As any other value, \err\ can be
passed around, sometimes in a lazy way, and therefore an error
occurring inside a term is not necessarily propagated to the top
level; a term is considered ``erroneous'' if and only if it always
generates \err. However, we require that errors are ``absorbing'',
i.e. any attempt to interact with an error yields an error again
(there is no way to recover from an error). Because of these
requirements, \err\ naturally becomes the TOP element in the
semantics. Hence \err\ has a special status, distinct from other,
usual constants, and also distinct from divergence. There are many
advantages to this approach:
- the resulting semantic structure is a lattice, which is much
simpler than the various forms of cpo models which are
currently prevalent in semantics, and comes back to the
original structures of Scott.
- by comparing terms on their error-generation behaviour, we
define an operational ordering called SUBSUMPTION
which gives a precise meaning to the notion of ``substitutability''
or ``safe replacement'' often used informally in the
object-oriented literature.
- we build a type system including a type \top which CONTAINS the
error element. This allows us to type more terms:
unification of some type variables with \top yields valid
typings for some programs that would be rejected by most other
type systems.
For the technical development below we make heavy use of LABELLED
REDUCTIONS, an old idea used in the lambda-calculus to restrict the
interaction behaviour of a term to a finite number of steps. Here this
is generalised in an abstract way to other reduction systems.
Labelled reductions allow us to classify both terms and contexts
according to the number of interaction steps they can perform, and
therefore introduce an operational notion of finite approximation.
This in turn can be used as an alternative to the embedding-projection
pairs of domain theory for solving recursive type equations.
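The "absorbing error" discipline described above can be made concrete with a toy evaluator, written entirely with names of our own invention: every stuck term reduces to an explicit error value, and any interaction with that value yields it again.

```python
# Hedged sketch of absorbing errors: a toy arithmetic evaluator in which
# every "wrong" or "stuck" term reduces to ERR, and ERR absorbs any
# further interaction (no recovery).  Term shapes and names are ours.

ERR = "err"

def ev(t):
    if isinstance(t, int):
        return t
    if isinstance(t, tuple) and len(t) == 3 and t[0] == "add":
        a, b = ev(t[1]), ev(t[2])
        if a == ERR or b == ERR:      # errors are absorbing
            return ERR
        return a + b
    return ERR                         # any other (stuck) term is an error

print(ev(("add", 1, ("add", 2, 3))))   # 6
print(ev(("add", 1, "oops")))          # err
```

Since ERR absorbs every context, comparing terms by whether a surrounding context can drive them to ERR is well defined; that comparison is what the abstract calls operational subsumption.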
A proof theoretical approach to communication
by Yuxi Fu
Abstract:
The paper investigates a concurrent computation model in which
communications resemble cut eliminations for classical proofs. The
algebraic properties of the model are studied. Its relationship
to sequential computations is illustrated by showing that it
incorporates the operational semantics of the call-by-name
\lambda-calculus. On the practical side, the model has the \pi-calculus as a submodel.
Computability on the Probability Measures on the Borel Sets of the Unit Interval
by Klaus Weihrauch
Abstract:
Not available.