- Semantics:
This initial phase will be devoted to a systematisation
of our previous work on probabilistic (denotational) semantics
[21, 24, 22].
We will consider how our approach to the
semantics of Probabilistic Concurrent Constraint Programming
extends to other programming paradigms (imperative, functional, etc.).
In this phase we will extensively study linear structures
(e.g. Banach lattices and Hilbert spaces) and apply this
well-established mathematical theory to the semantics of
programming languages.
This phase will last for about 6 to 12 months.
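The linear-algebraic view underlying this phase can be sketched in a few lines: the one-step semantics of a probabilistic program becomes a linear (stochastic) operator acting on distributions over states. The two-state system and its transition probabilities below are purely illustrative assumptions, not part of the proposal itself.

```python
# Sketch: one-step semantics of a probabilistic program as a linear
# (stochastic) operator on distributions over states.
# The states and probabilities are illustrative, not from the proposal.

# Two states, 0 and 1; each step the program moves to the other state
# with probability 0.3 and stays put with probability 0.7.
T = [[0.7, 0.3],
     [0.3, 0.7]]  # row i: distribution over successors of state i

def step(dist, T):
    """Apply the linear operator T to a distribution (vector-matrix product)."""
    n = len(dist)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

d0 = [1.0, 0.0]    # start deterministically in state 0
d1 = step(d0, T)   # distribution after one step: [0.7, 0.3]
d2 = step(d1, T)   # distribution after two steps
```

Iterating `step` is just repeated application of the operator, which is why the operator-theoretic machinery of linear structures applies so directly.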
- Data Flow Analysis:
Building on the semantic study undertaken in the first phase,
we will develop a data flow analysis methodology by extending
the accumulating semantics to incorporate
probabilistic information (probability distributions).
The development of a probabilistic data flow analysis
framework for (Probabilistic) Concurrent Constraint
Programming languages will benefit from existing work
on the classical framework,
e.g. [26, 29, 8].
For other programming paradigms we will base our
investigations on previous work like
[17, 33, 53, 39, 13, 4, 40].
This work will take between 12 and 18 months.
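The idea of attaching probability distributions to data flow facts can be illustrated with a toy analysis. Here a "fact" is a distribution over the possible values of a single variable, and a branch taken with probability p mixes the facts from its two arms; the state representation and the branch probability are assumed for illustration only.

```python
# Toy probabilistic data flow sketch: an abstract "fact" is a
# distribution over the possible values of one variable.
# A branch taken with probability p mixes the facts of both arms.

def mix(p, dist_then, dist_else):
    """Combine the facts of two branch arms, weighted by branch probability p."""
    keys = set(dist_then) | set(dist_else)
    return {k: p * dist_then.get(k, 0.0) + (1 - p) * dist_else.get(k, 0.0)
            for k in keys}

# if (coin) x = 1 else x = 2, where the coin is true with probability 0.6
after_if = mix(0.6, {1: 1.0}, {2: 1.0})   # {1: 0.6, 2: 0.4}
```

Classical data flow analysis would only record that x is 1 or 2 after the conditional; the probabilistic version additionally quantifies how likely each value is.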
- Cousot Approach:
In this phase we will start from the semantics-based
framework of Cousot. We will look at methods for characterising the
abstraction and concretisation functions between probabilistic
domains.
This will involve the generalisation of the notion of a
Galois connection in order to appropriately deal with
probability distributions. We expect that this task will
benefit from a category-theoretic treatment, in particular
enriched categories [44, 66, 12].
This phase will partly run in parallel with the second
phase and last for 12 to 18 months.
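One half of such a generalised connection, the abstraction function lifted to distributions, can be sketched concretely: a classical abstraction (here the standard sign abstraction, chosen purely as an illustration) is pushed forward so that it maps distributions over concrete values to distributions over abstract values.

```python
# Sketch: lifting a classical abstraction function to probability
# distributions. The sign abstraction and the example distribution are
# illustrative; the proposal's actual construction generalises the
# notion of a Galois connection itself.

def sign(n):
    """Classical abstraction: map an integer to its sign."""
    return 'neg' if n < 0 else ('zero' if n == 0 else 'pos')

def alpha(dist):
    """Push a distribution over integers forward to a distribution over signs."""
    out = {}
    for n, p in dist.items():
        s = sign(n)
        out[s] = out.get(s, 0.0) + p
    return out

d = {-2: 0.25, -1: 0.25, 0: 0.1, 3: 0.4}
abstract_d = alpha(d)   # {'neg': 0.5, 'zero': 0.1, 'pos': 0.4}
```

The open question addressed in this phase is what should replace the order-theoretic adjointness condition relating such an abstraction to its concretisation once both sides carry probabilistic structure.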
- Application:
In the last phase of the project we will concentrate on
the application of the previously developed methodologies.
An area where these methods can have interesting applications
is real-time systems. An important property of such systems
is, for example, schedulability. Proving it by deterministic
methods amounts to a worst-case analysis. Our approach
would allow for a more realistic average-case analysis by
providing a measure for the probability of schedulability.
Various long-run average properties, like system performance
and reliability, also require a probabilistic analysis
[19]. Our probabilistic semantics allows us
to capture a particular notion of (ergodic) observables
which differs from the classical input/output behaviour and
corresponds exactly to the above-mentioned properties.
The abstract interpretation methodology we propose, being
based on this semantics, therefore seems well suited for
analysing such properties.
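A long-run average property of the kind mentioned above can be computed, in the simplest case, as the stationary distribution of a Markov chain. The two scheduling states and their transition probabilities below are assumed solely for illustration.

```python
# Sketch: a long-run (ergodic) observable as the stationary distribution
# of a Markov chain. States: 0 = deadline met, 1 = deadline missed.
# The transition probabilities are purely illustrative.

T = [[0.9, 0.1],
     [0.6, 0.4]]

def stationary(T, iters=1000):
    """Approximate the stationary distribution by power iteration."""
    n = len(T)
    d = [1.0 / n] * n
    for _ in range(iters):
        d = [sum(d[i] * T[i][j] for i in range(n)) for j in range(n)]
    return d

pi = stationary(T)
# pi[0] is the long-run fraction of time deadlines are met (here 6/7),
# a probabilistic refinement of a yes/no schedulability verdict.
```

Such a stationary distribution is precisely the kind of ergodic observable that classical input/output semantics does not capture.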
As already pointed out, for example in
[47, 65], the
realistic analysis of security protocols should also be
based on probabilistic concepts. The analysis of cryptographic
programs or systems using abstract interpretation methods
would benefit from our probabilistic approach
[2, 3, 34, 45, 35].
In fact, this
allows us to define, for example, the entropy of the system as
a measure of its security [5]: the more
chaotic a system is, the harder it is to break it by
statistical methods.
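The entropy measure just mentioned is simply the Shannon entropy of a distribution over the system's observable outcomes. The two distributions below are illustrative assumptions: a uniform one (maximally unpredictable) and a heavily biased one (an easy target for statistical attack).

```python
# Sketch: Shannon entropy of a distribution over observable outcomes
# as a rough measure of resistance to statistical attack.
# Both distributions are illustrative examples.

from math import log2

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

uniform = {k: 1 / 4 for k in range(4)}            # maximally unpredictable
biased  = {0: 0.97, 1: 0.01, 2: 0.01, 3: 0.01}    # nearly deterministic

h_uniform = entropy(uniform)   # 2.0 bits
h_biased  = entropy(biased)    # well under 1 bit
```

The higher the entropy, the more observations an attacker needs before statistical methods become effective.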
Another application is to extend existing tools and
analysers (e.g. for constraint based programming languages
[7, 8]) to accommodate statistical
information.
We propose about 12 months for carrying out this last phase.
A follow-up project could be necessary to complete work
on all possible application areas.