ANR PML


Context and positioning of the proposal

Ever since FORTRAN appeared fifty years ago, programming languages have been evolving rapidly. These languages now include more and more sophisticated concepts like "objects", "type inference" and "modules". This richness makes it harder and harder to train programmers and makes it difficult for them to keep up with the innovations and changes in programming languages.

Another phenomenon is the emergence of formal methods used in conjunction with various programming languages. This generally adds another layer to languages for writing specifications, and sometimes yet another one for proofs. Learning a programming language together with the associated specification and proof languages can require a significant effort.

Projects such as ACL2, the successor of nqthm (the Boyer-Moore theorem prover), use a rather simple language (namely LISP) both as programming language and as specification language, which keeps the system unified. Unfortunately, LISP is somewhat limited both as a programming language (no good treatment of sum types, no module system) and as a specification language (very limited quantification). Moreover, LISP has no compile-time type-checking, which has proved very useful for detecting bugs and automatically asserting properties.

The aim of this project is to build a very powerful language (with no loss of expressive power compared to state-of-the-art languages) based on a very small number of simple features. We think this will be possible thanks to recent progress both in the semantics of programming languages and in the appearance of new algorithms for type inference based on constraint solving. In fact, we propose in [RAF10b] an innovative concept derived from the latter to enable this: constraint checking.

Moreover, our language will be used not only as a programming language and a specification language (as in ACL2), but also as a proof language. This is natural for an ML-like language because, for instance, pattern matching is a natural and powerful way to carry out a proof by case analysis. This also means that our tool will follow the KISS philosophy, with relatively few (unified) yet powerful features.

The idea of a new language arose from the discovery of a new typing algorithm [RAF10b], whose implementation gave birth to a first implementation of PML (Proved ML) by Christophe Raffalli. This implementation is already available from the web page of the language. However, turning PML into a real tool requires a lot of research and implementation work, and this is why we request the help of the ANR. Some of this work is outlined below (more details are given in the next sections).

PML's approach to proofs of specifications is unique: PML has no proof language, but the user still writes proofs! Existing programming languages supporting specifications use two alternatives: automated proofs (ACL2, Why) or proof obligations that the user proves in a dedicated language (Coq extraction, Focalize, Why). Many systems provide both, manual proofs being used only when automation fails. PML is very different: it uses the scissors symbol 8< to express that a position in the program is dead, which means that it cannot be reached during evaluation. This is checked by a terminating variant of the Knuth-Bendix completion algorithm. This check is much simpler (and therefore easier to trust) than modern decision procedures, and only solves trivial cases. To write complex proofs, the user just uses the syntax of the programming language to do case analysis or induction (inductions are just recursive definitions). This means that the user does not need to learn a specific proof language, which implies that PML is easier to learn and probably better suited to industry than previous solutions.

PML's logic is in fact the equational theory of its programming language, and we use variants of Knuth-Bendix completion as a proof-checker. The first experiments with the current implementation are promising. However, Knuth-Bendix completion in the context of ML is a complex and new research problem. It needs a lot of work, for instance to handle proofs in arithmetic, which are often needed. Here is an example of a proof in arithmetic, checked in the current version, which is not completely satisfactory (because it is too hard to write), but which shows the use of a single language for both proofs and programs and the use of recursive functions for inductive proofs:

 val rec mul_associative x y z |- (x * y) * z == x * (y * z)
   proof match x with
     Z[] -> 8< (* scissors meaning the end of a proof branch *)
   | S[x'] ->
     let _ = mul_associative x' y z in (* there is syntactic sugar for this: "use mul_associative x' y z;" *)
     let _ = mul_right_distributive y (x' * y) z in
     8<

Moreover, this style of proof is declarative and readable (like Mizar proofs) but concise, which is very important when you want to maintain large developments.

PML is different from the other modern programming languages because its design focuses on a few powerful concepts. One consequence is that it is more difficult for a compiler to produce efficient code. In particular, since PML unifies several notions of products (modules, tuples and records), there is no simple way to choose the internal representation of a product, especially with implicit subtyping. Writing a good compiler for PML will require more complex optimization schemes (probably driven by typing) than languages like OCaml or Haskell. A Polish student (Wojciech Matyjewicz) is just starting a PhD on this very topic.

Here is a simple example, accepted by the current version of PML, demonstrating product types, sum types and subtyping. We define ternary trees as an extension of binary trees with an implicit subtyping relation (all functions accepting binary trees will accept ternary trees, by ignoring the middle_son):

type rec binary_tree (A) =
  [ Nil[] | Node[A with val left_son : binary_tree(A) val right_son : binary_tree(A)] ]
type rec ternary_tree(A) =
  binary_tree({ A with val middle_son : ternary_tree(A)})


PML requires a termination criterion because a proof by induction is just an ordinary recursive program. Such a program has to be well-founded in order to correspond to a valid proof. A subset of Haskell can now use the Aprove tool to establish termination of simple programs. However, we want the test to be fully integrated with the language and compatible with very modular programs. At the moment, these goals seem difficult to achieve with Aprove or other external termination checkers. A first termination criterion based on Lee, Jones and Ben-Amram's "size-change termination principle" was implemented by Pierre Hyvernat [Hyv10b]. While this test is quite powerful, it is necessarily incomplete, and quite some work is required to make termination proofs of complex programs tractable.

Scientific and technical description

Background, state of the art

Programming language

The ML programming language, created by Robin Milner et al. in the 1980s, has two major distinctive features:

  • Algebraic data types and pattern matching: data types are basically all constructed using fixpoints, cartesian products (product types) and disjoint unions (sum types).
  • Static type inference: the type of every piece of code is automatically inferred using the Hindley-Milner algorithm (HM) and, by construction, once compiled, an ML program cannot crash (no segmentation fault). More precisely, when no unsafe feature of the language is used (like the interface with unsafe libraries written in C), if an ML program produces a seg-fault, then there is a bug in the type-checker or in the compiler. (A short OCaml sketch of both features follows this list.)
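
To fix ideas, here is a minimal OCaml sketch (OCaml being one of the ML dialects in question; the names are purely illustrative) of the two features above: a data type built as a sum of cases, one of which is a product, consumed by pattern matching, with the type of depth inferred automatically.

 type 'a tree =
   | Leaf                               (* sum: first case *)
   | Node of 'a tree * 'a * 'a tree     (* product: three components *)
 
 (* Hindley-Milner infers: val depth : 'a tree -> int *)
 let rec depth = function
   | Leaf -> 0
   | Node (left, _, right) -> 1 + max (depth left) (depth right)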

Recent progress in type inference algorithms uses constraint solving. This means that the type system can be described in first-order predicate logic in such a way that a type inference problem becomes a formula in a decidable fragment of first-order predicate logic (often the purely existential fragment). Hence, any constraint solver can be turned into a type-checker for ML. This approach is known as HM(X) (see [SOW97]).

There are two problems with this approach:

  • The complexity of constraint solving can be too high for practical use, especially when using a general-purpose constraint solver. To our knowledge, there is currently no mainstream implementation of ML based on HM(X).
  • HM(X) does not completely solve all problems of subtyping. The language used to express the types constructed by the algorithm is the same as the language used by programmers to write types. Hence, if we have the constraints A ≤ B and A ≤ C for three types A, B and C, then the most natural solution for A is the intersection B ∧ C. This implies that intersection needs to be part of the language of types. Constraints like B ∧ C ≤ D may then also appear, and they are problematic to deal with. Moreover, a similar reasoning shows that constraints of the form A ≤ B ∨ C may appear, increasing the complexity of constraint solving by an exponential factor.

PML's approach is to replace type inference by mere constraint checking: we only check that a solution to the constraints exists in some model. Since we never need to write the solutions down, the language of types remains relatively simple. Type-checking in the current implementation of PML can be seen as a black box ensuring that nothing can go wrong during execution. Furthermore, the types written by the programmer are only syntactic sugar for the projection operator onto the intended type (giving for free nice features like higher-order parametric types, that is, types with parameters which may themselves be types with parameters).

With this approach, we lose type inference and the ability to display types in error messages. Nevertheless, in case of error, we display three positions in the code and an error message like: error at position 1, label "id" projected at position 2 does not appear in the value constructed at position 3. This kind of error message is of bounded length and often more useful than OCaml or SML messages. It can be understood as showing three points in the slice of the error, a notion introduced by Haack and Wells in [HW04].

Proof assistant

Proof assistants have evolved rapidly since Automath in the 1970s. Two main trends exist: automated proof assistants such as ACL2 and PVS, and safe ones such as Coq, Isabelle, PhoX, Lego, HOL, Matita, etc. The former incorporate very sophisticated automated deduction strategies, with no certificate for the validity of the proof, while the latter require all proofs to be done in a specific framework (like natural deduction or type theory), allowing for a simple check of the proof. The gap between the two approaches tends to shrink with the emergence of complex tactics (mainly for Coq or Isabelle) that build proofs for you. For instance, Zenon is an almost state-of-the-art automated first-order theorem prover that outputs a Coq proof.

The common defect of all these proof assistants is that a proof can be neither written nor understood without running the proof assistant. Some proof assistants do not follow exactly the above scheme, for instance Mizar and Alf. Mizar has a declarative proof style which is (in theory) readable by a human and checked by a limited checker. This proof style has been adapted to Coq and Isabelle. Unfortunately, there is no formal description of the Mizar proof checker. Alf, on the other hand, is based on proof theory and requires the user to basically write the complete proof tree, leaving out only a few details. The logic is very well formalized and simple, but writing proofs is tedious and not similar to the usual practice of informal mathematics.

This picture of the world of proof assistants shows that some fundamental work remains to be done here. In the current version of PML, the logic is just the equational theory of the underlying programming language, which is easily described formally, and the proof engine is, as in Mizar, a limited automated checker inspired by the Knuth-Bendix completion algorithm (KB). KB has been adapted to the higher-order constructs of ML-like languages and restricted to ensure termination and an acceptable complexity. The current limitation is much too strong (it is limited to closed terms and cannot use universal theorems automatically, as in the first example of section 1 where one has to make explicit how distributivity is used).

Nevertheless, preliminary examples in the current version show that the approach is worth trying, because proofs are concise and readable (once you know the language). A very encouraging point is that all examples were written without an interface. This really means that proofs are readable without the help of a computer.

Rationale highlighting the originality and novelty of the proposal

The final objective of our project would be a full PML compiler, bootstrapped and completely proved with itself (full bootstrap). This does not exist for any language and is far too ambitious for a four-year project. We plan to produce in four years (only) a compiler for PML, written in PML, but not yet proved in PML. On the other hand, we want to develop proof-checking so as to allow very elegant proofs, supporting the feasibility of a full bootstrap with various examples, some of them close to industrial applications, others being algorithms coming from the implementation of programming languages.

We have focused the existing development on the quality of the language both for proofs and for programs. By quality, we mean easy to understand and write (and therefore easy to learn). We think that using the programming language as a proof language could make formal methods more attractive to industry, without the defect of systems like PVS and ACL2, where automated tactics replace the need for a proof language but are sometimes hard to control and use. For instance, finding the right lemmas to make a proof go through in ACL2 is quite difficult.

Comparison with other proof systems devoted to programming: many other proof systems have been used or specifically developed to allow the production of certified code: extraction in Coq, Why with its automated prover Who from the Proval project, Focalize, PVS, ATS, ... None of these systems uses the programming language as a proof language. They all have a dedicated language for proofs, and even if some of them, like Focalize or Coq extraction with a Mizar-style proof mode, have readable proofs, learning the proof language is never trivial. Other systems, like PVS, ATS and Why using Who, rely on automated deduction, but the behavior of an automated prover is always hard to predict.

Another original aspect is that the logic and the programming language are fully integrated. This is not a two-level system like most systems (but not all: ACL2, for instance, is fully integrated). In PML, the statements of theorems and their proofs are expressions at the same level as programs. This means that a program can contain specifications that contain program definitions in their statement or proof, and so on. This is generally not possible (even in ACL2), but it makes it possible to write modules with specifications.

One of the key ideas in this project is that an ML-like programming language has all the features needed for a proof language: case analysis via pattern matching and exception handling, induction, and calls to previously defined programs/theorems. This makes it natural to explore this direction.

Comparison with other programming languages: all the systems mentioned above allow proving programs in limited subsets of ML- or Haskell-like languages. Even when the languages are not so limited, the considered language is well understood and introduces no new feature.

Another key idea in PML is to develop the language and its proof-checker together. This has a great impact on the design of PML. Let us illustrate this with a concrete example: exception handling. In ML, there is a try P with e -> R construct, but this is not sufficient to do case analysis on whether the program P raises an exception or not: R can be the proof for the case where P raises an exception, but there is no placeholder for the normal case. This is why we had to introduce a let try x = P in Q with e -> R construct, where Q is evaluated only when P evaluates to a value.
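
Below is a minimal OCaml sketch (not PML syntax, and purely illustrative) of the distinction just described: with a plain try ... with, the normal continuation has to live inside the try, so there is no separate place-holder for the value case; the let try behaviour can be simulated by reifying the outcome of P, so that q runs only when p returned a value and r only when it raised.

 exception E
 
 (* hypothetical simulation of "let try x = P in Q with E -> R" *)
 let let_try p q r =
   match (try Ok (p ()) with E -> Error ()) with
   | Ok x -> q x        (* the normal case: P evaluated to a value x *)
   | Error () -> r ()   (* the exception case: P raised E *)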

A great number of decisions in the language design come from the interaction between the development of the programming language and its proof-checker. Another key feature of PML, which makes the project original even as a programming language compared to many other programming-language research projects (GALLIUM, Haskell, Scala, ...), is the use of constraint checking. This choice arises from the fact that we want a language as small as possible, because a proof-checker is complex; therefore, we want to fully unify all sorts of Cartesian products, including modules, records, tuples and variants with multiple arguments. This is already achieved in the current implementation of PML and, to our knowledge, no ML-like language has a unique yet still powerful notion of Cartesian product.

Scientific and technical program, project management

Specific aims of the proposal

As said in the previous section, the final objective would be a fully bootstrapped PML language: PML entirely written and proved in PML. This would be too ambitious at first, so we chose to focus here on the design of the language plus a proof of concept, that is, the compilation and proof of various examples, doing our best on the following points:

  • Natural way of writing programs (Task 1)
  • Efficiency of the code generated by the compiler, despite the heavy use of subtyping (Task 3)
  • Readable and short proofs (Tasks 1 and 4)
  • Efficiency of type-checking and compilation (Tasks 2 and 6)
  • Efficiency of proof-checking (Tasks 4 and 6)
  • All of the above points need testing, and we created a transverse task (Task 5) for that.

Project management

We plan to have one 3-day workshop per year with all the members of the project, invited speakers and interested outsiders. We think these meetings are fundamental to keep the project running and to inform everybody of the project's progress and problems. We already have the agreement of several people to participate in such meetings: Joe Wells, Assia Mahboubi, Andreas Abel, ...

We will also organize two project meetings per year, just to keep track of progress, share ideas and organize the above workshop and other invitations.

We also want to organize visits of one or two project members to researchers working on similar topics (typically, a member of the project could visit one of the invited speakers of our workshop). Members of the project should also travel to conferences on the subject such as POPL, LICS, CSL, TYPES or the recently created CPP (first edition in 2011).


The structure, with only one partner and a coordinator devoting most of his time to the project (10 months per year), should simplify the management issues a lot.

Detailed description of the work organized by tasks

Task 1 - theoretical work, design of the language

Coordinator: Christophe Raffalli

Participants: Pierre Hyvernat, Alexandre Miquel, Tom Hirschowitz

1.a - Correctness of the constraint checking algorithm (delivered 09/2012): [RAF10b] already covers correctness without polymorphism. A draft version of the correctness proof with polymorphism exists but needs more work. The main open problem here is the interaction with the termination check. The current work proves that when the constraints are checked, the program can only loop via recursive definitions. Then, we would like to prove that programs are terminating if their recursive definitions are accepted by the termination checker. However, this is non-trivial.

This being one of the central pieces of PML, it should also be one of the first tests for PML in Task 5. We could also prove this part of PML in Coq (in fact, two provers proving themselves and each other correct is a much stronger guarantee than one prover proving itself).

1.b - Consistency of proof-checking (beginning 09/2011, delivered before 09/2013 for the core of the language): This is essential for clearly defining the logic of PML and proving its consistency. This should not be too hard for the core of the language. However, the proof has to be extended to take into account all future extensions of the language and can thus be seen as a permanent task.

1.c - Adaptation of uniqueness typing to the context of constraint checking (beginning 09/2012, delivered 09/2014): The current version of PML is a pure functional programming language, with no imperative features. This is problematic, because input/output is necessary for real programs and in-place assignment is required for efficiency, especially when using large arrays. We plan to adapt the approach of the Clean language [AGR92], [AcP95], [AcP97], [VPA07]. In Clean, all programs can be analyzed as purely functional programs, but the type system can check that some data is not used any more and reuse its place in memory for other data (allowing in-place update). For instance, in such a context, writing to a file f can be written let f' = write f str in ..., but the compiler must check that f is not used any more, implying the equivalence between the standard imperative semantics of writing to a file and the purely functional semantics used by proofs.
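
The following OCaml sketch (purely illustrative; OCaml does not check uniqueness, and the file type and write function are hypothetical) shows the single-threaded style this refers to: each write consumes the old handle and returns a new one, so the purely functional semantics used in proofs coincides with the imperative one as long as the old handle is never reused, which is exactly what uniqueness typing would enforce.

 type file = { path : string; contents : string list }   (* hypothetical handle *)
 
 let write (f : file) (str : string) : file =
   { f with contents = str :: f.contents }
 
 let log_twice f =
   let f' = write f "hello" in
   let f'' = write f' "world" in   (* only the most recent handle is used *)
   f''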


1.d - Private, abstract and existential types (beginning 09/2011, delivered 09/2012 for private types; beginning 09/2012, delivered 09/2013 for existential types; beginning 09/2013, delivered 09/2015 for abstract types)

Abstract data types hide the definition of a data type and allow the user of a library to be sure that his code will continue to work even if the internal representation of the data is changed. In the context of constraint checking in PML, adding abstract data types seems to be a challenging task. Moreover, abstract data types are a form of existential quantification over types and could raise consistency issues. We hope to find a way to incorporate abstract types in PML without losing consistency.

A first step would be private data types. They are a very simple yet very powerful mechanism for easily ensuring invariants on all the values of a data type. The mechanism is as follows: the compiler simply checks that the constructors of the data type are not used for constructing values. Values are constructed using construction functions, as with abstract data types. However, unlike abstract data types, constructors can still be used as patterns for defining functions by pattern matching. Hence, a library on a private data type is not closed but can be extended easily. Private data types are therefore an important and very useful feature for defining data structures with complex invariants and proving their correctness more easily. They have been implemented in OCaml by Pierre Weis and are described in the ESOP'07 paper of Frédéric Blanqui, Thérèse Hardin and Pierre Weis [BHW07].
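
For readers unfamiliar with the feature, here is a small OCaml sketch of private data types as described above (the type and function names are only illustrative): outside the module, the constructors Z and S can be used in patterns but not to build values, so every value goes through the construction functions.

 module Nat : sig
   type t = private Z | S of t
   val z : t
   val s : t -> t
 end = struct
   type t = Z | S of t
   let z = Z
   let s n = S n   (* the place where an invariant would be enforced *)
 end
 
 (* pattern matching on the private constructors still works outside: *)
 let is_zero (n : Nat.t) = match n with Nat.Z -> true | Nat.S _ -> false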

A second step would be existential types, which are very similar to abstract types, but where the abstract type has no free name. At the logical level, they do imply an existential quantification over types, which has to be limited to ensure consistency. However, existential types do not require the type to have a free name; such a free name corresponds in logic to a definite description operator (similar to Hilbert's epsilon operator), which, being connected to the axiom of choice (over types), may be really problematic for consistency. Some related work exists in the PhD thesis of F. Ruyer, a former student of C. Raffalli [Ruy06].

Task 2 - termination criterion

Coordinator: Pierre Hyvernat

Participants: Christophe Raffalli, Andreas Abel, Frédéric Blanqui

Remark: this is an essential task, running during all four years. In this field, we will always find embarrassing examples that do not work but could, both for the core and the auxiliary criteria (see below for the distinction), meaning that this research field will probably remain open forever.

Even if it might be counter-intuitive at first, it is often necessary to write programs whose execution can be infinite. For example, any kind of "server", or almost any interactive program, may have infinite executions. Even in purely mathematical programming, it can be interesting to have intermediate non-terminating functions. Consider a function outputting the stream of prime numbers: even if this function is non-terminating, we might use it in a terminating manner in a proof by requesting only the first n prime numbers.
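
A small OCaml sketch of this example (assuming OCaml >= 4.14 for Seq.ints and Seq.take): the stream of all primes is a productive but non-terminating definition, yet requesting only its first n elements is a terminating use.

 let primes : int Seq.t =
   let is_prime n =
     let rec no_divisor d = d * d > n || (n mod d <> 0 && no_divisor (d + 1)) in
     n >= 2 && no_divisor 2
   in
   Seq.filter is_prime (Seq.ints 2)
 
 (* terminating use: only finitely many elements are forced *)
 let first_n_primes n = List.of_seq (Seq.take n primes)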

Since PML uses full recursion (keyword rec), writing such programs is easy. On the other hand, the notion of "terminating" or "well-founded" recursive function is not part of the core of PML: such programs are just special cases of recursive programs. The user will have to specify which functions are in fact terminating and might have to prove it himself.

Proofs of specifications are just PML programs, but those cannot be allowed to run infinitely. In fact, a proof will not run at all: proofs are only required to be well-founded, as the consistency of PML relies on this. In order to relieve the user from proving that all proofs are terminating, PML should offer a way to check automatically that (some) functions terminate. Because the halting problem is undecidable, it is hopeless to do this in full generality, but this generality is seldom necessary. PML should just work for most functions, most of the time (rather than for all functions, all the time).

Technically speaking, PML can infer an error called Loop when it encounters a program which, it thinks, may not terminate. Such an error cannot be caught: it is an error rather than an exception. The property we need to guarantee is that if PML does not infer that the error Loop is possible, then the program in question does indeed terminate. If the error Loop is inferred as possible for a terminating function, the user can still provide PML with a proof that this error is never raised. The current PML syntax for that is [p proof ... ] where p is a term and ... is a proof that p does not raise any exception nor error (this is the desired property for proofs).

2.a - Core termination criterion (delivered 09/2010)

This first test will be part of the PML proof-checker, hence the consistency of PML will depend upon the correctness of its implementation. We could accept only primitive recursive functions as terminating, but this is just not strong enough for any practical use (users of the Coq proof system know that structural ordering on a single argument isn't practical...). Currently, there has been little work on this aspect of PML. A simple yet effective termination checker has been implemented, but this test isn't plugged into the rest of PML yet. The termination criterion used is based on the "size-change principle" of Lee, Jones and Ben-Amram [LJ01]. This test successfully checks termination for primitive recursion, lexicographic orderings and permutations of arguments. This covers most simple practical cases.
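
A typical example (here in OCaml) that goes beyond structural recursion on a single argument but is accepted by the size-change principle: termination of Ackermann's function follows from the lexicographic ordering on the pair (m, n).

 let rec ack m n =
   if m = 0 then n + 1
   else if n = 0 then ack (m - 1) 1
   else ack (m - 1) (ack m (n - 1))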

Once this is fully integrated into PML, several things need to be done:
2.b - Test and improvement of the core termination criterion (beginning 09/2010, delivered 03/2012)

We should test and improve this first criterion: in particular, the size-change principle depends on a notion of size of terms. Two natural choices are the size (approximately the total number of constructors) and the depth. Those two notions are not equivalent as far as the criterion is concerned: the size of an argument may decrease while its depth increases, and vice versa. It is not obvious which one is best, and it might be possible to let the user provide his own notion of size. Meanwhile, there is a practical workaround which consists in adding a "dummy" integer argument to our functions which acts as the overall size of the other arguments.

Here, we plan to adapt to PML a termination criterion based on the technique of type-based termination, which allows for recursive calls through size-preserving functions such as List.map. There are several possibilities, ranging from very simple systems such as the ones developed by Abel (RAIRO'04) [Abel04], Barthe et al. (MSCS'04) [Bar04] or Blanqui (RTA'04, CSL'05) [Bla04, Bla05c], to the very rich system of Blanqui and Riba (LPAR'06) [BlR06]. In the latter, given, for each function, a formula of Presburger arithmetic describing how the size of the output is related to the size of the inputs (the correctness of which can be checked automatically), termination follows from the fact that recursive calls are done on strictly decreasing arguments, using for instance lexicographic or multiset comparisons together with linear combinations of the arguments. Intermediate systems, such as the one of Barthe, Grégoire and Riba (CSL'08) [BGR08], which is powerful but with a lower complexity than Presburger arithmetic, also have to be considered.
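
The typical situation targeted by these criteria, written here in OCaml: the recursive call to size goes through List.map, a size-preserving function, which purely syntactic guardedness checks have trouble accepting.

 type 'a rose = Node of 'a * 'a rose list
 
 let rec size (Node (_, children)) =
   1 + List.fold_left ( + ) 0 (List.map size children)
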
However, the above tests might be considered too error-prone for integration into the core criterion. We hope to stabilize the core criterion in spring 2012 using a compromise between power and simplicity (which implies security), but all rejected criteria could then be reused in the next task.
2.c - Development of external termination criteria (beginning 03/2012, delivered 09/2013)

Here, we want to be able to use any possible kind of algorithm, but we must find a way to produce, from a successful run of a criterion, a modified program (for instance with extra arguments, or with an explicit termination proof added, or a combination of both) that can be proved terminating using only the core criterion. To do this, we propose to extend the CoLoR and Coccinelle platforms, which basically perform a similar task for Coq. This adaptation might be difficult, mainly because the proof languages of PML and Coq are very different. However, the fact that we can use the core criterion, rewrite programs, produce proofs, etc., offers many possibilities that are not present in Coq. Moreover, the equational nature of PML's proof-checking may be well adapted to this task.

Among the possible tests we are planning, we can cite:
* the higher-order recursive path ordering, developed by Jouannaud, Rubio and Blanqui (LPAR'07) [BJR07] for automatically proving the termination of higher-order rewrite systems;
* the techniques developed by Giesl et al. for automatically proving the termination of Haskell programs by using the powerful state-of-the-art termination techniques developed for rewrite systems (RTA'06) [Gie06] and implemented in Giesl et al.'s automated termination prover Aprove (winner of the last [http://termination-portal.org/wiki/Termination_Competition termination competition]).

Task 3 - compilation

Coordinator: Christophe Raffalli

Participants: Pierre Hyvernat, Wojciech Matyjewicz, Tom Hirschowitz

3.a - A first compiler using LLVM (beginning 09/2009, delivered 09/2010). LLVM is a compiler infrastructure providing many tools: a compilation strategy, a virtual instruction set, compiler infrastructure, tools to write high-level virtual machines, etc. LLVM is very attractive because it is rather simple to use (it even has an OCaml interface) and can compile to a bytecode interpreter, be used as a JIT compiler or as a standard compiler. Moreover, it supports a lot of platforms. It also provides some optimizations, which is important. We think that writing a compiler, with no optimization, for PML using LLVM should not be too hard (it is important that this task be easy, because it is not really research...).
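
As an indication of how lightweight this route can be, here is a rough sketch using the OCaml bindings of LLVM (package llvm); the exact API varies between LLVM versions, so this is only indicative. It builds a module containing a function adding two 32-bit integers and prints the resulting LLVM IR.

 let () =
   let ctx = Llvm.global_context () in
   let m = Llvm.create_module ctx "pml_demo" in
   let i32 = Llvm.i32_type ctx in
   let fty = Llvm.function_type i32 [| i32; i32 |] in
   let f = Llvm.define_function "add" fty m in
   let b = Llvm.builder_at_end ctx (Llvm.entry_block f) in
   let sum = Llvm.build_add (Llvm.param f 0) (Llvm.param f 1) "sum" b in
   ignore (Llvm.build_ret sum b);
   print_string (Llvm.string_of_llmodule m)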

We would then like to improve our compiler in various directions. We mention here only the ones that are innovative in this domain (we will also consider more standard optimizations, but we do not list them specifically).

3.b - Representation of cartesian products and disjoint sums (beginning 09/2010, delivered 09/2011 for product types and 03/2012 for sum types). PML allows only one kind of cartesian product, which in general (because of multiple inheritance and implicit subtyping) must be represented as a table (hash table or map based on binary search trees). This can impact performance. We plan to generate extra constraints for each occurrence of a constructor of a cartesian product in a program. Solving these constraints in a way that maximizes speed (or minimizes size) should then allow for a representation of cartesian products that is more efficient than that of, for instance, OCaml. The same kind of optimization (with a different optimization criterion) should be done for sum types and the implementation of pattern matching.

3.c - Unboxing (depends on some parts of 3.b, beginning 09/2011, delivered 09/2012). In general, 32-bit integers and floating point numbers are boxed (that is, represented by a pointer). This allows for a more elegant language, but it can have a major impact on performance, especially when arrays are involved. We think that constraint checking is a good framework to propagate type information and allow efficient unboxing. Nevertheless, doing enough unboxing to try to match the performance of low-level languages like C is very hard. We hope to reuse some of the work of task 3.b, because unboxing can also be seen as the optimization of the representation of a cartesian product with only one field.

3.d - Compilation of assignment (references and arrays) and IO (depends upon 1.c, beginning 03/2012, delivered 09/2013). After adapting uniqueness typing to PML (task 1.c), we will be able to compile assignment and IO imperatively, as in any imperative programming language.

3.e - How to unify the mathematical hierarchy of numbers with the current practice in computer science (depends upon 3.c, beginning 09/2012, delivered 09/2013). This task can be seen as a generalization of unboxing (task 3.c). One solution would be to propose only one type of integers and let the compiler choose the representation, possibly using specifications bounding the values. This means that we should be able to propagate typing information about the size of numbers so as to use efficient representations as much as possible. This seems to be a very challenging task and will probably not work well enough, because the propagated bounds will be based on the worst case. Nevertheless, we think it is worth trying, to see how far we can go.


Task 4 - equational reasoning

Coordinator: Christophe Raffalli

Participants: Frédéric Blanqui

The kernel of the proof engine will be based on completion techniques. Knuth-Bendix completion tries to transform a set of unoriented equations into an (inter-reduced and) convergent, that is, terminating and confluent, set of rewrite rules. It can therefore be used to prove that some equality is an equational consequence of some equational theory. Indeed, when an equational theory can be completed into a convergent rewrite system, two terms are equivalent in this equational theory if and only if their normal forms in the convergent rewrite system are equal.
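
A toy OCaml illustration of this principle (the rewrite system below, 0 + y -> y and S x + y -> S (x + y), is terminating and confluent): two terms are provably equal in the equational theory exactly when their normal forms coincide.

 type term = Z | S of term | Add of term * term
 
 (* normalize with the two rules above, innermost-first *)
 let rec normalize = function
   | Z -> Z
   | S t -> S (normalize t)
   | Add (x, y) ->
       (match normalize x with
        | Z -> normalize y
        | S x' -> S (normalize (Add (x', y)))
        | x' -> Add (x', normalize y))
 
 let equal_in_theory t u = normalize t = normalize u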

4.a - Adaptation of the Knuth-Bendix completion algorithm to PML (delivered 09/2013). As explained just before, Knuth-Bendix completion is based on rewriting. However, in PML or in Moca (see task 4.b), programs are not rewriting systems. We therefore need to adapt Knuth-Bendix completion: we need both to generalize it, to use the notion of constructor present in ML, and to take care of the higher-order nature of ML (even if we cannot hope for much here). As said above, something is already implemented, but it is trivial because it is completely restricted to closed terms. A first version should at least be able to rewrite a closed term modulo some simple equational (and universal) theory. An important point here is to ensure termination of this algorithm, and even a low complexity. The price to pay will be incompleteness. A lot of care should also be devoted to the correctness of the implementation because, as for task 3.a, the consistency of PML relies on it.

This task is one of the major tasks for PML, and a first version already exists. We will have to improve it during the four years, either to treat new cases, improve performance, or make the code safer with regard to correctness. This is why we consider it, a priori, a permanent task of this project.

Moreover, we include other tasks that are not only related to PML but concern equational reasoning in general, and that could be directly or indirectly related to task 4.a.

4.b - Development of a stand-alone tool for completion modulo associative and commutative symbols (beginning 09/2009, delivered 09/2011), as is the case with addition and multiplication on integers. We want to develop a stand-alone tool so that it can easily be reused by other people in other projects or tools. Apart from PML, it will be useful for Moca, a generator of construction functions for private data types with algebraic invariants, and for Europa, a proof assistant based on the lambda-Pi-calculus modulo rewriting developed by Gilles Dowek and Mathieu Boespflug. Note that, for implementing this tool, we could start from the CiME library.

4.c - Improvement of Moca and adaptation to PML (depends upon tasks 1.d and 4.b, beginning 09/2011, delivered 09/2013). Completion is at the heart of Moca: it is used for automatically generating construction functions for private data types with algebraic properties, as described in the ESOP'07 paper of Frédéric Blanqui, Thérèse Hardin and Pierre Weis [BHW07]. As an example, consider a type with a constructor taking a list as argument. If this constructor is declared "commutative", then Moca will generate construction functions for this type that guarantee that the list argument is always sorted (using a user-defined comparison function).
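
The following OCaml sketch is hand-written and only meant to show the shape of such a generated construction function (the type and names are hypothetical, not actual Moca output): the type is private and the construction function keeps the list argument sorted, which realizes the "commutative" declaration.

 module Bag : sig
   type t = private Bag of int list
   val bag : int list -> t
 end = struct
   type t = Bag of int list
   let bag xs = Bag (List.sort compare xs)   (* invariant: sorted list *)
 end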

Moca also suffers from some limitations, because it really only works for commutative and associative symbols and a few other specific equations (neutral element, inverse, ...). Task 4.b should allow us to generalize the applicability of this tool.

Moca currently generates OCaml code with no guarantee of its correctness. Using PML instead, Moca could also generate a proof of the correctness of the generated construction functions. Later, when trying to prove the correctness of a function defined on such a private data type, one could use the invariants satisfied by the values of the private data type as assumptions, since these invariants hold by construction.

Task 5 - validation (transverse task)

Coordinator: Christophe Raffalli

Participants: Pierre Hyvernat, ???

Validation requires writing numerous examples to check that we fulfill our main goal, which is that all programs (with or without proofs) can be written in the best possible way. This work being research, we think it is important that any piece of code written in PML that does not look right be carefully examined, to check that this is not due to a defect in the language design.

Marianne Simonot, Maria-Virginia Aponte, Olivier Pons and Julien Forest are all members of the CPR team, whose aim is to make it possible to integrate proof assistants into the industrial process of software design. They will try to check whether this is possible, and even works better, with PML than with other tools. We think that the main goal of our project is to obtain important progress in this respect, and any failure here should be carefully analyzed and possibly fixed.

This task should deliver between 10,000 and 100,000 lines of PML code, so as to have a sufficient corpus to assess in which respects we fulfilled our goals.

Task 6 - optimisation of PML (transverse task)

Planning of tasks, deliverables and milestones

The following diagram represents the summary of the tasks and subtasks, together with the intended planning:

[Diagram: planning of the tasks and subtasks]


Data management, data sharing, intellectual property and results exploitation

Results in each of the tasks will be published in journals (APAL, TCS, ...) and international conferences as usual (LICS, TLCA, CSL, CIE, ...).

The PML language is already distributed as open source software under the CeCILL-B license. We think that, for such a research platform, this is the only way to ensure that people will try it. As soon as a first compiler is available, we plan to produce easy-to-install packages, at least for some well-known Linux distributions (Debian and its derivatives seem a good choice).

Consortium organization and description

Relevance of the partner for the project

This project involves only one partner, the LAMA (UMR 5127), where the coordinator of the project has already developed the proof assistant PhoX. The main characteristic of PhoX is that it is rather simple to use, thanks to a limited (fewer than 20 principal tactics) but powerful set of tactics. Moreover, tactics are extensible by incorporating theorems inside them.

Clearly, this means that efficiency was the main goal of PhoX. Unfortunately, like all tactic-based theorem provers, PhoX produces proofs that are not readable. After a few attempts at a Mizar-like mode for PhoX, Christophe Raffalli decided to move to a new theorem prover, starting from scratch.

Moreover, many new colleagues have recently arrived in the laboratory: Pierre Hyvernat, Lionel Vaux, Laurent Vuillon, Jacques-Olivier Lachaud and Guillaume Theyssier (only the first two are members of this ANR, but we have frequent discussions about PML with the others, in particular with Guillaume Theyssier). This has created a strong dynamic, very suitable for hosting such a project.


Qualification of the project coordinator and members

The coordinator and the various members of the project come from various horizons (see section 7), but they share a common background in the use and development of programming languages and/or formal methods. We think that this variety, together with the number of members, should allow for good communication and should be very fruitful.

We think that, compared with the development of PML by Christophe Raffalli alone, such a team should speed up the development of PML by approximately a factor of 5, making it possible to deliver a very innovative and useful tool at the end of the project. The lack of support for such a team would certainly limit the tool to an experimental toy, with many developments only partially (or even not at all) integrated in the project.

Christophe Raffalli will also apply for research delegations ("délégations") during the project in order to spend even more time on it.


Scientific justification of requested budget

====Meetings (total 86800€, considered as a coordination cost for task 0 in document A)====

Four meetings per year, 3 days each, for 10 people. Each meeting costs, per person, one train ticket within France (300€), 3 days of meals (20€/day) and 2 nights of hotel (80€/night): 10*(300 + 3*20 + 2*80) = 5200€ per meeting. For the 16 meetings over 4 years, this gives 83200€.

Moreover, Frédéric Blanqui being based in China, his travel costs 900€ instead of 300€. If we consider that, due to the distance, he comes to only 6 of these meetings, this is an extra cost of 3600€.
====Visits (total 52800€)====

Visits in France: we plan 8 one-week visits for each task (2 visits per year per task, i.e. 40 visits for the whole project). The cost per visit is 300 + 8*20 + 7*80 = 1020€ (train ticket, 8 days of meals, 7 nights of hotel). The total cost is 40800€ (8160€ per task).

Visits in China: we plan 4 one-month visits to Frédéric Blanqui (by Pierre Weis, Christophe Raffalli and, twice, Cody Roux). The cost of such a visit is 900€ for the travel, 1500€ for the hotel and 600€ for the meals, that is 3000€ per visit and a total of 12000€ (6000€ each for tasks 2 and 4, which involve Frédéric Blanqui).
====Conferences (total 18000€)====

The members of the project will submit papers to international conferences and journals (with an average of two publications per member per year, this gives 20 publications). We estimate that around 12 of these publications will be in international conferences, with an average cost of 1500€. This gives a total of 18000€ (3600€ per task).
====Master internships (total 12000€)====

We plan to host around four master internships in good conditions (possibly
followed by PhD studentships not financed by the ANR):
* one of them in China with a cost of 7500€ (900€ for the travel and 1100€ per
  month for accommodation, for task 4);
* the other three in France, within tasks 1, 2 and 3, with a total cost of
  4500€ (1500€ each: 300€ for travel and 200€ per month for an accommodation at
  the CROUS).
====Computers (total 17500€)====

We consider that the ANR project has to participate in the renewal of the
computers of its participants. The lifetime of a computer being 3 years and
the total number of months for permanent members of the project being 120, we
think that we should pay for 4 computers with an average value of 2500€ each
(we need powerful computers and laptops, because automated reasoning requires
a lot of computations and memory).

We also include 3 computers for non permanent members for the same price.
Hence the total of 7*2500 = 17500€
====Human resources financed by the ANR (1 PhD student and 2 postdocs)====

This project involves many tasks and a lot of members. However, all members apart from the coordinator can devote only 2-3 months per year to this project (they are involved in other ongoing research). Although this is far from negligible, we think that we will need more manpower: we estimate that 1 PhD student and 2 postdocs are reasonable. This would give 124 months (40 for the coordinator, 36 for the PhD student and 48 for the 2 postdocs) contributed by researchers spending most of their time on the project, and 112 (152-40) months contributed by people giving 2 or 3 months per year. We consider that this is a good balance.

Human resources not financed by the ANR

[Table: repartition of the tasks among participants]

Annexes

CV, Resume of all project members

Christophe Raffalli (project coordinator)

Age 41, MCF at the LAMA (UMR 5127) since September 1997.

After his PhD in Paris VII (defended in February 1994), Christophe Raffalli spent a year and a half at the LFCS in Edinburgh, two years at Chalmers University in Göteborg (as a post-doc of the TYPES European project) and one year as ATER at Créteil University.

Research administration: For ten years, the LAMA was a sub-site of the Paris VII site within the TYPES project, which was renewed several times, and Christophe Raffalli was the coordinator of this sub-site. The project is currently not financed by the E.U. Nevertheless, Christophe Raffalli is in charge of the organization of the next TYPES meeting in Aussois in May 2009.

His research interests include:

  • theory and implementation of proof assistants,
  • proof theory,
  • implementation of programming languages (especially type-systems).


Selected publications

  • [Raf10b] Realizability for programming languages (submitted, available as hal-00474043)
  • [Raf08a] PML and strong normalisation, talk at the TYPES workshop, April 2008, Torino, Italy
  • [Raf07b] PML: a new proof assistant, talk at the TYPES workshop, May 2007, Cividale del Friuli (Udine), Italy
  • [Raf06a] Realizability of the axiom of choice in HOL (an analysis of Krivine's work), with Frédéric Ruyer, Fundamenta Informaticae (2006)
  • [Raf05a] PhoX with Paul Rozière in The seventeen provers of the World, Freek Wiedijk (editor) LNAI 3600 pages 67-71
  • [Raf03b] System ST, Schedae Informaticae, proceedings of the Chambéry-Krakow-Lyon Workshop, Vol. 12, pages 97-112 (June 2003)
  • [Raf02c] Getting results from programs extracted from classical proofs, TCS 2004, volume 323, pages 49-70
  • [Raf02b] System ST, beta-reduction and completeness, presented at LICS 2003, publication IEEE, pages 21-32
  • [Raf02a] Computer Assisted Teaching in Mathematics, with René David, to appear in the proceedings of the Workshop on 35 years of Automath (April 2002, Edinburgh)
  • [Raf01d] System ST, towards a Type System for Extraction and Proof of Programs, Archive for Pure and Applied Logic, 2003, volume 122, pages 107-130
  • [Raf01c] Apprentissage du raisonnement assisté par ordinateur (computer-assisted learning of reasoning), with René David, Quadrature no. 45, spring 2002, pages 25-36; short version in the Gazette de la SMF, no. 92, April 2002, pages 48-56

Frederic Blanqui (INRIA, Rocquencourt)

Age 38, permanent full-time researcher at INRIA.

Frederic Blanqui is expert in the following areas:

  • type theory,
  • rewriting theory,
  • termination,
  • functional programming,
  • proof assistants,
  • and formal certification of program properties.

Since September 2008, he has been an INRIA researcher at LIAMA, the Sino-French Laboratory in Computer Science, Automation and Applied Mathematics.

Between 2003 and 2008, he was INRIA researcher at LORIA, Nancy, in the Protheo research team led by Pr Claude Kirchner, focusing on the use of rewriting techniques for programming, as well as specifying and proving program properties.

Since 2004, he has been leading the CoLoR project, which aims at providing tools for automatically certifying the termination of programs. Since 2007, CoLoR has been the best certification back-end in the international competition of certified termination provers.

In 2007 and 2008, he led the INRIA Quotient project which aims at extending the OCaml programming language with types whose values automatically satisfy equational invariants using the Moca tool.

He has supervised 2 master's theses and 3 PhD students on the extension of type theory with decision procedures, the termination of typed higher-order rule-based programs, and the certification of termination proofs.

He did his PhD with Pr Jean-Pierre Jouannaud at University Paris Sud between October 1998 and September 2001 on the combination of type theory and rewriting theory.

Between October 2001 and August 2002, he worked on the certification of cryptographic protocols with Pr Larry Paulson at Cambridge, UK.

Between September 2002 and September 2003, he worked at Ecole Polytechnique in the Coq development team on the extension of the proof assistant Coq with rewriting.

More details on his activities and publications can be found on his web page and in his CV.


Selected publications

International journals with reading committee: 7

International conferences with reading committee: 15

Other publications (thesis, workshops, invited papers, reports, etc.): 15

Prizes: 2001 SPECIF Award for his PhD thesis by the French national society of teachers and researchers in computer science; and 2001 Kleene Award for the best young researcher paper at the IEEE Symposium on Logic in Computer Science (LICS).


Five most significant publications in the last 5 years:

  • [Bla11] F. Blanqui and A. Koprowski. CoLoR: a Coq library on well-founded rewrite relations and its application to the automated verification of termination certificates. To appear in Mathematical Structures in Computer Science.
  • [BRK10] F. Blanqui, C. Riba and C. Kirchner. On the confluence of lambda-calculus with conditional rewriting. Theoretical Computer Science 411(37), p. 3301-3327.
  • [BR09] F. Blanqui and C. Roux. On the relation between sized-types based termination and semantic labelling. CSL'09. LNCS 5771.
  • [BJS08] F. Blanqui, J.-P. Jouannaud and P.-Y. Strub. From formal proofs to mathematical proofs: a safe, incremental way for building in first-order decision procedures. TCS'08. IFIP 273.
  • [BHW07] F. Blanqui, Thérèse Hardin and Pierre Weis. On the Implementation of Construction Functions for Non-free Concrete Data Types. ESOP 2007: 95-109.

Pierre Hyvernat (LAMA, Chambéry)

Age 30, "maître de conférences" at the Université de Savoie (in Chambéry) since September 2006, member of the LAMA.

He obtained his PhD in December 2005, under the supervision of Thierry Coquand (Chalmers, Göteborg, Sweden) and Thomas Ehrhard (at the time at the IML, Luminy, Marseille, France).

His research interests relevant to PML include

  • denotational semantics,
  • type theory and constructive mathematics.


Selected publications

  • [Hyv10b] The Size-Change Termination Principle for Constructor Based Languages (hal-00547440, submitted)
  • [HHy06] with P. Hancock: Programming Interfaces and Basic Topology. "Annals of Pure and Applied Logic", volume 137, January 2006,
  • [Hyv05] Synchronous Games, Simulations and lambda-calculus, proceedings of the "GaLoP" workshop, ETAPS 2005. (journal version submitted to Annals of Pure and Applied Logic),
  • [Hyv04] Predicate Transformers and Linear Logic: yet another Denotational Model, Lecture Notes in Computer Science, vol. 3210, September 2004.

Alexandre Miquel (PPS, Paris 7)

Age 37, "maître de conférences" at the Université Paris-Diderot (Paris 7) since September 2003, member of PPS. Currently CNRS research associate ("délégation") in the Plume team at the LIP (ENS Lyon) since September 2008.

He obtained his PhD thesis in December 2001, under the supervision of Hugo Herbelin (INRIA/LIX) in the Coq team (now TypiCal).

From October 2001 to August 2002 he was a postdoc at the Chalmers Institute of Technology (Göteborg, Sweden) under the supervision of Thierry Coquand, and from September 2002 to August 2003 he was "ATER" at the LRI (Orsay).

He is a specialist of the models of type theory (especially the calculus of constructions) and of the correspondence between set theory and type theory. His research interests also include:

  • logic, proof-theory, rewriting,
  • denotational semantics (set- and domain-theoretic),
  • realizability in classical logic.


Selected publications

His most significant publications are:

  • [Miq07] Classical program extraction in the calculus of constructions (CSL'07),
  • [Miq06] with A. Arbiser and A. Ríos. A lambda-calculus with constructors (RTA'06),
  • [Miq04] Lambda-Z: Zermelo's Set Theory as a PTS with 4 Sorts (TYPES'04),
  • [Miq03] A Strongly Normalising Curry-Howard Correspondence for IZF Set Theory (CSL'03),
  • [Miq00] A Model for Impredicative Type Systems with Universes, Intersection Types and Subtyping (LICS'00).


Relevant publications by non-participants in the project

Here are some publications and resources of interest for the project:

Language design:

  • [SOW97] Martin Sulzmann, Martin Odersky, Martin Wehr, Type inference with constrained types, TAPOS, 1997.
  • [Ruy06] Frédéric Ruyer, Preuves, types et sous-type, PhD thesis, 2006, supervised by C. Raffalli.
  • [HW04] Christian Haack and J. B. Wells, Type error slicing in implicitly typed higher-order languages, Sci. Comput. Programming, 50:189-224, 2004.

Termination:

  • [LJ01] C. S. Lee, N. D. Jones and A. M. Ben-Amram, The size-change principle for program termination, ACM SIGPLAN Notices, 2001
  • [Abel04] Andreas Abel, Termination Checking with Types, Special Issue: Fixed Points in Computer Science (FICS'03), RAIRO 2004
  • [Bar04] G. Barthe, M. J. Frade and E. Giménez, L. Pinto and T. Uustalu, Type-Based Termination of Recursive Definitions, 2004, Mathematical Structures in Computer Science.
  • [Gie06] J. Giesl, S. Swiderski, P. Schneider-Kamp, and R. Thiemann Automated Termination Analysis for Haskell: From Term Rewriting to Programming Languages, Proceedings of the 17th International Conference on Rewriting Techniques and Applications (RTA-06), Lecture Notes in Computer Science 4098.

the Clean language:

  • [AGR92] Peter Achten, John van Groningen and Rinus Plasmeijer (1992). High-level specification of I/O in functional languages, Proc. of the Glasgow workshop on Functional programming, ed. J. Launchbury and P. Sansom, Ayr, Scotland, Springer-Verlag, Workshops in Computing, pp. 1-17.
  • [AcP95] Peter Achten and Rinus Plasmeijer (1995). The Ins and Outs of CONCURRENT CLEAN I/O, Journal of Functional Programming, 5, 1, pp. 81-110.
  • [AcP97] Peter Achten and Rinus Plasmeijer (1997). Interactive Functional Objects in CLEAN, Proc. of the 1997 Workshop on the Implementation of Functional Languages (IFL'97), ed. K. Hammond Davie, T., and Clack, C., St.Andrews, Scotland,
  • [VPA07] Edsko de Vries, Rinus Plasmeijer and David M. Abrahamson, Uniqueness Typing Simplified,
  • the language report by Rinus Plasmeijer and Marko van Eekelen,
  • the language homepage.

Involvement of project participants in other grants, contracts, etc.

  • Frederic Blanqui is involved in the Sino-French ANR SIVES 2009-2011 on SImulation and Verification based platform for developing Embedded Systems.
  • Pierre Hyvernat and Lionel Vaux are members of the ANR CHOCO ("Curry Howard for COncurrency"), whose leader is Thomas Ehrhard.
  • Marianne Simonot and Maria-Virginia Aponte are members of the ANR REVE ("Safe Reuse of Embedded components in heterogeneous enVironmEnts"), whose leader is Lionel Seinturier at INRIA-ADAM.
  • Julien Forest and Olivier Pons are involved in the ANR A3Pat whose leader is Xavier Urbain.
  • Olivier Pons is also a member of the ANR [Focal] (certified development, compiler and framework).