[MLton] SPACE'04 and POPL'04 Trip Report

Matthew Fluet fluet@cs.cornell.edu
Tue, 20 Jan 2004 15:37:46 -0500 (EST)


SPACE'04: Talks
--------------------
Stefan Monnier gave a long talk on "Typed Regions".  The presentation was
a bit rushed, but the essence is to give regions self-describing types,
rather than a fixed, static type.  This supports strong (type modifying)
updates.  The paper sketches how to implement a (simple) generational GC in
such a system.  I haven't read the paper in detail; one downside is that
the type language is the calculus of inductive constructions (CiC), so the
type-system occasionally needs to appeal to a CiC proof term.

Richard Bornat gave a long talk on "Local reasoning, separation and
aliasing"; it, along with the some of the other separation logic talks, is
pushing Hoare-style correctness proofs into the realm of pointer
structures.  This talk deals with representing DAGs and alias preserving
copying of DAGs.

I gave a short talk on "Monadic Regions" -- encoding the Tofte-Talpin
region calculus into System F extended with monadic region operations.
The inspiration
is the Haskell runST operation and the ST monad.
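
For the SML-inclined, the flavor of the interface is roughly the following
signature (my own rendering, not the paper's calculus; the names are made
up):

signature MONADIC_REGION =
sig
  type ('s, 'a) st            (* computation indexed by region 's *)
  type ('s, 'a) mref          (* mutable cell allocated in region 's *)
  val return : 'a -> ('s, 'a) st
  val bind   : ('s, 'a) st * ('a -> ('s, 'b) st) -> ('s, 'b) st
  val new    : 'a -> ('s, ('s, 'a) mref) st
  val read   : ('s, 'a) mref -> ('s, 'a) st
  val write  : ('s, 'a) mref * 'a -> ('s, unit) st
end

The key operation, Haskell's runST :: (forall s. ST s a) -> a, needs
rank-2 polymorphism, which SML can't express; that quantifier is exactly
what keeps region-allocated references from escaping the region's
lifetime.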

Ivana Ijajlovic gave a short talk on "Refinement in a Separation Context".
A separation context describes a client program that does not dereference
the internals of a module with which it interacts.  Hence, a client which
is a separation context for an abstract module is also a separation
context for any refinement (implementation) of that module.  Again,
separation logic is used to precisely describe the memory owned by the
module.

Don Syme gave a (short) talk on "Combining Generics, Pre-compilation and
Sharing Between Software-Based Processes" which discusses some of the
implementation details of adding parametric polymorphism to the CLR and
.NET framework.  The JIT implements generics by specialization (on
representations); it therefore becomes desirable to share such
specializations between processes in the same application domain.  This
leads to some GC-style issues -- how and when to reclaim JITed code.
Quantitative measurements haven't been made yet, so it will be interesting
to see how things work out in practice.

Chris Hawblitzel gave a long talk on "Low-Level Linear Memory Management."
They have a surprisingly expressive system for taking a truly low-level
approach: memory consists of a single linear array of words.  He showed
how to build up to regions that can grow dynamically; the key idea seems
to be a "delayed" type, which allows the actual type of an expression to
be filled in later.  Using these delayed types, they can build up
type-describing functions; these are further used to implement a simple
copying collector, where type-functions describe the data over successive
iterations of the GC.  The GC is quite low level (more so than the one Dan
and I presented), in that it supports scanning Cheney queues, and data
layout and tag information are explicit down to the last bit.  Still,
there are a lot of explicit coercions and the proof theory isn't clearly
the "Right Thing".

I gave a long talk on "Implementation and Performance Evaluation of a Safe
Runtime System in Cyclone" (joint work with Dan Wang).  We implemented a
copying GC for Scheme (along with a simple Scheme interpreter) in Cyclone.
The GC pushes on Cyclone's memory management; the interesting bit is
typing forwarding pointers.  For that, we introduced "dynamic region
sequences" which give an infinite sequence of regions, along with a linear
capability to generate successive regions.  This gets the right behavior
in terms of deallocating the From-space and preserving cycles in the heap.
We evaluated it against MzScheme, SISC (a Java Scheme interp), and a
Cyclone interpreter using BDW GC.  MzScheme wins out by a long shot, but
we were pleasantly surprised that the Cyclone GC was on par with (and
often better than) the Java interp.  It's not quite an apples-to-apples
comparison, but I think it is good evidence that the whole line of
research in type-systems for GC could work out in practice once the theory
is powerful enough.
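
For anyone who hasn't stared at a copying collector recently, the
forwarding-pointer problem is the strong update in the toy Cheney-style
sketch below: a from-space cell changes from holding data to holding a
forwarding pointer, and the type system has to bless that.  (This is plain
SML, just to show the discipline; it is not our Cyclone code.)

datatype value = Int of int | Ptr of int        (* Ptr = index into a space *)
datatype cell  = Cons of value * value          (* live data *)
               | Forwarded of int               (* already copied: new index *)

fun collect (from : cell array, roots : value list) =
  let
    val to   = Array.array (Array.length from, Cons (Int 0, Int 0))
    val next = ref 0                            (* allocation pointer in to-space *)
    val scan = ref 0                            (* Cheney scan pointer *)
    fun copy (Int n) = Int n
      | copy (Ptr i) =
          (case Array.sub (from, i) of
              Forwarded j => Ptr j              (* sharing and cycles preserved *)
            | c as Cons _ =>
                let val j = !next
                in
                  Array.update (to, j, c);
                  Array.update (from, i, Forwarded j);  (* the strong update *)
                  next := j + 1;
                  Ptr j
                end)
    val roots' = map copy roots
    fun loop () =
      if !scan < !next
        then (case Array.sub (to, !scan) of
                 Cons (a, b) =>
                   (Array.update (to, !scan, Cons (copy a, copy b));
                    scan := !scan + 1;
                    loop ())
               | Forwarded _ => raise Fail "to-space cells are never forwarded")
        else ()
  in
    loop ();
    (to, roots')
  end

The dynamic region sequences are what let us deallocate the from-space
(and obtain a fresh to-space) on each collection while keeping the
forwarded cells well typed.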

Steffen Jost gave a short talk on "lfd_infer: an Implementation of a
Static Inference on Heap Space Usage" which implements the techniques
described in his POPL'03 paper, giving heap space bounds on first-order
functional programs.  Constraints are generated and sent to an LP solver.

Nicholas Nethercote gave a short talk on "Bounds-Checking Entire Programs
without Recompiling", which instruments binaries to check all pointer
operations.

David Detlefs gave a short talk on "Automatic Inference of Reference-Count
Invariants".  The context is Java/OO, where the notion of ownership is
used to determine when objects become unreachable (and thus amenable to
explicit deallocation).  The technique attempts to find class invariants
(i.e., invariants that hold when a class instance is created and are
preserved by arbitrary class method calls) by abstract interpretation.  It's
preliminary work, but the idea seems plausible.

Oukseh Lee gave a short talk on "Experiments on the Effectiveness of an
Automatic Insertion of Safe Memory Reuses into ML-like Programs".  The
goal of the analysis is to transform:

fun insert i l =
  case l of
     [] => i::nil
   | h::t => if i < h then i::l
             else h::(insert i t)

to

fun insert i (l, b) =
  case l of
     [] => i::[]
   | h::t => if i < h then i::l
             else let val z = insert i (t,b)
                  in (free l when b; h::z)
                  end

The (free l when b; h::z) will reuse the heap cell l for the allocation
h::z when the boolean b indicates that l is not used after the call to
insert.  Oukseh didn't describe the analysis (it's described in some of
his previous work); he focused on some experimental results, which showed
that peak memory usage could be greatly reduced.  Unfortunately, there is
an associated runtime cost with the passing of dynamic flags, and reusing
cells can penalize a generational GC (they compile to OCaml and use the
OCaml runtime).  Unfortunately, I didn't get a chance to talk to Oukseh
afterwards; one of my principal questions is whether or not the
transformation could be used to specialize for the reuse/non-reuse cases
and dispatch to the right one at the call-sites.
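
To make the question concrete, the specialization I have in mind would
look something like the following (plain SML; the names are mine, and the
comment stands in for the compiler-inserted free, which SML can't
express):

fun insertFresh (i, l) =                 (* caller still needs l: never free *)
  case l of
     [] => [i]
   | h :: t => if i < h then i :: l else h :: insertFresh (i, t)

fun insertReuse (i, l) =                 (* caller is done with l *)
  case l of
     [] => [i]
   | h :: t =>
       if i < h then i :: l
       else let val z = insertReuse (i, t)
            in (* the cell of l would be freed and reused here for h :: z *)
               h :: z
            end

(* a call site with a dynamic flag dispatches once, instead of threading
   b through every recursive call *)
fun insertAt (b, i, l) = if b then insertReuse (i, l) else insertFresh (i, l)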

Mike Hicks gave a short talk on "Combining Garbage Collection and Safe
Manual Memory Management" which describes Cyclone and its memory
management.

Daniel Spoonhower gave a short talk on "Incremental Copying Collection
with Pinning", which implemented a mostly-copying collector for C#
providing support for pinned and large objects.  They (lazily) track page
residency, which is used to determine when to promote pages (i.e., move
the page to to-space instead of copying individual objects), which is the
"Right Thing" to do for large and pinned objects, but can also be used for
long-lived objects.


POPL'04: Talks
--------------------

"Incremental Execution of Transformation Specifications"
Program transformations (e.g., optimizations) can sometimes be described
in a declarative style, with a predicate analysis and rewrite rules.
Since analyses can be expensive to recompute, it is useful to
incrementally update the analysis after each transformation (enabling or
disabling future rewrites).  Constraints are generated from the
declarative specification and represented as BDDs.  They implemented their
techniques in Standard ML, using the MuDDy package to interface with the C
BuDDy system.  They do use MLton, but I imagine the bottleneck is really
the BDDs, not the SML.
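
For flavor, the general shape of such a specification (not their system)
might look like the following in SML: a rewrite rule enabled by an
analysis predicate, where the paper's contribution is updating the
analysis facts incrementally as rewrites fire rather than recomputing them
from scratch:

datatype exp = Num of int | Var of string | Add of exp * exp

(* rewrite rule: Add (Num x, Num y) ~~> Num (x + y), enabled whenever the
   (here trivial) analysis says both arguments are constants; applied
   bottom-up *)
fun fold (Add (a, b)) =
      (case (fold a, fold b) of
          (Num x, Num y) => Num (x + y)
        | (a', b')       => Add (a', b'))
  | fold e = e

In their setting the enabling predicates are real dataflow facts
represented as BDDs, which is where the incrementality pays off.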

"Formalization of Generics for the .NET Common Language Runtime"
The "Theory" to which the SPACE talk is the "Practice."  This handles the
notion of mixed-specialzation (i.e., specialization up to representation)
and sharing and efficient support for run-time types (rep types).

"Semantic Types: A Fresh Look at the Ideal Model for Types"
A very POPLesque technical talk, but I liked it.  It gives a
generalization of the ideal model for recursive polymorphic types; I
liked the fact that they prove the existence of a fixpoint by limits,
rather than relying on metric arguments.

"Polymorphic Typed Defunctionalization"
Francois Pottier presented this work; I thought it was well presented.
The basic problem is that the code for apply should be of the form:
/\a1./\a2.\f:[a1 -> a2].\arg:a1. case f of
                                    succ_lambda => arg + 1
                                  | not_lambda => not arg
                                  | ...
but the branches of the case aren't polymorphic enough.  The solution is
to use Hongwei Xi's guarded recursive datatypes, which essentially
constrain a constructor in a polymorphic datatype to ensure that the type
variable is instantiated by a particular ground type.  All of the
preservation goes through.  Some audience members seemed to be suggesting
that this could all be done in a system with typecase, but I don't think
that is the case.  At least, it's not a fair comparison, because typecase
requires passing types at runtime, while Pottier's target language (I
think) supports type-erasure.  To be sure, as Xi pointed out in his work,
the guarded datatypes can be used to give type representations.  As a very
cute final note, Pottier pointed out that Xi's printf using guarded
recursive datatypes corresponds to a hand defunctionalization of Danvy's
continuation-based approach to printf.
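
For anyone who hasn't seen defunctionalization itself, the monomorphic
version is easy to write in SML; the paper's problem only appears once the
defunctionalized functions have different types, as in the apply above
(the constructor names below are made up):

datatype lam = SuccLam                   (* fn x => x + 1 *)
             | AddLam of int             (* fn x => x + n, capturing n *)

fun apply (SuccLam, arg)  = arg + 1
  | apply (AddLam n, arg) = arg + n

(* map (fn x => x + 1) [1, 2, 3]  becomes  map' (SuccLam, [1, 2, 3]) *)
fun map' (_, [])      = []
  | map' (f, x :: xs) = apply (f, x) :: map' (f, xs)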

"Free Theorems in the Presence of seq"
Details on why the free theorems (used for deforestation and fusion
optimizations in lazy languages) break down in the presence of seq, why
the folklore solution doesn't work, and what the right solution is.
Essentially, one obtains inequalities, rather than equalities,
corresponding to the fact that with seq, one side could go to bottom
before the other.

"Parsing Expression Grammars: A Recognition-Based Syntactic Foundation"
The theory side of Bryan Ford's Packrat Parsing implementation from
ICFP'02.  It proposes Parsing Expression Grammars (PEGs) over CFGs, which
may be better suited to machine-oriented languages.  The motivation is
pretty nice -- parsing some non-context-free languages, and "unified
language definitions" which avoid the need for both a lexer (regexps) and
a parser (CFGs).  The theorems and proofs read like basic Theory of
Computation work, but are nice.  I know that Bryan's previous
implementation is in Haskell; I'm curious about trying to implement it in
SML.
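
Something like the following combinators would be the obvious
(non-memoizing) starting point; this is my sketch, not Ford's code, and a
real packrat parser would memoize each result per input position to get
the linear-time guarantee:

type 'a parser = char list -> ('a * char list) option

fun chr c : char parser =
  fn (x :: rest) => if x = c then SOME (c, rest) else NONE
   | []          => NONE

(* sequencing: run p, then q on the remaining input *)
fun seq (p, q) = fn s =>
  case p s of
     SOME (a, s') => (case q s' of
                         SOME (b, s'') => SOME ((a, b), s'')
                       | NONE => NONE)
   | NONE => NONE

(* PEG choice is ordered: try p, and only if p fails try q; this is what
   distinguishes PEGs from CFG alternation *)
fun alt (p, q) = fn s => case p s of NONE => q s | r => r

(* negative lookahead (!e): succeed, consuming nothing, iff p fails *)
fun notP p = fn s => case p s of NONE => SOME ((), s) | SOME _ => NONE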

"A Bisimulation for Dynamic Sealing"
An approach to the abstract datatype problem in open, dynamic settings.
Seals apparently coincide with perfect encryption, although this
connection wasn't discussed in the talk.  The real result appears to be
that the bisimulation technique can be extended to work with sealing in a
straightforward manner (cleaning up some of the results in the authors'
previous work).

"An Abstract Interpretation-Based Framework for Software Watermarking"
A new proposal for watermarking software; the essential idea is to use
abstract interpretation (under a non-standard semantics) to extract a
watermark from code.  (The instance the authors give for a non-standard
semantics is arithmetic modulo some large prime.)  Because the added code
has some meaning in the execution semantics of the program, it should be
difficult to remove.  On the other hand, the process of watermarking the
program must preserve the (execution) semantics of the original program.
As a number of audience members pointed out, the technique proposed in the
talk (insertion of watermarking code at the source level) is not robust
against an optimizing
compiler (because the watermarking variables do not contribute to the
final answer).  So, it does seem as though there are still practical
problems to be overcome; note, however, that the author and the audience
are in a sense both right -- the issue is that they are preserving the
program under different semantics.  For the abstract interpretation (and
its corresponding execution) semantics, the value of every variable
contributes to the semantic state.  For the optimizing compiler, the
observable effects are the semantics that must be preserved, so it can
freely drop the useless variables, at the expense of changing the meaning
of the program relative to the abstract interpretation semantics.
As a separate issue that occurred to me, it seems as though there is the
possibility that the watermark extraction technique could lead to false
positives.  I guess it depends on the context in which the watermarking is
used, but I could see arguments both for and against allowing false
positives.

"Abstract Non-Inteference: Parameterizing Non-Inteference by Abstract
Interpretation"
It's easiest to describe with an example:
  while xH do (xL := xL + 2; xH := xH - 1)
where xH is a high/secret variable and xL is a low/public variable.
Clearly there is an implicit flow from xH to xL, as changing the initial
value of xH will change the final value of xL.  However, if only the
parity of xL can be observed after the fact, then there is no
interference, as xL will have the same parity as it started, regardless of
the initial value of xH.  I was sitting next to Andrew Myers during the
talk, and if he thought they were on the mark, then it's probably pretty
good research.
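
For concreteness, the example is easy to check; here is a functional
rendering of the loop in SML, the point being that the final xL depends on
the initial xH but its parity does not:

fun run (xH, xL) = if xH <= 0 then xL else run (xH - 1, xL + 2)

val _ = (run (3, 5), run (10, 5))                  (* (11, 25): different values *)
val _ = (run (3, 5) mod 2, run (10, 5) mod 2)      (* (1, 1): same parity *)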

"The Space Cost of Lazy Reference Counting"
Folklore suggests that reference counting avoids the pauses associated
with tracing GCs.  Experimental results (on pathological examples) show
that RC can exhibit longer pauses than tracing GCs.  A theoretical
analysis shows that RC with lazy deletion can require \Omega(R) space
overhead when each operation is required to take constant time, where R is
the ratio between the largest and smallest object.  Such a bound is
achievable, but only in pathological cases, so it probably doesn't impact
applications in practice.  Allowing operations to perform O(n) work for an
allocation of size n largely alleviates the space overhead.  Hans Boehm
gave a very understandable talk.

"Local Reasoning about a Copying Garbage Collector"
Another application of separation logic, this time to the problem of the
correctness of copying GC.  The speaker (Noah Torp-Smith) pointed out that
one novelty of this work over previous results is that the entire proof is
justified by the proof rules; i.e., no appeal to semantic arguments.
This means that the proof can be formally checked.  Another nice aspect of
the proof is that it makes use of local reasoning (e.g., the algorithm for
copying a single cell can be proved correct with respect to a frame
condition that describes the rest of the heap), which leads to an
understandable division of the proof.

"Separation and Information Hiding"
Very similar to Ivana Ijajlovic's SPACE talk, it deals with module/client
correctness.  I'm afraid I don't have the expertise to judge the two
results against one another, but both seem like steps in the right
direction for this line of research.

"Tridirectional Typechecking"
The type-checking problem for a CBV language with rich types (intersection
and union types, and universal and existential dependent types a la Xi)
but no type annotations is inherently undecidable.  They extend the
bidirectional type-checking approach to a tridirectional system, where
sub-terms in evaluation position can be checked before their context.  It
wasn't clear to me what motivates the need for such a system; a good
question raised at the end was whether or not there is a notion of
minimal annotations; the speaker said that is an area of future work.
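
For reference, the bidirectional starting point is easy to sketch in SML;
this is just the standard simply-typed version (not the paper's system),
where synthesis and checking are the two directions and the tridirectional
system adds a third judgment for sub-terms in evaluation position:

datatype ty  = Base | Arrow of ty * ty
datatype exp = Var of string | Lam of string * exp | App of exp * exp
             | Ann of exp * ty                     (* type annotation *)

fun lookup ctx x = Option.map #2 (List.find (fn (y, _) => y = x) ctx)

(* synthesis mode: compute a type from the term *)
fun synth ctx (Var x)      = lookup ctx x
  | synth ctx (Ann (e, t)) = if check ctx e t then SOME t else NONE
  | synth ctx (App (f, a)) =
      (case synth ctx f of
          SOME (Arrow (t1, t2)) => if check ctx a t1 then SOME t2 else NONE
        | _ => NONE)
  | synth _   (Lam _)      = NONE    (* lambdas don't synthesize; they check *)

(* checking mode: push a known type into the term *)
and check ctx (Lam (x, e)) (Arrow (t1, t2)) = check ((x, t1) :: ctx) e t2
  | check ctx e t = (case synth ctx e of SOME t' => t' = t | NONE => false)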

"A Type System for Well-Founded Recursion"
Recursive modules for (S)ML would be nice, but we need the right static
checks.  Dreyer proposes a type-and-effect style system that tracks the
uses of recursive variables to ensure that they only appear in latent
effect positions (i.e., under a lambda).  This leads to the natural
back-patching semantics without requiring run-time checks.  Unfortunately,
in my opinion, Dreyer drops down to a core-ML-like language for the
formalism, so some of the tricky issues with recursive modules (like
generativity of datatypes, recursive functors, and mutual recursion
between structures and signatures) aren't dealt with; this is far from a
proposal for Standard ML.
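
For those who haven't seen back-patching, here is a small SML simulation
using an option ref (my rendering, not Dreyer's; in his proposal the
latent-effect discipline guarantees the cell is never read before it is
patched, so no dynamic check or valOf would be needed):

datatype stream = SCons of int * (unit -> stream)

val ones : stream =
  let
    val cell = ref (NONE : stream option)
    (* the recursive reference appears only under a lambda (a latent
       effect), so it isn't forced while the right-hand side evaluates *)
    val s = SCons (1, fn () => valOf (!cell))
  in
    cell := SOME s; s
  end

fun take 0 _ = []
  | take n (SCons (x, rest)) = x :: take (n - 1) (rest ())

(* take 5 ones = [1, 1, 1, 1, 1] *)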

"Global Value Numbering using Random Interpretation"
This follows Gulwani and Necula's POPL'03 paper, extending the notion of
random interpretation to uninterpreted function symbols.  The basic idea
is that if one associates a linear expression (over vectors) with each
function symbol, then abstract interpretation on random initial values
with random weighted merges at join points will preserve equality but may
(with low probability) fail to preserve inequality.  Hence, one can use
this as a probabilistic analysis for global value numbering -- with very
low probability, two expressions that are unequal will be determined
equal.  The advantage of admitting this unsoundness is that the analysis
is more precise and faster than sound analyses.  Suresh, you mentioned
your compilers seminar; of all the POPL papers, this would be the most
interesting to that sort of audience.
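
For the seminar, the flavor of the idea is easy to demo; the following SML
toy is just the underlying random-evaluation trick over a prime field (not
the paper's treatment of uninterpreted function symbols or of random
affine joins), with arbitrary constants and sample points:

datatype exp = Var of string | Num of int
             | Add of exp * exp | Mul of exp * exp

val p = 7919                                        (* some prime *)

fun eval env (Var x)      = (case List.find (fn (y, _) => y = x) env of
                                SOME (_, v) => v
                              | NONE => 0)
  | eval env (Num n)      = n mod p
  | eval env (Add (a, b)) = (eval env a + eval env b) mod p
  | eval env (Mul (a, b)) = (eval env a * eval env b) mod p

(* (a + b) * (a + b)  versus  a*a + 2*a*b + b*b : syntactically different
   but semantically equal, so they agree at every sample point; two
   unequal expressions would agree only with small probability *)
val e1 = Mul (Add (Var "a", Var "b"), Add (Var "a", Var "b"))
val e2 = Add (Mul (Var "a", Var "a"),
              Add (Mul (Num 2, Mul (Var "a", Var "b")),
                   Mul (Var "b", Var "b")))
val agree = eval [("a", 1234), ("b", 4321)] e1
            = eval [("a", 1234), ("b", 4321)] e2    (* true *)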