Peyton-Jones
Matthew Fluet
fluet@cs.cornell.edu
Thu, 7 Feb 2002 18:11:08 -0500 (EST)
I'm with Suresh and Henry on this -- we shouldn't turn down a good
opportunity to highlight MLton. And, I think the limit check and
exceptions papers are a little further off than Steve claims. I, for one,
haven't put much thought into "better" exceptions than naively translating
HandlerPush/Pop. I know Steve has a bigger story for the initial
translation of exceptions, but we ought to have a good account of how we
can capitalize on it. Likewise with limit checks: I don't really have any
major hangups with the static or dynamic counts that we've got, but since
the runtime effect is negligible, at best we're making an "it feels
better" claim, and that's probably offset a bit by the fact that computing
loop forests is certainly more expensive than the naive per-block or even
per-extended-basic-block insertion. Honestly, what I'd like to shoot for is an
insertion that chooses a statically minimal number of limit check points
and inserts them in a way that minimizes the number of dynamic checks;
then fix it up so that we don't hoist limit checks into non-allocating
loops. It would still probably be just a "feels better" result (as
opposed to a whopping runtime benefit), but at least it would be in a
direction in which there aren't pathological examples right at hand.
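To make the trade-off concrete, here is a toy sketch -- emphatically not
MLton's actual pass, and everything in it (the `blocks` map, the `alloc`
amounts, the `loop` flag, block order) is a made-up illustration. It
contrasts the naive per-block insertion (one check per allocating block)
with a coalescing insertion that merges consecutive allocations into a
single up-front check, while refusing to hoist a check into a
non-allocating loop:

```python
# Hypothetical sketch of heap-limit-check placement; not MLton's algorithm.
# Each block records how many bytes it allocates; blocks with loop=True and
# alloc == 0 model non-allocating loops that must not receive a hoisted check.

def naive_checks(blocks):
    """Per-block baseline: one limit check at every allocating block."""
    return {b for b, info in blocks.items() if info["alloc"] > 0}

def coalesced_checks(blocks, order):
    """Walk blocks in straight-line order, merging consecutive allocations
    into one combined check at the first allocating block of the run.
    A non-allocating loop flushes the pending run, so no check is ever
    hoisted into (or across) it."""
    checks = {}
    pending, start = 0, None
    for b in order:
        info = blocks[b]
        if info.get("loop") and info["alloc"] == 0:
            # Flush: keep the check before the loop, covering only the
            # allocation that actually precedes it.
            if start is not None:
                checks[start] = pending
            pending, start = 0, None
            continue
        if info["alloc"] > 0:
            if start is None:
                start = b
            pending += info["alloc"]
    if start is not None:
        checks[start] = pending
    return checks
```

On a straight-line A -> B -> L -> C with A and B allocating, L a
non-allocating loop, and C allocating, the naive scheme inserts three
checks while the coalesced scheme inserts two (one before A covering A and
B together, one before C), and nothing gets hoisted into L -- the "feels
better" static result, even if the dynamic count barely moves.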
> My thoughts as to the main approach of the paper are that it should
> neither be a formal paper with lots of ILs and transformations nor a
> benchmark paper with lots of static counts, dynamic counts, and
> analysis and run times. It should be a paper about how whole program
> compilation allows one to translate SML to a very simple IL, SSA, and
> do most optimizations there. I think the paper should show lots of
> example SML code, and the corresponding dot graph, and show how
> optimizations of SSA got us the optimizations we need.
>
> The only numbers that I think we need to show are some performance
> numbers (compile time, run time, code size) just to prove it all works
> and blows away other compilers. I would put those up front, as
> motivation to read the rest of the paper.
Methodology stuff too? I'm not quite sure what I mean by that; I guess
the property-list stuff goes in there. Certainly we should really bang on
the simplicity of the SSA IL; we don't have any hard numbers, but most of
the recent passes (commonBlock, commonSubexp, knownCase, localRef,
redundantTests) were each initially implemented in a couple of days and
pretty much bug-free within a week.
History/evolution might be interesting as well. Just thinking about it,
since August we've completely revamped two ILs (CPS -> SSA, MACHINE ->
RSSA/MACHINE), which entailed porting all of the CPS passes (mostly
straightforward, but in the process we improved removeUnused, contify, and
redundantTests, and later flatten and multi (affecting constantProp)).
I'm sure there is a lot of history predating my involvement as well.
Is "micro-evolution" interesting? For example, I'm thinking about the
treatment of overflow arithmetic (if we can piece it together), which has
evolved from unsupported, through being "hacked" into primitives, to
finally being a distinguished transfer.