measurements of ML programs
Stephen Weeks
MLton@sourcelight.com
Fri, 13 Oct 2000 15:57:15 -0700 (PDT)
> So given these caveats, what do you think about going ahead with the
> measurements?
I think that such numbers can be interesting, especially when gathered over a
large sample of real programs. But I am skeptical about how useful they will
actually be to compiler writers. Personally, I would be interested in the
exception numbers, since I haven't seen those for SML. The call counts and
polymorphism counts are less interesting from my perspective as a MLton compiler
writer, since they depend so heavily on compilation strategy.
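To make "exception numbers" concrete, here is a rough sketch (in SML) of the
kind of dynamic count I mean, gathered by hand-instrumenting raise sites; the
raiseCount counter and the countedRaise wrapper are purely illustrative names,
not part of any existing tool.

(* Count raises dynamically by routing them through a wrapper. *)
val raiseCount: int ref = ref 0

fun countedRaise e = (raiseCount := !raiseCount + 1; raise e)

exception Empty

(* An instrumented version of List.hd. *)
fun hd [] = countedRaise Empty
  | hd (x :: _) = x

val () =
   ((ignore (hd ([]: int list)) handle Empty => ())
    ; print ("raises: " ^ Int.toString (!raiseCount) ^ "\n"))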
More interesting to me would be something akin to the discussion currently
taking place on comp.lang.ml about the raytracing benchmark, but with even more
detail
and across a wider range of benchmarks. I think all of the compiler writers
involved find it helpful to understand where the time is actually spent in
running programs, and to try to draw conclusions about what decisions in the
compiler (or library) are responsible.
Again, sticking with "real" numbers rather than source instrumentation, I would
find it more interesting to see an attempt to quantify the performance costs of
various language features (exceptions, polymorphism, loops, ref cells,
higher-order functions, separate compilation) and library features (reals, IO)
across compilers.
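To make this concrete, here is a rough sketch (in SML) of the kind of
cross-compiler microbenchmark I have in mind: the same failing lookup written
once with exceptions and once with options, timed in a loop, with the whole
program compiled under each compiler. The workload, iteration count, and output
format are arbitrary choices for illustration, and a real comparison would have
to control for what each compiler inlines or optimizes away.

exception NotFound

(* Lookup that signals failure with an exception. *)
fun findExn (x, []) = raise NotFound
  | findExn (x, y :: ys) = if x = y then y else findExn (x, ys)

(* The same lookup, signalling failure with an option. *)
fun findOpt (x, []) = NONE
  | findOpt (x, y :: ys) = if x = y then SOME y else findOpt (x, ys)

fun repeat (0, _) = ()
  | repeat (n, f) = (f (); repeat (n - 1, f))

(* Time a unit -> unit function using the Basis Timer structure. *)
fun time (name, f) =
   let
      val timer = Timer.startRealTimer ()
      val () = f ()
      val elapsed = Timer.checkRealTimer timer
   in
      print (concat [name, ": ", Time.toString elapsed, " s\n"])
   end

val xs = List.tabulate (100, fn i => i)

val () =
   (time ("exception", fn () =>
             repeat (1000000, fn () =>
                        ignore (findExn (~1, xs)) handle NotFound => ()))
    ; time ("option", fn () =>
               repeat (1000000, fn () =>
                          case findOpt (~1, xs) of
                             NONE => ()
                           | SOME _ => ())))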
I guess what I am trying to say is that source-level measurements, even dynamic
instrumentation, often try to be too general and to apply to too many compilers,
at the cost of helping no one. So, if I had my druthers, I would prefer to see
numbers that are more directly related to performance.