[MLton-devel] streamIO
Matthew Fluet
fluet@CS.Cornell.EDU
Sat, 18 Jan 2003 17:20:56 -0500 (EST)
Here are the benchmark results:
MLton0 -- mlton-stable
MLton1 -- mlton.cvs.HEAD

compile time
benchmark      MLton0  MLton1
wc-input1        4.09    4.78
wc-scanStream    4.18    5.09

run time
benchmark      MLton0  MLton1
wc-input1       70.14   69.66
wc-scanStream   75.53   89.38

run time ratio
benchmark      MLton1
wc-input1        0.99
wc-scanStream    1.18

size
benchmark      MLton0  MLton1
wc-input1      87,749  91,389
wc-scanStream  88,485  94,069
So, much better than previously.
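For reference, here is roughly what the two kernels look like (a sketch of a
wc-style line count, not the actual benchmark sources, which live in the MLton
benchmark suite): wc-input1 reads one character at a time with TextIO.input1,
while wc-scanStream runs the same loop over the functional stream reader via
TextIO.scanStream.

   (* Sketch only; the real benchmarks may differ in detail. *)

   (* wc-input1 style: one character per TextIO.input1 call. *)
   fun wcInput1 (ins: TextIO.instream): int =
      let
         fun loop n =
            case TextIO.input1 ins of
               NONE => n
             | SOME #"\n" => loop (n + 1)
             | SOME _ => loop n
      in
         loop 0
      end

   (* wc-scanStream style: the same loop written against the functional
    * stream reader and run with TextIO.scanStream.
    *)
   fun wcScanStream (ins: TextIO.instream): int =
      let
         fun count reader strm =
            let
               fun loop (strm, n) =
                  case reader strm of
                     NONE => SOME (n, strm)
                   | SOME (#"\n", strm') => loop (strm', n + 1)
                   | SOME (_, strm') => loop (strm', n)
            in
               loop (strm, 0)
            end
      in
         getOpt (TextIO.scanStream count ins, 0)
      end

The scanStream version is the one that exercises the new functional stream
layer, so that is where any extra per-character overhead shows up.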
I'm still curious about one anomaly. With the old IO, I get:
[fluet@localhost stable.time]$ mlprof -thresh 5 -raw true wc-scanStream mlmon.out
78.02 seconds of CPU time (2.12 seconds GC)
function           cur   raw      stack raw      GC   raw
------------------ ----- -------- ----- -------- ---- -------
<Posix_IO_read>    18.0% (14.46s) 18.0% (14.46s) 0.0% (0.0s)
While with the new IO, I have:
[fluet@localhost cvs.HEAD.time]$ mlprof -thresh 5 -raw true wc-scanStream mlmon.out
97.79 seconds of CPU time (4.85 seconds GC)
function           cur   raw      stack raw      GC   raw
------------------ ----- -------- ----- -------- ---- -------
<Posix_IO_read>    25.5% (26.13s) 25.5% (26.13s) 0.0% (0.0s)
Now, the number of calls to Posix_IO_read, and the arguments passed to it, are
the same in both programs.  The only explanation I can think of is that the new
IO is just a little slower, so the difference in time between successive calls
is enough to change how the system responds to the reads (caching, paging,
etc.).
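One way to separate the raw cost of the reads from the cost of the ML code run
between them would be to time Posix.IO.readVec directly over the same input.
A minimal sketch (the 4096-byte buffer size and the helper name are my own
assumptions, not taken from the benchmarks):

   (* Time the bare Posix.IO.readVec calls over a file. *)
   fun timeReads (path: string) =
      let
         val fd = Posix.FileSys.openf (path, Posix.FileSys.O_RDONLY,
                                       Posix.FileSys.O.flags [])
         fun loop (calls, total) =
            let
               val start = Time.now ()
               val v = Posix.IO.readVec (fd, 4096)
               val total = Time.+ (total, Time.- (Time.now (), start))
               val calls = calls + 1
            in
               if Word8Vector.length v = 0
                  then (calls, total)
               else loop (calls, total)
            end
         val (calls, total) = loop (0, Time.zeroTime)
      in
         Posix.IO.close fd
         ; print (concat [Int.toString calls, " reads, ",
                          Time.toString total, " seconds in readVec\n"])
      end

If the per-call times agree across the two executables, that would point at
the pacing of the calls (and the kernel's response to it) rather than the
reads themselves.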