Fri, 24 Aug 2001 11:14:55 -0700 (PDT)
> > You could just use 64-bit bins, but then the space requirements double again
> > (8 bytes of bin for every byte of object code) and now the generated code at
> > each allocation point is a 32-bit add to a 64-bit counter. That seems to push
> > harder for calling a procedure which can do it and do the linking to save
> > space.
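The trade-off described above (an inline add at every allocation point vs. a call to a shared routine) can be sketched in C. All of the names and the bin count here are illustrative, not MLton's actual runtime identifiers:

```c
#include <stdint.h>

#define NUM_BINS (1 << 20)      /* hypothetical: one 64-bit bin per byte of object code */
static uint64_t bins[NUM_BINS];

/* Inline version: the generated code at each allocation point performs a
 * 32-bit add into a 64-bit counter -- no call overhead, but the whole
 * instruction sequence is duplicated at every allocation point. */
#define INLINE_TICK(bin, amount) (bins[(bin)] += (uint32_t)(amount))

/* Procedure version: one shared routine does the add; each allocation
 * point emits only a call, which saves code space at each site. */
void tick(uint32_t bin, uint32_t amount) {
    bins[bin] += amount;
}
```

Either way the bin itself stays 64-bit; the question is only where the add instruction lives.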
> The largest program we know of is MLton, whose text segment is currently just
> under 6M. I can live with 48M (since it's not in the MLton heap), although I
> agree that's pushing it.
But that 48M competes with the heap for space. (Yes, we don't need to
look at it during a GC, but if we allocate the space before GC_init, that's
that much less for the heap; if we allocate the space after GC_init, we
might fail because the heap might already be taking up too much. Or maybe
it doesn't matter -- we'd just end up swapping sections of the bins in
and out.)
> Another possibility would be to coalesce the bins. How bad would it be to
> cut the number of bins in half?
It makes the computation of the right bin just a little more complicated.
It might also give "incorrect" results, where time-profile ticks for
blocks get shifted forward or backward. It's probably o.k. for space
profiling, because there would be enough "buffer" space (i.e., the code
that sets everything up for the allocation and the tick) between the start
of a block and the occurrence of the tick, and likewise between the tick
and the end of the block.
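Concretely, coalescing adjacent bins only adds a shift to the bin-index computation, which is why it's "just a little more complicated" -- and also why two neighboring code bytes can end up sharing a bin, smearing ticks across a block boundary. A hypothetical sketch (function and parameter names are mine, not the runtime's):

```c
#include <stdint.h>
#include <stddef.h>

/* With one bin per byte of text, the bin index is just the offset of the
 * program counter from the start of the text segment. */
static inline size_t bin_full(uintptr_t pc, uintptr_t text_start) {
    return pc - text_start;
}

/* Halving the number of bins adds one shift; two adjacent code bytes now
 * map to the same bin, which is what can shift ticks between blocks. */
static inline size_t bin_halved(uintptr_t pc, uintptr_t text_start) {
    return (pc - text_start) >> 1;
}
```

With a 6M text segment, halving would drop the 64-bit-bin space from 48M to 24M at the cost of this coarser mapping.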