Mon, 27 Aug 2001 16:10:52 -0700
> It sounds like you like all of my ideas except for the run-time cost. I haven't
> actually produced a test of it yet (I will later today),
CVS is *much* more important.
> but my claim was that
> I can't believe that it would EVER be more than a factor of 4 times slower.
> Note that if the program isn't allocating much, then the overhead is irrelevant,
> and if it is, then the temporary area and allocation bins are all going to
> be in L1 cache, which will make this very fast. I think that it will really
> not hurt much at all.
> I'll do some tests today hacking up some assembler code to see how much of a
> burden it is.
How about my idea of an array with a bin per allocation point? It seems to have
both low time and space overhead.
Also, I don't completely understand your linked list approach. How do you get
from a code address to the appropriate bucket? Is a pointer stored in the code?
Or is it like my approach, with a per-code-point slot known at compile time? If
so, then the only difference between our approaches is that mine allocates all
the bins statically in an array, while yours allocates them dynamically. As
I've argued, the most we're gonna see is 60,000 bins, so why not allocate them