[MLton] RE: card/cross map in heap
Matthew Fluet
fluet at tti-c.org
Thu Jul 17 13:42:08 PDT 2008
On Thu, 17 Jul 2008, Nicolas Bertolotti wrote:
> I don't think there is a way to predict what mmap() would be able to
> allocate if we dump to disk without actually dumping.
Agreed; it's just that that is exactly the information we'd like to have in
order to weigh the cost/benefit of paging to disk.
>>> In order to solve the issue, I have added the following piece of code at
>>> the beginning of the remapHeap() function (see the attached patch, based
>>> on the one you previously sent):
>>>   if (desiredSize > minSize && h->size == minSize)
>>>     minSize = align(minSize + backoff, s->sysvals.pageSize);
>>> I am not sure it is the best thing to do, but it works fine in my case.
>>
>> This simply demands 'proper' growth from resizeHeap when the current heap
>> is at minSize. As far as I can tell, this change will simply cause
>> remapHeap to bail out before attempting to mremap to the original minSize
>> (= the current heap size). Depending on rounding, it might attempt an
>> mremap at a size that is a few pages smaller than desiredSize - 15*backoff,
>> but it would be unusual for one to succeed and the other to fail.
>
> Not just a few pages smaller!
>
> With the patch, when the heap is 2 GB and desiredSize is 3 GB,
> remapHeap() will fail if it is not able to remap to a 2.05 GB heap, which
> is not that much larger than the current heap.
O.k. All I was getting at was that the sequences of sizes requested by
remapHeap with and without the patch are the same, except that without the
patch there is one more request, at a smaller size.
I was also mis-remembering the patch, which breaks out of the backoff loop
with

    } while (newSize > minSize);

I thought it matched the createHeap loop, which breaks out with

    } while (newSize >= minSize);

Without the >=, we never actually try to mremap at minSize.
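For concreteness, here is a rough sketch of the loop shape I have in mind;
the struct and names are my own invention, not the actual gc/heap.c code,
but the final comment marks the difference between the two exit conditions:

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct heap { void *start; size_t size; };

    /* Hypothetical sketch of the remapHeap backoff loop. */
    static bool remapHeapSketch (struct heap *h,
                                 size_t desiredSize, size_t minSize) {
      size_t backoff = (desiredSize - minSize) / 16;  /* cf. 15*backoff above */
      size_t newSize = desiredSize;
      if (backoff == 0) backoff = 1;  /* avoid a zero step on tiny ranges */
      do {
        void *p = mremap (h->start, h->size, newSize, MREMAP_MAYMOVE);
        if (p != MAP_FAILED) {
          h->start = p;         /* the mapping may have moved */
          h->size = newSize;
          return true;
        }
        newSize -= backoff;     /* page-size alignment omitted for brevity */
      } while (newSize > minSize);  /* '>' means minSize itself is never tried */
      return false;
    }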
>>
>> With the above patch, what behavior do you see? Does it go
>>   remapHeap fail; createHeap fail; write heap; createHeap succeed; read heap
>> or does it go
>>   remapHeap fail; createHeap succeed
>> And what are the sizes of the heaps that fail and succeed?
>
> I see the first behavior. desiredSize is about 3 GB and minSize is about
> 2 GB on a 4 GB machine. Before the disk dump, we fail to allocate those
> 2.05 GB. After the disk dump, we can allocate the 3 GB.
I guess mremap isn't as sophisticated as I thought. I'm grossly
oversimplifying here (which is probably why my mental model doesn't match
reality), but I thought that the OS kept a mapping from virtual address
space pages to whether or not each page has been reserved by the process
(via mmap) and, if so, to the physical memory page and/or file/swap page
backing that virtual address space page. On an mremap, it would remember
the physical/file/swap pages for the virtual address space pages being
mremap-ed, mark those virtual address space pages as unreserved, mark the
new range of virtual address space pages as reserved, and copy back the
physical/file/swap pages for the virtual address space pages that are in
common with the original mmap. [The whole motivation for mremap is that it
need not physically copy the pages; rather, it just shuffles around the
virtual/physical translation.] The point being that mremap would get a view
of the virtual address space as if the mapping had been unmap-ed, so that
it could reserve a larger size.
But, clearly, mremap is somewhat hampered by the fact that it is
re-mapping an existing mmap, whereas un-mapping (by paging to disk) and
doing a fresh mmap is able to grab a larger portion of the virtual address
space.
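For what it's worth, the move-without-copy behavior is easy to observe in
isolation. Here is an illustrative Linux-only test of my own (not runtime
code) that grows a mapping with MREMAP_MAYMOVE:

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>
    #include <string.h>

    int main (void) {
      size_t oldSize = 1UL << 20;   /* 1 MB */
      size_t newSize = 1UL << 21;   /* 2 MB */
      char *p = mmap (NULL, oldSize, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED) { perror ("mmap"); return 1; }
      strcpy (p, "heap contents");
      /* MREMAP_MAYMOVE lets the kernel pick a new base address; only the
       * virtual/physical translation is shuffled, not the data. */
      char *q = mremap (p, oldSize, newSize, MREMAP_MAYMOVE);
      if (q == MAP_FAILED) { perror ("mremap"); return 1; }
      printf ("moved from %p to %p; data: %s\n", (void *)p, (void *)q, q);
      munmap (q, newSize);
      return 0;
    }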
>> I'm not sure that there is a really principled way to make the decisions
>> about keeping vs dumping the current heap. What we really want is the
>> ability to ask the question: if I unmap this memory, can I mmap this size?
>> It seems that mremap should nearly approximate the answer to that question
>> (since it is allowed to move the base address of the mapping), but your
>> experience above seems to suggest that there are situations where we
>> cannot mremap from the current heap size to even a slightly larger heap,
>> but if we unmap (a portion of) the current heap (either by shrinking it or
>> paging it to disk and releasing it), then we can mmap something larger.
>
> I can easily reproduce the problem by compiling one of our binaries on a
> specific machine with a version of MLton that does not include the
> patch. I guess I'll investigate a bit more to identify the reason why we
> are able to allocate a larger area after paging to disk.
Oh, I'm sure that it is a real issue.
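As an aside, the closest cheap approximation to that question I can think
of would be to probe the address space before deciding. This is a sketch
of my own (the canMmap name and the whole idea are hypothetical, not
something the runtime does today), and it still cannot answer the real
question, since the current heap's reservation remains in the way:

    #include <sys/mman.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Can we mmap 'size' bytes right now? PROT_NONE + MAP_NORESERVE
     * reserves address space without committing memory; we release it
     * immediately after the check. */
    static bool canMmap (size_t size) {
      void *p = mmap (NULL, size, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
      if (p == MAP_FAILED)
        return false;
      munmap (p, size);
      return true;
    }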
The ideal solution, especially for a situation like yours, where you are
happy to use lots of memory on a dedicated machine, is to use
@MLton fixed-heap 3.5G -- to grab as large a heap as you can (one that
comfortably fits in physical memory) at the beginning of the program and
never bother resizing it. As I understand it, resizing exists only to
'play nice' with other processes on the system.
The problem with fixed-heap, though, is that the runtime starts off trying
to use the Cheney-copy collector (so, it really grabs 1/2 * 3.5G) and it
may be some time before it is forced to use the mark-compact collector,
and it is only at that point that the runtime will try to grab the 3.5G.
Since fixed-heap affects the desiredSize (but not the minSize), you really
need to set fixed-heap to a size that can actually be allocated, so that
desiredSize == currentSize and no resizing occurs.
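For example (with `foo' standing in for your executable, and assuming 3G
turns out to be what the machine can actually provide):

    ./foo @MLton fixed-heap 3G -- arg1 arg2

The runtime consumes everything between @MLton and --, and the program
itself sees only the arguments that follow the --.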