Stepping outside the current discussion, a practical question: is the paging really that bad? AFAIK, it is a memcpy between the two heaps, which is sequential access. It shouldn't take longer than copying a 3GB file on disk.<br>
<br>If it *is* taking longer, then we need to give the Windows VM a hint that we will be doing sequential access before the memcpy, then flip it back to random-access mode afterwards.<br><br>On Fri, Dec 11, 2009 at 9:32 PM, Matthew Fluet <span dir="ltr"><<a href="mailto:matthew.fluet@gmail.com">matthew.fluet@gmail.com</a>></span> wrote:<br>
<div class="gmail_quote"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">Right, and the annoying bit is that the previous heap was so close to<br>
the max heap setting. Perhaps a reasonable heuristic is that if a<br>
desired heap is "close-to" the max-heap size, just round up. Perhaps<br>
0.75 of max heap? In the max-heap 3G setting, this could still leave<br>
you in the situation where you have a 2.25G allocation and a 3G<br>
allocation at the same time to copy. Or 0.55 of max heap; that could<br>
require 1.65G+3G at the time of the copy.<br></blockquote><div><br>I would be against yet another special case in the sizing rules. Any cutoff we pick is going to fail for someone else in the same way, while artificially restricting memory growth for others. His problem would be (mostly) fixed if we changed the Windows mremap to move only when in-place growth fails. <br>
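The "move only when growth fails" behaviour can be sketched against the Linux mremap interface (MLton's Windows port emulates this call; the details of that emulation are an assumption here, and `grow_heap` is an illustrative helper, not the runtime's actual function):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>

/* Grow a mapping in place if possible; fall back to a moving remap
 * only when in-place growth fails. */
static void *grow_heap(void *old, size_t old_size, size_t new_size) {
  /* Without MREMAP_MAYMOVE, the kernel extends the mapping in place
   * or fails -- it never relocates (and so never copies). */
  void *p = mremap(old, old_size, new_size, 0);
  if (p != MAP_FAILED)
    return p;
  /* In-place growth failed: permit a move (which may copy pages). */
  return mremap(old, old_size, new_size, MREMAP_MAYMOVE);
}
```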
<br>I have a higher-level solution proposal: MLton.GC already has hooks that are executed after a GC to implement finalizers. Expose these to the user. If a user knows his application only consumes X memory on an error condition, he can test for this after a GC and terminate with an "Out of Memory" error as desired. <br>
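To make the proposal concrete, here is a minimal C sketch of what an exposed post-GC hook could look like. Everything here — `set_post_gc_hook`, `run_post_gc_hooks`, and the 2GB ceiling — is a hypothetical illustration of the idea, not MLton's actual runtime API:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical user-settable hook. The runtime already runs hooks after
 * a GC (for finalizers); exposing one would let a program enforce its
 * own memory ceiling. */
static void (*post_gc_hook)(size_t bytes_live) = NULL;

void set_post_gc_hook(void (*hook)(size_t)) { post_gc_hook = hook; }

/* Called by the collector at the end of every major GC (sketch). */
void run_post_gc_hooks(size_t bytes_live) {
  if (post_gc_hook != NULL)
    post_gc_hook(bytes_live);
}

/* Example user hook: die cleanly instead of paging once the live set
 * exceeds an application-chosen ceiling. */
static void memory_ceiling(size_t bytes_live) {
  const size_t ceiling = (size_t)2 << 30;  /* 2GB, chosen by the app */
  if (bytes_live > ceiling) {
    fprintf(stderr, "Out of Memory\n");
    exit(1);
  }
}
```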
<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im">> It seems like one strategy would be that keeping the "max-heap"<br>
> setting slightly under half the available physical memory should<br>
> avoid the case where we're already using about 50% and then have<br>
> to create another heap of the same size.<br>
<br>
</div>True. Although, you would really need to use about 50% of the<br>
physical memory that you want the MLton process to have access to,<br>
else you will page via competition with other processes.<br></blockquote><div><br>I think this isn't quite right. You get paging when the working set of the other applications + the heap of the MLton program > RAM. If his heap stays small, there is no thrashing. Once it gets big, there will be thrashing iff the working set of the apps + max-heap > RAM. Pre-warming the heap to the max-heap size wouldn't help; it would only bring the thrashing on sooner.<br>
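Spelling that condition out as code (the function name and the numbers in the usage check are illustrative only):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Thrashing occurs iff the combined resident demand exceeds RAM. */
static bool will_thrash(size_t apps_working_set, size_t mlton_heap, size_t ram) {
  return apps_working_set + mlton_heap > ram;
}
```

For example, a 3G max-heap alongside 1G of other working sets fits exactly in 4G of RAM, but one more byte of demand tips it into thrashing.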
<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"> Going to a single<br>
contiguous heap, interpreting it as two semi-spaces when using a major<br>
copying-collection would be nicer here, because fixed-heap would grab<br>
the whole 3.25G up front.<br></blockquote><div><br>Yes.<br><br><blockquote style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;" class="gmail_quote">the mremap function is described as
using the Linux page table scheme to efficiently change the mapping
between virtual addresses and (physical) memory pages. Its purpose is
to be more efficient than allocating a new map and copying.<br></blockquote><br>If I could ...<br><br></div></div>