[MLton-user] gcc memory exhaustion
Matthew Fluet
fluet at tti-c.org
Mon Apr 20 21:05:29 PDT 2009
On Mon, 20 Apr 2009, Dan DuVarney wrote:
> Matthew Fluet wrote:
>> On Sat, 18 Apr 2009, Dan DuVarney wrote:
>>> I'm attempting to use SVN r7016 MLton/MinGW to compile a large program
>>> with time-profiling enabled. MLton generates a 235MB .c source file
>>> which gcc (3.4.5) chokes on (i.e., runs out ofmemory). I've saved the
>>> source files and tried compiling the MLton-generated code without any
>>> optimizations enabled, but haven't been able to successfully compile.
>>> I'm using a Windows/XP machine with 3GB memory.
>>
>> Is this using the C codegen or the native codegen? If it is using the
>> native codegen (the default on an x86 platform), then there will be
>> only one .c file that includes global and static data for the program
>> (including profiling source location data). That one file can't
>> really be split into smaller pieces. On the other hand, it is mostly
static data, so I'd be surprised that gcc has difficulty with it.
>
> Thanks for the help. I'm using native codegen, so it sounds like
> splitting the file is not an option.
Splitting the file may be possible, but it won't be trivial. At the very
least, it would help to know which component of the file is causing gcc to
run out of memory.
You might also try compiling with '-profile time-field', which directs the
compiler to track the current source location for time profiling by
setting a field of the GC state, rather than determining it from the
program counter. There is slightly more overhead when profiling this way
(though this mode is required when profiling with the C codegen), but it
avoids generating a large static table of code pointer addresses.
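For example, assuming the program's root file is foo.mlb (the file name
here is only a placeholder), the invocation would look something like:

    mlton -profile time-field foo.mlb

The resulting executable writes an mlmon.out file when run, which can then
be examined with mlprof as usual.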
> The memory problem is probably due
> to a bug in gcc, as there are a number of open bugs in bugzilla
> complaining about gcc running out of memory. I tried compiling with gcc
> 4.3.3 and it also runs out of memory.
>
> -Dan