[MLton] Stack size?

Wesley W. Terpstra wesley@terpstra.ca
Fri, 8 Jul 2005 16:08:07 +0200

On Fri, Jul 08, 2005 at 08:59:05AM -0400, Matthew Fluet wrote:
> > So, as I dug a bit deeper into the insides of MLton's threads, I found that
> > it copies the stack of the thread calling 'newThread'. Is this right? 
> No. 'newThread' copies the stack of the thread suspended as 'base'.

Oh! Now I get it. Sorry for the FUD. =)
I hadn't understood that Prim.copyCurrent is applied in a val, not a fun.
It all makes sense now, thanks!

> The
> 'base' thread is a snapshot of the main thread taken very early (i.e.,
> when 'base' is evaluated) in the evaluation of the Basis Library, when the
> stack is very small.  The way that newThread works is that it bangs the
> new function to be evaluated by the new thread into the (unit -> unit)  
> option ref 'func', copies the 'base' stack into a new thread, and then,
> when returning to atomicSwitch, immediately switches to the new thread,
> resumes evaluation of the 'base' expression, which now finds a SOME in
> 'func', extracts the function and begins evaluating it on the new stack.
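
The mechanism quoted above could be sketched roughly like this. This is
illustrative SML only, not MLton's actual source: Thread.copyCurrent,
Thread.copy, and Thread.done stand in for the real primitives, and the
atomic-section bookkeeping is omitted.

```sml
(* Shared mailbox between newThread and the 'base' snapshot. *)
val func : (unit -> unit) option ref = ref NONE

(* 'base' is evaluated exactly once, early, while the stack is tiny.
   Every thread later built from this snapshot resumes right here,
   finds SOME f in 'func', and runs f on the small copied stack. *)
val base =
   let
      val () = Thread.copyCurrent ()   (* snapshot point *)
   in
      case !func of
         NONE => Thread.saved ()       (* first pass: just record the copy *)
       | SOME f =>
            (func := NONE              (* a fresh thread woke up here *)
             ; f ()
             ; raise Fail "thread function returned instead of switching")
   end

fun newThread (f : unit -> unit) =
   (func := SOME f      (* bang the function into the shared ref *)
    ; Thread.copy base) (* new thread = fresh copy of the small base stack *)
```

The trick is that the `case !func of ...` after the snapshot point is
evaluated twice: once when 'base' is first created (NONE branch), and once
more in every new thread that is switched to (SOME branch).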

That's really clever!

From what I understand now, there is in fact no 'exit thread', just garbage
collection, right? Since the thread function is guaranteed never to return,
only to switch away, the gc's currentThread no longer points to the stack,
and as long as the thread isn't stored on any wait/runnable lists, it
just 'goes away'. =)

> You shouldn't be seeing an SML thread using more stack space than it 
> needs to evaluate.  There may be an extra frame or two at the bottom 
> corresponding to the stack when 'base' was copied, but it shouldn't be 
> more than a couple hundred bytes. 

Excellent! With 1GB of virtual address space on Linux, MLton only gets about
26k per stack with 40k threads, so I was a bit worried. :-)

I'm trying to figure out how best to use epoll/kqueue in MLton at the moment
(select doesn't work for 40k sockets, since fd_set is capped at FD_SETSIZE,
typically 1024). Once I have that figured out, I can start measuring
performance. (I'm hoping to simulate my network code with many nodes on the
same machine, so this isn't entirely pointless to me.)
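
For the record, one route to epoll from MLton is its _import FFI (compiled
with allowFFI enabled), hiding the struct-passing behind a small C shim.
A minimal sketch, assuming hypothetical wrappers my_epoll_create,
my_epoll_add, and my_epoll_wait_one that would have to be written in C and
linked in; none of these exist in MLton or the kernel API under these names:

```sml
(* Assumed C shim, made up for illustration:
     int my_epoll_create(void);            -- epoll_create1(0)
     int my_epoll_add(int epfd, int fd);   -- EPOLL_CTL_ADD with EPOLLIN
     int my_epoll_wait_one(int epfd);      -- epoll_wait, returns one ready fd *)
val epollCreate  = _import "my_epoll_create": unit -> int;
val epollAdd     = _import "my_epoll_add": int * int -> int;
val epollWaitOne = _import "my_epoll_wait_one": int -> int;

(* Loop forever, dispatching each ready descriptor to a handler. *)
fun eventLoop (epfd: int, handleFd: int -> unit) : unit =
   let
      val fd = epollWaitOne epfd
   in
      handleFd fd
      ; eventLoop (epfd, handleFd)
   end
```

A shim that returns one fd per call keeps the SML side free of epoll_event
structs, at the cost of one syscall per event; a batching version would pass
an array across the FFI instead.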

Wesley W. Terpstra