[MLton] inferring getAndSet via an Atomic{Begin, End} optimization
Stephen Weeks
sweeks at sweeks.com
Thu Aug 31 16:15:43 PDT 2006
> Why does AtomicBegin set limit? I assume that this causes signals
> to be handled as soon as the next GC safe point comes up, but you
> don't want that to happen if you are in a critical region.
No, that's not what it does. The point is that a signal may have come
in before canHandle could be bumped, and would have set limit to zero
so that the signal would be handled at the next safe point. The test
and assignment
  if (gcState.signalIsPending)
    gcState.limit = gcState.limitPlusSlop - LIMIT_SLOP;
resets the limit to its ordinary value so that we don't enter the
runtime to handle the signal.
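For concreteness, here is a minimal C sketch of the AtomicBegin sequence
as described above.  The struct layout, the LIMIT_SLOP value, and the
name atomicBegin are assumptions for illustration; only the field names
(canHandle, signalIsPending, limit, limitPlusSlop) come from the code
in this thread.

  #include <stddef.h>

  #define LIMIT_SLOP 512            /* hypothetical value, for illustration */

  /* Hypothetical subset of the GC state relevant to this discussion. */
  struct GC_state {
    size_t canHandle;        /* critical-section nesting depth */
    int    signalIsPending;  /* set by the signal handler */
    char  *limit;            /* heap limit checked at safe points */
    char  *limitPlusSlop;    /* actual end of the allocation area */
  };
  extern struct GC_state gcState;

  /* Bump the nesting count, then undo the limit-zeroing that a signal
     arriving before the bump may have done, so that we stay out of the
     runtime while in the critical region. */
  static inline void atomicBegin (void) {
    gcState.canHandle++;
    if (gcState.signalIsPending)
      gcState.limit = gcState.limitPlusSlop - LIMIT_SLOP;
  }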
> If it has to be done, wouldn't it be better to do it in AtomicEnd,
> so it is later (i.e., to catch cases where the signal arrived during
> the critical region)?
Yes, that's what we do. The test in AtomicEnd
  if (gcState.signalIsPending and 0 == gcState.canHandle)
    gc;
checks whether a signal is pending and whether we are no longer in a
critical section; if both hold, it enters the runtime to handle the
signal.
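A companion sketch of AtomicEnd, under the same assumptions as the
sketch above.  The decrement of canHandle is not shown in the snippet,
but "no longer in a critical section" implies it; gc's prototype here
is a guess.

  extern void gc (void);            /* hypothetical prototype */

  /* Leave the critical section; if a signal arrived while we were
     inside it, enter the runtime now to handle it. */
  static inline void atomicEnd (void) {
    gcState.canHandle--;
    if (gcState.signalIsPending && 0 == gcState.canHandle)
      gc ();
  }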
> I assume that the last line in AtomicEnd was supposed to call gc (i.e.,
> () added just before the semicolon).
Yes.
Looking at this anew, I wonder if we could just eliminate the test in
AtomicBegin. The point is that it's OK if a signal causes us to enter
the runtime -- we simply have to notice in the runtime that we're in a
critical section, reset gcState.limit, and return. Does that make
sense? It seems like a pure win. It would save code space and run
time in the common case.
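As a sketch of what I have in mind, continuing the assumptions above
(the function names here are hypothetical): AtomicBegin drops the test
entirely, and the runtime entry point notices the critical section,
restores the limit, and returns without handling the signal.

  /* Proposed: AtomicBegin just bumps the count; no test in the
     common path. */
  static inline void atomicBeginSimplified (void) {
    gcState.canHandle++;
  }

  /* Hypothetical runtime entry point reached at a safe point after
     limit has been zeroed by a signal. */
  void runtimeEnter (void) {
    if (gcState.signalIsPending && gcState.canHandle > 0) {
      /* In a critical section: restore the limit and resume. */
      gcState.limit = gcState.limitPlusSlop - LIMIT_SLOP;
      return;
    }
    /* ... otherwise handle the signal / do the GC as usual ... */
  }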