[MLton] More on parallelism
Suresh Jagannathan
suresh at cs.purdue.edu
Tue Dec 5 08:57:53 PST 2006
We've been focused on two issues. The first deals with support
for CML on multicore machines and follows the outline Matthew suggested.
Essentially, we have a 1-1 mapping between OS-level threads and
MLton schedulers, on top of which CML threads are scheduled.
Currently, we simply partition the heap uniformly among the
OS threads for allocation purposes; all threads stop when any
one of them triggers a GC.
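
To make the picture concrete, here's a small CML program written
against the ordinary CML interface (CML.spawn, CML.channel, CML.send,
CML.recv); under the scheme above, the spawned threads would simply be
multiplexed over the per-OS-thread schedulers, with no change to the
source program:

(* A small producer/consumer CML program.  Under the scheme sketched
   above, the spawned producer and the consumer may end up on
   different MLton schedulers (and hence different OS threads). *)
structure Example =
struct
  open CML

  fun producer (ch, 0) = ()
    | producer (ch, n) = (send (ch, n); producer (ch, n - 1))

  fun consumer (ch, 0) = ()
    | consumer (ch, n) =
        (print (Int.toString (recv ch) ^ "\n"); consumer (ch, n - 1))

  fun main () =
    let
      val ch : int chan = channel ()
    in
      ignore (spawn (fn () => producer (ch, 10)));
      consumer (ch, 10)
    end
end

(* RunCML.doit (Example.main, NONE) boots the CML runtime. *)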
However, while this strategy makes sense for tightly-coupled
shared-memory machines, it doesn't scale to non-shared-memory
environments. Here, we've been looking at having separate MLton
processes communicate over an MPI infrastructure. The idea is that
between each pair of nodes there is a dedicated communication
channel used for shipping data, effecting CML actions, and registering
the CML channels that CML threads on different nodes use to communicate
with one another.
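
As a rough sketch of what might flow over such a channel, the
wire-level messages could look something like the following; the
constructors and field names are illustrative guesses on our part, not
a settled protocol:

(* Hypothetical wire-level messages on a node-pair channel. *)
datatype message =
    RegisterChan of {chanId : int, ownerNode : int}
      (* make a CML channel visible to threads on the peer node *)
  | RemoteSend of {chanId : int, payload : Word8Vector.vector}
      (* effect a CML send on the receiving node *)
  | FetchRef of {refId : int}
      (* ask a reference's home node for its current value *)
  | RefValue of {refId : int, payload : Word8Vector.vector}
      (* the home node's reply, shipping the value back *)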
A non-local CML communication is implemented essentially as an MPI
send whose data packet contains sufficient information to construct a
local CML action on the receiving end. To deal with references,
we implement a proxy type: a dereference, for example, becomes a
communication action with the reference's home node, which ships
back the current value of the reference.
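
Here is a minimal sketch of what the proxy type could look like,
assuming a hypothetical REMOTE_COMM transport sitting on top of the
MPI channels described above; none of these names exist in MLton or
CML today:

(* REMOTE_COMM stands in for the MPI-backed transport; it is an
   assumption, not an existing API. *)
signature REMOTE_COMM =
sig
  type node
  val fetch : node * int -> Word8Vector.vector        (* get value from home node *)
  val store : node * int * Word8Vector.vector -> unit (* push value to home node *)
end

functor ProxyRef (C : REMOTE_COMM) =
struct
  datatype 'a pref =
      Local of 'a ref                          (* reference lives on this node *)
    | Proxy of {home : C.node, id : int,       (* reference lives on 'home' *)
                decode : Word8Vector.vector -> 'a,
                encode : 'a -> Word8Vector.vector}

  (* Dereference: the local case is an ordinary !; the proxy case
     round-trips to the home node and decodes the shipped value. *)
  fun deref (Local r) = !r
    | deref (Proxy {home, id, decode, ...}) = decode (C.fetch (home, id))

  fun assign (Local r, v) = r := v
    | assign (Proxy {home, id, encode, ...}, v) = C.store (home, id, encode v)
end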
The interesting question we've been grappling with is how much
we can unify the design and implementation of these two strategies,
in the hope that such a unification might simplify some of the
complexities mentioned in Matthew's earlier note.
-- Suresh
On Dec 4, 2006, at 3:44 PM, Matthew Fluet wrote:
>
>> I'd like to know what exactly would need to be done to implement
>> CML using OS-level threads? I've run across some abstracts for
>> talks by John Reppy on the subject, and I read the papers from the
>> earlier thread on parallelism in October. But exactly what would
>> need to be done? (Depending on what is needed, I could possibly
>> work on parts of it)
>
> BTW, I should also mention that the developers at Purdue have said
> that they were going to attack this problem, so they may have more
> insight.