Steve,

> The number of cores per node is 32, so the 8 threads * 4 cores should be
> fine (I tried many variants anyway).
> 
> The SPMD implementation seems like the way to go, but after I split my
> tasks into different localities how can I express data dependencies
> between them?
> 
> Let’s say that I have tasks 0-10 in locality A and tasks 11-21 in locality
> B. Now, the task 15 (in locality B) requires some data produced by task 7
> (in locality A).
> 
> Should I encode these data dependencies in terms of futures when I split
> the tasks into the two localities?

Yes, either send the future over the wire (which might have surprising effects, 
as we wait for the future to become ready before we actually send it), or use 
any other means of synchronizing between the two localities; usually a channel 
is a nice way of accomplishing this. You can either send the channel over to 
the other locality or use the register_as()/connect_to() functionality exposed 
by it:

    // locality 1
    hpx::lcos::channel<T> c(hpx::find_here());
    c.register_as("some-unique-name");  // careful: returns a future<void>
    c.set(T{});                         // returns a future too

    // locality 2
    hpx::lcos::channel<T> c;
    c.connect_to("some-unique-name");   // careful: returns a future<void>

    // this might wait for c to become valid before calling get()
    hpx::future<T> f = c.get();

On locality 2, 'f' becomes ready as soon as c.set() has been called on locality 1. 
While it does not really matter on which locality you create the channel (here 
determined by hpx::find_here()), I'd advise creating it on the receiving end of 
the pipe.
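
For your concrete example (task 7 on locality A producing data that task 15 on 
locality B needs) this could look roughly like the following. Here data_type, 
run_task_7() and run_task_15() are just placeholders for whatever your tasks 
actually do, and the channel type has to be registered once with 
HPX_REGISTER_CHANNEL(data_type) so it can be used across localities:

    // locality B (the consumer) creates and registers the channel,
    // i.e. the receiving end of the pipe
    hpx::lcos::channel<data_type> from_a(hpx::find_here());
    from_a.register_as("A-to-B");

    // locality A (the producer) connects to it and pushes its result
    hpx::lcos::channel<data_type> to_b;
    to_b.connect_to("A-to-B");
    to_b.set(run_task_7());             // task 7's result becomes visible on B

    // back on locality B: task 15 runs once the data has arrived
    hpx::future<data_type> f = from_a.get();
    run_task_15(f.get());               // or attach a continuation via f.then()

Whether you block with f.get() or attach a continuation with f.then() is up to 
how you want task 15 to be scheduled.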

If you gave us some example code, we would be able to advise more concretely.
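
BTW, the other option I mentioned (sending the channel itself over to the other 
locality) roughly amounts to passing it as an argument to an action, something 
along these lines (consume, data_type and locality_B are again just made-up 
names for illustration):

    // channels are client objects, so they can be passed to remote actions
    void consume(hpx::lcos::channel<data_type> c)
    {
        data_type d = c.get().get();    // waits for the producer's set()
        // ... run the dependent task using d ...
    }
    HPX_PLAIN_ACTION(consume, consume_action);

    // on the producing locality (note: the channel component lives here)
    hpx::lcos::channel<data_type> c(hpx::find_here());
    hpx::future<void> done = hpx::async(consume_action{}, locality_B, c);
    c.set(run_task_7());                // unblocks consume() on locality B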

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu


