Hello Steve and Hartmut,

following your discussion about remote dependencies between futures, I am left with two questions:

1.) If we pass a future as an argument to a remote function call, is the actual invocation of the function delayed until the future becomes ready? In other words, are the following two snippets equivalent?

    // (a) pass the future itself
    hpx::future<something> arg = hpx::async<act>();
    hpx::async<remote_act>(REMOTE_LOC, arg);

    // (b) wait for the result first
    hpx::future<something> f = hpx::async<act>();
    hpx::async<remote_act>(REMOTE_LOC, f.get());

2.) When we encountered a similar problem, our solution was to aggregate the futures (of tasks A) inside AGAS-addressed components, distribute those addresses, and reference the remote (and local) futures our dataflows depend on as member variables of these components. How would such a solution compare to the channel-based approach with respect to performance and scalability?
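To make question 2 concrete, here is a heavily simplified sketch of what I mean; the names (future_store, data_t) are placeholders and the registration details are from memory, so please read it as an illustration rather than our actual code:

    #include <hpx/hpx.hpp>
    #include <hpx/include/components.hpp>
    #include <hpx/include/actions.hpp>

    using data_t = double;   // placeholder for the real payload type

    // component that holds the future produced by one of the tasks A
    struct future_store
      : hpx::components::component_base<future_store>
    {
        hpx::shared_future<data_t> dep;   // the dependency, kept as a member

        void set(hpx::shared_future<data_t> f) { dep = f; }
        data_t get() { return dep.get(); }

        HPX_DEFINE_COMPONENT_ACTION(future_store, set, set_action);
        HPX_DEFINE_COMPONENT_ACTION(future_store, get, get_action);
    };

    typedef hpx::components::component<future_store> future_store_type;
    HPX_REGISTER_COMPONENT(future_store_type, future_store);
    HPX_REGISTER_ACTION(future_store::set_action);
    HPX_REGISTER_ACTION(future_store::get_action);

    // producer side: create one store per task A and publish its id, e.g.
    //   hpx::id_type store = hpx::new_<future_store>(hpx::find_here()).get();
    //   hpx::register_with_basename("/deps/task7", store, 0);
    // consumers resolve the id and call hpx::async<future_store::get_action>(store).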
Thanks,
Kilian

On Mon, 9 Oct 2017 17:41:54 -0600 Steve Petruzza <[email protected]> wrote:
> Thank you Hartmut,
>
> Your suggestions are already very useful. This channels mechanism looks awesome, I will give it a try.
>
> One other thing, where I can actually give you a code example, is the following:
> - an async function returns a future of a vector
> - I need to dispatch the single elements of this vector as separate futures, because those will be used (separately) by other async functions
>
> Here is what I am doing right now:
>
>     hpx::future<std::vector<Something>> out_v = hpx::dataflow(exe_act, locality, inputs);
>
>     std::vector<hpx::future<Something>> outputs_fut(out_count);
>
>     for (int i = 0; i < out_count; i++) {
>         outputs_fut[i] = hpx::dataflow(
>             [i, &out_v]() -> Something
>             {
>                 return out_v.get()[i];
>             }
>         );
>     }
>
> This solution works, but I think the loop is just creating a bunch of useless async calls just to take out one of the elements as a single future.
>
> Is there a better way of doing this? Basically, to pass from a future<vector> to a vector<future> in HPX?
>
> Thank you,
> Steve
>
> p.s.: I also tried to use an action which runs on the same locality for the second dataflow.
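(A side note on the future<vector> question quoted above: since get() may only be called once on a plain future, one option might be to share() it and attach one continuation per element. This is only a sketch reusing the names from the snippet above (Something, out_v, out_count); newer HPX versions may also offer an hpx::split_future overload for vectors, but I have not verified that.)

    // turn the future<vector> into a shared_future so every continuation can read it
    hpx::shared_future<std::vector<Something>> shared_v = out_v.share();

    std::vector<hpx::future<Something>> outputs_fut;
    outputs_fut.reserve(out_count);

    for (int i = 0; i < out_count; ++i)
    {
        // one continuation per element; shared_future::get() can be called repeatedly
        outputs_fut.push_back(shared_v.then(
            [i](hpx::shared_future<std::vector<Something>> v) -> Something
            {
                return v.get()[i];
            }));
    }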
>> On 9 Oct 2017, at 16:56, Hartmut Kaiser <[email protected]> wrote:
>>
>> Steve,
>>
>>> The number of cores per node is 32, so the 8 threads * 4 cores should be fine (I tried many variants anyway).
>>>
>>> The SPMD implementation seems like the way to go, but after I split my tasks into different localities how can I express data dependencies between them?
>>>
>>> Let's say that I have tasks 0-10 in locality A and tasks 11-21 in locality B. Now, task 15 (in locality B) requires some data produced by task 7 (in locality A).
>>>
>>> Should I encode these data dependencies in terms of futures when I split the tasks into the two localities?
>>
>> Yes, either send the future over the wire (which might have surprising effects, as we wait for the future to become ready before we actually send it) or use any other means of synchronizing between the two localities; usually a channel is a nice way of accomplishing this. You can either send the channel over to the other locality or use the register_as()/connect_to() functionality exposed by it:
>>
>>     // locality 1
>>     hpx::lcos::channel<T> c(hpx::find_here());
>>     c.register_as("some-unique-name");   // careful: returns a future<void>
>>     c.set(T{});                          // returns a future too
>>
>>     // locality 2
>>     hpx::lcos::channel<T> c;
>>     c.connect_to("some-unique-name");    // careful: returns a future<void>
>>
>>     // this might wait for c to become valid before calling get()
>>     hpx::future<T> f = c.get();
>>
>> On locality 2, 'f' becomes ready as soon as c.set() was called on locality 1. While it does not really matter on what locality you create the channel (here defined by hpx::find_here()), I'd advise creating it on the receiving end of the pipe.
>>
>> If you gave us some example code, we would be able to advise more concretely.
>>
>> Regards Hartmut
>> ---------------
>> http://boost-spirit.com
>> http://stellar.cct.lsu.edu
>>
>>> Steve
>>>
>>> On 9 Oct 2017, at 15:37, Hartmut Kaiser <[email protected]> wrote:
>>>
>>> SPMD
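P.S.: To make sure I understand the channel suggestion, here is how I would expect it to look when the channel is created on the receiving end, as advised above. This is only a sketch based on the snippet in the quote; T, the registered name, and the function names are placeholders, and error handling as well as the question of how long to keep the channel alive are ignored:

    #include <hpx/hpx.hpp>
    #include <hpx/include/lcos.hpp>
    #include <utility>

    using T = double;   // placeholder for the real payload type

    // locality B (the consumer, e.g. the one running task 15):
    // create the channel here and register it under a well-known name
    hpx::future<T> receive_from_task7()
    {
        hpx::lcos::channel<T> c(hpx::find_here());
        c.register_as("task7-to-task15");   // returns a future<void>
        return c.get();                     // becomes ready once set() is called remotely
    }

    // locality A (the producer, e.g. the one running task 7):
    // connect to the registered channel and push the value
    void send_to_task15(T value)
    {
        hpx::lcos::channel<T> c;
        c.connect_to("task7-to-task15");    // returns a future<void>
        c.set(std::move(value));            // returns a future as well
    }

Is that roughly the intended pattern?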
_______________________________________________
hpx-users mailing list
[email protected]
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users