Steve,

Please see #2942 for a first implementation of split_future for std::vector,
and please let us know on the ticket whether it solves your problem. We will
merge it to master as soon as you're fine with it.
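
For reference, the intended usage looks roughly like this (sketch only; the
exact API may still change on the ticket, and the names below just mirror the
snippet from your earlier mail):

    hpx::future<std::vector<Something>> out_v =
        hpx::dataflow(exe_act, locality, inputs);

    // 'out_count' has to be known upfront; each resulting future becomes
    // ready as soon as the vector future does
    std::vector<hpx::future<Something>> outputs_fut =
        hpx::split_future(std::move(out_v), out_count);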

Thanks!
Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu


> -----Original Message-----
> From: Steve Petruzza [mailto:spetru...@sci.utah.edu]
> Sent: Tuesday, October 10, 2017 8:46 AM
> To: hartmut.kai...@gmail.com
> Cc: hpx-users@stellar.cct.lsu.edu
> Subject: Re: [hpx-users] Strong scalability of hpx dataflow and async
> 
> Yes that would be very useful.
> And yes I know upfront the size.
> 
> Thank you!
> Steve
> 
> > On Oct 10, 2017, at 7:40 AM, Hartmut Kaiser <hartmut.kai...@gmail.com>
> wrote:
> >
> > Steve,
> >
> >>> Your suggestions are already very useful. This channel mechanism looks
> >>> awesome; I will give it a try.
> >>>
> >>> One other thing, where I can actually give you a code example, is the
> >>> following:
> >>> - an async function returns a future of a vector
> >>> - I need to dispatch the individual elements of this vector as
> >>>   separate futures, because those will be used (separately) by other
> >>>   async functions
> >>>
> >>> Here is what I am doing right now:
> >>>
> >>> hpx::future<std::vector<Something>> out_v =
> >>>     hpx::dataflow(exe_act, locality, inputs);
> >>>
> >>> // shared so that every task can safely call get() on it
> >>> hpx::shared_future<std::vector<Something>> out_sf = out_v.share();
> >>>
> >>> std::vector<hpx::future<Something>> outputs_fut(out_count);
> >>>
> >>> for (int i = 0; i < out_count; ++i)
> >>> {
> >>>     outputs_fut[i] = hpx::dataflow(
> >>>         [i, out_sf]() -> Something
> >>>         {
> >>>             return out_sf.get()[i];
> >>>         });
> >>> }
> >>>
> >>> This solution works, but I think that the loop is just creating a
> >>> bunch of useless async calls just to take out one of the elements as a
> >>> single future.
> >>>
> >>> Is there a better way of doing this? Basically to pass from a
> >>> future<vector> to a vector<future> in HPX?
> >>
> >> We do have the split_future facility doing exactly that, but only for
> >> containers with a size known at compile time (pair, tuple, array), see
> >> https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/lcos/split_future.hpp.
> >> Frankly, I'm not sure anymore why we have not added the same for
> >> std::vector as well. From looking at the code it should be
> >> straightforward to do something similar to what we've implemented for
> >> std::array. I opened a new ticket to remind me to implement
> >> split_future for std::vector
> >> (https://github.com/STEllAR-GROUP/hpx/issues/2940).
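> >>
> >> For reference, using one of the existing overloads looks roughly like
> >> this (untested sketch, here for std::pair):
> >>
> >>    hpx::future<std::pair<int, double>> fp =
> >>        hpx::make_ready_future(std::make_pair(42, 3.14));
> >>
> >>    // one future per element, each becoming ready together with 'fp'
> >>    std::pair<hpx::future<int>, hpx::future<double>> fs =
> >>        hpx::split_future(std::move(fp));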
> >
> > After looking into this a bit more I now understand why we have not
> > implemented split_future for std::vector. Please consider:
> >
> >    std::vector<future<T>>
> >        split_future(future<std::vector<T>> && f);
> >
> > In order for this to work efficiently we need to know how many elements
> > are stored in the input vector without waiting for the future to become
> > ready (as waiting for the future to become ready just for this would
> > defeat the purpose). But we have no way of knowing how many elements
> > will be held by the vector before that.
> >
> > What I could do is to implement:
> >
> >    std::vector<future<T>>
> >        split_future(future<std::vector<T>> && f, std::size_t size);
> >
> > (with 'size' specifying the number of elements the vector is expected to
> > hold) as in some circumstances you know upfront how many elements to
> > expect.
> >
> > Would that be of use to you?
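> >
> > In the meantime you can get roughly the same effect by hand, e.g. (sketch
> > only; the helper name is made up and this is not how the library version
> > would be implemented internally):
> >
> >    // split a future<vector<T>> into 'size' element-futures by hand
> >    template <typename T>
> >    std::vector<hpx::future<T>>
> >    split_vector_future(hpx::future<std::vector<T>>&& f, std::size_t size)
> >    {
> >        // share the future so that every continuation can read the vector
> >        hpx::shared_future<std::vector<T>> sf = f.share();
> >
> >        std::vector<hpx::future<T>> result;
> >        result.reserve(size);
> >        for (std::size_t i = 0; i != size; ++i)
> >        {
> >            result.push_back(sf.then(
> >                [i](hpx::shared_future<std::vector<T>> const& v)
> >                {
> >                    return v.get()[i];    // copies the i-th element
> >                }));
> >        }
> >        return result;
> >    }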
> >
> > Thanks!
> > Regards Hartmut
> > ---------------
> > http://boost-spirit.com
> > http://stellar.cct.lsu.edu
> >
> >
> >>
> >> Regards Hartmut
> >> ---------------
> >> http://boost-spirit.com
> >> http://stellar.cct.lsu.edu
> >>
> >>
> >>> Thank you,
> >>> Steve
> >>>
> >>> p.s.: I also tried to use an action which runs on the same locality
> >>> for the second dataflow.
> >>>
> >>> On 9 Oct 2017, at 16:56, Hartmut Kaiser <hartmut.kai...@gmail.com>
> >> wrote:
> >>>
> >>> Steve,
> >>>
> >>>
> >>> The number of cores per node is 32, so the 8 threads * 4 cores should
> >>> be fine (I tried many variants anyway).
> >>>
> >>> The SPMD implementation seems like the way to go, but after I split my
> >>> tasks into different localities how can I express data dependencies
> >>> between them?
> >>>
> >>> Let’s say that I have tasks 0-10 in locality A and tasks 11-21 in
> >>> locality B. Now, task 15 (in locality B) requires some data produced
> >>> by task 7 (in locality A).
> >>>
> >>> Should I encode these data dependencies in terms of futures when I
> >>> split the tasks into the two localities?
> >>>
> >>> Yes, either send the future over the wire (which might have surprising
> >>> effects, as we wait for the future to become ready before we actually
> >>> send it) or use any other means of synchronizing between the two
> >>> localities; usually a channel is a nice way of accomplishing this. You
> >>> can either send the channel over to the other locality or use the
> >>> register_as()/connect_to() functionality exposed by it:
> >>>
> >>>   // locality 1
> >>>   hpx::lcos::channel<T> c(hpx::find_here());
> >>>   c.register_as("some-unique-name");  // careful: returns a future<void>
> >>>   c.set(T{});    // returns a future too
> >>>
> >>>   // locality 2
> >>>   hpx::lcos::channel<T> c;
> >>>   c.connect_to("some-unique-name");   // careful: returns a future<void>
> >>>
> >>>   // this might wait for c to become valid before calling get()
> >>>   hpx::future<T> f = c.get();
> >>>
> >>> On locality 2, 'f' becomes ready as soon as c.set() has been called on
> >>> locality 1. While it does not really matter on which locality you
> >>> create the channel (here defined by hpx::find_here()), I'd advise
> >>> creating it on the receiving end of the pipe.
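> >>>
> >>> If you need to be sure that the registration/connection has actually
> >>> completed before moving on, you can simply wait on the returned
> >>> futures, e.g. (sketch):
> >>>
> >>>   // locality 1: block until the name is registered
> >>>   c.register_as("some-unique-name").get();
> >>>
> >>>   // locality 2: block until the channel is connected
> >>>   c.connect_to("some-unique-name").get();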
> >>>
> >>> If you gave us some example code, we would be able to advise more
> >>> concretely.
> >>>
> >>> Regards Hartmut
> >>> ---------------
> >>> http://boost-spirit.com
> >>> http://stellar.cct.lsu.edu
> >>>
> >>> Steve
> >>>
> >>> On 9 Oct 2017, at 15:37, Hartmut Kaiser <hartmut.kai...@gmail.com>
> >> wrote:
> >>>
> >>> SPMD
> >>>
> >
> >

_______________________________________________
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
