[
https://issues.apache.org/jira/browse/MESOS-8587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16559671#comment-16559671
]
Alexander Rukletsov edited comment on MESOS-8587 at 7/31/18 12:54 PM:
----------------------------------------------------------------------
An alternative suggestion built around parallel {{async}} idea:
{noformat}
template <
    typename F,
    typename C,
    typename T = typename std::iterator_traits<
        typename C::iterator>::value_type,
    typename std::enable_if<std::is_void<
        typename result_of<F(T)>::type>::value>::type* = nullptr>
Future<Nothing> async(const C& items, const F& f)
{
  std::vector<Future<Nothing>> futures;

  foreach (const T& item, items) {
    Future<Nothing> finished = process::async(f, item);
    futures.push_back(finished);
  }

  return process::collect(futures).then([] { return Nothing(); });
}
{noformat}
Can be used for example like this:
{noformat}
Future<Nothing> finished = process::async(
    requests,
    [this](const Request& request) {
      // process the request and send the response
    });

// Block the actor until all workers have generated responses.
finished.await();
{noformat}
> Introduce a parallel for each loop (and other parallel algorithms).
> -------------------------------------------------------------------
>
> Key: MESOS-8587
> URL: https://issues.apache.org/jira/browse/MESOS-8587
> Project: Mesos
> Issue Type: Improvement
> Components: libprocess
> Reporter: Benjamin Mahler
> Priority: Major
> Labels: performance
>
> Consider the following code:
> {code}
> void SomeProcess::func()
> {
>   foreach (const Item& item, items) {
>     // Perform some const work on item.
>   }
> }
> {code}
> When {{items}} becomes very large, this code would benefit from some
> parallelism. With a parallel loop construct, we could improve the performance
> of this type of code significantly:
> {code}
> void SomeProcess::func()
> {
>   foreach_parallel (items, [=](const Item& item) {
>     // Perform some const work on item.
>   });
> }
> {code}
> Ideally, this could enforce const-access to the current Process for safety.
> An implementation of this would need to do something like the following (see
> the sketch after this list):
> # Split the iteration of {{items}} into 1 <= N <= num_worker_threads segments.
> # Spawn N-1 additional temporary execution Processes (or re-use from a pool)
> # Dispatch to these N-1 additional processes for them to perform their
> segment of the iteration.
> # Perform the 1st segment on the current Process.
> # Have the current Process block to wait for the others to finish. (note need
> to avoid deadlocking the worker threads here! See MESOS-8256)
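> A minimal sketch of this first approach, reusing {{process::async}} in place
> of explicitly spawned temporary Processes (the {{foreach_parallel}} signature
> and the {{parallelism}} parameter are illustrative, not an agreed API):
> {code}
> // Illustrative sketch only: segments are dispatched via process::async
> // rather than explicitly spawned Processes, and `parallelism` stands in
> // for num_worker_threads.
> template <typename T, typename F>
> void foreach_parallel(
>     const std::vector<T>& items, const F& f, size_t parallelism = 4)
> {
>   if (items.size() <= 1 || parallelism <= 1) {
>     foreach (const T& item, items) { f(item); }
>     return;
>   }
>
>   // Split the iteration into at most `parallelism` segments.
>   const size_t segment = (items.size() + parallelism - 1) / parallelism;
>
>   std::vector<Future<Nothing>> futures;
>
>   // Dispatch segments 2..N; references stay valid because we block below.
>   for (size_t start = segment; start < items.size(); start += segment) {
>     const size_t end = std::min(start + segment, items.size());
>     futures.push_back(process::async([&items, &f, start, end]() {
>       for (size_t i = start; i < end; ++i) { f(items[i]); }
>     }));
>   }
>
>   // Perform the first segment on the current Process.
>   for (size_t i = 0; i < segment; ++i) { f(items[i]); }
>
>   // Block until the remaining segments finish (careful about worker
>   // thread exhaustion here, see MESOS-8256).
>   process::collect(futures).await();
> }
> {code}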
> An alternative implementation would be to pull work from a shared queue (see
> the sketch after this list):
> # Split the iteration of {{items}} into 1 <= N <= num_worker_threads
> segments. Store these segments in a lock-free queue.
> # Spawn N-1 additional temporary execution Processes (or re-use from a pool)
> # Perform the 1st segment on the current Process.
> # Each process pulls a segment from the queue and executes a segment.
> # If the current Process finds the queue empty, it then needs to block
> waiting for outstanding segments to finish. Note that this cannot deadlock:
> if an item was pulled from the queue, a worker is executing it.
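> A sketch of the work-pulling variant, with an atomic segment cursor standing
> in for the lock-free queue (again, names and the {{parallelism}} parameter
> are illustrative):
> {code}
> // Illustrative sketch only: an atomic cursor hands out segment start
> // offsets, so workers (including the calling Process) pull work until
> // the "queue" is empty.
> template <typename T, typename F>
> void foreach_parallel(
>     const std::vector<T>& items, const F& f, size_t parallelism = 4)
> {
>   const size_t segment =
>     std::max<size_t>(1, (items.size() + parallelism - 1) / parallelism);
>
>   auto next = std::make_shared<std::atomic<size_t>>(0);
>
>   auto pull = [&items, &f, segment, next]() {
>     while (true) {
>       const size_t start = next->fetch_add(segment);
>       if (start >= items.size()) {
>         break;
>       }
>       const size_t end = std::min(start + segment, items.size());
>       for (size_t i = start; i < end; ++i) { f(items[i]); }
>     }
>   };
>
>   std::vector<Future<Nothing>> futures;
>   for (size_t i = 1; i < parallelism; ++i) {
>     futures.push_back(process::async(pull));
>   }
>
>   // The current Process pulls segments until the queue is empty...
>   pull();
>
>   // ...and then blocks only on segments already claimed by workers.
>   process::collect(futures).await();
> }
> {code}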
> An example use case for this is the task reconciliation loops in the master:
> https://github.com/apache/mesos/blob/1.5.0/src/master/master.cpp#L8385-L8419
> This generalizes to many other algorithms beyond just iteration. It may be
> good to align this with the C++ Parallelism TS, which shows that many of the
> C++ algorithms have natural parallel counterparts.
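> For illustration, the Parallelism TS shape of this (since merged into C++17
> as execution policies) would look roughly like:
> {code}
> #include <algorithm>
> #include <execution>
>
> std::for_each(
>     std::execution::par,
>     items.begin(),
>     items.end(),
>     [](const Item& item) {
>       // Perform some const work on item.
>     });
> {code}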
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)