westonpace commented on a change in pull request #8680:
URL: https://github.com/apache/arrow/pull/8680#discussion_r545195108
##########
File path: cpp/src/arrow/util/thread_pool_benchmark.cc
##########
@@ -136,21 +168,24 @@ static void ThreadedTaskGroup(benchmark::State& state) {
for (auto _ : state) {
auto task_group = TaskGroup::MakeThreaded(pool.get());
- for (int32_t i = 0; i < nspawns; ++i) {
- // Pass the task by reference to avoid copying it around
- task_group->Append(std::ref(task));
- }
+ task_group->Append([&task, nspawns, task_group] {
Review comment:
It's not exactly a holdover. `SerialTaskGroup::AppendReal` has
`DCHECK(!finished_);` which makes sense (you don't want to add a task to a
finished task group). However, if you add the same `DCHECK(!finished_);` to
`ThreadedTaskGroup` then the old non-nested approach in `TestTaskGroupErrors`
(in task_group_test.cc) fails.
What happens is that the first one or two tasks get added to the task group
and, if they are fast, they finish before the task adder has a chance to add
the rest, at which point the task group gets marked finished. This doesn't
happen with the serial task group, since that group cannot be marked finished
until the caller waits on it.
Ben and I had some discussion about possibly adding a `Start` method to task
group, but I don't remember where that ended up. So, as it stands right now,
you need to use this nested approach to ensure that the task group does not
finish until you have scheduled all the tasks you want to run.
Also, I put a commented-out `DCHECK(!finished_);` in `ThreadedTaskGroup`. I
don't want to uncomment it because there are still some places that use the
threaded task group in a non-nested fashion, and I don't want to break them.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]