Honestly, I would have to agree that `cancel_on_error` seems a bit excessive;
unless it is enabled by default and implemented in the abstract class, it
should probably not be included. It was just an idea to keep backwards
compatibility.
I will personally simply add a subclass of my prefer
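For what it's worth, a minimal sketch of such a subclass (assuming the Python 3.9 `cancel_futures` parameter; the class name and exact behaviour here are illustrative, not part of the stdlib):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical subclass: cancel pending futures whenever the with-block
# exits because of an exception. Requires Python 3.9+ for cancel_futures.
class CancelOnErrorExecutor(ThreadPoolExecutor):
    def __exit__(self, exc_type, exc_val, exc_tb):
        self.shutdown(wait=True, cancel_futures=(exc_val is not None))
        return False  # never suppress the exception

try:
    with CancelOnErrorExecutor(max_workers=1) as ex:
        futures = [ex.submit(time.sleep, 0.1) for _ in range(20)]
        raise RuntimeError("boom")
except RuntimeError:
    pass

# Futures that never started running were cancelled on exit.
print(any(f.cancelled() for f in futures))
```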
Thanks for all your hard work on the `cancel_futures` feature!
As you said, there is a complexity cost (both in terms of the API and the
implementation) whenever a new feature is added. The current
ProcessPoolExecutor implementation, in particular, is complex enough that I
can't easily reas
> But, it would potentially risk adding an underutilized parameter to the
> executor constructor (which contributes to feature bloat).
That's true; personally I would always enable cancel_on_error (making it
redundant and implementing it in the abstract class), but that's just my use
case. You
> Then if Executor.__exit__ detects an exception it would call shutdown
> with cancel_futures set to True.
Oh, I see. That should be rather simple:
```
def __exit__(self, exc_type, exc_val, exc_tb):
    if exc_val is not None and self._cancel_on_error:
        self.shutdown(wait=True, cancel_futures=True)
    else:
        self.shutdown(wait=True)
    return False
```
> Hmm, it should be possible. Do you specifically mean cancelling the pending
> futures once a single one of the submitted functions raises an exception,
> or cancelling the pending futures when the Executor itself raises an
> exception (i.e. BrokenProcessPool)? I would assume the former, since that
Miguel Ángel Prosper wrote:
> Thank you so much for the work, I was very confused on how to even start
> implementing it in the ProcessPoolExecutor, but you finished everything
> super quick!
No problem! The ProcessPoolExecutor implementation wasn't immediately clear
to me either, but after some exper
Thank you so much for the work; I was very confused about how to even start
implementing it in the ProcessPoolExecutor, but you finished everything super
quick!
I suppose this might be better in another thread, but... could it be
possible to change the context manager of the executor to ca
Thank you so much for all your efforts on this change, Kyle! And thanks to
Brian Q and Antoine P for reviewing.
On Sun, Feb 2, 2020 at 15:40 Kyle Stanley wrote:
> > I would certainly be willing to look into it.
>
> As an update to this thread for anyone interested in this feature, it's
> been im
> I would certainly be willing to look into it.
As an update to this thread for anyone interested in this feature, it's
been implemented in Python 3.9 for both ProcessPoolExecutor and
ThreadPoolExecutor as a new parameter to Executor.shutdown(),
*cancel_futures*.
For a description of the feature,
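For reference, a minimal usage example of the new parameter (Python 3.9+):

```python
import time
from concurrent.futures import ThreadPoolExecutor

ex = ThreadPoolExecutor(max_workers=1)
futures = [ex.submit(time.sleep, 0.1) for _ in range(10)]

# wait=True still waits for already-running futures to finish, but
# cancel_futures=True cancels everything still sitting in the queue.
ex.shutdown(wait=True, cancel_futures=True)

print(sum(f.cancelled() for f in futures), "of", len(futures), "were cancelled")
```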
> Is anyone else interested in implementing this small feature for
> concurrent.futures?
I would certainly be willing to look into it. We've been discussing the
possibility of a native threadpool for asyncio in the future (
https://bugs.python.org/issue32309), so it would certainly be beneficial
for
(Belatedly)
Is anyone else interested in implementing this small feature for
concurrent.futures?
On Fri, Jan 3, 2020 at 18:28 Miguel Ángel Prosper <
miguelangel.pros...@gmail.com> wrote:
> > It looks like you have a good handle on the code -- do you want to
> > submit a PR to GitHub to add such a
> It looks like you have a good handle on the code -- do you want to submit a
> PR to GitHub to add such a parameter?
Thanks, but I'm not really sure how to implement it in the ProcessPoolExecutor;
I just think the solution is probably related to the code responsible for
handling a failed initia
> But I don’t think “terminate” is the right name. Maybe “cancel”? Or even
> “shutdown(wait=whatever, cancel=True)?”
"terminate" was definitely not a good name, especially because it doesn't
actually terminate anything; it just cancels some of the operations. Since it
also has to cooperate with
On Jan 3, 2020, at 10:11, Miguel Ángel Prosper wrote:
>> Having a way to clear the queue and then shutdown once existing jobs are
>> done is a lot more manageable.
> ...
>> So the only clean way to do this is cooperative: flush the queue, send some
>> kind of message to all chi
On Fri, Jan 3, 2020 at 3:28 PM Miguel Ángel Prosper <
miguelangel.pros...@gmail.com> wrote:
> > gets one item from the queue, runs it, and then checks if the executor
> > is being shut down.
>
> That's exactly what I thought at first, but just after that the continue
> statement prevents that check,
> gets one item from the queue, runs it, and then checks if the executor is
> being shut down.
That's exactly what I thought at first, but just after that the continue
statement prevents that check, so all futures always get processed. Only when
the sentinel is reached, which is placed at the
Looking at the implementation in concurrent/futures/thread.py, it looks
like each of the worker threads repeatedly gets one item from the queue,
runs it, and then checks if the executor is being shut down. Worker threads
get added dynamically until the executor's max thread count is reached. New
fu
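To illustrate the loop being described, here is a heavily simplified paraphrase of the `_worker` function in `concurrent/futures/thread.py` (the real version uses weak references to the executor and module-level shutdown state; this sketch only keeps the queue/sentinel shape):

```python
import queue
import threading

def worker(work_queue, shutdown_flag):
    while True:
        work_item = work_queue.get(block=True)
        if work_item is not None:
            work_item()   # run the submitted callable
            continue      # note: skips the shutdown check below, so every
                          # queued item is processed before the check runs
        # Only the None sentinel falls through to here.
        if shutdown_flag.is_set():
            work_queue.put(None)  # wake any sibling workers
            return

results = []
q = queue.Queue()
flag = threading.Event()
t = threading.Thread(target=worker, args=(q, flag))
t.start()
for i in range(3):
    q.put(lambda i=i: results.append(i))
flag.set()
q.put(None)  # the sentinel
t.join()
print(results)  # all three items ran before the sentinel was seen
```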
> Having a way to clear the queue and then shutdown once existing jobs are
> done is a lot more manageable.
...
> So the only clean way to do this is cooperative: flush the queue, send some
> kind of message to all children telling them to finish as quickly as
> possible, then wait for them
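The "flush the queue" step described above is, roughly, what an implementation has to do: pull every not-yet-started work item off the internal queue and cancel its future. A simplified sketch (with `SimpleNamespace` standing in for the executor's internal `_WorkItem`):

```python
import queue
from concurrent.futures import Future
from types import SimpleNamespace

def drain_and_cancel(work_queue):
    """Pull every not-yet-started work item off the queue and cancel it."""
    cancelled = 0
    while True:
        try:
            work_item = work_queue.get_nowait()
        except queue.Empty:
            break
        # A fresh, not-yet-running Future can always be cancelled.
        if work_item is not None and work_item.future.cancel():
            cancelled += 1
    return cancelled

q = queue.Queue()
for _ in range(5):
    q.put(SimpleNamespace(future=Future()))  # stand-in for a _WorkItem
n = drain_and_cancel(q)
print(n)  # → 5
```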
On Jan 2, 2020, at 20:40, Miguel Ángel Prosper wrote:
>
> I think it would be very helpful to have an additional argument (cancel for
> example) added to Executor.shutdown that cancels all pending futures
> submitted to the executor.
> Then the context manager would gain the ability to abort all
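Before a parameter like this exists, the proposed behaviour can be approximated by hand, since `Future.cancel()` only succeeds for futures that have not started running (a sketch of the idea, not a full replacement for the feature):

```python
import time
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=1) as ex:
    futures = [ex.submit(time.sleep, 0.05) for _ in range(10)]
    # "Abort": cancel whatever has not started yet, then let __exit__
    # call shutdown(wait=True) for whatever is already running.
    aborted = [f for f in futures if f.cancel()]

print(len(aborted), "futures were cancelled before running")
```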