> On 4 Sep 2019, at 22:58, Andrew Barnert wrote:
>
>> On Sep 4, 2019, at 10:17, Anders Hovmöller wrote:
>>
>>> .
>>
>> Doesn't all that imply that it'd be good if you could just pass it the queue
>> object you want?
>
> Pass it a queue object that you construct? Or a queue factory (which
On Sep 4, 2019, at 19:52, Bar Harel wrote:
I'm sorry but I truly fail to see the complication:
sem = Semaphore(10) # line num 1 somewhere near executor creation
sem.acquire() # line number 2, right before submit
future = executor.submit(...)
future.add_done_callback(lambda x: sem.release()) # line number 3, right after submit.
It's
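Spelled out as a runnable sketch, Bar Harel's three lines look like this (the worker count, the bound of 10, and the work function are illustrative, not from the thread):

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Semaphore

executor = ThreadPoolExecutor(max_workers=4)
sem = Semaphore(10)  # at most 10 tasks queued or running at once

def bounded_submit(fn, *args, **kwargs):
    sem.acquire()  # blocks once 10 tasks are in flight
    future = executor.submit(fn, *args, **kwargs)
    # release the slot when the task finishes, success or failure
    future.add_done_callback(lambda f: sem.release())
    return future

results = [bounded_submit(pow, 2, n) for n in range(100)]
print(sum(f.result() for f in results))
```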
On Sep 4, 2019, at 08:54, Dan Sommers <2qdxy4rzwzuui...@potatochowder.com>
wrote:
>
> How does blocking the submit call differ from setting max_workers
> in the call to ThreadPoolExecutor?
Here’s a concrete example from my own code:
I need to create thousands of images, each of which is about
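Dan's question can be made concrete: max_workers caps how many tasks *run* at once, but submit() itself never blocks, so every pending submission (and its arguments) piles up in the executor's unbounded internal queue. A small illustration (the sleep task and counts are arbitrary):

```python
import time
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=2) as ex:
    futures = [ex.submit(time.sleep, 0.2) for _ in range(50)]
    # submit() returned immediately 50 times; only 2 tasks are running,
    # the rest sit in the executor's internal queue, still in memory.
    pending = sum(1 for f in futures if not f.running() and not f.done())
    print(pending)  # roughly 48: everything not yet picked up by a worker
```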
On Sep 4, 2019, at 10:17, Anders Hovmöller wrote:
>
>
>> On 4 Sep 2019, at 18:31, Andrew Barnert via Python-ideas
>> wrote:
>>
>> On Sep 4, 2019, at 04:21, Chris Simmons wrote:
>>
>> I have seen deployed servers that wrap an Executor with a Semaphore to add
>> this functionality (which is
On Wed, Sep 4, 2019, 10:40 PM Dan Sommers <
2qdxy4rzwzuui...@potatochowder.com> wrote:
> I'm sure I'm missing something, but isn't that the point of a
> ThreadPoolExecutor? Yes, you can submit more requests than you
> have resources to execute concurrently, but the executor itself
> limits the
On 9/4/19 11:08 AM, Joao S. O. Bueno wrote:
I second that such a feature would be useful, as I am on the verge of
implementing a work-around for that in a project right now.
I'm sure I'm missing something, but isn't that the point of a
ThreadPoolExecutor? Yes, you can submit more requests
>
> I must ask again about the actual necessity of adding a blocking call to
> executor.submit in that particular case.
If I may intervene, the issue we're discussing is frequently
encountered with asyncio: You can have an enormous queue of clients or
requests, creating a coroutine for each
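The asyncio analogue of blocking submit can be sketched with asyncio.Semaphore: acquire a slot before creating each task, release it when the task finishes (the bound of 10, the task count, and the handler body are made up for illustration):

```python
import asyncio

async def handle(i, sem):
    try:
        await asyncio.sleep(0)  # stand-in for real per-client work
        return i * 2
    finally:
        sem.release()  # free a slot for the next submission

async def main():
    sem = asyncio.Semaphore(10)
    tasks = []
    for i in range(1000):
        await sem.acquire()  # blocks until fewer than 10 tasks are alive
        tasks.append(asyncio.create_task(handle(i, sem)))
    results = await asyncio.gather(*tasks)
    return sum(results)

total = asyncio.run(main())
print(total)  # 999000
```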
> On 4 Sep 2019, at 18:31, Andrew Barnert via Python-ideas
> wrote:
>
> On Sep 4, 2019, at 04:21, Chris Simmons wrote:
>
> I have seen deployed servers that wrap an Executor with a Semaphore to add
> this functionality (which is mildly silly, but not when the “better”
> alternative is to
On Sep 4, 2019, at 04:21, Chris Simmons wrote:
I have seen deployed servers that wrap an Executor with a Semaphore to add this
functionality (which is mildly silly, but not when the “better” alternative is
to subclass the Executor and use knowledge of its implementation internals…).
Which
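What wrapping an Executor with a Semaphore looks like in practice, without subclassing or touching executor internals (a sketch; the class name and bounds are made up):

```python
from concurrent.futures import ThreadPoolExecutor
from threading import BoundedSemaphore

class BoundedExecutor:
    """Wraps an executor so submit() blocks once `bound` tasks are in flight."""

    def __init__(self, executor, bound):
        self._executor = executor
        self._sem = BoundedSemaphore(bound)

    def submit(self, fn, *args, **kwargs):
        self._sem.acquire()  # blocks when `bound` tasks are pending/running
        try:
            future = self._executor.submit(fn, *args, **kwargs)
        except Exception:
            self._sem.release()  # submit itself failed; give the slot back
            raise
        future.add_done_callback(lambda f: self._sem.release())
        return future

ex = BoundedExecutor(ThreadPoolExecutor(max_workers=4), bound=8)
futures = [ex.submit(str.upper, "spam") for _ in range(20)]
print(all(f.result() == "SPAM" for f in futures))  # True
```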
(Somehow your post came in twice. I'm replying to the second one.)
This seems a reasonable idea. A problem may be how to specify this, since
all positional and keyword arguments to `submit()` after the function
object are passed to the call. A possible solution would be to add a second
call,
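The signature problem mentioned here, made concrete: submit(fn, *args, **kwargs) forwards every argument after the function to the call itself, so a new option such as block=True cannot simply be added as a keyword (the name `block` is hypothetical, used only to show the collision):

```python
from concurrent.futures import ThreadPoolExecutor

def work(x, block=False):  # a user function that happens to take `block`
    return (x, block)

with ThreadPoolExecutor() as ex:
    # `block=True` is forwarded to `work`, not interpreted by submit(),
    # so there is no free spot in submit()'s signature for a new option.
    result = ex.submit(work, 1, block=True).result()
    print(result)  # (1, True)
```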