Thanks for the info! It sort of sounds like the default-executor threads
and the sync-server threads come from the same thread pool. I know that
ServerBuilder::SetResourceQuota sets the maximum number of sync-server threads.
Does it have any impact on the number of default-executor threads? It
doesn't seem to.
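For context, this is roughly the setup I mean (a minimal sketch; the port, quota name, and thread limit are just placeholder values, and there's no service registered here):

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>

int main() {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());

  // Cap the number of sync-server threads via a ResourceQuota.
  grpc::ResourceQuota quota("server_quota");  // name is arbitrary
  quota.SetMaxThreads(10);                    // illustrative limit
  builder.SetResourceQuota(quota);

  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();
}
```

My question is whether that SetMaxThreads(10) cap says anything at all about the default-executor threads, or only about the sync-server pool.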

Also, can you tell me under what conditions a process can expect to wind up
with dozens of sleeping default-executor threads? My process doesn't start
off like that, but after a few hours that's how it winds up. It is not under
high load, just serving a few short-lived streaming-response queries every
few seconds. If I limit the max number of these threads to, let's say, 10, do
you anticipate that connections would be refused under these conditions (i.e.,
serving a few short-lived streaming-response queries every few seconds)?
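Concretely, combining the MIN_POLLERS/MAX_POLLERS options you pointed me at with a thread cap, I'd be doing something like this (a sketch; the values are arbitrary and there's no service registered):

```cpp
#include <memory>

#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>

int main() {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051",
                           grpc::InsecureServerCredentials());

  // Keep a small, bounded set of polling threads around.
  builder.SetSyncServerOption(
      grpc::ServerBuilder::SyncServerOption::MIN_POLLERS, 1);
  builder.SetSyncServerOption(
      grpc::ServerBuilder::SyncServerOption::MAX_POLLERS, 4);

  // Cap total sync-server threads.
  grpc::ResourceQuota quota("server_quota");
  quota.SetMaxThreads(10);  // would this refuse connections under my load?
  builder.SetResourceQuota(quota);

  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  server->Wait();
}
```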

Lastly, is it safe to manually kill these sleeping threads? They seem to be
blocked on a condition variable.

Thanks!

-Jeff

On Thu, Jan 6, 2022 at 3:39 PM Mark D. Roth <[email protected]> wrote:

> The C++ sync server has one thread pool for both polling and request
> handlers. When a request comes in, an existing polling thread basically
> becomes a request handler thread, and when the request handler completes,
> that thread is available to become a polling thread again. The MIN_POLLERS
> and MAX_POLLERS options
> <https://grpc.github.io/grpc/cpp/classgrpc_1_1_server_builder.html#aff66bd93cba7d4240a64550fe1fca88d>
> (which can be set via ServerBuilder::SetSyncServerOption()
> <https://grpc.github.io/grpc/cpp/classgrpc_1_1_server_builder.html#acbc2a859672203e7be097de55e296d4b>)
> allow tuning the number of threads that are used for polling: when a
> polling thread becomes a request handler thread, if there are not enough
> polling threads remaining, a new one will be spawned, and when a request
> handler finishes, if there are too many polling threads, the thread will
> terminate.
>
> On Wed, Jan 5, 2022 at 12:54 PM Jeff Steger <[email protected]> wrote:
>
>> Ah never mind I see you answered, apologies. Let me ask you this: am I
>> stuck with all of these default-executor threads that my process is
>> spawning? Is there no way to limit them? Do they come from the same pool as
>> grpc sync server threads?
>>
>> On Wed, Jan 5, 2022 at 3:51 PM Jeff Steger <[email protected]> wrote:
>>
>>> Can you specifically answer this:
>>>
>>> grpc-java has a method in its ServerBuilder class to set the Executor.
>>> Is there similar functionality for grpc-c++ ?
>>>
>>> Thanks!
>>>
>>> On Tue, Jan 4, 2022 at 11:55 AM Mark D. Roth <[email protected]> wrote:
>>>
>>>> I answered this in the other thread you posted on.
>>>>
>>>> On Sun, Jan 2, 2022 at 9:39 AM Jeff Steger <[email protected]> wrote:
>>>>
>>>>> grpc-java has a method in its ServerBuilder class to set the Executor.
>>>>> Is there similar functionality for grpc-c++ ? I am running a C++ grpc
>>>>> server and the number of executor threads it spawns is high and seems to
>>>>> never decrease, even when connections stop.
>>>>>
>>>>> --
>>>>> You received this message because you are subscribed to the Google
>>>>> Groups "grpc.io" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send
>>>>> an email to [email protected].
>>>>> To view this discussion on the web visit
>>>>> https://groups.google.com/d/msgid/grpc-io/CAA-WHunWvX5Tr6Vp3e-E6vcKgzD%3DGzsCNoZzqNNQ8Ox8BZvggA%40mail.gmail.com
>>>>> <https://groups.google.com/d/msgid/grpc-io/CAA-WHunWvX5Tr6Vp3e-E6vcKgzD%3DGzsCNoZzqNNQ8Ox8BZvggA%40mail.gmail.com?utm_medium=email&utm_source=footer>
>>>>> .
>>>>>
>>>>
>>>>
>>>> --
>>>> Mark D. Roth <[email protected]>
>>>> Software Engineer
>>>> Google, Inc.
>>>>
>>>
>
> --
> Mark D. Roth <[email protected]>
> Software Engineer
> Google, Inc.
>
