Hi AJ,

Thanks for the reply. May I suggest making the maximum number of threads
(and/or any rate limits) configurable?

Jeff

On Tue, May 16, 2023 at 7:44 PM 'AJ Heller' via grpc.io <
grpc-io@googlegroups.com> wrote:

> Hello all, I want to offer a quick update. tl;dr: Jeff's analysis is
> correct. The executor is legacy code at this point, slated for deletion,
> and increasingly unused.
>
> We have been carefully replacing the legacy I/O, timer, and async
> execution implementations with a new public EventEngine
> <https://github.com/grpc/grpc/blob/63ecc4ba3ee958d8f95b133ce38dbedb70fc0658/include/grpc/event_engine/event_engine.h>
>  API
> and its default implementations. The new thread pools do still auto-scale
> as needed - albeit with different heuristics, which are evolving as we
> benchmark - but threads are now reclaimed if/when gRPC calms down from a
> burst of activity that caused the pool to grow. Also, I believe the
> executor did not rate limit thread creation when closure queues reached
> their max depths, but the default EventEngine implementations do rate limit
> thread creation (currently capped at 1 new thread per second, but that's an
> implementation detail which may change ... some benchmarks have shown it to
> be a pretty effective rate). Beginning around gRPC v1.48, you should see an
> increasing number of "event_engine" threads, and a decreasing number of
> executor threads. Ultimately we aim to unify all async activity into a
> single auto-scaling thread pool under the EventEngine.
>
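(An aside for readers skimming this thread: the snippet below is not gRPC's
actual code, only a rough illustration of the growth heuristic AJ describes,
where the pool may add a thread when work backs up but at most roughly once
per second, queueing everything else in the meantime.)

    #include <chrono>
    #include <mutex>

    // Illustrative sketch only; the real EventEngine thread pool uses its own
    // (evolving) heuristics.
    class RateLimitedGrowthPolicy {
     public:
      // Returns true if the pool may start another thread right now.
      bool CanSpawnThread() {
        std::lock_guard<std::mutex> lock(mu_);
        const auto now = std::chrono::steady_clock::now();
        if (now - last_spawn_ < std::chrono::seconds(1)) {
          return false;  // A thread was started under a second ago; keep queueing.
        }
        last_spawn_ = now;
        return true;
      }

     private:
      std::mutex mu_;
      std::chrono::steady_clock::time_point last_spawn_{};
    };
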
> And since the EventEngine is a public API, any integrators that want
> complete control over thread behavior can implement their own EventEngine
> and plug it in to gRPC. gRPC will (eventually) use a provided engine for
> all async execution, timers, and I/O. Implementing an engine is not a small
> task, but it is an option people have been requesting for years. Otherwise,
> the default threading behavior provided by gRPC is tuned for performance -
> if starting a thread helps gRPC move faster, then that's what it will do.
>
> Hope this helps!
> -aj
>
> On Friday, May 12, 2023 at 4:03:58 PM UTC-7 Jiqing Tang wrote:
>
>> Thanks so much Jeff, agree that reaping them after they have been idle would
>> be great.
>>
>> On Friday, May 12, 2023 at 6:59:28 PM UTC-4 Jeff Steger wrote:
>>
>>> This is as close to an explanation as I have found:
>>>
>>> look at sreecha’s response in
>>> https://github.com/grpc/grpc/issues/14578
>>>
>>> tldr:
>>> “ The max number of threads can be 2x
>>> <https://github.com/grpc/grpc/blob/v1.10.0/src/core/lib/iomgr/executor.cc#L94>
>>>  the
>>> number cores and unfortunately its not configurable at the moment….. any
>>> executor threads and timer-manager you see are by-design; unless the
>>> threads are more than 2x the number of cores on your machine in which case
>>> it is clearly a bug”
>>>
>>>
>>> From my observation of the thread count and from my examination of the
>>> grpc code (which I admit I performed some years ago), it is evident to me
>>> that the grpc framework spawns threads up to 2x the number of hardware
>>> cores. It will spawn a new thread if an existing thread in its threadpool
>>> is busy iirc. The issue is that the grpc framework never reaps idle
>>> threads. Once a thread is created, it is there for the lifetime of the grpc
>>> server. There is no way to configure the max number of threads either. It
>>> is really imo a sloppy design. Threads aren't free and this framework keeps
>>> (in my case) dozens and dozens of idle threads around even during long
>>> periods of low or no traffic. Maybe they fixed it in newer versions, idk.
>>>
>>> On Fri, May 12, 2023 at 5:58 PM Jiqing Tang <jiqin...@gmail.com> wrote:
>>>
>>>> Hi Jeff and Mark,
>>>>
>>>> I just ran into the same issue with an async C++ gRPC server (version
>>>> 1.37.1) and was curious about these default-executor threads, which is how
>>>> I found this thread. Did you guys figure out what these threads are for?
>>>> The number seems to be about 2x the number of polling worker threads.
>>>>
>>>> Thanks!
>>>>
>>>> On Friday, January 7, 2022 at 3:47:51 PM UTC-5 Jeff Steger wrote:
>>>>
>>>>> Thanks Mark, I will turn on trace and see if I see anything odd. I was
>>>>> reading about a function called Executor::SetThreadingDefault(bool enable)
>>>>> that I think I can safely call after I create my grpc server. It is a
>>>>> public function and seems to allow me to toggle between a threaded
>>>>> implementation and an async one. Is that accurate? Is calling this 
>>>>> function
>>>>> safe to do and/or recommended (or at least not contra-recommended)? Thanks
>>>>> again for your help!
>>>>>
>>>>> Jeff
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Jan 7, 2022 at 11:14 AM Mark D. Roth <ro...@google.com> wrote:
>>>>>
>>>>>> Oh, sorry, I thought you were asking about the sync server threads.
>>>>>> The default-executor threads sound like threads that are spawned 
>>>>>> internally
>>>>>> inside of C-core for things like synchronous DNS resolution; those should
>>>>>> be completely unrelated to the sync server threads.  I'm not sure what
>>>>>> would cause those threads to pile up.
>>>>>>
>>>>>> Try running with the env vars GRPC_VERBOSITY=DEBUG
>>>>>> GRPC_TRACE=executor and see if that yields any useful log information.  
>>>>>> In
>>>>>> particular, try running that with a debug build, since that will add
>>>>>> additional information about where in the code the closures on the 
>>>>>> executor
>>>>>> threads are coming from.
>>>>>>
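For reference, that means launching the server with both variables set, for
example (./my_server is a placeholder for your binary):

    GRPC_VERBOSITY=DEBUG GRPC_TRACE=executor ./my_server
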
>>>>>> On Thu, Jan 6, 2022 at 7:05 PM Jeff Steger <be2...@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>> Thanks for the info! It sort of sounds like the default-executor
>>>>>>> threads and the sync-server threads come from the same thread pool. I 
>>>>>>> know
>>>>>>> that ServerBuilder::SetResourceQuota sets the max number sync-server
>>>>>>> threads. Does it have any impact on the number of default-executor 
>>>>>>> threads?
>>>>>>> It doesn't seem to.
>>>>>>>
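For anyone searching the archives later, the quota Jeff refers to is applied
roughly like this; the quota name and the value 32 are arbitrary examples, and
per the rest of this thread it bounds the sync-server threads rather than the
default-executor threads:

    #include <grpcpp/resource_quota.h>
    #include <grpcpp/server_builder.h>

    void ConfigureQuota(grpc::ServerBuilder& builder) {
      grpc::ResourceQuota quota("server_quota");  // The name is arbitrary.
      quota.SetMaxThreads(32);   // Example cap on sync-server threads.
      builder.SetResourceQuota(quota);
    }
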
>>>>>>> Also, can you tell me under what conditions a process can expect to
>>>>>>> wind up with dozens of sleeping default-executor threads? My process
>>>>>>> doesn't start off like that but after a few hours that's how it winds
>>>>>>> up. It
>>>>>>> is not under high load, just serving a few short-lived 
>>>>>>> streaming-response
>>>>>>> queries every few seconds. If I limit the max number of these threads to
>>>>>>> let's say 10, do you anticipate that connections would be refused under
>>>>>>> these conditions (i.e., serving a few short-lived streaming-response
>>>>>>> queries
>>>>>>> every few seconds)?
>>>>>>>
>>>>>>> Last, is it safe to manually kill these sleeping threads? They seem
>>>>>>> to be blocked on a condition variable.
>>>>>>>
>>>>>>> Thanks!
>>>>>>>
>>>>>>> -Jeff
>>>>>>>
>>>>>>> On Thu, Jan 6, 2022 at 3:39 PM Mark D. Roth <ro...@google.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> The C++ sync server has one thread pool for both polling and
>>>>>>>> request handlers. When a request comes in, an existing polling thread
>>>>>>>> basically becomes a request handler thread, and when the request 
>>>>>>>> handler
>>>>>>>> completes, that thread is available to become a polling thread again. 
>>>>>>>> The MIN_POLLERS
>>>>>>>> and MAX_POLLERS options
>>>>>>>> <https://grpc.github.io/grpc/cpp/classgrpc_1_1_server_builder.html#aff66bd93cba7d4240a64550fe1fca88d>
>>>>>>>> (which can be set via ServerBuilder::SetSyncServerOption()
>>>>>>>> <https://grpc.github.io/grpc/cpp/classgrpc_1_1_server_builder.html#acbc2a859672203e7be097de55e296d4b>)
>>>>>>>> allow tuning the number of threads that are used for polling: when a
>>>>>>>> polling thread becomes a request handler thread, if there are not 
>>>>>>>> enough
>>>>>>>> polling threads remaining, a new one will be spawned, and when a 
>>>>>>>> request
>>>>>>>> handler finishes, if there are too many polling threads, the thread 
>>>>>>>> will
>>>>>>>> terminate.
>>>>>>>>
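In code, the options Mark describes are set on the ServerBuilder before the
server is built; the values below are only examples:

    #include <grpcpp/server_builder.h>

    void ConfigurePollers(grpc::ServerBuilder& builder) {
      // Keep at least 1 and at most 4 threads dedicated to polling.
      builder.SetSyncServerOption(
          grpc::ServerBuilder::SyncServerOption::MIN_POLLERS, 1);
      builder.SetSyncServerOption(
          grpc::ServerBuilder::SyncServerOption::MAX_POLLERS, 4);
    }
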
>>>>>>>> On Wed, Jan 5, 2022 at 12:54 PM Jeff Steger <be2...@gmail.com>
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Ah never mind I see you answered, apologies. Let me ask you this:
>>>>>>>>> am I stuck with all of these default-executor threads that my process 
>>>>>>>>> is
>>>>>>>>> spawning? Is there no way to limit them? Do they come from the same pool
>>>>>>>>> as
>>>>>>>>> grpc sync server threads?
>>>>>>>>>
>>>>>>>>> On Wed, Jan 5, 2022 at 3:51 PM Jeff Steger <be2...@gmail.com>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Can you specifically answer this:
>>>>>>>>>>
>>>>>>>>>> grpc-java has a method in its ServerBuilder class to set the
>>>>>>>>>> Executor. Is there similar functionality for grpc-c++ ?
>>>>>>>>>>
>>>>>>>>>> Thanks!
>>>>>>>>>>
>>>>>>>>>> On Tue, Jan 4, 2022 at 11:55 AM Mark D. Roth <ro...@google.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> I answered this in the other thread you posted on.
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Jan 2, 2022 at 9:39 AM Jeff Steger <be2...@gmail.com>
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> grpc-java has a method in its ServerBuilder class to set the
>>>>>>>>>>>> Executor. Is there similar functionality for grpc-c++ ? I am 
>>>>>>>>>>>> running a C++
>>>>>>>>>>>> grpc server and the number of executor threads it spawns is high 
>>>>>>>>>>>> and seems
>>>>>>>>>>>> to never decrease, even when connections stop.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Mark D. Roth <ro...@google.com>
>>>>>>>>>>> Software Engineer
>>>>>>>>>>> Google, Inc.
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Mark D. Roth <ro...@google.com>
>>>>>>>> Software Engineer
>>>>>>>> Google, Inc.
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>> --
>>>>>> Mark D. Roth <ro...@google.com>
>>>>>> Software Engineer
>>>>>> Google, Inc.
>>>>>>
