Thank you, Craig. What is the timeline for that work?

I understand the current implementation does not give us control over
thread counts. From reading the code, the *Executor* can create up to twice
as many threads as there are CPU cores. I have more questions:

   1. What kinds of *internal* threads does gRPC create, how many of each
   kind, and what are they for? I know there are timer threads and
   *Executor* threads, but I don't know what they do. What other threads
   are there?
   2. Where are the threads that receive/send packets on sockets? I cannot
   find any in the gRPC code base, so I am not sure there are dedicated
   threads for that.
   3. Are there any places where gRPC creates short-lived threads?

These questions are about the gRPC C++ implementation.

Thanks,
Alex


On Thu, Jun 17, 2021 at 10:47 AM 'Craig Tiller' via grpc.io <
[email protected]> wrote:

> I don't think we make any guarantees about thread count right now, over
> and above bounded and relatively small.
>
> We are moving to a new design around a thing called EventEngine, which
> will allow pluggable IO, thread pools, and timers. I expect once that work
> is done we'll be in a position to start to write down some level of
> guarantees (I hope, but can't yet commit to 0 threads outside of
> EventEngine being a reasonable bar to hit).
>
> On Thu, Jun 17, 2021 at 10:38 AM 'Jonathan Basseri' via grpc.io <
> [email protected]> wrote:
>
>> Just to add some detail to this question, Alex and I work on a platform
>> which exerts a lot of control over threads and allocations to guarantee
>> high throughput and low latency. We would like to begin exposing gRPC
>> services from the platform without compromising performance.
>>
>> Our hope was to write an async server which would enqueue work in our own
>> thread manager and go right back to listening for incoming requests. We
>> have seen the APIs for controlling concurrency in both
>> `ResourceQuota` and `CompletionQueue` (e.g. here
>> <https://stackoverflow.com/a/52301414>) but it seems like there are
>> still internal gRPC threads.
>>
>> So it comes down to these questions: *Can I control the total number of
>> threads*, both short-lived and long-lived, that gRPC creates? If not,
>> can I provide a strong guarantee about the maximum number of threads?
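>>
>> A rough sketch of the hand-off pattern we have in mind (pseudocode; the
>> only real gRPC API assumed here is CompletionQueue::Next, while
>> our_thread_manager, CallData, and Proceed are our own hypothetical names):
>>
>>     // Single poller thread: drain the completion queue and hand each
>>     // tag to our own thread manager, then go right back to Next().
>>     void* tag;
>>     bool ok;
>>     while (cq->Next(&tag, &ok)) {
>>       our_thread_manager.Enqueue([tag, ok] {
>>         static_cast<CallData*>(tag)->Proceed(ok);
>>       });
>>     }
>>
>> The question is whether gRPC spawns threads of its own beyond the ones
>> running a loop like this.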
>>
>> (Getting arenas to work with our custom allocator is probably a topic for
>> a future thread.)
>>
>> Thanks for your help,
>> Jonathan
>>
>> On Wednesday, June 16, 2021 at 4:28:48 PM UTC-7 Alex Zuo wrote:
>>
>>> I am trying to understand how many internal threads gRPC creates in
>>> async mode. I have found some timer threads and some threads in the
>>> Executor. Are there any other threads? Are there any short-lived threads?
>>>
>>> Also, are there any threads that receive bytes from the socket and
>>> deserialize them?
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "grpc.io" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to [email protected].
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/grpc-io/4fba9ffc-0d00-40bc-887f-e5134a494f83n%40googlegroups.com
>> <https://groups.google.com/d/msgid/grpc-io/4fba9ffc-0d00-40bc-887f-e5134a494f83n%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
