Hi AJ,
If I may - a follow-up question after looking a little bit into the 
EventEngine interface. 

I need to satisfy a very minimal requirement: being able to control the 
number of internal threads and the CPU affinity of those threads. I 
wouldn't want to change any other default behavior of gRPC. Ideally I 
would use the existing default EventEngine with a small change to its 
thread pool.
I don't see a way to do that with the current interface, especially if I 
want to use gRPC as an installed library rather than link it into my 
source tree (via a submodule or some other method). Only the EventEngine 
interface is exposed to library users. 
Any advice on that? Is my only option to incorporate the gRPC source into 
my project?
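
To make the requirement concrete, the kind of hook I'm imagining would 
look roughly like this (purely hypothetical API, for illustration only - 
as far as I can tell nothing like it exists in the public headers):

  // Hypothetical API, for illustration only - not part of gRPC today.
  grpc::DefaultEventEngineOptions opts;
  opts.num_threads = 4;                     // cap the internal thread count
  opts.on_thread_start = [] {               // runs on each internal thread
    PinCurrentThreadToCores({2, 3, 4, 5});  // my own affinity helper
  };
  grpc::ConfigureDefaultEventEngine(opts);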

Thanks,
Dan

On Thursday, February 8, 2024 at 12:50:04 AM UTC+2 Dan Cohen wrote:

> Thanks AJ.
> Will have a look at the EventEngine then.
>  
> Sorry about replying privately, not sure how it happened, it wasn't my 
> intention.
>
>
> On Tuesday, February 6, 2024 at 8:01:35 PM UTC+2 AJ Heller wrote:
>
> Dan,
>
> Replying here on the mailing list thread.
>
> > Thanks AJ!
> >
> > Using taskset is not an option for me, as my gRPC server is part of the 
> > executable that also does the more sensitive IO workload. So what I need 
> > is to differentiate the affinity of threads within the same process. You 
> > wouldn't recommend patching the affinity into thd.cc because of the 
> > possible impact/side effects on gRPC behavior, or is there another 
> > reason I should be aware of?
>
> Maintaining patches against gRPC may make it difficult to upgrade your 
> library. It's best to stay up to date with the latest gRPC versions if 
> possible, to take advantage of bug fixes, performance improvements, new 
> features, etc. You'll also be hard-pressed to get support for a modified 
> library, presuming you run into something tricky and want to post here or 
> to Stack Overflow. Those are my main reservations, but they're subjective 
> - please do what makes sense for your use case.
>
> >
> > As for the EventEngine interface - this looks very interesting, I will 
> take a look. Thanks. Where can I find the default gRPC implementation of 
> the EventEngine?
>
> The Posix, Windows, and iOS implementations all live here 
> https://github.com/grpc/grpc/tree/cb7172dc17c005e696d0b6945d2927a9e8bf81ac/src/core/lib/event_engine.
>  
> For learning purposes: Posix is the most complex/featureful, and Windows is 
> comparatively simple.
>
> >
> > Thanks a lot,
> > Dan  
>
> On Monday, February 5, 2024 at 1:30:35 PM UTC-8 AJ Heller wrote:
>
> Hi Dan,
>
> If you're interested in CPU affinity for the entire server process on 
> Linux, you can use `taskset` https://linux.die.net/man/1/taskset. 
> Otherwise, you'll likely want to patch `thd.cc` and use pthread's affinity 
> APIs, but I don't recommend it.
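>
> For reference, the pthread call in question looks roughly like this 
> (Linux-only sketch; where exactly you'd hook it into thd.cc is up to you, 
> and I haven't tested this against the current source):
>
>   #define _GNU_SOURCE
>   #include <pthread.h>
>   #include <sched.h>
>   #include <vector>
>
>   // Pin the calling thread to a fixed set of cores (Linux-only).
>   void PinCurrentThread(const std::vector<int>& cores) {
>     cpu_set_t cpuset;
>     CPU_ZERO(&cpuset);
>     for (int core : cores) CPU_SET(core, &cpuset);
>     int rc = pthread_setaffinity_np(pthread_self(), sizeof(cpuset), &cpuset);
>     if (rc != 0) {
>       // Log and decide whether running unpinned is acceptable.
>     }
>   }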
>
> For more advanced use cases with the C/C++ library, you can also get full 
> control over the threading model and all async behavior by implementing a 
> custom EventEngine 
> <https://github.com/grpc/grpc/blob/5c988a47c4285bf8973a96c3a45bc15a7b9b678b/include/grpc/event_engine/event_engine.h>
> .
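>
> The engine owns all of its threads, so one option is to forward every 
> callback it receives to a small pool you create and pin yourself. A rough, 
> untested sketch of such a pool in plain C++ (it reuses the PinCurrentThread 
> helper above; the actual EventEngine wiring - listeners, DNS, timers - is 
> omitted):
>
>   #include <condition_variable>
>   #include <deque>
>   #include <functional>
>   #include <mutex>
>   #include <thread>
>   #include <vector>
>
>   // Minimal pinned worker pool. A custom EventEngine could forward its
>   // Run()/RunAfter() callbacks to Submit() so gRPC work only ever runs
>   // on these cores.
>   class PinnedPool {
>    public:
>     explicit PinnedPool(const std::vector<int>& cores) {
>       for (int core : cores) {
>         workers_.emplace_back([this, core] {
>           PinCurrentThread({core});  // helper from the earlier sketch
>           Loop();
>         });
>       }
>     }
>     ~PinnedPool() {
>       {
>         std::lock_guard<std::mutex> lock(mu_);
>         done_ = true;
>       }
>       cv_.notify_all();
>       for (auto& t : workers_) t.join();
>     }
>     void Submit(std::function<void()> fn) {
>       {
>         std::lock_guard<std::mutex> lock(mu_);
>         queue_.push_back(std::move(fn));
>       }
>       cv_.notify_one();
>     }
>
>    private:
>     void Loop() {
>       for (;;) {
>         std::function<void()> fn;
>         {
>           std::unique_lock<std::mutex> lock(mu_);
>           cv_.wait(lock, [this] { return done_ || !queue_.empty(); });
>           if (done_ && queue_.empty()) return;
>           fn = std::move(queue_.front());
>           queue_.pop_front();
>         }
>         fn();
>       }
>     }
>     std::mutex mu_;
>     std::condition_variable cv_;
>     std::deque<std::function<void()>> queue_;
>     bool done_ = false;
>     std::vector<std::thread> workers_;
>   };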
>
> Cheers,
> -aj
> On Thursday, January 25, 2024 at 10:26:07 AM UTC-8 Dan Cohen wrote:
>
> Hello,
>
> I'm implementing an async gRPC server in C++.
> I need to control and limit the cores used by gRPC's internal threads 
> (the completion queue handler threads are controlled by me) - i.e. I need 
> to set those threads' affinity.
> 
> Is there a way for me to do this without changing gRPC code? 
> If not, where in the code would you recommend I start looking to make 
> such a change?
>
> Thanks,
> Dan
>
>
