First of all, I am not sure that a larger pool size would get you more
performance; usually, starting more threads increases the CPU load and
decreases performance.
Then, back to your override: in the default configuration the sharing server
is not active, so you need to set core/default/gRPCThreadPoolSize instead. The
`receiver-sharing-server` is not activated unless you set a separate ip:port for it.
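For example, the setting goes under the core module in application.yml, roughly
like this (a rough sketch only; the environment variable name
SW_CORE_GRPC_THREAD_POOL_SIZE below is a placeholder, please check the
application.yml shipped with your OAP distribution for the exact entry):

core:
  default:
    # overrides the default Runtime.getRuntime().availableProcessors() * 4
    gRPCThreadPoolSize: ${SW_CORE_GRPC_THREAD_POOL_SIZE:32}

Because the sharing server is not started in your deployment, the agents report
to the core gRPC server, so this should be the pool that the "thread pool is
full" warning comes from.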

Sheng Wu 吴晟
Twitter, wusheng1108


Zhang, James <[email protected]> wrote on Fri, Feb 7, 2020 at 10:39 PM:

> Dear Skywalking Dev,
> I had deployed SkyWalking into K8S and allocated a 3-core Pod to the OAP
> (6.5.0) service.
> In my microservice environment, after more Java agents connected to the OAP
> service for trace data reporting, the OAP server reported the log below:
> - org.apache.skywalking.oap.server.library.server.grpc.GRPCServer
> -478508906 [grpc-default-worker-ELG-3-2] WARN [] - Grpc server thread pool
> is full, rejecting the task
>
> I checked the source code and found that the default GRPCServer thread
> pool size is
> private int threadPoolSize = Runtime.getRuntime().availableProcessors() *
> 4;
>
> therefore my OAP GRPCServer thread pool size is set to 3*4=12 threads.
>
> However, I also found that this default threadPoolSize can be overridden by
> CoreModuleConfig.gRPCThreadPoolSize, so I tried to set the system
> environment variable SW_RECEIVER_SHARING_GRPC_THREAD_POOL_SIZE to override
> the default value in application.yml:
> receiver-sharing-server:
>   default:
>     gRPCThreadPoolSize: ${SW_RECEIVER_SHARING_GRPC_THREAD_POOL_SIZE:0}
>
> I tried setting SW_RECEIVER_SHARING_GRPC_THREAD_POOL_SIZE to 32 to increase
> the gRPC thread pool size. However, after I set this environment variable,
> the gRPC thread pool seems to keep the same default 3*4 size.
>
> Can you tell me whether this setting is effective for overriding the default
> processors*4 value and increasing the gRPC server thread pool size for
> better performance?
>
> Thanks & Best Regards
>
> Xiaochao Zhang(James)
> DI SW CAS MP EMK DO-CHN
> No.7, Xixin Avenue, Chengdu High-Tech Zone
> Chengdu, China  611731
> Email: [email protected]
>
>
