cylrk opened a new issue, #10816:
URL: https://github.com/apache/skywalking/issues/10816

   ### Search before asking
   
   - [X] I had searched in the 
[issues](https://github.com/apache/skywalking/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Apache SkyWalking Component
   
   OAP server (apache/skywalking)
   
   ### What happened
   
   Hello! I have two questions. We are running SkyWalking 9.1 in production on Alibaba Cloud ECS (an OAP cluster of 5 servers, each 8 cores / 16 GB).
   
   1. Some of the data reported from the agents to the OAP cluster was being lost. In the initial deployment the logs showed many gRPC timeouts, so we later switched the agents to the Kafka reporter. However, some data is still being lost.
   2. While investigating the logs, we found that the gRPC server thread pool was frequently full and rejecting tasks (the queue size is the default 10000). After switching to reporting data through Kafka, shouldn't there be far fewer gRPC tasks?
   <img width="1444" alt="image" src="https://github.com/apache/skywalking/assets/65347924/cd5cb2d0-e72b-41b0-8466-1f47773f90aa">
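   
   For context, here is a minimal sketch of the agent-side Kafka reporter setup, assuming the standard `plugin.kafka.*` options in `agent.config` and that the `apm-kafka-reporter-plugin` jars have been moved from `optional-reporter-plugins/` into `plugins/`; the broker addresses are illustrative, not our real values:
   
   ```properties
   # agent.config (illustrative values)
   # Brokers used by the Kafka reporter; overridable via SW_KAFKA_BOOTSTRAP_SERVERS.
   plugin.kafka.bootstrap_servers=${SW_KAFKA_BOOTSTRAP_SERVERS:kafka-1:9092,kafka-2:9092,kafka-3:9092}
   ```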
   
   
   ### What you expected to happen
   
   After switching the agents to the Kafka reporter, I expect there to be no (or very few) gRPC tasks on the OAP servers, and no rejected tasks or data loss.
   
   ### How to reproduce
   
   Originally the agents reported via gRPC and a large number of timeouts occurred. After switching to the Kafka reporter, the gRPC server thread pool still becomes full and rejects tasks.
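   
   For reference, on the OAP side the Kafka fetcher is enabled in `application.yml`; a minimal sketch of our setup (field names as in the bundled config, broker values illustrative):
   
   ```yaml
   # application.yml (illustrative values)
   kafka-fetcher:
     selector: ${SW_KAFKA_FETCHER:default}
     default:
       bootstrapServers: ${SW_KAFKA_FETCHER_SERVERS:kafka-1:9092,kafka-2:9092,kafka-3:9092}
   ```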
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   

