rickyma opened a new issue, #1639:
URL: https://github.com/apache/incubator-uniffle/issues/1639

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the 
[issues](https://github.com/apache/incubator-uniffle/issues?q=is%3Aissue) and 
found no similar issues.
   
   
   ### Describe the bug
   
   As described in 
https://github.com/apache/incubator-uniffle/issues/1596#issuecomment-2012168823, I found that https://github.com/apache/incubator-uniffle/pull/1605 is not enough.
   
   For example, because Netty processing is asynchronous, even if we configure only 200 Netty worker threads, the number of threads concurrently reading local files may far exceed 200. This is because once a Netty worker thread finishes calling `writeAndFlush`, it immediately moves on to receive and process the next client request without waiting for the data to be fully sent; the actual sending of the data is asynchronous.
   
   So we should limit the concurrency of reading local files, as sketched below.
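
   A minimal sketch of one possible way to do this, assuming a server-side handler that reads shuffle data from a local file and writes it back through Netty. The class and method names (`LocalFileReadLimiter`, `readLocalFileAndRespond`, `doReadLocalFile`) are hypothetical and only illustrate the idea: bound the number of in-flight reads with a `Semaphore`, and release the permit in the `writeAndFlush` listener, i.e. only after Netty has actually finished sending (or failed to send) the data, rather than when the worker thread returns.

   ```java
   import io.netty.channel.ChannelFutureListener;
   import io.netty.channel.ChannelHandlerContext;
   import java.util.concurrent.Semaphore;

   /** Hypothetical sketch: bound the number of concurrent local-file reads. */
   public class LocalFileReadLimiter {

     // Maximum number of in-flight local-file reads; a real implementation
     // would make this configurable on the shuffle server.
     private final Semaphore permits;

     public LocalFileReadLimiter(int maxConcurrentReads) {
       this.permits = new Semaphore(maxConcurrentReads);
     }

     public void readLocalFileAndRespond(ChannelHandlerContext ctx, Object request)
         throws InterruptedException {
       // Block until a read slot is free (a timeout or fail-fast reply would also work).
       permits.acquire();
       boolean handedOff = false;
       try {
         Object response = doReadLocalFile(request);
         ctx.writeAndFlush(response).addListener((ChannelFutureListener) future -> {
           // The write has completed (successfully or not) only at this point,
           // so this is where the read stops counting as "in flight".
           permits.release();
         });
         handedOff = true; // the listener now owns the permit
       } finally {
         if (!handedOff) {
           permits.release();
         }
       }
     }

     // Placeholder for the actual local-file read in the shuffle server.
     private Object doReadLocalFile(Object request) {
       return request;
     }
   }
   ```

   With something like this in place, the number of reads in flight stays capped regardless of how quickly the Netty worker threads move on to the next request; whether the limit should instead be expressed in buffered bytes rather than a request count is a separate design choice.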
   
   ### Affects Version(s)
   
   master
   
   ### Uniffle Server Log Output
   
   _No response_
   
   ### Uniffle Engine Log Output
   
   _No response_
   
   ### Uniffle Server Configurations
   
   _No response_
   
   ### Uniffle Engine Configurations
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes I am willing to submit a PR!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

