Ngone51 commented on pull request #30139:
URL: https://github.com/apache/spark/pull/30139#issuecomment-722424971


   > Each iteration over all streams is small, but every chunk fetch triggers a 
count of chunksBeingTransferred, so the cost becomes very serious,
   and there is also contention on the streams' locks, since it is a 
ConcurrentHashMap.
   
   It doesn't make sense to me to attribute the cost to the total number of 
chunks. And your benchmark seems to show that the main cost comes from the 
traversal of streams, doesn't it?
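
   For reference, here is a rough sketch of the kind of per-chunk counting
   being discussed, loosely modeled on an OneForOneStreamManager-style stream
   manager (the class and field names below are illustrative, not the exact
   Spark source):

   ```java
   import java.util.concurrent.ConcurrentHashMap;

   // Illustrative sketch only: per-stream state lives in a ConcurrentHashMap,
   // and counting the in-flight chunks means traversing every entry.
   class StreamManagerSketch {
     static class StreamState {
       volatile long chunksBeingTransferred = 0L;
     }

     private final ConcurrentHashMap<Long, StreamState> streams =
         new ConcurrentHashMap<>();

     // Invoked for every chunk fetch request; this full traversal of streams
     // is the hot spot the benchmark points at.
     public long chunksBeingTransferred() {
       long sum = 0L;
       for (StreamState state : streams.values()) {
         sum += state.chunksBeingTransferred;
       }
       return sum;
     }
   }
   ```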
   
   BTW, shall we avoid traversing when 
`maxChunksBeingTransferred=Long.MAX_VALUE`?
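
   Something along these lines could work as the short-circuit (just a minimal
   sketch; the helper and the LongSupplier shape are mine, not Spark's API):

   ```java
   import java.util.function.LongSupplier;

   // Minimal sketch of the suggested short-circuit: only pay for the stream
   // traversal when the limit is actually configured.
   class ChunkLimitCheckSketch {
     /** Returns true if the fetch should be rejected for exceeding the limit. */
     static boolean exceedsChunkLimit(LongSupplier chunksBeingTransferred,
                                      long maxChunksBeingTransferred) {
       if (maxChunksBeingTransferred == Long.MAX_VALUE) {
         // Limit disabled: skip the per-chunk traversal of all streams entirely.
         return false;
       }
       return chunksBeingTransferred.getAsLong() >= maxChunksBeingTransferred;
     }
   }
   ```

   That way the default setting (Long.MAX_VALUE) never touches the streams map
   at all.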

