zuston commented on issue #378:
URL: https://github.com/apache/incubator-uniffle/issues/378#issuecomment-1343746781

   > Maybe we could introduce multi-threaded writing to HDFS. If the file is too big, we could split it into multiple files.
   
   Yes. The key problem is the low write speed of a single data file.
   
   > ByteDance CSS has a similar concept. If a file exceeds the size limit, we open and write another file.
   
   Let me take a look. But I don't think writing another file is a good solution, since it won't improve write concurrency for multiple events of the same partition.
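
   To make the concurrency point concrete, here is a minimal, hypothetical sketch (the class and method names are made up, not Uniffle's actual writer API) of spreading one partition's flush data across several HDFS files, each with its own writer task, instead of a single file:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch only: keep a small pool of HDFS files and threads per
// partition, so several flush events of the same partition can be written in parallel.
public class ConcurrentPartitionWriter implements AutoCloseable {

  private final List<FSDataOutputStream> streams = new ArrayList<>();
  private final ExecutorService pool;
  private final AtomicInteger counter = new AtomicInteger(0);

  public ConcurrentPartitionWriter(FileSystem fs, Path partitionDir, int concurrency)
      throws IOException {
    this.pool = Executors.newFixedThreadPool(concurrency);
    for (int i = 0; i < concurrency; i++) {
      // One physical data file per writer slot, e.g. <partitionDir>/data_0, data_1, ...
      streams.add(fs.create(new Path(partitionDir, "data_" + i)));
    }
  }

  // Dispatch one event's bytes to one of the files (round-robin) and write it on
  // the thread pool, so writes to different files can proceed concurrently.
  public Future<Void> write(byte[] block) {
    FSDataOutputStream out =
        streams.get(Math.floorMod(counter.getAndIncrement(), streams.size()));
    Callable<Void> task = () -> {
      synchronized (out) { // each stream is written by one task at a time
        out.write(block);
      }
      return null;
    };
    return pool.submit(task);
  }

  @Override
  public void close() throws IOException {
    pool.shutdown();
    for (FSDataOutputStream out : streams) {
      out.close();
    }
  }
}
```

   The trade-off is on the read side: with this layout the shuffle reader would have to locate and merge data from several files per partition instead of one.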

