duanmeng commented on pull request #29907:
URL: https://github.com/apache/spark/pull/29907#issuecomment-705995805


   > > This is related to disk/kernel but we can avoid it in spark without any performance loss.
   > 
   > Do you have any reference regarding the kernel issue? In other words, do you have any evidence that the issue is caused by a kernel issue?
   
   Unfortunately I don't have direct evidence, since the issue rarely happens and the kernel is an in-house kernel. I can only infer from the log that the file exists and its length is 1462 bytes, yet reading it returned 0 bytes; perhaps the data was still in the block layer when the file was read. So IMHO we can defend against it at the framework level (e.g., Spark), since the actual length is counted after writing records and committing.
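
   For illustration, here is a minimal sketch of the kind of framework-level defence I have in mind (the path and the `readFullyChecked` name are hypothetical, not Spark's actual shuffle code): read the file fully against the length the filesystem reports and fail fast on a short read, so the task can be retried instead of silently consuming empty data.

```scala
import java.io.{DataInputStream, EOFException, FileInputStream}
import java.nio.file.{Files, Paths}

object CheckedRead {
  // Read the whole file and verify we actually got as many bytes as the
  // filesystem claims the file holds. A short read (e.g. length 1462 but
  // 0 readable bytes) surfaces as an EOFException instead of empty data.
  def readFullyChecked(path: String): Array[Byte] = {
    val expected = Files.size(Paths.get(path))
    val buf = new Array[Byte](expected.toInt)
    val in = new DataInputStream(new FileInputStream(path))
    try {
      in.readFully(buf) // throws EOFException if fewer than `expected` bytes arrive
      buf
    } finally {
      in.close()
    }
  }

  def main(args: Array[String]): Unit = {
    try {
      val bytes = readFullyChecked("/tmp/shuffle_0_0_0.index") // hypothetical path
      println(s"read ${bytes.length} bytes")
    } catch {
      case e: EOFException =>
        // Length and readable bytes disagree: fail fast so the caller can
        // retry the fetch/task rather than proceed with missing data.
        System.err.println(s"short read detected: $e")
    }
  }
}
```

   In Spark's case the expected length would come from the writer's own commit metadata rather than `Files.size`, which is the point: the writer already knows how many bytes it committed, so the reader can check against that.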

