ajithme commented on issue #24142: [SPARK-27194] Job failures when task attempts do not clean up spark-staging parquet files
URL: https://github.com/apache/spark/pull/24142#issuecomment-474304840

@cloud-fan In ``InsertIntoHadoopFsRelationCommand``, when ``dynamicPartitionOverwrite`` is true, ``org.apache.spark.internal.io.HadoopMapReduceCommitProtocol#newTaskTempFile`` chooses the partition directory as the output location. If a task is reattempted because of executor loss and the new executor is launched on the same machine, the reattempt writes to the same path and the same file name, and hence it breaks. Refer to https://issues.apache.org/jira/browse/SPARK-27194 for the stack trace.

From the logs, task 200.0 and its reattempt 200.1 both expect the same file name, part-00200-blah-blah.c000.snappy.parquet (refer ``org.apache.spark.internal.io.HadoopMapReduceCommitProtocol#getFilename``). Since only the task ID, not the attempt ID, is used in the file name, this causes the conflict (only when the new executor and the reattempted task are launched on the same machine). Please correct me if I am wrong. A simplified sketch of the path logic follows below.
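For context, here is a minimal Scala sketch of how the staging path and file name are derived in ``HadoopMapReduceCommitProtocol`` (paraphrased and simplified, not the exact Spark source; the real method also handles the ``FileOutputCommitter`` work path). The point it illustrates: the file name is built from the task ID and job ID only, never the task *attempt* ID, so two attempts of the same task resolve to an identical path under dynamic partition overwrite.

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.TaskAttemptContext

// Simplified sketch of the path logic in HadoopMapReduceCommitProtocol
// (paraphrased for illustration, not the exact Spark source).
class PathSketch(jobId: String, path: String, dynamicPartitionOverwrite: Boolean) {

  // Job-level staging dir under the final output path, shared by all task attempts.
  private def stagingDir: Path = new Path(path, ".spark-staging-" + jobId)

  // File name uses only the *task* ID -- the attempt number
  // (200.0 vs 200.1) never appears, so reattempts collide.
  private def getFilename(taskContext: TaskAttemptContext, ext: String): String = {
    val split = taskContext.getTaskAttemptID.getTaskID.getId
    f"part-$split%05d-$jobId$ext"
  }

  def newTaskTempFile(
      taskContext: TaskAttemptContext, dir: Option[String], ext: String): String = {
    val filename = getFilename(taskContext, ext)
    if (dynamicPartitionOverwrite) {
      // dir is the dynamic partition directory, e.g. "p=1"; both task 200.0
      // and 200.1 resolve to <path>/.spark-staging-<jobId>/p=1/part-00200-<jobId><ext>
      new Path(new Path(stagingDir, dir.get), filename).toString
    } else {
      dir.map(d => new Path(new Path(path, d), filename).toString)
        .getOrElse(new Path(path, filename).toString)
    }
  }
}
```

So if the first attempt's partial file is not cleaned up after the executor is lost, the second attempt on the same machine fails when it tries to create the same file.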
