Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/21257#discussion_r197174835
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala ---
@@ -207,9 +210,23 @@ case class InsertIntoHadoopFsRelationCommand(
     }
     // first clear the path determined by the static partition keys (e.g. /table/foo=1)
     val staticPrefixPath = qualifiedOutputPath.suffix(staticPartitionPrefix)
-    if (fs.exists(staticPrefixPath) && !committer.deleteWithJob(fs, staticPrefixPath, true)) {
-      throw new IOException(s"Unable to clear output " +
-        s"directory $staticPrefixPath prior to writing to it")
+    if (fs.exists(staticPrefixPath)) {
+      if (staticPartitionPrefix.isEmpty && outputCheck) {
+        // input contain output, only delete output sub files when job commit
+        val files = fs.listFiles(staticPrefixPath, false)
+        while (files.hasNext) {
+          val file = files.next()
+          if (!committer.deleteWithJob(fs, file.getPath, false)) {
+            throw new IOException(s"Unable to clear output " +
--- End diff --
Since `committer.deleteWithJob()` returns true in the base class, that check won't
do much, at least not with the default implementation. It would probably be better
to have `deleteWithJob()` return Unit and raise an exception on a delete failure.
Given that `delete()` is required to guarantee the destination no longer exists when
it returns, I don't think callers need to do any checks at all.
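To make that concrete, here is a minimal sketch of the suggested shape: `deleteWithJob()` returning Unit and throwing on failure, so the caller can assume the path is gone once the call returns. The trait name and the default body below are hypothetical illustrations, not the actual `FileCommitProtocol` API.

```scala
import java.io.IOException

import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical, simplified protocol used only to illustrate the
// "return Unit, throw on failure" contract suggested above.
trait SketchCommitProtocol {
  /**
   * Delete `path` as part of the job. Rather than returning a boolean that the
   * base implementation always reports as true, raise an exception when the
   * delete fails, so callers need no follow-up checks.
   */
  def deleteWithJob(fs: FileSystem, path: Path, recursive: Boolean): Unit = {
    // FileSystem.delete() returns false when nothing was deleted; only treat
    // that as an error if the path still exists afterwards.
    if (!fs.delete(path, recursive) && fs.exists(path)) {
      throw new IOException(s"Unable to delete $path")
    }
  }
}
```

Under that contract, the caller in the diff above could simply invoke `committer.deleteWithJob(...)` without wrapping it in a boolean check, since any failure would surface as an exception.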