Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/21257#discussion_r197176292
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/InsertIntoHadoopFsRelationCommand.scala
---
@@ -207,9 +210,23 @@ case class InsertIntoHadoopFsRelationCommand(
}
        // first clear the path determined by the static partition keys (e.g. /table/foo=1)
        val staticPrefixPath = qualifiedOutputPath.suffix(staticPartitionPrefix)
-       if (fs.exists(staticPrefixPath) && !committer.deleteWithJob(fs, staticPrefixPath, true)) {
-         throw new IOException(s"Unable to clear output " +
-           s"directory $staticPrefixPath prior to writing to it")
+       if (fs.exists(staticPrefixPath)) {
+         if (staticPartitionPrefix.isEmpty && outputCheck) {
+           // input contains output; only delete the output's sub-files at job commit
+           val files = fs.listFiles(staticPrefixPath, false)
--- End diff --
if there are a lot of files here, you've gone from a directory delete, which
was O(1) on a filesystem and probably O(descendants) on an object store, to
an O(children) operation on a filesystem and an O(children * descendants(child))
operation on an object store. Not significant for a small number of files, but
it could potentially be expensive. Why do the iteration at all?
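
To make the cost comparison concrete, a minimal sketch of the two strategies
(the method names are hypothetical; fs.delete and fs.listStatus are standard
Hadoop FileSystem calls, with listStatus standing in for the patch's listFiles
so that child directories are covered too):

    import org.apache.hadoop.fs.{FileSystem, Path}

    // One recursive delete of the whole directory: a single namenode
    // operation on HDFS, though an object store typically expands it
    // into roughly one request per descendant.
    def deleteWholeDir(fs: FileSystem, dir: Path): Boolean =
      fs.delete(dir, true)

    // Per-child deletion: one listing plus one delete per child; on an
    // object store each recursive delete of a child directory may itself
    // cost O(descendants of that child).
    def deleteChildren(fs: FileSystem, dir: Path): Unit = {
      fs.listStatus(dir).foreach { child =>
        fs.delete(child.getPath, true)
      }
    }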
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]