sunchao commented on a change in pull request #31413:
URL: https://github.com/apache/spark/pull/31413#discussion_r568353916
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -591,20 +590,48 @@ case class FileSourceScanExec(
logInfo(s"Planning scan with bin packing, max size: $maxSplitBytes bytes,
" +
s"open cost is considered as scanning $openCostInBytes bytes.")
+ // Filter files with bucket pruning if possible
+ lazy val ignoreCorruptFiles =
fsRelation.sparkSession.sessionState.conf.ignoreCorruptFiles
+ val canPrune: Path => Boolean = optionalBucketSet match {
+ case Some(bucketSet) =>
+ filePath => {
+ BucketingUtils.getBucketId(filePath.getName) match {
+ case Some(id) => bucketSet.get(id)
+ case None =>
+ if (ignoreCorruptFiles) {
+ // If ignoring corrupt file, do not prune when bucket file
name is invalid
Review comment:
Cool, thanks for pointing to the discussion. I'm just not sure whether the corrupted file should be ignored or processed when the flag is turned on. `ignoreCorruptFiles` seems to indicate that the problematic file should be ignored, so it is a bit confusing that we still process it here. Also, IMO ignoring it seems slightly safer (think of someone dumping garbage files into the bucketed partition dir)?
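
For illustration, a minimal sketch of the semantics I have in mind: a file whose bucket id cannot be parsed is treated as corrupt and pruned when the flag is on. This is not the PR's actual code: `parseBucketId` is a hypothetical stand-in for `BucketingUtils.getBucketId`, and `canKeep`/`selectedBuckets` are made-up names.

```scala
import scala.collection.immutable.BitSet

object BucketPruneSketch {
  // Hypothetical stand-in for BucketingUtils.getBucketId: extracts the bucket id
  // from names like "part-00000-<uuid>_00002.c000.snappy.parquet" (id = 2).
  private val bucketedFileName = """.*_(\d+)(?:\..*)?$""".r

  def parseBucketId(fileName: String): Option[Int] = fileName match {
    case bucketedFileName(id) => Some(id.toInt)
    case _ => None
  }

  // Returns true iff the file should be scanned. A file whose bucket id cannot be
  // parsed is treated as corrupt: pruned under ignoreCorruptFiles, an error otherwise.
  def canKeep(selectedBuckets: BitSet, ignoreCorruptFiles: Boolean)(fileName: String): Boolean =
    parseBucketId(fileName) match {
      case Some(id) => selectedBuckets.contains(id)
      case None if ignoreCorruptFiles => false // ignore, i.e. prune the garbage file
      case None => throw new IllegalStateException(s"Invalid bucket file name: $fileName")
    }
}
```

With that, `canKeep(BitSet(0, 2), ignoreCorruptFiles = true)("garbage.txt")` returns `false`, so a garbage file dropped into the bucketed dir is skipped rather than handed to the reader.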
cc @maropu @viirya