c21 commented on a change in pull request #31413:
URL: https://github.com/apache/spark/pull/31413#discussion_r568357088



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -591,20 +590,48 @@ case class FileSourceScanExec(
     logInfo(s"Planning scan with bin packing, max size: $maxSplitBytes bytes, " +
       s"open cost is considered as scanning $openCostInBytes bytes.")
 
+    // Filter files with bucket pruning if possible
+    lazy val ignoreCorruptFiles = fsRelation.sparkSession.sessionState.conf.ignoreCorruptFiles
+    val canPrune: Path => Boolean = optionalBucketSet match {
+      case Some(bucketSet) =>
+        filePath => {
+          BucketingUtils.getBucketId(filePath.getName) match {
+            case Some(id) => bucketSet.get(id)
+            case None =>
+              if (ignoreCorruptFiles) {
+                // If ignoring corrupt files, do not prune when the bucket file name is invalid

Review comment:
       I feel neither skipping nor processing the file is perfect. There can be other corruption cases, e.g. a table declared with 1024 buckets that only has 500 files underneath. This could happen when other compute engines or users accidentally dump data there without respecting Spark's bucketing metadata. We have no efficient way to handle the case where the number of files is fewer than the number of buckets.
   
   The existing usage of `ignoreCorruptFiles` [skips reading some of the content of a file](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileScanRDD.scala#L160), so it is not completely ignoring the file either. But I am fine if we think we need another config name for this.
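
   To make the trade-off concrete, here is a minimal, self-contained Scala sketch of the pruning decision being discussed (not the actual Spark implementation). `bucketIdPattern`, `parseBucketId` and `keepFile` are hypothetical stand-ins for what `BucketingUtils.getBucketId` and the predicate in this diff do, and the assumed file-naming convention (`_<digits>` before the extension) is only an approximation:

```scala
import scala.util.matching.Regex

// Minimal sketch only; NOT the actual Spark code. bucketIdPattern/parseBucketId are
// hypothetical stand-ins for BucketingUtils.getBucketId, and the assumed naming
// convention ("_<digits>" before the extension) may not match Spark's exactly.
object BucketPruningSketch {
  private val bucketIdPattern: Regex = """.*_(\d+)(?:\..*)?$""".r

  // Extract the bucket id from a bucketed file name, if the name follows the convention.
  def parseBucketId(fileName: String): Option[Int] = fileName match {
    case bucketIdPattern(id) => Some(id.toInt)
    case _                   => None
  }

  // Decide whether a file survives bucket pruning.
  //   prunedBuckets      - buckets selected by the pruning filter (None means no pruning)
  //   ignoreCorruptFiles - when true, keep files whose names do not parse instead of failing
  def keepFile(prunedBuckets: Option[Set[Int]], ignoreCorruptFiles: Boolean)(fileName: String): Boolean =
    prunedBuckets match {
      case None => true
      case Some(buckets) =>
        parseBucketId(fileName) match {
          case Some(id)                   => buckets.contains(id)
          case None if ignoreCorruptFiles => true // invalid name: do not prune the file
          case None =>
            throw new IllegalStateException(s"Invalid bucket file name: $fileName")
        }
    }

  def main(args: Array[String]): Unit = {
    val keep: String => Boolean = keepFile(Some(Set(3, 5)), ignoreCorruptFiles = true)
    println(keep("part-00000-abc_00005.c000.snappy.parquet")) // true: bucket 5 is selected
    println(keep("part-00000-abc_00004.c000.snappy.parquet")) // false: bucket 4 is pruned
    println(keep("randomly-dumped-file.parquet"))             // true: unparseable name, corrupt files ignored
  }
}
```

   In the sketch, an unparseable name with corrupt-file handling disabled fails fast; whether that branch should fail, prune, or read the file anyway is exactly the question raised above.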



