HyukjinKwon commented on a change in pull request #31413:
URL: https://github.com/apache/spark/pull/31413#discussion_r603296475



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -591,20 +590,41 @@ case class FileSourceScanExec(
     logInfo(s"Planning scan with bin packing, max size: $maxSplitBytes bytes, 
" +
       s"open cost is considered as scanning $openCostInBytes bytes.")
 
+    // Filter files with bucket pruning if possible
+    val bucketingEnabled = fsRelation.sparkSession.sessionState.conf.bucketingEnabled
+    val shouldProcess: Path => Boolean = optionalBucketSet match {
+      case Some(bucketSet) if bucketingEnabled =>
+        filePath => {
+          BucketingUtils.getBucketId(filePath.getName) match {
+            case Some(id) => bucketSet.get(id)
+            case None =>
+              // Do not prune the file if bucket file name is invalid
+              true

Review comment:
       Hm, it could be a one-liner:
   
   ```scala
        filePath => BucketingUtils.getBucketId(filePath.getName).forall(bucketSet.get)
   ```
   
   If that looks less readable, we could:
   
   ```scala
        filePath => BucketingUtils.getBucketId(filePath.getName).map(bucketSet.get).getOrElse(true)
   
   If we're worried about the perf penalty from pattern matching, etc., we could do:
   
   ```scala
           filePath => {
             val bucketId = BucketingUtils.getBucketId(filePath.getName)
             if (bucketId.isEmpty) true else bucketSet.get(bucketId.get)
           }
   ``` 
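   
   FWIW, `Option.forall` returns `true` when the option is empty, which is exactly the "do not prune the file if bucket file name is invalid" behavior in the diff above. A minimal sketch (with a simplified, hypothetical `getBucketId` and a `java.util.BitSet` standing in for the real bucket set) to illustrate:
   
   ```scala
   import java.util.BitSet
   
   // Hypothetical stand-in for BucketingUtils.getBucketId: extracts the
   // bucket id from file names like "part-00000_00002.c000.parquet".
   def getBucketId(name: String): Option[Int] =
     "_(\\d+)\\.".r.findFirstMatchIn(name).map(_.group(1).toInt)
   
   val bucketSet = new BitSet()
   bucketSet.set(2) // only bucket 2 survives pruning
   
   val shouldProcess: String => Boolean =
     name => getBucketId(name).forall(bucketSet.get)
   
   assert(!shouldProcess("part-00000_1.c000.parquet")) // pruned: bucket 1 not in set
   assert(shouldProcess("part-00000_2.c000.parquet"))  // kept: bucket 2 is in set
   assert(shouldProcess("not-a-bucket-file.parquet"))  // kept: None => forall is true
   ```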
   



