c21 commented on a change in pull request #31413:
URL: https://github.com/apache/spark/pull/31413#discussion_r567633367
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -591,20 +590,34 @@ case class FileSourceScanExec(
logInfo(s"Planning scan with bin packing, max size: $maxSplitBytes bytes,
" +
s"open cost is considered as scanning $openCostInBytes bytes.")
+ // Filter files with bucket pruning if possible
+ val filePruning: Path => Boolean = optionalBucketSet match {
+ case Some(bucketSet) =>
+ filePath => bucketSet.get(BucketingUtils.getBucketId(filePath.getName)
+ .getOrElse(sys.error(s"Invalid bucket file $filePath")))
Review comment:
The error here indicates data corruption (an invalid file name) in a Spark
data source bucketed table. The benefit of logging a warning here is that it
unblocks reading such corrupted bucketed tables when bucketing is disabled.
I feel this is dangerous. Users should not rely on disabling bucketing to read
potentially wrong data from a bucketed table; they should correct the table
instead. I would prefer to fail loudly here with an exception, as a warning
log would be very hard to debug. But I am open to other opinions as well,
cc @maropu and @cloud-fan.
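
For reference, a minimal sketch of the fail-loud behavior I have in mind, using the names from the snippet above (the `case None` branch and the exact exception type are my own assumptions, not part of this PR):

```scala
// Sketch: surface bucket-file corruption immediately instead of logging a warning.
val filePruning: Path => Boolean = optionalBucketSet match {
  case Some(bucketSet) =>
    filePath => {
      val bucketId = BucketingUtils.getBucketId(filePath.getName).getOrElse {
        // Fail loudly: an unparsable bucket file name means the table layout is corrupted.
        throw new IllegalStateException(s"Invalid bucket file $filePath")
      }
      bucketSet.get(bucketId)
    }
  case None =>
    _ => true // no bucket pruning requested, keep every file
}
```

This keeps the read path from silently dropping or mis-assigning files when the bucketed table is corrupted; users then have to repair the table rather than work around it by disabling bucketing.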