c21 commented on a change in pull request #31413:
URL: https://github.com/apache/spark/pull/31413#discussion_r568351355
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
##########
@@ -591,20 +590,48 @@ case class FileSourceScanExec(
     logInfo(s"Planning scan with bin packing, max size: $maxSplitBytes bytes, " +
       s"open cost is considered as scanning $openCostInBytes bytes.")
+    // Filter files with bucket pruning if possible
+    lazy val ignoreCorruptFiles = fsRelation.sparkSession.sessionState.conf.ignoreCorruptFiles
+    val canPrune: Path => Boolean = optionalBucketSet match {
+      case Some(bucketSet) =>
+        filePath => {
+          BucketingUtils.getBucketId(filePath.getName) match {
+            case Some(id) => bucketSet.get(id)
+            case None =>
+              if (ignoreCorruptFiles) {
+                // If ignoring corrupt file, do not prune when bucket file name is invalid
Review comment:
@sunchao - this is newly introduced. Updated PR description.
> Also I'm not sure if this is the best choice: if a bucketed table is
> corrupted, should we read the corrupt file? It will likely lead to incorrect
> results. On the other hand, we can choose to ignore the file, which seems
> more aligned with the name of the config, although the result could still be
> incorrect.
Note that by default an exception is thrown here and the query fails loudly.
We allow a config here to help existing users work around it if they want. See
the relevant discussion in
https://github.com/apache/spark/pull/31413#discussion_r567623746 .
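For illustration, the pruning predicate in the diff hinges on parsing a bucket id out of the file name. Below is a minimal self-contained sketch of that behavior; the `BucketIdParser` object and `shouldKeep` function are simplified stand-ins written for this example (they mirror, but are not, Spark's `BucketingUtils.getBucketId` and the `canPrune` closure in the diff):

```scala
import java.util.BitSet

// Simplified stand-in for Spark's BucketingUtils.getBucketId: bucketed file
// names embed the bucket id as an "_NNNNN" suffix before the extensions,
// e.g. "part-00000-abc_00002.snappy.parquet" carries bucket id 2.
object BucketIdParser {
  private val bucketedFileName = """.*_(\d+)(?:\..*)?$""".r

  def getBucketId(fileName: String): Option[Int] = fileName match {
    case bucketedFileName(id) => Some(id.toInt)
    case _                    => None
  }
}

// Mirrors the predicate in the diff: keep a file when its bucket id is in the
// selected set. For unparseable names, either skip pruning and keep the file
// (when corrupt files are ignored) or fail loudly.
def shouldKeep(bucketSet: BitSet, ignoreCorruptFiles: Boolean)(fileName: String): Boolean =
  BucketIdParser.getBucketId(fileName) match {
    case Some(id) => bucketSet.get(id)
    case None =>
      if (ignoreCorruptFiles) true
      else throw new IllegalStateException(s"Invalid bucket file name: $fileName")
  }
```

This sketch shows why the `ignoreCorruptFiles` branch matters: a file with an unparseable name cannot be mapped to a bucket, so the only safe pruning decision is to keep it (or abort), never to drop it.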