Github user windpiger commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17176#discussion_r104660965

    --- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
    @@ -159,36 +159,11 @@ class HadoopTableReader(
       def verifyPartitionPath(
           partitionToDeserializer: Map[HivePartition, Class[_ <: Deserializer]]):
         Map[HivePartition, Class[_ <: Deserializer]] = {
    -    if (!sparkSession.sessionState.conf.verifyPartitionPath) {
    --- End diff --

    After this PR https://github.com/apache/spark/pull/17187, reading a Hive table which does not use `stored by` will no longer use `HiveTableScanExec`.

    This function also has a bug when the partition location is a custom path:
    1. it will still filter all of the partition paths in the parameter `partitionToDeserializer`, and
    2. it will scan paths that do not belong to the table. For example, if the custom path is `/root/a` and the partitionSpec is `b=1/c=2`, this leads to scanning `/` because of `getPathPatternByPath`.
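    To make point 2 concrete, here is a rough, self-contained sketch of the kind of glob-pattern construction `getPathPatternByPath` performs (an approximation for illustration, not the exact Spark source): it walks up one directory level per partition column and appends one `*` per level. With a default-layout path this reconstructs the table directory, but with a custom location it can walk all the way up to the filesystem root.

    ```scala
    import org.apache.hadoop.fs.Path

    object PartitionPatternSketch {
      // Approximation of the pruning glob: strip `parNum` trailing path
      // components, then append one "*" per partition column.
      def pathPattern(parNum: Int, partPath: Path): String = {
        var base = partPath
        (1 to parNum).foreach(_ => base = base.getParent)
        base.toString + (1 to parNum).map(_ => "*").mkString("/", "/", "")
      }

      def main(args: Array[String]): Unit = {
        // Default layout: /warehouse/tbl/b=1/c=2 -> /warehouse/tbl/*/*
        println(pathPattern(2, new Path("/warehouse/tbl/b=1/c=2")))
        // Custom location: /root/a with partitionSpec b=1/c=2 walks up to "/",
        // so the glob degenerates into matching everything under the root.
        println(pathPattern(2, new Path("/root/a")))
      }
    }
    ```

    Under this sketch the second call produces a pattern rooted at `/`, which matches the behaviour described above: files that do not belong to the table get picked up by the scan.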