[GitHub] spark pull request #15158: [SPARK-17603] [SQL] Utilize Hive-generated Statistics For Partitioned Tables
Github user gatorsmile closed the pull request at: https://github.com/apache/spark/pull/15158

---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA.
---

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user gatorsmile commented on a diff in the pull request: https://github.com/apache/spark/pull/15158#discussion_r79705605

--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/MetastoreRelation.scala ---
@@ -109,39 +109,59 @@ private[hive] case class MetastoreRelation(
   }

   @transient override lazy val statistics: Statistics = {
-    catalogTable.stats.getOrElse(Statistics(
-      sizeInBytes = {
-        val totalSize = hiveQlTable.getParameters.get(StatsSetupConst.TOTAL_SIZE)
-        val rawDataSize = hiveQlTable.getParameters.get(StatsSetupConst.RAW_DATA_SIZE)
-        // TODO: check if this estimate is valid for tables after partition pruning.
-        // NOTE: getting `totalSize` directly from params is kind of hacky, but this should be
-        // relatively cheap if parameters for the table are populated into the metastore.
-        // Besides `totalSize`, there are also `numFiles`, `numRows`, `rawDataSize` keys
-        // (see StatsSetupConst in Hive) that we can look at in the future.
-        BigInt(
-          // When table is external, `totalSize` is always zero, which will influence join strategy
-          // so when `totalSize` is zero, use `rawDataSize` instead
-          // when `rawDataSize` is also zero, use `HiveExternalCatalog.STATISTICS_TOTAL_SIZE`,
-          // which is generated by analyze command.
-          if (totalSize != null && totalSize.toLong > 0L) {
-            totalSize.toLong
-          } else if (rawDataSize != null && rawDataSize.toLong > 0) {
-            rawDataSize.toLong
-          } else if (sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled) {
-            try {
-              val hadoopConf = sparkSession.sessionState.newHadoopConf()
-              val fs: FileSystem = hiveQlTable.getPath.getFileSystem(hadoopConf)
-              fs.getContentSummary(hiveQlTable.getPath).getLength
-            } catch {
-              case e: IOException =>
-                logWarning("Failed to get table size from hdfs.", e)
-                sparkSession.sessionState.conf.defaultSizeInBytes
-            }
-          } else {
-            sparkSession.sessionState.conf.defaultSizeInBytes
-          })
-      }
-    ))
+    catalogTable.stats.getOrElse {
+      // For non-partitioned tables, Hive-generated statistics are stored in table properties
+      // For partitioned tables, Hive-generated statistics are stored in partition properties
+      val (totalSize, rawDataSize) = if (catalogTable.partitionColumnNames.isEmpty) {
+        val properties = Option(hiveQlTable.getParameters).map(_.asScala.toMap).orNull
+        (properties.get(StatsSetupConst.TOTAL_SIZE).map(_.toLong),
+          properties.get(StatsSetupConst.RAW_DATA_SIZE).map(_.toLong))
+      } else {
+        (getTotalTableSize(StatsSetupConst.TOTAL_SIZE),
+          getTotalTableSize(StatsSetupConst.RAW_DATA_SIZE))
+      }
+      Statistics(
+        sizeInBytes = {
+          BigInt(
+            // When table is external, `totalSize` is always zero, which will influence join strategy
+            // so when `totalSize` is zero, use `rawDataSize` instead. When `rawDataSize` is also
+            // zero, fall back to filesystem to get file sizes, if enabled.
+            if (totalSize.isDefined && totalSize.get > 0L) {
+              totalSize.get
+            } else if (rawDataSize.isDefined && rawDataSize.get > 0L) {
+              rawDataSize.get
+            } else if (sparkSession.sessionState.conf.fallBackToHdfsForStatsEnabled) {
+              try {
+                val hadoopConf = sparkSession.sessionState.newHadoopConf()
+                val fs: FileSystem = hiveQlTable.getPath.getFileSystem(hadoopConf)
+                fs.getContentSummary(hiveQlTable.getPath).getLength
+              } catch {
+                case e: IOException =>
+                  logWarning("Failed to get table size from hdfs.", e)
+                  sparkSession.sessionState.conf.defaultSizeInBytes
+              }
+            } else {
+              sparkSession.sessionState.conf.defaultSizeInBytes
+            })
+        }
+      )
+    }
+  }
+
+  // For partitioned tables, get the size of all the partitions.
+  // Note: the statistics might not be gathered for all the partitions.
+  // For partial collection, we will not utilize the Hive-generated statistics.
+  private def getTotalTableSize(statType: String): Option[Long] = {
--- End diff --

:) We are facing the same issue in both data source tables and hive tables. That requires another PR to change how we use the statistics of leaf nodes. Partition filtering on statistics should be considered by
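The size-resolution order in the quoted diff can be sketched in plain Scala without Spark or Hadoop dependencies. This is a minimal approximation, not the PR's code: `resolveSizeInBytes` and `defaultSize` are illustrative names, and the HDFS content-summary fallback (`fallBackToHdfsForStatsEnabled`) is reduced to a comment because it needs a live filesystem.

```scala
// Sketch of the fallback chain: prefer Hive's totalSize, then rawDataSize,
// otherwise a configured default (the real code may consult HDFS first).
def resolveSizeInBytes(
    totalSize: Option[Long],
    rawDataSize: Option[Long],
    defaultSize: Long): BigInt = {
  BigInt(
    if (totalSize.exists(_ > 0L)) {
      totalSize.get
    } else if (rawDataSize.exists(_ > 0L)) {
      rawDataSize.get
    } else {
      // In MetastoreRelation this branch would try
      // fs.getContentSummary(...).getLength before falling back.
      defaultSize
    })
}
```

Note how a zero `totalSize` (typical for external tables) falls through to `rawDataSize` rather than being taken at face value, which is the join-strategy concern the diff's comment describes.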
Github user cloud-fan commented on a diff in the pull request: https://github.com/apache/spark/pull/15158#discussion_r79656655

--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/MetastoreRelation.scala ---
@@ -109,39 +109,59 @@ private[hive] case class MetastoreRelation(
[the same hunk quoted in full above, ending at the new helper:]
+  private def getTotalTableSize(statType: String): Option[Long] = {
--- End diff --

what if a query only reads some partitions? Looks like the table statistics depend on partition pruning.
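The `getTotalTableSize` helper under discussion sums a Hive statistic across all partitions, with the all-or-nothing rule the comments describe: if any partition lacks the statistic, Hive's numbers are discarded entirely. A minimal, self-contained sketch of that rule, assuming per-partition properties are modeled as `Map[String, String]` (illustrative, not Spark's API):

```scala
// Sum a Hive statistic (e.g. StatsSetupConst.TOTAL_SIZE, the string "totalSize")
// across partitions. Return None when the statistic is only partially collected,
// so the caller falls back instead of using an underestimate.
def getTotalTableSize(
    partitionParams: Seq[Map[String, String]],
    statType: String): Option[Long] = {
  val sizes = partitionParams.map(_.get(statType).map(_.toLong))
  if (sizes.nonEmpty && sizes.forall(_.isDefined)) Some(sizes.flatten.sum)
  else None
}
```

cloud-fan's question targets exactly this design: the sum covers every partition, so after partition pruning the estimate can be far larger than what the query actually reads; per the reply above, accounting for pruning was deferred to a follow-up PR.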
GitHub user gatorsmile opened a pull request: https://github.com/apache/spark/pull/15158

[SPARK-17603] [SQL] Utilize Hive-generated Statistics For Partitioned Tables

### What changes were proposed in this pull request?

For non-partitioned tables, Hive-generated statistics are stored in table properties. However, for partitioned tables, Hive-generated statistics are stored in partition properties. Thus, we are unable to utilize the Hive-generated statistics for partitioned tables. The statistics might not be gathered for all the partitions in Hive. For partial collection, we will not utilize the Hive-generated statistics.

### How was this patch tested?

Added test cases.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/gatorsmile/spark partitionedTableStatistics

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/15158.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #15158

commit 061e60b3af819f235e531b1de24f136a431dc23c
Author: gatorsmile
Date: 2016-09-20T04:49:51Z

    fix.