[GitHub] [spark] cloud-fan commented on a diff in pull request #36412: [SPARK-39073][SQL] Keep rowCount after hive table partition pruning if table only have hive statistics

2022-05-15 Thread GitBox


cloud-fan commented on code in PR #36412:
URL: https://github.com/apache/spark/pull/36412#discussion_r87309


##
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/PruneHiveTablePartitions.scala:
##
@@ -80,10 +80,15 @@ private[sql] class PruneHiveTablePartitions(session: SparkSession)
       val colStats = filteredStats.map(_.attributeStats.map { case (attr, colStat) =>
         (attr.name, colStat.toCatalogColumnStat(attr.name, attr.dataType))
       })
+      val rowCount = if (prunedPartitions.forall(_.stats.flatMap(_.rowCount).exists(_ > 0))) {

Review Comment:
   you are right, I misread the code.






[GitHub] [spark] cloud-fan commented on a diff in pull request #36412: [SPARK-39073][SQL] Keep rowCount after hive table partition pruning if table only have hive statistics

2022-05-13 Thread GitBox


cloud-fan commented on code in PR #36412:
URL: https://github.com/apache/spark/pull/36412#discussion_r872431470


##
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/PruneHiveTablePartitions.scala:
##
@@ -80,10 +80,15 @@ private[sql] class PruneHiveTablePartitions(session: SparkSession)
       val colStats = filteredStats.map(_.attributeStats.map { case (attr, colStat) =>
         (attr.name, colStat.toCatalogColumnStat(attr.name, attr.dataType))
       })
+      val rowCount = if (prunedPartitions.forall(_.stats.flatMap(_.rowCount).exists(_ > 0))) {

Review Comment:
   I think we need to make sure rowCount exists in all the partitions:
   `_.stats.map(s => s.isDefined && s.get.rowCount.isDefined && s.get.rowCount.get > 0)`
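
   For context, a minimal standalone sketch of how the `forall(_.stats.flatMap(_.rowCount).exists(_ > 0))` check in the diff behaves; the `Partition` and `Stats` case classes below are hypothetical stand-ins for Spark's catalog types, not the actual classes:

   ```
   object RowCountPredicateDemo {
     // Hypothetical stand-ins for CatalogTablePartition / CatalogStatistics.
     final case class Stats(rowCount: Option[BigInt])
     final case class Partition(stats: Option[Stats])

     // stats.flatMap(_.rowCount) is None when either the stats or the row count
     // is missing, and exists(_ > 0) is false on None, so the outer forall only
     // holds when every partition carries a positive row count.
     def allHavePositiveRowCount(partitions: Seq[Partition]): Boolean =
       partitions.forall(_.stats.flatMap(_.rowCount).exists(_ > 0))

     def main(args: Array[String]): Unit = {
       val complete = Seq(
         Partition(Some(Stats(Some(BigInt(10))))),
         Partition(Some(Stats(Some(BigInt(3))))))
       val missingCount = complete :+ Partition(Some(Stats(None)))

       println(allHavePositiveRowCount(complete))     // true
       println(allHavePositiveRowCount(missingCount)) // false
     }
   }
   ```

   This is consistent with the follow-up reply above ("you are right, I misread the code"): the original expression already requires a positive row count in every pruned partition.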






[GitHub] [spark] cloud-fan commented on a diff in pull request #36412: [SPARK-39073][SQL] Keep rowCount after hive table partition pruning if table only have hive statistics

2022-05-09 Thread GitBox


cloud-fan commented on code in PR #36412:
URL: https://github.com/apache/spark/pull/36412#discussion_r868230134


##
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/PruneHiveTablePartitions.scala:
##
@@ -74,6 +74,32 @@ private[sql] class PruneHiveTablePartitions(session: SparkSession)
         0L
       }
     }
+    if (sizeOfPartitions.forall(_ > 0) && prunedPartitions.forall(_.stats.isDefined)) {
+      val rowCountOfPartitions = prunedPartitions.map { partition =>
+        val rowCount = partition.stats.get.rowCount.map(_.toLong)
+        if (rowCount.isDefined && rowCount.get > 0L) {
+          rowCount.get
+        } else {
+          0L
+        }
+      }
+      if (rowCountOfPartitions.forall(_ > 0)) {
+        val stats = relation.stats
+        val oldRowCount = stats.rowCount
+        val newRowCount = BigInt(rowCountOfPartitions.sum)
+        var colStats = Map.empty[String, CatalogColumnStat]
+        if (oldRowCount.isDefined) {
+          stats.attributeStats.map(_._2.updateCountStats(
+            oldNumRows = oldRowCount.get, newNumRows = newRowCount))
+          colStats = stats.attributeStats.map{ case (attr, colStat) =>
+            (attr.name, colStat.toCatalogColumnStat(attr.name, attr.dataType))}
+        }
+        return relation.tableMeta.copy(stats = Some(CatalogStatistics(
+          sizeInBytes = BigInt(sizeOfPartitions.sum),
+          rowCount = Option(newRowCount),
+          colStats = colStats)))
+      }
+    }
     if (sizeOfPartitions.forall(_ > 0)) {

Review Comment:
   I think we can put the new code inside this `if` branch. The idea is to use the accurate row count from partitions directly instead of estimating it.
   ```
   val rowCount = if (prunedPartitions.forall(_.stats.flatMap(_.rowCount).filter(_ > 0).isDefined)) {
     ...
   } else {
     ...
   }
   ```
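
   To illustrate the suggested structure, a hedged sketch, again using hypothetical `Partition`/`Stats` stand-ins rather than Spark's catalog types: sum the exact per-partition row counts when every pruned partition has one, and otherwise return no row count so the caller can fall back to the size-based estimate.

   ```
   object ExactRowCountDemo {
     // Hypothetical stand-ins for the catalog types; not Spark's actual classes.
     final case class Stats(rowCount: Option[BigInt])
     final case class Partition(stats: Option[Stats])

     // Prefer the exact sum of per-partition row counts when every pruned
     // partition reports a positive one; otherwise return None so the caller
     // falls back to estimation.
     def exactRowCount(prunedPartitions: Seq[Partition]): Option[BigInt] =
       if (prunedPartitions.forall(_.stats.flatMap(_.rowCount).exists(_ > 0))) {
         Some(prunedPartitions.flatMap(_.stats.flatMap(_.rowCount)).sum)
       } else {
         None
       }

     def main(args: Array[String]): Unit = {
       val withCounts = Seq(
         Partition(Some(Stats(Some(BigInt(5))))),
         Partition(Some(Stats(Some(BigInt(7))))))
       val missingStats = withCounts :+ Partition(None)

       println(exactRowCount(withCounts))   // Some(12)
       println(exactRowCount(missingStats)) // None
     }
   }
   ```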


