singhpk234 commented on code in PR #37083:
URL: https://github.com/apache/spark/pull/37083#discussion_r918534436
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/BasicStatsPlanVisitor.scala:
##########
@@ -17,16 +17,40 @@
package org.apache.spark.sql.catalyst.plans.logical.statsEstimation
+import org.apache.spark.sql.catalyst.expressions.AttributeMap
import org.apache.spark.sql.catalyst.plans.logical._
/**
- * A [[LogicalPlanVisitor]] that computes the statistics for the cost-based optimizer.
+ * A [[LogicalPlanVisitor]] that computes a single dimension for plan stats: size in bytes.
*/
object BasicStatsPlanVisitor extends LogicalPlanVisitor[Statistics] {
- /** Falls back to the estimation computed by [[SizeInBytesOnlyStatsPlanVisitor]]. */
- private def fallback(p: LogicalPlan): Statistics = SizeInBytesOnlyStatsPlanVisitor.visit(p)
+ /**
+  * A default, commonly used estimation for unary nodes. We assume the input row number is the
+  * same as the output row number, and compute sizes based on the column types.
+  */
+ private def visitUnaryNode(p: UnaryNode): Statistics = {
+ // There should be some overhead in a Row object; the size should not be zero when there
+ // are no columns. This helps prevent a divide-by-zero error.
+ val childRowSize = EstimationUtils.getSizePerRow(p.child.output)
+ val outputRowSize = EstimationUtils.getSizePerRow(p.output)
+ // Assume there will be the same number of rows as child has.
+ var sizeInBytes = (p.child.stats.sizeInBytes * outputRowSize) / childRowSize
+ if (sizeInBytes == 0) {
+ // sizeInBytes can't be zero, or sizeInBytes of BinaryNode will also be zero
+ // (product of children).
+ sizeInBytes = 1
+ }
+
+ // v2 sources can bubble-up rowCount, so always propagate.
+ // Don't propagate attributeStats, since they are not estimated here.
+ Statistics(sizeInBytes = sizeInBytes, rowCount = p.child.stats.rowCount)
Review Comment:
In this estimator, i.e. visitUnaryNode, we adjust the size by scaling it by
(output row size / input row size), but since we don't have enough info (in
terms of min / max / ndv etc.) to estimate the row count, we just have the
node output its child's row count, which is mostly accurate for operators
like Project. So we were only computing sizeInBytes and propagating rowCount
as-is.
Apologies, I forgot to update the comment as per the proposed behaviour.
Should I rephrase it to:
- `estimates size in bytes, row count for plan stats`
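
The scaling arithmetic discussed above can be illustrated with a small
standalone sketch. This is a hypothetical simplification, not Spark's actual
implementation: `SizeScalingSketch` and its `estimate` method are invented
names, and the per-row sizes are passed in directly rather than derived from
column types as `EstimationUtils.getSizePerRow` does.

```scala
// Hypothetical sketch of the unary-node estimation: size in bytes is scaled
// by (output row size / input row size), while the row count is propagated
// from the child unchanged.
object SizeScalingSketch {
  def estimate(
      childSizeInBytes: BigInt,     // child plan's estimated size in bytes
      childRowSize: Long,           // assumed bytes per input row (never zero)
      outputRowSize: Long,          // assumed bytes per output row
      childRowCount: Option[BigInt] // child's row count, if known (e.g. from a v2 source)
  ): (BigInt, Option[BigInt]) = {
    // Assume the same number of rows as the child; only the row width changes.
    var sizeInBytes = (childSizeInBytes * outputRowSize) / childRowSize
    // Clamp to 1 so that a binary node's size (product of children) stays nonzero.
    if (sizeInBytes == 0) sizeInBytes = 1
    // Row count passes through as-is; no attribute stats are estimated here.
    (sizeInBytes, childRowCount)
  }
}
```

For example, a child of 1000 bytes with 20-byte input rows and 10-byte output
rows would be estimated at 500 bytes, with the child's row count untouched.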
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]