karuppayya edited a comment on pull request #28662:
URL: https://github.com/apache/spark/pull/28662#issuecomment-638629109


   > Can you explain why DetermineTableStats will calculate the statistics multiple times?
   ```
     at org.apache.spark.sql.hive.DetermineTableStats.hiveTableWithStats(HiveStrategies.scala:121)
     at org.apache.spark.sql.hive.DetermineTableStats$$anonfun$apply$2.applyOrElse(HiveStrategies.scala:150)
     at org.apache.spark.sql.hive.DetermineTableStats$$anonfun$apply$2.applyOrElse(HiveStrategies.scala:147)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$2(AnalysisHelper.scala:108)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$870.808816071.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:108)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$869.1113025977.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$4(AnalysisHelper.scala:113)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$872.1354725727.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
     at org.apache.spark.sql.catalyst.trees.TreeNode$$Lambda$1144.1492742163.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
     at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
     at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:113)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$869.1113025977.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$4(AnalysisHelper.scala:113)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$872.1354725727.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$mapChildren$1(TreeNode.scala:399)
     at org.apache.spark.sql.catalyst.trees.TreeNode$$Lambda$1144.1492742163.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:237)
     at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:397)
     at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:350)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsDown$1(AnalysisHelper.scala:113)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$869.1113025977.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown(AnalysisHelper.scala:106)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsDown$(AnalysisHelper.scala:104)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators(AnalysisHelper.scala:73)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperators$(AnalysisHelper.scala:72)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
     at org.apache.spark.sql.hive.DetermineTableStats.apply(HiveStrategies.scala:147)
     at org.apache.spark.sql.hive.DetermineTableStats.apply(HiveStrategies.scala:114)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:149)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor$$Lambda$863.668742490.apply(Unknown Source:-1)
     at scala.collection.IndexedSeqOptimized.foldLeft(IndexedSeqOptimized.scala:60)
     at scala.collection.IndexedSeqOptimized.foldLeft$(IndexedSeqOptimized.scala:68)
     at scala.collection.mutable.ArrayBuffer.foldLeft(ArrayBuffer.scala:49)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:146)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:138)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor$$Lambda$862.773865813.apply(Unknown Source:-1)
     at scala.collection.immutable.List.foreach(List.scala:392)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:138)
     at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:176)
     at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$$anonfun$apply$20.applyOrElse(Analyzer.scala:2139)
     at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$$anonfun$apply$20.applyOrElse(Analyzer.scala:2116)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$3(AnalysisHelper.scala:90)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$866.1662235713.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:72)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUp$1(AnalysisHelper.scala:90)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$Lambda$864.152426436.apply(Unknown Source:-1)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:86)
     at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:84)
     at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
     at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$.apply(Analyzer.scala:2116)
     at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveAggregateFunctions$.apply(Analyzer.scala:2115)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:149)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor$$Lambda$863.668742490.apply(Unknown Source:-1)
     at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
     at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
     at scala.collection.immutable.List.foldLeft(List.scala:89)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:146)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:138)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor$$Lambda$862.773865813.apply(Unknown Source:-1)
     at scala.collection.immutable.List.foreach(List.scala:392)
     at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:138)
     at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:176)
     at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:170)
   ```
   
   - In the above trace, we can see that `DetermineTableStats` was triggered as part of executing `ResolveAggregateFunctions`.
   - `ResolveAggregateFunctions` belongs to a batch whose execution strategy is `FixedPoint`, so its rules can run any number of times, depending on how many iterations it takes to reach the fixed point (see the sketch after this list).
   - This is not specific to `ResolveAggregateFunctions`: any analyzer rule that invokes `org.apache.spark.sql.catalyst.analysis.Analyzer#executeSameContext` will hit the same issue.
   - Even in the best case, the `DetermineTableStats` rule runs at least three times during analysis: twice as part of `ResolveAggregateFunctions` (assuming the fixed point is reached within the first two iterations) and once as part of the `postHocResolutionRules` batch, which also contains `DetermineTableStats`.
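   To make the `FixedPoint` behavior concrete, here is a toy model of the loop (a simplified sketch, not Spark's actual `RuleExecutor` code; `executeBatch` is an illustrative name):
   ```scala
   // Toy model of a FixedPoint batch: all rules are re-applied in passes until a
   // full pass leaves the plan unchanged (or maxIterations is hit). A rule that
   // lives in such a batch, like DetermineTableStats, can therefore fire many times.
   case class FixedPoint(maxIterations: Int)

   def executeBatch[Plan](plan: Plan, rules: Seq[Plan => Plan], strategy: FixedPoint): Plan = {
     var current = plan
     var iteration = 0
     var changed = true
     while (changed && iteration < strategy.maxIterations) {
       val next = rules.foldLeft(current)((p, rule) => rule(p)) // one full pass
       changed = next != current // fixed point: a pass that changes nothing
       current = next
       iteration += 1
     }
     current
   }
   ```
   Note that even when the plan converges on the first pass, a second pass is needed just to detect that nothing changed, which is why a rule in a `FixedPoint` batch executes at least twice.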
   
   > Once you finish query analysis of a dataframe, the analyzed plan is kept as QueryExecution.analyzed. Why accessing it will cause re-calculation?
   
   In the description, I had included code to trigger the analysis phase; a sketch of that kind of trigger is shown below. By the end of the analysis phase, `DetermineTableStats` will already have run multiple times, which can slow down query performance.
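   For illustration, a snippet along these lines would force the analysis phase (a hypothetical example, assuming a Hive table `t` whose stats have not been pre-computed; the `HAVING` clause is there so that `ResolveAggregateFunctions` has work to do):
   ```scala
   // Hypothetical reproduction: an aggregate with HAVING exercises
   // ResolveAggregateFunctions, which re-runs the analyzer via executeSameContext.
   val df = spark.sql(
     "SELECT key, count(*) AS cnt FROM t GROUP BY key HAVING count(*) > 1")

   // Accessing the analyzed plan forces analysis; by the time it returns,
   // DetermineTableStats may already have computed stats for `t` several times.
   df.queryExecution.analyzed
   ```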
   

