Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/11714#discussion_r57153635
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/rules/RuleExecutor.scala ---
@@ -46,15 +48,24 @@ abstract class RuleExecutor[TreeType <: TreeNode[_]] extends Logging {

   /**
    * An execution strategy for rules that indicates the maximum number of executions. If the
    * execution reaches fix point (i.e. converge) before maxIterations, it will stop.
+   * If throwsExceptionUponMaxIterations is true, it will throw a TreeNodeException
+   * when maxIterations is reached without convergence.
    */
-  abstract class Strategy { def maxIterations: Int }
+  abstract class Strategy {
+    def maxIterations: Int
+    def throwsExceptionUponMaxIterations: Boolean
+  }

   /** A strategy that only runs once. */
-  case object Once extends Strategy { val maxIterations = 1 }
+  case object Once extends Strategy {
+    override val maxIterations = 1
+    override val throwsExceptionUponMaxIterations = false
--- End diff --
@srowen This flag is used in multiple components, e.g., `Analyzer` and `Optimizer`. I am not sure how to do this in a more efficient way. Could you give me some hints on how we handle this elsewhere in Spark? Any examples in the existing code base? Thanks!