Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19767#discussion_r152442103
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/conditionalExpressions.scala ---
@@ -64,52 +64,22 @@ case class If(predicate: Expression, trueValue: Expression, falseValue: Expressi
     val trueEval = trueValue.genCode(ctx)
     val falseEval = falseValue.genCode(ctx)
-    // place generated code of condition, true value and false value in separate methods if
-    // their code combined is large
-    val combinedLength = condEval.code.length + trueEval.code.length + falseEval.code.length
--- End diff --
BTW, if it's really an issue, we can add splitting logic to non-leaf/non-unary nodes. This is much less work than before, because: 1. there is no need to care about unary nodes; 2. the splitting logic can be simpler, since every child is guaranteed to generate less than 1000 LOC.
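To make that concrete, here is a minimal sketch (not the actual Spark code) of what per-child splitting in a non-unary node like `If` could look like. It assumes the `CodegenContext` helpers available around the time of this PR (`freshName`, `addMutableState`, `addNewFunction`, `javaType`, `INPUT_ROW`, `currentVars`) and that it lives next to `If` in conditionalExpressions.scala, so the usual codegen imports are in scope; the `splitChild` name and the 1000-character threshold are illustrative placeholders.

```scala
// Hedged sketch: wrap one child's generated code in its own method when it is
// large, so the parent's method only pays for a call plus two field reads.
private def splitChild(
    ctx: CodegenContext,
    eval: ExprCode,
    dataType: DataType,
    baseName: String): ExprCode = {
  // Only split row-based code that is actually large; otherwise keep it inline.
  if (eval.code.length < 1000 || ctx.INPUT_ROW == null || ctx.currentVars != null) {
    eval
  } else {
    // The child's result must outlive the helper method, so promote its
    // isNull/value slots to mutable state on the generated class.
    val globalIsNull = ctx.freshName(baseName + "IsNull")
    val globalValue = ctx.freshName(baseName + "Value")
    ctx.addMutableState("boolean", globalIsNull)
    ctx.addMutableState(ctx.javaType(dataType), globalValue)

    val funcName = ctx.freshName(baseName)
    val funcBody =
      s"""
         |private void $funcName(InternalRow ${ctx.INPUT_ROW}) {
         |  ${eval.code}
         |  $globalIsNull = ${eval.isNull};
         |  $globalValue = ${eval.value};
         |}
       """.stripMargin
    val fullFuncName = ctx.addNewFunction(funcName, funcBody)

    // The parent now just invokes the helper and reads the class-level fields.
    ExprCode(
      code = s"$fullFuncName(${ctx.INPUT_ROW});",
      isNull = globalIsNull,
      value = globalValue)
  }
}
```

Each child would pay for its own helper method only when its code is actually large, so the combined code at the parent stays small without the old combined-length bookkeeping.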
---