maropu commented on a change in pull request #32699:
URL: https://github.com/apache/spark/pull/32699#discussion_r643154174
##########
File path: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
##########
@@ -2882,6 +2882,31 @@ class DataFrameSuite extends QueryTest
df2.collect()
assert(accum.value == 15)
}
+
+  test("SPARK-35560: Remove redundant subexpression evaluation in nested subexpressions") {
+    Seq(1, Int.MaxValue).foreach { splitThreshold =>
+      withSQLConf(SQLConf.CODEGEN_METHOD_SPLIT_THRESHOLD.key -> splitThreshold.toString) {
+        val accum = sparkContext.longAccumulator("call")
+        val simpleUDF = udf((s: String) => {
+          accum.add(1)
+          s
+        })
+
+        // Common exprs:
+        // 1. simpleUDF($"id")
+        // 2. functions.length(simpleUDF($"id"))
Review comment:
Q: What if a tree has more deeply-nested common exprs? Does the current logic
still handle that case well? e.g., I was thinking of something like this:
```
// subExpr1 = simpleUDF($"id");
// subExpr2 = functions.length(subExpr1);
// subExpr3 = functions.xxxx(subExpr2);
// subExpr4 = ...
```
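To make the question concrete, a deeper-nesting case could be probed with a test along these lines. This is only a hypothetical sketch, not code from the PR: it reuses the `accum`/`simpleUDF` setup from the test above, assumes the usual `DataFrameSuite` fixtures (`spark`, `sparkContext`, `functions._`), and deliberately leaves the expected accumulator value as a comment rather than an assertion, since the exact dedup behavior at each nesting level is exactly what is being asked about:

```scala
// Hypothetical sketch: three levels of nested common subexpressions.
// Assumes the same test fixtures as the surrounding DataFrameSuite test.
val accum = sparkContext.longAccumulator("call")
val simpleUDF = udf((s: String) => {
  accum.add(1)
  s
})

// subExpr1 = simpleUDF($"id")
// subExpr2 = functions.length(subExpr1)
// subExpr3 = subExpr2 + 1  -- one more nesting level on top
val subExpr1 = simpleUDF($"id".cast("string"))
val subExpr2 = functions.length(subExpr1)
val subExpr3 = subExpr2 + 1

val df = spark.range(5).select(subExpr1, subExpr2, subExpr3)
df.collect()
// If nested subexpression elimination handles the deeper tree, the UDF
// should fire once per row (accum.value == 5), not once per reference.
```

The point of the extra `subExpr3` level is to check that dedup applies transitively: `subExpr2` must itself be recognized as common before `subExpr3` can reuse it.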
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]