pan3793 commented on code in PR #44352:
URL: https://github.com/apache/spark/pull/44352#discussion_r1427800437


##########
sql/core/src/test/resources/sql-tests/analyzer-results/udf/postgreSQL/udf-select_having.sql.out:
##########
@@ -102,12 +102,11 @@ Project [udf(b)#x, udf(c)#x]
 SELECT udf(b), udf(c) FROM test_having
        GROUP BY b, c HAVING udf(b) = 3 ORDER BY udf(b), udf(c)
 -- !query analysis
-Project [udf(b)#x, udf(c)#x]

Review Comment:
   Both the Analyzed plan and the Optimized plan changed :)
   
   before:
   ```
   == Analyzed Logical Plan ==
   udf(b): int, udf(c): double
   Project [udf(b)#24, udf(c)#25]
   +- Sort [udf(b#21) ASC NULLS FIRST, udf(cast(c#22 as double)) ASC NULLS FIRST], true
      +- Filter (udf(b)#24 = 3)
         +- Aggregate [b#21, c#22], [udf(b#21) AS udf(b)#24, udf(cast(c#22 as double)) AS udf(c)#25, b#21, c#22]
            +- SubqueryAlias spark_catalog.default.test_having
               +- Relation spark_catalog.default.test_having[a#20,b#21,c#22,d#23] parquet
   
   == Optimized Logical Plan ==
   Project [udf(b)#24, udf(c)#25]
   +- Sort [udf(b#21) ASC NULLS FIRST, udf(cast(c#22 as double)) ASC NULLS FIRST], true
      +- Aggregate [b#21, c#22], [udf(b#21) AS udf(b)#24, udf(cast(c#22 as double)) AS udf(c)#25, b#21, c#22]
         +- Project [b#21, c#22]
            +- Filter (isnotnull(b#21) AND (udf(b#21) = 3))
               +- Relation spark_catalog.default.test_having[a#20,b#21,c#22,d#23] parquet
   ```
   
   after:
   ```
   == Analyzed Logical Plan ==
   udf(b): int, udf(c): double
   Sort [udf(b)#9 ASC NULLS FIRST, udf(c)#10 ASC NULLS FIRST], true
   +- Filter (udf(b)#9 = 3)
      +- Aggregate [b#6, c#7], [udf(b#6) AS udf(b)#9, udf(cast(c#7 as double)) AS udf(c)#10]
         +- SubqueryAlias spark_catalog.default.test_having
            +- Relation spark_catalog.default.test_having[a#5,b#6,c#7,d#8] parquet
   
   == Optimized Logical Plan ==
   Sort [udf(b)#9 ASC NULLS FIRST, udf(c)#10 ASC NULLS FIRST], true
   +- Aggregate [b#6, c#7], [udf(b#6) AS udf(b)#9, udf(cast(c#7 as double)) AS udf(c)#10]
      +- Project [b#6, c#7]
         +- Filter (isnotnull(b#6) AND (udf(b#6) = 3))
            +- Relation spark_catalog.default.test_having[a#5,b#6,c#7,d#8] parquet
   ```
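
   For anyone wanting to reproduce the comparison locally, plans in this form can be printed by prefixing the query with `EXPLAIN EXTENDED` in a Spark SQL session (note: `udf(...)` here is a placeholder substituted by the sql-tests harness, and the `#N` expression IDs will differ between runs):

   ```
   -- prints the Parsed, Analyzed, Optimized, and Physical plans for the query
   EXPLAIN EXTENDED
   SELECT udf(b), udf(c) FROM test_having
          GROUP BY b, c HAVING udf(b) = 3 ORDER BY udf(b), udf(c);
   ```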



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

