JkSelf commented on code in PR #40914:
URL: https://github.com/apache/spark/pull/40914#discussion_r1174741010


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/stat/StatFunctions.scala:
##########
@@ -288,7 +288,7 @@ object StatFunctions extends Logging {
     }
 
     // If there is no selected columns, we don't need to run this aggregate, so make it a lazy val.
-    lazy val aggResult = ds.select(aggExprs: _*).queryExecution.toRdd.collect().head
+    lazy val aggResult = ds.select(aggExprs: _*).queryExecution.toRdd.map(_.copy()).collect().head

Review Comment:
    We encountered this issue when running the Gluten 
[ut](https://github.com/apache/spark/blob/0515e6b96fde80f72c5bcedf0d02884bd46450ab/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala#L2609), 
because in Gluten we rewrote 
[ColumnarToRow](https://github.com/oap-project/gluten/blob/d9c38e6897f0b94687934cbf5ac68c12cda2bc96/gluten-data/src/main/scala/org/apache/spark/sql/execution/GlutenColumnarToRowExec.scala#L188), 
which releases the row after it is used. This issue seems hard to reproduce on 
Apache Spark, because Apache Spark's ColumnarToRow uses on-heap memory, which is 
reclaimed by GC. Do you have any suggestions for reproducing this issue in 
Apache Spark? @HyukjinKwon @cloud-fan @zhengruifeng 
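    To illustrate the hazard the patch guards against: many `InternalRow` iterators reuse a single mutable buffer per partition, so collecting references without `copy()` leaves every collected entry pointing at the last value written. The following is a minimal, self-contained sketch (the `MutableRow` class here is a toy stand-in, not Spark or Gluten code) showing why `.map(_.copy())` is needed before `collect()`:

```scala
// Toy mutable row that mimics an operator recycling its output buffer.
final class MutableRow(var value: Int) {
  def copy(): MutableRow = new MutableRow(value)
}

object RowReuseDemo {
  def main(args: Array[String]): Unit = {
    val buffer = new MutableRow(0)

    // Without copy(): every element of the iterator is the SAME buffer,
    // so materializing it yields the last value written for all entries.
    val broken = Iterator(1, 2, 3)
      .map { v => buffer.value = v; buffer }
      .toArray.map(_.value)
    println(broken.mkString(","))   // 3,3,3

    // With copy(): each row is snapshotted before the buffer is
    // overwritten, which is what .map(_.copy()) does in the patch.
    val fixed = Iterator(1, 2, 3)
      .map { v => buffer.value = v; buffer }
      .map(_.copy())
      .toArray.map(_.value)
    println(fixed.mkString(","))    // 1,2,3
  }
}
```

    The same reasoning applies whether the buffer is recycled by GC-managed reuse (vanilla Spark) or explicitly released (Gluten's ColumnarToRow); the copy just makes the snapshot explicit.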



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
