cloud-fan commented on a change in pull request #24149: [SPARK-27207][SQL] Ensure aggregate buffers are initialized again for So…
URL: https://github.com/apache/spark/pull/24149#discussion_r281468622
##########
File path:
sql/core/src/test/scala/org/apache/spark/sql/TypedImperativeAggregateSuite.scala
##########
@@ -299,5 +319,87 @@ object TypedImperativeAggregateSuite {
}
}
+ /**
+ * Calculate the max value with an object aggregation buffer. This stores class
+ * `MaxValue` in the aggregation buffer.
+ */
+ private case class TypedMax2(
Review comment:
Can we simplify it? I think we just need to do some initialization work in
`createAggregationBuffer`.
```scala
case class MyUDAF ... {
  var initialized = false

  override def createAggregationBuffer(): MyBuffer = {
    initialized = true
    null
  }

  override def update(buffer: MyBuffer, input: InternalRow): MyBuffer = {
    assert(initialized)
    null
  }
  ...
}
```
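For context, here is a minimal, Spark-free sketch of the pattern being suggested: the test UDAF records in a flag that `createAggregationBuffer()` ran, and `update()` asserts that flag, so a buffer that is reused without being re-initialized fails fast. `MockTypedAgg` and `InitTrackingUDAF` are hypothetical names standing in for Spark's `TypedImperativeAggregate` contract, not Spark API.

```scala
// Hypothetical stand-in for the relevant slice of TypedImperativeAggregate's
// contract (no Spark dependency); only the two methods under discussion.
abstract class MockTypedAgg[B] {
  def createAggregationBuffer(): B
  def update(buffer: B, input: Int): B
}

// A UDAF that tracks whether the framework called createAggregationBuffer()
// before any update(), which is exactly what the simplified test needs.
class InitTrackingUDAF extends MockTypedAgg[AnyRef] {
  var initialized = false

  override def createAggregationBuffer(): AnyRef = {
    initialized = true // record that the buffer was (re)initialized
    null
  }

  override def update(buffer: AnyRef, input: Int): AnyRef = {
    // Fails fast if update() runs without a preceding (re)initialization.
    assert(initialized, "aggregation buffer was not initialized before update")
    null
  }
}
```

The design point is that the assertion lives inside the UDAF itself, so the surrounding test only needs to run an aggregation and let the assertion fire if re-initialization is skipped.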
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]