Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/12040#discussion_r58115026
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/Window.scala ---
@@ -885,11 +886,21 @@ private[execution] object AggregateProcessor {
     val evaluateExpressions =
       mutable.Buffer.fill[Expression](ordinal)(NoOp)
     val imperatives = mutable.Buffer.empty[ImperativeAggregate]
+    // SPARK-14244: `SizeBasedWindowFunction`s are first created on the driver side and then
+    // serialized to the executor side. These functions all reference a global singleton window
+    // partition size attribute, i.e., `SizeBasedWindowFunction.n`. Here we must collect
+    // the singleton instance created on the driver side instead of using the executor-side
+    // `SizeBasedWindowFunction.n`, to avoid binding failures caused by mismatching expression IDs.
+    val partitionSize = {
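The failure mode described in that comment can be illustrated without Spark's classes. The following is a minimal sketch under assumed names (`ExprId`, `Attribute`, and the attribute name `window__partition__size` are illustrative stand-ins, not the real Catalyst implementation): an attribute reference minted on one JVM and an attribute re-initialized on another JVM share a name but not an ID, so binding by ID fails.

```scala
import java.util.concurrent.atomic.AtomicLong

// Sketch only: every attribute reference receives a globally unique
// expression ID at construction time, mimicking Catalyst's ExprId.
object ExprId {
  private val counter = new AtomicLong(0L)
  def next(): Long = counter.getAndIncrement()
}

case class Attribute(name: String, id: Long = ExprId.next())

// Singleton created once on the "driver"; window expressions built there
// capture this exact instance.
val driverN = Attribute("window__partition__size")

// Re-initializing the enclosing singleton object on an "executor" JVM mints
// a fresh instance with a new ID...
val executorN = Attribute("window__partition__size")

// ...so the two are unequal despite identical names, and binding by ID fails.
assert(driverN.name == executorN.name)
assert(driverN.id != executorN.id)
```

This is why the patch collects the driver-side instance out of the serialized expressions rather than re-reading the singleton on the executor.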
--- End diff ---
@liancheng how about we return the Option here? Then we wouldn't need another iteration to determine whether to track the partition size, and could use `isDefined`/`foreach` and `get`/`toSeq` further down the line.
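The suggested refactoring could look roughly like the sketch below. The types `SizeAttr` and `SizeFn` and the helper `collectPartitionSize` are hypothetical stand-ins, not the actual `Window.scala` code; the point is only the shape: one pass returns an `Option`, and downstream code uses `isDefined`/`foreach` instead of a second scan plus a boolean flag.

```scala
// Stand-ins for the real attribute and window-function types.
case class SizeAttr(name: String, id: Long)
case class SizeFn(n: SizeAttr)

// Single pass: return the driver-side singleton attribute, if any
// size-based window function is present.
def collectPartitionSize(functions: Seq[Any]): Option[SizeAttr] =
  functions.collectFirst { case f: SizeFn => f.n }

// Downstream usage: no second iteration needed.
val fns: Seq[Any] = Seq(SizeFn(SizeAttr("window__partition__size", 42L)))
val partitionSize = collectPartitionSize(fns)

val trackPartitionSize = partitionSize.isDefined   // replaces the extra scan
partitionSize.foreach { attr =>
  // e.g. append `attr` to the processor's input schema
}
val extraAttrs = partitionSize.toSeq               // Seq.empty when absent
```

Returning the `Option` directly keeps the presence check and the value in one place, which is the reviewer's point.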