Github user icexelloss commented on a diff in the pull request:
https://github.com/apache/spark/pull/21082#discussion_r183768758
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala ---
@@ -268,3 +269,38 @@ object PhysicalAggregation {
     case _ => None
   }
 }
+
+/**
+ * An extractor used when planning physical execution of a window. This extractor outputs
+ * the window function type of the logical window.
+ *
+ * The input logical window must contain same type of window functions, which is ensured by
+ * the rule ExtractWindowExpressions in the analyzer.
+ */
+object PhysicalWindow {
+  // windowFunctionType, windowExpression, partitionSpec, orderSpec, child
+  type ReturnType =
+    (WindowFunctionType, Seq[NamedExpression], Seq[Expression], Seq[SortOrder], LogicalPlan)
+
+  def unapply(a: Any): Option[ReturnType] = a match {
+    case expr @ logical.Window(windowExpressions, partitionSpec, orderSpec, child) =>
+
+      if (windowExpressions.isEmpty) {
+        throw new AnalysisException(s"Window expression is empty in $expr")
+      }
+
+      val windowFunctionType = windowExpressions.map(WindowFunctionType.functionType)
+        .reduceLeft ( (t1: WindowFunctionType, t2: WindowFunctionType) =>
--- End diff --
If we want to do this in the Analyzer, then we would need to carry the
WindowFunctionType in the logical plan. I did it this way to avoid changing
the logical node, but I am open to adding WindowFunctionType to the logical
plan. What do other people think?
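
For context, the consuming side is not part of this diff. The idea is that the
extractor lets the planner dispatch on the window function type; the sketch
below is illustrative only, and the imports, the operator names (WindowExec,
WindowInPandasExec) and the SQL/Python variants of WindowFunctionType are
assumptions about the surrounding code rather than something shown here:

    import org.apache.spark.sql.Strategy
    import org.apache.spark.sql.catalyst.expressions.WindowFunctionType
    import org.apache.spark.sql.catalyst.planning.PhysicalWindow
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.execution.SparkPlan
    import org.apache.spark.sql.execution.python.WindowInPandasExec
    import org.apache.spark.sql.execution.window.WindowExec

    // Illustrative strategy (not from this PR): pick the physical window
    // operator based on the WindowFunctionType extracted by PhysicalWindow.
    object IllustrativeWindowStrategy extends Strategy {
      override def apply(plan: LogicalPlan): Seq[SparkPlan] = plan match {
        case PhysicalWindow(
            WindowFunctionType.SQL, windowExprs, partitionSpec, orderSpec, child) =>
          WindowExec(windowExprs, partitionSpec, orderSpec, planLater(child)) :: Nil

        case PhysicalWindow(
            WindowFunctionType.Python, windowExprs, partitionSpec, orderSpec, child) =>
          WindowInPandasExec(windowExprs, partitionSpec, orderSpec, planLater(child)) :: Nil

        case _ => Nil
      }
    }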
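
To make the alternative concrete, here is a rough sketch of what carrying the
type on the logical node could look like, assuming the analyzer (e.g.
ExtractWindowExpressions) fills it in. The extra field, its default, and the
analyzer wiring are hypothetical and not part of this PR:

    import org.apache.spark.sql.catalyst.expressions.{
      Attribute, Expression, NamedExpression, SortOrder, WindowFunctionType}
    import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, UnaryNode}

    // Hypothetical variant of logical.Window: the analyzer sets windowFunctionType
    // once, so the planner no longer needs to derive it from the window expressions.
    case class Window(
        windowExpressions: Seq[NamedExpression],
        partitionSpec: Seq[Expression],
        orderSpec: Seq[SortOrder],
        child: LogicalPlan,
        windowFunctionType: WindowFunctionType = WindowFunctionType.SQL)  // hypothetical field
      extends UnaryNode {

      // Same output as the existing node: child columns followed by the window columns.
      override def output: Seq[Attribute] =
        child.output ++ windowExpressions.map(_.toAttribute)
    }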
---