Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/7458#discussion_r35726525
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/functions.scala ---
@@ -0,0 +1,292 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions.aggregate
+
+import org.apache.spark.sql.catalyst.dsl.expressions._
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.types._
+
+case class Average(child: Expression) extends AlgebraicAggregate {
+
+ override def children: Seq[Expression] = child :: Nil
+
+ override def nullable: Boolean = true
+
+ // Return data type.
+ override def dataType: DataType = resultType
+
+ // Expected input data type.
+ // TODO: Once we remove the old code path, we can use our analyzer to cast NullType
+ // to the default data type of the NumericType.
+ override def inputTypes: Seq[AbstractDataType] = Seq(TypeCollection(NumericType, NullType))
+
+ private val resultType = child.dataType match {
+ case DecimalType.Fixed(precision, scale) =>
+ DecimalType(precision + 4, scale + 4)
+ case DecimalType.Unlimited => DecimalType.Unlimited
+ case _ => DoubleType
+ }
+
+ private val sumDataType = child.dataType match {
+ case _ @ DecimalType() => DecimalType.Unlimited
+ case _ => DoubleType
+ }
+
+ private val currentSum = AttributeReference("currentSum", sumDataType)()
+ private val currentCount = AttributeReference("currentCount", LongType)()
+
+ override val bufferAttributes = currentSum :: currentCount :: Nil
+
+ override val initialValues = Seq(
+ /* currentSum = */ Cast(Literal(0), sumDataType),
+ /* currentCount = */ Literal(0L)
+ )
+
+ override val updateExpressions = Seq(
+ /* currentSum = */
+ Add(
+ currentSum,
+ Coalesce(Cast(child, sumDataType) :: Cast(Literal(0), sumDataType) :: Nil)),
+ /* currentCount = */ If(IsNull(child), currentCount, currentCount + 1L)
+ )
+
+ override val mergeExpressions = Seq(
+ /* currentSum = */ currentSum.left + currentSum.right,
+ /* currentCount = */ currentCount.left + currentCount.right
+ )
+
+ // If all inputs are null, currentCount will be 0 and we will get null after the division.
+ override val evaluateExpression = Cast(currentSum, resultType) / Cast(currentCount, resultType)
+}
+
+case class Count(child: Expression) extends AlgebraicAggregate {
+ override def children: Seq[Expression] = child :: Nil
+
+ override def nullable: Boolean = false
+
+ // Return data type.
+ override def dataType: DataType = LongType
+
+ // Expected input data type.
+ override def inputTypes: Seq[AbstractDataType] = Seq(AnyDataType)
+
+ private val currentCount = AttributeReference("currentCount", LongType)()
+
+ override val bufferAttributes = currentCount :: Nil
+
+ override val initialValues = Seq(
+ /* currentCount = */ Literal(0L)
+ )
+
+ override val updateExpressions = Seq(
+ /* currentCount = */ If(IsNull(child), currentCount, currentCount + 1L)
+ )
+
+ override val mergeExpressions = Seq(
+ /* currentCount = */ currentCount.left + currentCount.right
+ )
+
+ override val evaluateExpression = Cast(currentCount, LongType)
+}
+
+case class First(child: Expression) extends AlgebraicAggregate {
+
+ override def children: Seq[Expression] = child :: Nil
+
+ override def nullable: Boolean = true
+
+ // First is not a deterministic function.
+ override def deterministic: Boolean = false
--- End diff --
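As an aside for readers skimming the hunk above: the buffer lifecycle that `Average` encodes through `initialValues` / `updateExpressions` / `mergeExpressions` / `evaluateExpression` can be sketched in plain Scala. This is illustrative only, not the Catalyst expression API; `Buffer` and all names here are made up for the example, and the `Double` buffer stands in for the `sumDataType` resolution above.

```scala
// Plain-Scala sketch of the (currentSum, currentCount) buffer lifecycle
// that Average's algebraic-aggregate expressions describe. Illustrative
// only; not the Catalyst API.
object AverageSketch {
  // Mirrors bufferAttributes: currentSum and currentCount.
  case class Buffer(sum: Double, count: Long)

  // Mirrors initialValues: sum starts at 0, count at 0L.
  val initial = Buffer(0.0, 0L)

  // Mirrors updateExpressions: a null input (None) adds 0 to the sum
  // (the Coalesce) and leaves the count unchanged (the If(IsNull(...))).
  def update(b: Buffer, input: Option[Double]): Buffer =
    Buffer(b.sum + input.getOrElse(0.0),
           if (input.isDefined) b.count + 1 else b.count)

  // Mirrors mergeExpressions: combine two partial buffers field by field.
  def merge(l: Buffer, r: Buffer): Buffer =
    Buffer(l.sum + r.sum, l.count + r.count)

  // Mirrors evaluateExpression: dividing by a 0 count yields null,
  // modeled here as None.
  def evaluate(b: Buffer): Option[Double] =
    if (b.count == 0) None else Some(b.sum / b.count)
}
```

Folding `Seq(Some(1.0), None, Some(3.0))` through `update` gives a buffer of `(4.0, 2)` and an average of `Some(2.0)`, while an all-null partition leaves the initial buffer and evaluates to `None`, matching the comment about a 0 count.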
@cloud-fan An `Aggregate` operator handles the project list by itself;
there is no Project on top of an `Aggregate` to generate results, so the
first two rules you mentioned will not fire.
My question is why we do not need to handle non-deterministic aggregate
functions specially, and why we should not use `deterministic` for an
aggregate function. Say we have the query `sqlContext.sql("select i, f, f, f from
(select i, first(j) as f, rand() from t group by i, rand()) tmp")`, and let's
also assume project collapsing works with `Aggregate`. It still makes sense to
evaluate `first` once instead of three times, right? It is probably true that
`first` will give the same answer even if you call it three times, but what
about other non-deterministic aggregate functions? As for predicate pushdown,
we cannot push a predicate that uses the result of an aggregate function down
through an aggregate operator, right?
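The evaluate-once-versus-per-reference concern above can be sketched in plain Scala. `nextValue` stands in for any non-deterministic function such as `rand()`; all names here are made up for the illustration and are not Spark APIs.

```scala
// Sketch of why repeated references to a non-deterministic expression
// must not be re-evaluated per reference. Illustrative only.
import scala.util.Random

object NonDeterministicSketch {
  private val rng = new Random()

  // Stand-in for a non-deterministic function like rand().
  def nextValue(): Double = rng.nextDouble()

  // Per-reference evaluation: the three `f` columns can disagree,
  // because each reference calls the function again.
  def perReference(): (Double, Double, Double) =
    (nextValue(), nextValue(), nextValue())

  // Evaluate once, then project the result three times: the columns
  // always agree, which is what treating the expression as
  // non-deterministic (and not collapsing it) preserves.
  def evaluateOnce(): (Double, Double, Double) = {
    val v = nextValue()
    (v, v, v)
  }
}
```

With `evaluateOnce` the three projected columns are always equal; with `perReference` they generally are not, which is exactly the hazard for non-deterministic aggregate functions less forgiving than `first`.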
---