rdblue commented on a change in pull request #24559: URL: https://github.com/apache/spark/pull/24559#discussion_r603643390
########## File path: sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/functions/AggregateFunction.java ##########
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.connector.catalog.functions;
+
+import org.apache.spark.sql.catalyst.InternalRow;
+import org.apache.spark.sql.types.DataType;
+
+import java.io.Serializable;
+
+/**
+ * Interface for a function that produces a result value by aggregating over multiple input rows.
+ * <p>
+ * For each input row, Spark will call an update method that corresponds to the
+ * {@link #inputTypes() input data types}. The expected JVM argument types must be the types used by
+ * Spark's InternalRow API. If no direct method is found or when not using codegen, Spark will call
+ * update with {@link InternalRow}.
+ * <p>
+ * The JVM type of result values produced by this function must be the type used by Spark's
+ * InternalRow API for the {@link DataType SQL data type} returned by {@link #resultType()}.
+ * <p>
+ * All implementations must support partial aggregation by implementing {@link #merge(S, S)} so
+ * that Spark can partially aggregate and shuffle intermediate results, instead of shuffling all
+ * rows for an aggregate. This reduces the impact of data skew and the amount of data shuffled to
+ * produce the result.
+ * <p>
+ * Intermediate aggregation state must be {@link Serializable} so that state produced by parallel
+ * tasks can be sent to a single executor and merged to produce a final result.

Review comment:
This is describing the behavior for a single group. That's why the final aggregation is done on a single executor. Each group is shuffled to one executor. While I think it is correct, I like your version better so I'll update it.
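The partial-aggregation contract discussed in the javadoc above can be illustrated with a small, self-contained sketch. Note this does not use Spark's real `AggregateFunction` interface (which takes `InternalRow` and lives in `org.apache.spark.sql.connector.catalog.functions`); instead it defines a hypothetical stand-in, `SimpleAggregateFunction`, with the same update/merge/produceResult shape, so the example can run without Spark on the classpath:

```java
import java.io.Serializable;

// Hypothetical stand-in for the contract described in the javadoc above:
// fresh state per partition, update per row, merge of partial states, and a
// finishing step. The real Spark interface passes InternalRow to update;
// here we use a plain long input so the example is self-contained.
interface SimpleAggregateFunction<S extends Serializable, R> {
    S newAggregationState();       // fresh state for one partition
    S update(S state, long input); // fold one input row into the state
    S merge(S left, S right);      // combine two partial states after shuffle
    R produceResult(S state);      // finish: final state -> result value
}

public class LongAverageExample {
    // Serializable intermediate state: a running sum and count. Serializability
    // is what allows partial states from parallel tasks to be shuffled to a
    // single executor for the final merge.
    public static final class AvgState implements Serializable {
        long sum;
        long count;
    }

    public static final SimpleAggregateFunction<AvgState, Double> AVG =
        new SimpleAggregateFunction<AvgState, Double>() {
            @Override public AvgState newAggregationState() {
                return new AvgState();
            }
            @Override public AvgState update(AvgState s, long input) {
                s.sum += input;
                s.count += 1;
                return s;
            }
            // merge is what enables partial aggregation: each partition
            // aggregates its own rows, and only the small AvgState objects
            // are shuffled and combined, rather than every input row.
            @Override public AvgState merge(AvgState left, AvgState right) {
                left.sum += right.sum;
                left.count += right.count;
                return left;
            }
            @Override public Double produceResult(AvgState s) {
                return s.count == 0 ? null : (double) s.sum / s.count;
            }
        };

    public static void main(String[] args) {
        // Simulate two partitions aggregated independently, then merged on
        // one executor, as the javadoc describes for a single group.
        AvgState p1 = AVG.newAggregationState();
        for (long v : new long[] {1, 2, 3}) p1 = AVG.update(p1, v);

        AvgState p2 = AVG.newAggregationState();
        for (long v : new long[] {4, 5}) p2 = AVG.update(p2, v);

        AvgState merged = AVG.merge(p1, p2);
        System.out.println(AVG.produceResult(merged)); // prints 3.0
    }
}
```

Because merge is commutative and associative over the partial states, the result is the same regardless of how many partitions the rows were split across, which is what makes the pre-shuffle partial aggregation safe.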
