[ https://issues.apache.org/jira/browse/FLINK-3226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137409#comment-15137409 ]
ASF GitHub Bot commented on FLINK-3226:
---------------------------------------
Github user vasia commented on a diff in the pull request:
https://github.com/apache/flink/pull/1600#discussion_r52204752
--- Diff: flink-libraries/flink-table/src/main/scala/org/apache/flink/api/table/plan/functions/aggregate/SumAggregate.scala ---
@@ -0,0 +1,130 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.api.table.plan.functions.aggregate
+
+abstract class SumAggregate[T] extends Aggregate[T] {
+
+}
+
+// TinyInt sum aggregate returns Int as the aggregated value.
+class TinyIntSumAggregate extends SumAggregate[Int] {
+
+ private var sumValue: Int = 0
+
+ override def initiateAggregate: Unit = {
+ sumValue = 0
+ }
+
+
+ override def getAggregated(): Int = {
+ sumValue
+ }
+
+ override def aggregate(value: Any): Unit = {
+ sumValue += value.asInstanceOf[Byte]
+ }
+}
+
+// SmallInt sum aggregate returns Int as the aggregated value.
+class SmallIntSumAggregate extends SumAggregate[Int] {
+
+ private var sumValue: Int = 0
+
+ override def initiateAggregate: Unit = {
+ sumValue = 0
+ }
+
+ override def getAggregated(): Int = {
+ sumValue
+ }
+
+ override def aggregate(value: Any): Unit = {
+ sumValue += value.asInstanceOf[Short]
+ }
+}
+
+// Int sum aggregate returns Int as the aggregated value.
+class IntSumAggregate extends SumAggregate[Int] {
+
+ private var sumValue: Int = 0
+
+ override def initiateAggregate: Unit = {
+ sumValue = 0
+ }
+
+
+ override def getAggregated(): Int = {
+ sumValue
+ }
+
+ override def aggregate(value: Any): Unit = {
+ sumValue += value.asInstanceOf[Int]
+ }
+}
+
+// Long sum aggregate returns Long as the aggregated value.
+class LongSumAggregate extends SumAggregate[Long] {
+
+ private var sumValue: Long = 0L
+
+ override def initiateAggregate: Unit = {
+ sumValue = 0
+ }
+
+ override def aggregate(value: Any): Unit = {
+ sumValue += value.asInstanceOf[Long]
+ }
+
+ override def getAggregated(): Long = {
+ sumValue
+ }
+}
+
+// Float sum aggregate returns Float as the aggregated value.
+class FloatSumAggregate extends SumAggregate[Float] {
+ private var sumValue: Float = 0
+
+ override def initiateAggregate: Unit = {
+ sumValue = 0
+ }
+
+ override def aggregate(value: Any): Unit = {
+ sumValue += value.asInstanceOf[Float]
+ }
+
+ override def getAggregated(): Float = {
+ sumValue
+ }
+}
+
+// Double sum aggregate returns Double as the aggregated value.
+class DoubleSumAggregate extends SumAggregate[Double] {
+ private var sumValue: Double = 0
+
+ override def initiateAggregate: Unit = {
+ sumValue = 0
+ }
+
+ override def aggregate(value: Any): Unit = {
+ sumValue += value.asInstanceOf[Double]
+ }
+
+ override def getAggregated(): Double = {
+ sumValue
+ }
+}
--- End diff ---
I like @tillrohrmann's suggestion. Any reason not to use Scala's `Numeric`,
though?
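
For illustration only, a minimal sketch of what a `Numeric`-based sum aggregate could look like, assuming the `Aggregate[T]` interface used in the diff above (`initiateAggregate`, `aggregate`, `getAggregated`); `NumericSumAggregate` is a hypothetical name, not code from this pull request. Note that the TinyInt and SmallInt aggregates above widen `Byte`/`Short` inputs to an `Int` sum, which a single generic class would still have to handle explicitly.

    // Sketch only: delegates the arithmetic to Scala's Numeric type class
    // instead of one hand-written class per SQL type.
    class NumericSumAggregate[T](implicit num: Numeric[T]) extends Aggregate[T] {

      private var sumValue: T = num.zero

      override def initiateAggregate: Unit = {
        sumValue = num.zero
      }

      override def aggregate(value: Any): Unit = {
        // Assumes the incoming value already has the sum type T; the
        // Byte/Short-to-Int widening shown above would need an extra step here.
        sumValue = num.plus(sumValue, value.asInstanceOf[T])
      }

      override def getAggregated(): T = {
        sumValue
      }
    }

A `LongSumAggregate` would then simply be `new NumericSumAggregate[Long]()`, and likewise for the other numeric types.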
> Translate optimized logical Table API plans into physical plans representing
> DataSet programs
> ---------------------------------------------------------------------------------------------
>
> Key: FLINK-3226
> URL: https://issues.apache.org/jira/browse/FLINK-3226
> Project: Flink
> Issue Type: Sub-task
> Components: Table API
> Reporter: Fabian Hueske
> Assignee: Chengxiang Li
>
> This issue is about translating an (optimized) logical Table API (see
> FLINK-3225) query plan into a physical plan. The physical plan is a 1-to-1
> representation of the DataSet program that will be executed. This means:
> - Each Flink RelNode refers to exactly one Flink DataSet or DataStream
> operator (a sketch of such a node follows this list).
> - All (join and grouping) keys of Flink operators are correctly specified.
> - The expressions which are to be executed in user-code are identified.
> - All fields are referenced with their physical execution-time index.
> - Flink type information is available.
> - Optional: Add physical execution hints for joins
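
As a rough editorial illustration of the points in the list above, a physical Flink RelNode could look roughly like the sketch below. The Flink-side names (`DataSetAggregate`, `translateToPlan`, the use of `Row`) are assumptions made for this sketch, not actual code from this issue; only the Calcite base classes and the Java DataSet API calls (`groupBy`, `reduceGroup`, `returns`) are existing API.

    import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
    import org.apache.calcite.rel.{RelNode, SingleRel}
    import org.apache.flink.api.common.functions.GroupReduceFunction
    import org.apache.flink.api.common.typeinfo.TypeInformation
    import org.apache.flink.api.java.DataSet
    import org.apache.flink.api.table.Row

    // Sketch only: a physical RelNode that corresponds to exactly one DataSet
    // operator and carries the grouping keys, the identified user-code
    // expression, and the Flink type information of its result.
    class DataSetAggregate(
        cluster: RelOptCluster,
        traitSet: RelTraitSet,
        inputNode: RelNode,
        groupingKeys: Array[Int],                    // physical execution-time field indexes
        aggFunction: GroupReduceFunction[Row, Row],  // identified user-code expression
        resultType: TypeInformation[Row])            // Flink type information of the output
      extends SingleRel(cluster, traitSet, inputNode) {

      // Translate this node into the single DataSet operator it represents.
      def translateToPlan(inputDataSet: DataSet[Row]): DataSet[Row] = {
        inputDataSet
          .groupBy(groupingKeys: _*)
          .reduceGroup(aggFunction)
          .returns(resultType)
      }
    }
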
> The translation should be the final part of Calcite's optimization process.
> For this task we need to:
> - implement a set of Flink DataSet RelNodes. Each RelNode corresponds to one
> Flink DataSet operator (Map, Reduce, Join, ...). The RelNodes must hold all
> relevant operator information (keys, user-code expression, strategy hints,
> parallelism).
> - implement rules to translate optimized Calcite RelNodes into Flink
> RelNodes (see the sketch after this list). We start with a straightforward
> mapping and later add rules that merge several relational operators into a
> single Flink operator, e.g., merge a join followed by a filter. Timo
> implemented some rules for the first SQL implementation, which can be used
> as a starting point.
> - Integrate the translation rules into the Calcite optimization process
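
A translation rule in the sense of the second list item could then be a Calcite `ConverterRule`, sketched below. `DataSetConvention`, `DataSetFilter`, and `DataSetFilterRule` are again hypothetical names used only for illustration; `ConverterRule`, `Convention.NONE`, `LogicalFilter`, and `RelOptRule.convert` are existing Calcite API.

    import org.apache.calcite.plan.{Convention, RelOptRule}
    import org.apache.calcite.rel.RelNode
    import org.apache.calcite.rel.convert.ConverterRule
    import org.apache.calcite.rel.logical.LogicalFilter

    // Sketch only: converts Calcite's logical filter into a hypothetical Flink
    // DataSet filter node in an (also hypothetical) DataSet convention.
    class DataSetFilterRule
      extends ConverterRule(
        classOf[LogicalFilter],        // logical node to match
        Convention.NONE,               // source convention
        DataSetConvention.INSTANCE,    // target convention (hypothetical)
        "DataSetFilterRule") {

      override def convert(rel: RelNode): RelNode = {
        val filter = rel.asInstanceOf[LogicalFilter]
        val traitSet = filter.getTraitSet.replace(DataSetConvention.INSTANCE)
        // Ask the planner to convert the input into the DataSet convention as well.
        val convertedInput = RelOptRule.convert(filter.getInput, DataSetConvention.INSTANCE)
        new DataSetFilter(rel.getCluster, traitSet, convertedInput, filter.getCondition)
      }
    }

Rules of this kind would be registered with the Calcite planner so that the translation runs as the final phase of the optimization process, as described above.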
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)