[ https://issues.apache.org/jira/browse/BEAM-12100?focusedWorklogId=646524&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-646524 ]

ASF GitHub Bot logged work on BEAM-12100:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 03/Sep/21 23:35
            Start Date: 03/Sep/21 23:35
    Worklog Time Spent: 10m 
      Work Description: benWize commented on a change in pull request #15174:
URL: https://github.com/apache/beam/pull/15174#discussion_r702199547



##########
File path: sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/impl/rel/BeamAggregationRel.java
##########
@@ -243,55 +247,72 @@ private Transform(
       if (windowFn != null) {
         windowedStream = assignTimestampsAndWindow(upstream);
       }
-
       validateWindowIsSupported(windowedStream);
+      // Check if there are fields to be grouped
+      if (groupSetCount > 0) {
+        org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row> byFields =
+            org.apache.beam.sdk.schemas.transforms.Group.byFieldIds(keyFieldsIds);
+        PTransform<PCollection<Row>, PCollection<Row>> combiner = createCombiner(byFields);
+        boolean verifyRowValues =
+            pinput.getPipeline().getOptions().as(BeamSqlPipelineOptions.class).getVerifyRowValues();
+        return windowedStream
+            .apply(combiner)
+            .apply(
+                "mergeRecord",
+                ParDo.of(
+                    mergeRecord(outputSchema, windowFieldIndex, ignoreValues, verifyRowValues)))
+            .setRowSchema(outputSchema);
+      }
+      org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row> globally =
+          org.apache.beam.sdk.schemas.transforms.Group.globally();
+      PTransform<PCollection<Row>, PCollection<Row>> combiner = createCombiner(globally);
+      return windowedStream.apply(combiner).setRowSchema(outputSchema);
+    }
+
+    private PTransform<PCollection<Row>, PCollection<Row>> createCombiner(
+        org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row>
+            initialCombiner) {
 
-      org.apache.beam.sdk.schemas.transforms.Group.ByFields<Row> byFields =
-          org.apache.beam.sdk.schemas.transforms.Group.byFieldIds(keyFieldsIds);
-      org.apache.beam.sdk.schemas.transforms.Group.CombineFieldsByFields<Row> combined = null;
+      org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row> combined = null;
       for (FieldAggregation fieldAggregation : fieldAggregations) {
         List<Integer> inputs = fieldAggregation.inputs;
         CombineFn combineFn = fieldAggregation.combineFn;
-        if (inputs.size() > 1 || inputs.isEmpty()) {
-          // In this path we extract a Row (an empty row if inputs.isEmpty).
+        if (inputs.size() == 1) {
+          // Combining over a single field, so extract just that field.
           combined =
               (combined == null)
-                  ? byFields.aggregateFieldsById(inputs, combineFn, fieldAggregation.outputField)
-                  : combined.aggregateFieldsById(inputs, combineFn, fieldAggregation.outputField);
+                  ? initialCombiner.aggregateField(
+                      inputs.get(0), combineFn, fieldAggregation.outputField)
+                  : combined.aggregateField(inputs.get(0), combineFn, fieldAggregation.outputField);
         } else {
-          // Combining over a single field, so extract just that field.
+          // In this path we extract a Row (an empty row if inputs.isEmpty).
           combined =
               (combined == null)
-                  ? byFields.aggregateField(inputs.get(0), combineFn, fieldAggregation.outputField)
-                  : combined.aggregateField(inputs.get(0), combineFn, fieldAggregation.outputField);
+                  ? initialCombiner.aggregateFieldsById(
+                      inputs, combineFn, fieldAggregation.outputField)
+                  : combined.aggregateFieldsById(inputs, combineFn, fieldAggregation.outputField);
         }
       }
 
-      PTransform<PCollection<Row>, PCollection<Row>> combiner = combined;
-      boolean ignoreValues = false;
+      PTransform<PCollection<Row>, PCollection<Row>> combiner =
+          (PTransform<PCollection<Row>, PCollection<Row>>) combined;

Review comment:
       I did this, but I had to create a new method to make an instance of `CombineFieldsGlobally` with a null `schemaAggregateFn`, because that parameter is initialized here:
https://github.com/apache/beam/blob/22e2adbc05cdb3cb2f9800508108926df67f829b/sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/impl/rel/BeamAggregationRel.java#L284
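For reference, a minimal sketch of what such a factory on `Group` might look like. The method name `combineFieldsGlobally` and its exact signature are assumptions for illustration, not necessarily what the PR adds; it presumes `CombineFieldsGlobally` keeps a constructor taking the `SchemaAggregateFn` that is allowed to be null:

```java
// Hypothetical sketch only; the helper actually added in the PR may differ.
// Builds a CombineFieldsGlobally whose SchemaAggregateFn is left null so that
// the first aggregateField/aggregateFieldsById call can initialize it later.
public static <InputT> CombineFieldsGlobally<InputT> combineFieldsGlobally() {
  return new CombineFieldsGlobally<>(null);
}
```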




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 646524)
    Time Spent: 8.5h  (was: 8h 20m)

> SUM should error when overflow/underflow occurs.
> ------------------------------------------------
>
>                 Key: BEAM-12100
>                 URL: https://issues.apache.org/jira/browse/BEAM-12100
>             Project: Beam
>          Issue Type: Bug
>          Components: dsl-sql-zetasql
>            Reporter: Kyle Weaver
>            Assignee: Benjamin Gonzalez
>            Priority: P3
>          Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> SELECT SUM(col1) FROM (SELECT CAST(9223372036854775807 as int64) as col1 
> UNION ALL SELECT CAST(1 as int64))
> should return an error.
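For context, 9223372036854775807 is `Long.MAX_VALUE`, so the requested behavior mirrors what checked arithmetic does for a plain Java long sum. The snippet below only illustrates the overflow check with `Math.addExact`; it is not the Beam SQL implementation:

```java
// Summing Long.MAX_VALUE + 1: unchecked addition silently wraps around,
// while Math.addExact throws ArithmeticException("long overflow").
public class SumOverflowExample {
  public static void main(String[] args) {
    long a = 9223372036854775807L; // Long.MAX_VALUE
    long b = 1L;
    long wrapped = a + b;               // wraps to Long.MIN_VALUE
    System.out.println(wrapped);        // prints -9223372036854775808
    long checked = Math.addExact(a, b); // throws ArithmeticException
    System.out.println(checked);        // never reached
  }
}
```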



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
