[
https://issues.apache.org/jira/browse/BEAM-12100?focusedWorklogId=645535&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-645535
]
ASF GitHub Bot logged work on BEAM-12100:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 01/Sep/21 20:00
Start Date: 01/Sep/21 20:00
Worklog Time Spent: 10m
Work Description: benWize commented on a change in pull request #15174:
URL: https://github.com/apache/beam/pull/15174#discussion_r700532707
##########
File path: sdks/java/extensions/sql/src/main/java/org/apache/beam/sdk/extensions/sql/impl/rel/BeamAggregationRel.java
##########
@@ -243,55 +247,72 @@ private Transform(
       if (windowFn != null) {
         windowedStream = assignTimestampsAndWindow(upstream);
       }
-
       validateWindowIsSupported(windowedStream);
+      // Check if have fields to be grouped
+      if (groupSetCount > 0) {
+        org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row> byFields =
+            org.apache.beam.sdk.schemas.transforms.Group.byFieldIds(keyFieldsIds);
+        PTransform<PCollection<Row>, PCollection<Row>> combiner = createCombiner(byFields);
+        boolean verifyRowValues =
+            pinput.getPipeline().getOptions().as(BeamSqlPipelineOptions.class).getVerifyRowValues();
+        return windowedStream
+            .apply(combiner)
+            .apply(
+                "mergeRecord",
+                ParDo.of(
+                    mergeRecord(outputSchema, windowFieldIndex, ignoreValues, verifyRowValues)))
+            .setRowSchema(outputSchema);
+      }
+
+      org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row> globally =
+          org.apache.beam.sdk.schemas.transforms.Group.globally();
+      PTransform<PCollection<Row>, PCollection<Row>> combiner = createCombiner(globally);
+      return windowedStream.apply(combiner).setRowSchema(outputSchema);
+    }
+
+    private PTransform<PCollection<Row>, PCollection<Row>> createCombiner(
+        org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row>
+            initialCombiner) {
-      org.apache.beam.sdk.schemas.transforms.Group.ByFields<Row> byFields =
-          org.apache.beam.sdk.schemas.transforms.Group.byFieldIds(keyFieldsIds);
-      org.apache.beam.sdk.schemas.transforms.Group.CombineFieldsByFields<Row> combined = null;
+      org.apache.beam.sdk.schemas.transforms.Group.AggregateCombinerInterface<Row> combined = null;
       for (FieldAggregation fieldAggregation : fieldAggregations) {
         List<Integer> inputs = fieldAggregation.inputs;
         CombineFn combineFn = fieldAggregation.combineFn;
-        if (inputs.size() > 1 || inputs.isEmpty()) {
-          // In this path we extract a Row (an empty row if inputs.isEmpty).
+        if (inputs.size() == 1) {
+          // Combining over a single field, so extract just that field.
           combined =
               (combined == null)
-                  ? byFields.aggregateFieldsById(inputs, combineFn, fieldAggregation.outputField)
-                  : combined.aggregateFieldsById(inputs, combineFn, fieldAggregation.outputField);
+                  ? initialCombiner.aggregateField(
+                      inputs.get(0), combineFn, fieldAggregation.outputField)
+                  : combined.aggregateField(inputs.get(0), combineFn, fieldAggregation.outputField);
         } else {
-          // Combining over a single field, so extract just that field.
+          // In this path we extract a Row (an empty row if inputs.isEmpty).
           combined =
               (combined == null)
-                  ? byFields.aggregateField(inputs.get(0), combineFn, fieldAggregation.outputField)
-                  : combined.aggregateField(inputs.get(0), combineFn, fieldAggregation.outputField);
+                  ? initialCombiner.aggregateFieldsById(
+                      inputs, combineFn, fieldAggregation.outputField)
+                  : combined.aggregateFieldsById(inputs, combineFn, fieldAggregation.outputField);
         }
       }
-      PTransform<PCollection<Row>, PCollection<Row>> combiner = combined;
-      boolean ignoreValues = false;
+      PTransform<PCollection<Row>, PCollection<Row>> combiner =
+          (PTransform<PCollection<Row>, PCollection<Row>>) combined;
Review comment:
I tried to extend from PTransform, but I got several conflicts because
of the different output types of the `Global`, `CombineFieldsGlobally`, and
`CombineFieldsByFields` classes.
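
The conflict described above can be illustrated with a small, self-contained
Java sketch. This is not Beam code: Transform, GlobalCombine, and
ByFieldsCombine are hypothetical stand-ins for PTransform and the Group
variants, used only to show why subclasses that fix different output type
parameters cannot share a single usefully typed supertype, which is why the
patch routes through AggregateCombinerInterface<Row> and casts at the end.

    import java.util.List;

    class OutputTypeConflictSketch {

      // Analogue of PTransform<InputT, OutputT>: the output type is fixed by each subclass.
      abstract static class Transform<InputT, OutputT> {
        abstract OutputT expand(InputT input);
      }

      // One variant emits grouped output (a list of groups)...
      static class GlobalCombine extends Transform<List<Long>, List<List<Long>>> {
        @Override
        List<List<Long>> expand(List<Long> input) {
          return List.of(input);
        }
      }

      // ...while another emits flat output, so the two subclasses fix different OutputT types.
      static class ByFieldsCombine extends Transform<List<Long>, List<Long>> {
        @Override
        List<Long> expand(List<Long> input) {
          return input;
        }
      }

      public static void main(String[] args) {
        // The only supertype covering both is Transform<List<Long>, ?>, which erases the
        // output type and so cannot be used as a typed pipeline step on its own.
        Transform<List<Long>, ?> either;
        if (args.length > 0) {
          either = new GlobalCombine();
        } else {
          either = new ByFieldsCombine();
        }
        Object out = either.expand(List.of(1L, 2L, 3L)); // output type is only Object here
        System.out.println(out);
      }
    }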
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 645535)
Time Spent: 7h 20m (was: 7h 10m)
> SUM should error when overflow/underflow occurs.
> ------------------------------------------------
>
> Key: BEAM-12100
> URL: https://issues.apache.org/jira/browse/BEAM-12100
> Project: Beam
> Issue Type: Bug
> Components: dsl-sql-zetasql
> Reporter: Kyle Weaver
> Assignee: Benjamin Gonzalez
> Priority: P3
> Time Spent: 7h 20m
> Remaining Estimate: 0h
>
> SELECT SUM(col1) FROM (SELECT CAST(9223372036854775807 as int64) as col1
> UNION ALL SELECT CAST(1 as int64))
> should return an error.
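
For context on the expected behaviour: with plain Java long arithmetic the sum
above wraps around silently, while Math.addExact raises ArithmeticException on
overflow. The sketch below only illustrates that difference; whether the
eventual Beam SQL fix is implemented via Math.addExact is an assumption, not
something stated in this issue.

    public class SumOverflowSketch {
      public static void main(String[] args) {
        long max = 9223372036854775807L; // Long.MAX_VALUE, the literal CAST in the query above

        // Plain long addition wraps around silently, which is the buggy behaviour:
        System.out.println(max + 1L); // prints -9223372036854775808

        // Math.addExact fails loudly instead, which is the error the query should surface:
        try {
          Math.addExact(max, 1L);
        } catch (ArithmeticException e) {
          System.out.println("overflow detected: " + e.getMessage());
        }
      }
    }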
--
This message was sent by Atlassian Jira
(v8.3.4#803005)