[
https://issues.apache.org/jira/browse/BEAM-10143?focusedWorklogId=450572&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-450572
]
ASF GitHub Bot logged work on BEAM-10143:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 24/Jun/20 18:33
Start Date: 24/Jun/20 18:33
Worklog Time Spent: 10m
Work Description: TheNeuralBit commented on a change in pull request
#12076:
URL: https://github.com/apache/beam/pull/12076#discussion_r445091826
##########
File path: sdks/python/apache_beam/transforms/sql_test.py
##########
@@ -157,6 +157,21 @@ def test_zetasql_generate_data(self):
           dialect="zetasql")
       assert_that(out, equal_to([(1, "foo", 3.14)]))
+  def test_windowing_before_sql(self):
+    with TestPipeline() as p:
+      windowed = (
+          p | beam.Create([
+              SimpleRow(5, "foo", 1.),
+              SimpleRow(15, "bar", 2.),
+              SimpleRow(25, "baz", 3.)
+          ])
+          | beam.Map(lambda v: beam.window.TimestampedValue(v, v.id)).
+          with_output_types(SimpleRow)
Review comment:
@udim I'm curious if you have any suggestions for a better way to create
a PCollection with timestamps that also has an output type annotation (in this
case it's required for Beam schemas).
This way is confusing since there's a mismatch between the return value of
the function and the type declared in `with_output_types`.
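For context, a self-contained sketch of the pattern under discussion (the SimpleRow
NamedTuple below is a stand-in, not necessarily the exact fixture in sql_test.py):
the lambda passed to `beam.Map` returns a `TimestampedValue` wrapper, while
`with_output_types` declares the wrapped element type that schema inference needs.
{code:python}
# Hedged sketch of the timestamping + output-type pattern; SimpleRow is a
# stand-in NamedTuple used only for illustration.
import typing

import apache_beam as beam
from apache_beam.transforms import window


class SimpleRow(typing.NamedTuple):
  id: int
  str: str
  flt: float


with beam.Pipeline() as p:
  rows = (
      p
      | beam.Create([SimpleRow(5, "foo", 1.0), SimpleRow(15, "bar", 2.0)])
      # The lambda returns a window.TimestampedValue wrapper, but the declared
      # output type is SimpleRow: Beam applies the wrapper's timestamp and
      # emits the wrapped value, so the annotation describes the unwrapped
      # element rather than the lambda's literal return type.
      | beam.Map(lambda v: window.TimestampedValue(v, v.id)).with_output_types(
          SimpleRow))
{code}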
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 450572)
Time Spent: 40m (was: 0.5h)
> ClassCastException in GROUP BY with non-global window
> -----------------------------------------------------
>
> Key: BEAM-10143
> URL: https://issues.apache.org/jira/browse/BEAM-10143
> Project: Beam
> Issue Type: Bug
> Components: dsl-sql, sdk-py-core
> Reporter: Maximilian Michels
> Priority: P1
> Time Spent: 40m
> Remaining Estimate: 0h
>
> I'm using the SqlTransform as an external transform from within a Python
> pipeline. I apply windowing before a GROUP BY query, which is the first option
> described in
> https://beam.apache.org/documentation/dsls/sql/extensions/windowing-and-triggering/:
> {code:python}
> input
>     | "Window" >> beam.WindowInto(window.FixedWindows(30))
>     | "Aggregate" >> SqlTransform("""Select field, count(field) from PCOLLECTION
>                                      WHERE ...
>                                      GROUP BY field
>                                      """)
> {code}
> This results in an exception:
> {noformat}
> Caused by: java.lang.ClassCastException: org.apache.beam.sdk.transforms.windowing.IntervalWindow cannot be cast to org.apache.beam.sdk.transforms.windowing.GlobalWindow
>     at org.apache.beam.sdk.transforms.windowing.GlobalWindow$Coder.encode(GlobalWindow.java:59)
>     at org.apache.beam.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:98)
>     at org.apache.beam.sdk.coders.IterableLikeCoder.encode(IterableLikeCoder.java:60)
>     at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:588)
>     at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581)
>     at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541)
>     at org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109)
>     at org.apache.beam.fn.harness.BeamFnDataWriteRunner.consume(BeamFnDataWriteRunner.java:154)
>     at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:216)
>     at org.apache.beam.fn.harness.data.PCollectionConsumerRegistry$MetricTrackingFnDataReceiver.accept(PCollectionConsumerRegistry.java:179)
>     at org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:178)
>     at org.apache.beam.runners.fnexecution.control.FnApiControlClient$ResponseStreamObserver.onNext(FnApiControlClient.java:158)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.stub.ServerCalls$StreamingServerCallHandler$StreamingServerCallListener.onMessage(ServerCalls.java:251)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.ForwardingServerCallListener.onMessage(ForwardingServerCallListener.java:33)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.Contexts$ContextualizedServerCallListener.onMessage(Contexts.java:76)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailableInternal(ServerCallImpl.java:309)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.messagesAvailable(ServerCallImpl.java:292)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1MessagesAvailable.runInContext(ServerImpl.java:782)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
>     at org.apache.beam.vendor.grpc.v1p26p0.io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     ... 1 more
> {noformat}
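A minimal, self-contained sketch of the failing pattern described above, mirroring the
test added in apache/beam#12076; the SimpleRow type, field names, and query are
illustrative assumptions, and SqlTransform expands through a Java expansion service,
so a Java runtime must be available:
{code:python}
# Hedged reproduction sketch of "windowing before SQL"; not the exact pipeline
# from the report. SimpleRow and the query text are stand-ins.
import typing

import apache_beam as beam
from apache_beam.transforms import window
from apache_beam.transforms.sql import SqlTransform


class SimpleRow(typing.NamedTuple):
  id: int
  str: str
  flt: float


with beam.Pipeline() as p:
  _ = (
      p
      | beam.Create([
          SimpleRow(5, "foo", 1.0),
          SimpleRow(15, "bar", 2.0),
          SimpleRow(25, "baz", 3.0)
      ])
      # Assign element timestamps while keeping the schema'd output type.
      | beam.Map(lambda v: window.TimestampedValue(v, v.id)).with_output_types(
          SimpleRow)
      # Non-global windowing applied before the SQL GROUP BY, as in the report.
      | "Window" >> beam.WindowInto(window.FixedWindows(30))
      | "Aggregate" >> SqlTransform(
          "SELECT id, COUNT(id) AS cnt FROM PCOLLECTION GROUP BY id"))
{code}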
--
This message was sent by Atlassian Jira
(v8.3.4#803005)