[jira] [Updated] (BEAM-10225) Add message when starting job server
[ https://issues.apache.org/jira/browse/BEAM-10225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver updated BEAM-10225:
-------------------------------
    Component/s:     (was: jobserver)
                     java-fn-execution

> Add message when starting job server
> ------------------------------------
>
>                 Key: BEAM-10225
>                 URL: https://issues.apache.org/jira/browse/BEAM-10225
>             Project: Beam
>          Issue Type: Improvement
>          Components: java-fn-execution
>            Reporter: Anna Qin
>            Assignee: Anna Qin
>            Priority: P4
>
> Currently, the job server blocks while waiting for jobs, but the terminal
> outputs a misleading percentage indicator that stops at 98%. Add a message to
> clarify when jobs are ready to be submitted and that the build only
> terminates upon error or Ctrl+C.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (BEAM-9852) C-ares status is not ARES_SUCCESS: Misformatted domain name
[ https://issues.apache.org/jira/browse/BEAM-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver updated BEAM-9852:
------------------------------
    Description:
This affects all portable runners (Flink, Spark, Dataflow Python streaming). It does not appear to cause pipelines to fail.

Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/data_plane.py", line 545, in <lambda>
    target=lambda: self._read_inputs(elements_iterator),
  File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/data_plane.py", line 528, in _read_inputs
    for elements in elements_iterator:
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 388, in __next__
    return self._next()
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 365, in _next
    raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "DNS resolution failed"
    debug_error_string = "{"created":"@1587426512.443144965","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1587426512.443142363","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":263,"referenced_errors":[{"created":"@1587426512.443141313","description":"DNS resolution failed","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":357,"grpc_status":14,"referenced_errors":[{"created":"@1587426512.443136986","description":"C-ares status is not ARES_SUCCESS: Misformatted domain name","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244,"referenced_errors":[{"created":"@1587426512.443126564","description":"C-ares status is not ARES_SUCCESS: Misformatted domain name","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244}]}]}]}]}"
>

  was:
This affects both Flink and Spark portable runners. It does not appear to cause pipelines to fail.

Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/data_plane.py", line 545, in <lambda>
    target=lambda: self._read_inputs(elements_iterator),
  File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/data_plane.py", line 528, in _read_inputs
    for elements in elements_iterator:
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 388, in __next__
    return self._next()
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 365, in _next
    raise self
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "DNS resolution failed"
    debug_error_string = "{"created":"@1587426512.443144965","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1587426512.443142363","description":"Resolver transient failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":263,"referenced_errors":[{"created":"@1587426512.443141313","description":"DNS resolution failed","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":357,"grpc_status":14,"referenced_errors":[{"created":"@1587426512.443136986","description":"C-ares status is not ARES_SUCCESS: Misformatted domain name","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244,"referenced_errors":[{"created":"@1587426512.443126564","description":"C-ares status is not ARES_SUCCESS: Misformatted domain name","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244}]}]}]}]}"
>

> C-ares status is not ARES_SUCCESS: Misformatted domain name
> -----------------------------------------------------------
>
>                 Key: BEAM-9852
>                 URL: https://issues.apache.org/jira/browse/BEAM-9852
>             Project: Beam
>          Issue Type: Sub-task
>          Components: runner-flink, runner-spark
>            Reporter: Kyle Weaver
>            Priority: P2
>              Labels: portability-flink, portability-spark
>
> This affects all portable runners (Flink, Spark, Dataflow Python streaming).
> It does not appear to cause pipelines
[jira] [Updated] (BEAM-10207) Beam ZetaSQL supports pure SQL user-defined scalar functions
[ https://issues.apache.org/jira/browse/BEAM-10207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver updated BEAM-10207:
-------------------------------
    Description:
One naive example is
{code:sql}
CREATE FUNCTION fun (a INT64, b INT64) AS (a + b);
{code}

> Beam ZetaSQL supports pure SQL user-defined scalar functions
> ------------------------------------------------------------
>
>                 Key: BEAM-10207
>                 URL: https://issues.apache.org/jira/browse/BEAM-10207
>             Project: Beam
>          Issue Type: Task
>          Components: dsl-sql-zetasql
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P2
>
> One naive example is
> {code:sql}
> CREATE FUNCTION fun (a INT64, b INT64) AS (a + b);
> {code}


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
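The behavior the ticket asks for (define a scalar function in pure SQL, then call it from a query) can be illustrated outside ZetaSQL. As a rough analogy only, not Beam's or ZetaSQL's implementation, SQLite's Python binding registers a scalar function and evaluates it from SQL:

```python
import sqlite3

# Analogy only: register a scalar function "fun" and call it from SQL,
# mirroring the intent of CREATE FUNCTION fun (a INT64, b INT64) AS (a + b).
conn = sqlite3.connect(":memory:")
conn.create_function("fun", 2, lambda a, b: a + b)
result = conn.execute("SELECT fun(1, 2)").fetchone()[0]
print(result)  # 3
```

In ZetaSQL the function body is itself a SQL expression, so no host-language callback is needed; the sketch above only shows the register-then-query flow.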
[jira] [Created] (BEAM-10207) Beam ZetaSQL supports pure SQL user-defined scalar functions
Kyle Weaver created BEAM-10207:
-----------------------------------

             Summary: Beam ZetaSQL supports pure SQL user-defined scalar functions
                 Key: BEAM-10207
                 URL: https://issues.apache.org/jira/browse/BEAM-10207
             Project: Beam
          Issue Type: Task
          Components: dsl-sql-zetasql
            Reporter: Kyle Weaver
            Assignee: Kyle Weaver


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (BEAM-10194) Could not get unknown property 'println'
Kyle Weaver created BEAM-10194:
-----------------------------------

             Summary: Could not get unknown property 'println'
                 Key: BEAM-10194
                 URL: https://issues.apache.org/jira/browse/BEAM-10194
             Project: Beam
          Issue Type: Bug
          Components: katas
            Reporter: Kyle Weaver
            Assignee: Kyle Weaver

When a test fails, in addition to printing the expected error, the console logs an additional (unexpected) error:

Could not get unknown property 'println' for task ':Core_Transforms-Map-MapElements:test' of type org.gradle.api.tasks.testing.Test.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (BEAM-10188) Automate Github release
Kyle Weaver created BEAM-10188:
-----------------------------------

             Summary: Automate Github release
                 Key: BEAM-10188
                 URL: https://issues.apache.org/jira/browse/BEAM-10188
             Project: Beam
          Issue Type: Improvement
          Components: build-system
            Reporter: Kyle Weaver

Currently, we push the tag to Github and fill in the release notes in separate steps. For feeds consuming these updates, it would be better to do both in the same step using the Github API.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
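Doing both in one step maps onto GitHub's "create a release" REST endpoint (`POST /repos/{owner}/{repo}/releases`), which creates the release for a tag and attaches the notes in a single call. A minimal sketch; the payload fields come from the public GitHub API, while the token and notes text are placeholders:

```python
import json
import urllib.request

def build_release_request(owner, repo, tag, notes, token):
    """Builds (but does not send) a GitHub 'create release' HTTP request.

    One POST both publishes the tag as a release and sets its notes,
    replacing the two manual steps described in the ticket.
    """
    payload = {"tag_name": tag, "name": tag, "body": notes, "draft": False}
    return urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/releases",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"token {token}",          # placeholder token
            "Accept": "application/vnd.github.v3+json",
        },
        method="POST",
    )

req = build_release_request("apache", "beam", "v2.22.0",
                            "Release notes here", "TOKEN")
# urllib.request.urlopen(req)  # sending it would publish tag + notes together
```

The send is left commented out since it requires a real token and write access to the repository.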
[jira] [Created] (BEAM-10187) build_release_candidate.sh does not push tag to Github
Kyle Weaver created BEAM-10187:
-----------------------------------

             Summary: build_release_candidate.sh does not push tag to Github
                 Key: BEAM-10187
                 URL: https://issues.apache.org/jira/browse/BEAM-10187
             Project: Beam
          Issue Type: Bug
          Components: build-system
            Reporter: Kyle Weaver
            Assignee: Kyle Weaver


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-10096) Spark runners are numbered 1,2,2
[ https://issues.apache.org/jira/browse/BEAM-10096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-10096.
--------------------------------
    Fix Version/s: Not applicable
       Resolution: Fixed

> Spark runners are numbered 1,2,2
> --------------------------------
>
>                 Key: BEAM-10096
>                 URL: https://issues.apache.org/jira/browse/BEAM-10096
>             Project: Beam
>          Issue Type: Bug
>          Components: website
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P4
>             Fix For: Not applicable
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://beam.apache.org/documentation/runners/spark/
> 1. A legacy Runner...
> 2. An Structured Streaming Spark Runner...
> 2. A portable Runner...


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-10168) Add Github "publish release" to release guide
[ https://issues.apache.org/jira/browse/BEAM-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-10168.
--------------------------------
    Fix Version/s: Not applicable
       Resolution: Fixed

> Add Github "publish release" to release guide
> ---------------------------------------------
>
>                 Key: BEAM-10168
>                 URL: https://issues.apache.org/jira/browse/BEAM-10168
>             Project: Beam
>          Issue Type: Improvement
>          Components: website
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P2
>             Fix For: Not applicable
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Github does not recognize tags as full-fledged releases unless they are
> published through the Github API/UI. We need to add this step to the release
> guide.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-10167) Fix 2.21.0 downloads link in blog post
[ https://issues.apache.org/jira/browse/BEAM-10167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-10167.
--------------------------------
    Fix Version/s: Not applicable
       Resolution: Fixed

> Fix 2.21.0 downloads link in blog post
> --------------------------------------
>
>                 Key: BEAM-10167
>                 URL: https://issues.apache.org/jira/browse/BEAM-10167
>             Project: Beam
>          Issue Type: Improvement
>          Components: website
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P4
>             Fix For: Not applicable
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Right now it goes to
> [https://beam.apache.org/get-started/downloads/#-], which is a valid
> URL, but not exactly the one we want.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-10154) Stray version number in SQL overview
[ https://issues.apache.org/jira/browse/BEAM-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-10154.
--------------------------------
    Fix Version/s: Not applicable
       Resolution: Fixed

> Stray version number in SQL overview
> ------------------------------------
>
>                 Key: BEAM-10154
>                 URL: https://issues.apache.org/jira/browse/BEAM-10154
>             Project: Beam
>          Issue Type: Bug
>          Components: website
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P4
>             Fix For: Not applicable
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Not clear what it means. Probably should just delete it.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (BEAM-10177) Remove "Review Release Notes in JIRA"
[ https://issues.apache.org/jira/browse/BEAM-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver updated BEAM-10177:
-------------------------------
    Fix Version/s:     (was: 2.22.0)
                       Not applicable

> Remove "Review Release Notes in JIRA"
> -------------------------------------
>
>                 Key: BEAM-10177
>                 URL: https://issues.apache.org/jira/browse/BEAM-10177
>             Project: Beam
>          Issue Type: Improvement
>          Components: website
>            Reporter: Kyle Weaver
>            Assignee: Brian Hulette
>            Priority: P3
>             Fix For: Not applicable
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Release guide: "You should verify that the issues listed automatically by
> JIRA are appropriate to appear in the Release Notes."
> I think it's safe to remove that now since a) the volume of jiras
> (>150/release) makes that infeasible and b) we have CHANGES.md which should
> replace the autogenerated release notes.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-10177) Remove "Review Release Notes in JIRA"
[ https://issues.apache.org/jira/browse/BEAM-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-10177.
--------------------------------
    Fix Version/s: 2.22.0
       Resolution: Fixed

> Remove "Review Release Notes in JIRA"
> -------------------------------------
>
>                 Key: BEAM-10177
>                 URL: https://issues.apache.org/jira/browse/BEAM-10177
>             Project: Beam
>          Issue Type: Improvement
>          Components: website
>            Reporter: Kyle Weaver
>            Assignee: Brian Hulette
>            Priority: P3
>             Fix For: 2.22.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Release guide: "You should verify that the issues listed automatically by
> JIRA are appropriate to appear in the Release Notes."
> I think it's safe to remove that now since a) the volume of jiras
> (>150/release) makes that infeasible and b) we have CHANGES.md which should
> replace the autogenerated release notes.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Created] (BEAM-10177) Remove "Review Release Notes in JIRA"
Kyle Weaver created BEAM-10177:
-----------------------------------

             Summary: Remove "Review Release Notes in JIRA"
                 Key: BEAM-10177
                 URL: https://issues.apache.org/jira/browse/BEAM-10177
             Project: Beam
          Issue Type: Improvement
          Components: website
            Reporter: Kyle Weaver

Release guide: "You should verify that the issues listed automatically by JIRA are appropriate to appear in the Release Notes."

I think it's safe to remove that now since a) the volume of jiras (>150/release) makes that infeasible and b) we have CHANGES.md which should replace the autogenerated release notes.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-8547) Portable Wordcount fails on standalone Flink cluster
[ https://issues.apache.org/jira/browse/BEAM-8547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-8547.
-------------------------------
    Fix Version/s: Not applicable
       Resolution: Won't Fix

Closing as "won't fix" because in general we can't support local filesystem writes in a distributed environment. BEAM-5440 should address the case where we want to simulate a distributed environment on a single machine.

> Portable Wordcount fails on standalone Flink cluster
> ----------------------------------------------------
>
>                 Key: BEAM-8547
>                 URL: https://issues.apache.org/jira/browse/BEAM-8547
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-flink, sdk-py-harness
>            Reporter: Valentyn Tymofieiev
>            Assignee: Kyle Weaver
>            Priority: P2
>              Labels: stale-P2
>             Fix For: Not applicable
>
> Repro:
> # git checkout origin/release-2.16.0
> # ./flink-1.8.2/bin/start-cluster.sh
> # gradlew :runners:flink:1.8:job-server:runShadow -PflinkMasterUrl=localhost:8081
> # python -m apache_beam.examples.wordcount --input=/etc/profile --output=/tmp/py-wordcount-direct --runner=PortableRunner --experiments=worker_threads=100 --parallelism=1 --shutdown_sources_on_final_watermark --sdk_worker_parallelism=1 --environment_cache_millis=6 --job_endpoint=localhost:8099
>
> This causes the runner to crash with:
> {noformat}
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 158, in _execute
>     response = task()
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 191, in <lambda>
>     self._execute(lambda: worker.do_instruction(work), work)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 343, in do_instruction
>     request.instruction_id)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 369, in process_bundle
>     bundle_processor.process_bundle(instruction_id))
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 663, in process_bundle
>     data.ptransform_id].process_encoded(data.data)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 143, in process_encoded
>     self.output(decoded_value)
>   File "apache_beam/runners/worker/operations.py", line 255, in apache_beam.runners.worker.operations.Operation.output
>   File "apache_beam/runners/worker/operations.py", line 256, in apache_beam.runners.worker.operations.Operation.output
>   File "apache_beam/runners/worker/operations.py", line 143, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
>   File "apache_beam/runners/worker/operations.py", line 593, in apache_beam.runners.worker.operations.DoOperation.process
>   File "apache_beam/runners/worker/operations.py", line 594, in apache_beam.runners.worker.operations.DoOperation.process
>   File "apache_beam/runners/common.py", line 776, in apache_beam.runners.common.DoFnRunner.receive
>   File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.DoFnRunner.process
>   File "apache_beam/runners/common.py", line 849, in apache_beam.runners.common.DoFnRunner._reraise_augmented
>   File "/usr/local/lib/python3.7/site-packages/future/utils/__init__.py", line 421, in raise_with_traceback
>     raise exc.with_traceback(traceback)
>   File "apache_beam/runners/common.py", line 780, in apache_beam.runners.common.DoFnRunner.process
>   File "apache_beam/runners/common.py", line 587, in apache_beam.runners.common.PerWindowInvoker.invoke_process
>   File "apache_beam/runners/common.py", line 660, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/iobase.py", line 1042, in process
>     self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/options/value_provider.py", line 137, in _f
>     return fnc(self, *args, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/filebasedsink.py", line 186, in open_writer
>     return FileBasedSinkWriter(self, writer_path)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/filebasedsink.py", line 390, in __init__
>     self.temp_handle = self.sink.open(temp_shard_path)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/textio.py", line 391, in open
>     file_handle = super(_TextSink, self).open(temp_path)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/options/value_provider.py", line 137, in _f
>     return fnc(self,
[jira] [Work started] (BEAM-8547) Portable Wordcount fails on standalone Flink cluster
[ https://issues.apache.org/jira/browse/BEAM-8547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on BEAM-8547 started by Kyle Weaver.
-----------------------------------------

> Portable Wordcount fails on standalone Flink cluster
> ----------------------------------------------------
>
>                 Key: BEAM-8547
>                 URL: https://issues.apache.org/jira/browse/BEAM-8547
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-flink, sdk-py-harness
>            Reporter: Valentyn Tymofieiev
>            Assignee: Kyle Weaver
>            Priority: P2
>              Labels: stale-P2
>
> Repro:
> # git checkout origin/release-2.16.0
> # ./flink-1.8.2/bin/start-cluster.sh
> # gradlew :runners:flink:1.8:job-server:runShadow -PflinkMasterUrl=localhost:8081
> # python -m apache_beam.examples.wordcount --input=/etc/profile --output=/tmp/py-wordcount-direct --runner=PortableRunner --experiments=worker_threads=100 --parallelism=1 --shutdown_sources_on_final_watermark --sdk_worker_parallelism=1 --environment_cache_millis=6 --job_endpoint=localhost:8099
>
> This causes the runner to crash with:
> {noformat}
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 158, in _execute
>     response = task()
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 191, in <lambda>
>     self._execute(lambda: worker.do_instruction(work), work)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 343, in do_instruction
>     request.instruction_id)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/sdk_worker.py", line 369, in process_bundle
>     bundle_processor.process_bundle(instruction_id))
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 663, in process_bundle
>     data.ptransform_id].process_encoded(data.data)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/bundle_processor.py", line 143, in process_encoded
>     self.output(decoded_value)
>   File "apache_beam/runners/worker/operations.py", line 255, in apache_beam.runners.worker.operations.Operation.output
>   File "apache_beam/runners/worker/operations.py", line 256, in apache_beam.runners.worker.operations.Operation.output
>   File "apache_beam/runners/worker/operations.py", line 143, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
>   File "apache_beam/runners/worker/operations.py", line 593, in apache_beam.runners.worker.operations.DoOperation.process
>   File "apache_beam/runners/worker/operations.py", line 594, in apache_beam.runners.worker.operations.DoOperation.process
>   File "apache_beam/runners/common.py", line 776, in apache_beam.runners.common.DoFnRunner.receive
>   File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.DoFnRunner.process
>   File "apache_beam/runners/common.py", line 849, in apache_beam.runners.common.DoFnRunner._reraise_augmented
>   File "/usr/local/lib/python3.7/site-packages/future/utils/__init__.py", line 421, in raise_with_traceback
>     raise exc.with_traceback(traceback)
>   File "apache_beam/runners/common.py", line 780, in apache_beam.runners.common.DoFnRunner.process
>   File "apache_beam/runners/common.py", line 587, in apache_beam.runners.common.PerWindowInvoker.invoke_process
>   File "apache_beam/runners/common.py", line 660, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/iobase.py", line 1042, in process
>     self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/options/value_provider.py", line 137, in _f
>     return fnc(self, *args, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/filebasedsink.py", line 186, in open_writer
>     return FileBasedSinkWriter(self, writer_path)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/filebasedsink.py", line 390, in __init__
>     self.temp_handle = self.sink.open(temp_shard_path)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/textio.py", line 391, in open
>     file_handle = super(_TextSink, self).open(temp_path)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/options/value_provider.py", line 137, in _f
>     return fnc(self, *args, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/filebasedsink.py", line 129, in open
>     return FileSystems.create(temp_path, self.mime_type, self.compression_type)
>   File "/usr/local/lib/python3.7/site-packages/apache_beam/io/filesystems.py", line 203, in
[jira] [Assigned] (BEAM-6257) Can we deprecate the side input paths through PAssert?
[ https://issues.apache.org/jira/browse/BEAM-6257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver reassigned BEAM-6257:
---------------------------------
    Assignee:     (was: Kyle Weaver)

> Can we deprecate the side input paths through PAssert?
> ------------------------------------------------------
>
>                 Key: BEAM-6257
>                 URL: https://issues.apache.org/jira/browse/BEAM-6257
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-java-core
>            Reporter: Kenneth Knowles
>            Priority: P2
>              Labels: stale-assigned, starter
>
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> PAssert has two distinct paths - one uses GBK with a single-firing trigger,
> and one uses side inputs. Side inputs are usually a later addition to a
> runner, while GBK is one of the first primitives (with a single firing it is
> even simpler). Filing this against myself to figure out why the side input
> version is not deprecated, and if it can be deprecated.
> Marking this as a "starter" task because finding and eliminating the side input
> version of PAssert should be fairly easy. You might need help, but you can ask on
> dev@.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Assigned] (BEAM-9541) Single source of truth for supported Flink versions
[ https://issues.apache.org/jira/browse/BEAM-9541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver reassigned BEAM-9541:
---------------------------------
    Assignee: Kyle Weaver

> Single source of truth for supported Flink versions
> ---------------------------------------------------
>
>                 Key: BEAM-9541
>                 URL: https://issues.apache.org/jira/browse/BEAM-9541
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-flink
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P2
>
> Currently, there is a large number of hard-coded references to the supported
> Flink versions (either the newest version, or a list of all supported
> versions). These should be condensed to a single source of truth to make
> upgrades easier and more robust.
> Previously, we used awk to extract the list of Flink subdirectories, but this is not
> reliable because old, unsupported Flink versions can leave build directories
> behind. It would be better to read versions from a handwritten file.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Updated] (BEAM-9541) Single source of truth for supported Flink versions
[ https://issues.apache.org/jira/browse/BEAM-9541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver updated BEAM-9541:
------------------------------
    Labels:     (was: stale-P2)

> Single source of truth for supported Flink versions
> ---------------------------------------------------
>
>                 Key: BEAM-9541
>                 URL: https://issues.apache.org/jira/browse/BEAM-9541
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-flink
>            Reporter: Kyle Weaver
>            Priority: P2
>
> Currently, there is a large number of hard-coded references to the supported
> Flink versions (either the newest version, or a list of all supported
> versions). These should be condensed to a single source of truth to make
> upgrades easier and more robust.
> Previously, we used awk to extract the list of Flink subdirectories, but this is not
> reliable because old, unsupported Flink versions can leave build directories
> behind. It would be better to read versions from a handwritten file.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
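Reading the supported versions from a handwritten, checked-in file, as BEAM-9541 proposes, could look like the following sketch. The file name `flink_versions.txt` and its one-version-per-line format are hypothetical, not Beam's actual layout:

```python
import os
import tempfile
from pathlib import Path

def read_flink_versions(path):
    """Parses a handwritten versions file: one Flink version per line,
    with '#' comments and blank lines ignored. By convention here the
    last entry is the newest supported version."""
    lines = Path(path).read_text().splitlines()
    return [ln.strip() for ln in lines
            if ln.strip() and not ln.lstrip().startswith("#")]

# Demo with a hypothetical flink_versions.txt written to a temp dir.
path = os.path.join(tempfile.mkdtemp(), "flink_versions.txt")
Path(path).write_text("# supported Flink versions\n1.8\n1.9\n1.10\n")

versions = read_flink_versions(path)
print(versions)      # ['1.8', '1.9', '1.10']
print(versions[-1])  # 1.10 (newest)
```

Build scripts and tests could then consume this one file instead of each hard-coding the list, which is the "single source of truth" the ticket asks for.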
[jira] [Closed] (BEAM-9087) Release guide doesn't need "run manually" section
[ https://issues.apache.org/jira/browse/BEAM-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver closed BEAM-9087.
-----------------------------
    Fix Version/s: Not applicable
       Resolution: Duplicate

> Release guide doesn't need "run manually" section
> -------------------------------------------------
>
>                 Key: BEAM-9087
>                 URL: https://issues.apache.org/jira/browse/BEAM-9087
>             Project: Beam
>          Issue Type: Improvement
>          Components: website
>            Reporter: Kyle Weaver
>            Priority: P2
>              Labels: stale-P2
>             Fix For: Not applicable
>
> The release guide contains a section on how to do steps manually. This
> section mostly just duplicates the contents of build_release_candidate.sh,
> meaning a) it doesn't add much utility and b) it constantly has to be kept
> up-to-date with build_release_candidate.sh. I propose removing the section
> entirely and keeping build_release_candidate.sh as the source of truth for
> what needs to happen.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-8141) Add an integration test suite for cross-language transforms for Spark runner
[ https://issues.apache.org/jira/browse/BEAM-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-8141.
-------------------------------
    Fix Version/s:     (was: 2.15.0)
                       2.20.0
       Resolution: Duplicate

> Add an integration test suite for cross-language transforms for Spark runner
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-8141
>                 URL: https://issues.apache.org/jira/browse/BEAM-8141
>             Project: Beam
>          Issue Type: Test
>          Components: testing
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P2
>              Labels: stale-assigned
>             Fix For: 2.20.0
>
> We should add an integration test suite that covers the following.
> (1) Currently available Java IO connectors that do not use UDFs work for
> Python SDK on Spark runner.
> (2) Currently available Python IO connectors that do not use UDFs work for
> Java SDK on Spark runner.
> (3) Currently available Java/Python pipelines work in a scalable manner for
> cross-language pipelines (for example, try 10GB, 100GB input for
> textio/avroio for Java and Python).


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-8739) Consistently use with Pipeline(...) syntax
[ https://issues.apache.org/jira/browse/BEAM-8739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-8739.
-------------------------------
    Fix Version/s: 2.20.0
       Resolution: Fixed

> Consistently use with Pipeline(...) syntax
> ------------------------------------------
>
>                 Key: BEAM-8739
>                 URL: https://issues.apache.org/jira/browse/BEAM-8739
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-py-core
>            Reporter: Robert Bradshaw
>            Assignee: Robert Bradshaw
>            Priority: P2
>              Labels: stale-assigned
>             Fix For: 2.20.0
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> I've run into a couple of tests that forgot to do p.run(). In addition, I'm
> seeing new tests written in this old style. We should consistently use the
> with syntax where possible for our examples and tests.


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
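The two styles the ticket contrasts can be sketched with a stand-in class (this is not the real `apache_beam.Pipeline`, just an illustration of why the `with` form prevents the forgotten `run()`):

```python
class Pipeline:
    """Stand-in mimicking the context-manager behavior of
    apache_beam.Pipeline: exiting the `with` block runs the pipeline."""
    def __init__(self):
        self.ran = False

    def run(self):
        self.ran = True
        return self

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        # run() happens automatically on block exit, so a test
        # cannot forget to call it.
        self.run()

# Old style: nothing executes unless someone remembers the explicit run().
p = Pipeline()
# ... build transforms ...
p.run()

# Preferred style: run() is implied when the block exits.
with Pipeline() as p2:
    pass  # ... build transforms ...

assert p.ran and p2.ran
```

In real Beam the `with` form additionally waits for the pipeline result, which is exactly the behavior missing from tests written in the old style.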
[jira] [Assigned] (BEAM-8253) (Go SDK) Add worker_region and worker_zone options
[ https://issues.apache.org/jira/browse/BEAM-8253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver reassigned BEAM-8253:
---------------------------------
    Assignee:     (was: Kyle Weaver)

> (Go SDK) Add worker_region and worker_zone options
> --------------------------------------------------
>
>                 Key: BEAM-8253
>                 URL: https://issues.apache.org/jira/browse/BEAM-8253
>             Project: Beam
>          Issue Type: Sub-task
>          Components: runner-dataflow, sdk-go
>            Reporter: Kyle Weaver
>            Priority: P2
>              Labels: stale-assigned
>


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (BEAM-7790) Make debugging subprocess workers easier
[ https://issues.apache.org/jira/browse/BEAM-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-7790.
-------------------------------
    Fix Version/s: Not applicable
       Resolution: Duplicate

> Make debugging subprocess workers easier
> ----------------------------------------
>
>                 Key: BEAM-7790
>                 URL: https://issues.apache.org/jira/browse/BEAM-7790
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-py-harness
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P3
>              Labels: stale-assigned
>             Fix For: Not applicable
>
> [ ] The output of the SDK workers is currently invisible due to the output
> and logging setup.
> [ ] The dockerized version of the Python SDK worker sets up an HTTP server to
> let the user view stack traces for all of the worker's threads [1]. It would
> be useful if this was available for other execution modes as well.
> [x] BEAM-7676 Make the above items more usable with multiple subprocesses by
> identifying them with worker ids.
>
> [1] https://github.com/apache/beam/blob/9f4ce1c6fc2fb195e218783a6e9ce6104ddb4d1e/sdks/python/apache_beam/runners/worker/sdk_worker_main.py#L46-L89


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
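The stack-trace page referenced above ([1]) boils down to walking every live thread's current frame. The core of such a dump can be sketched in plain Python; this is a simplified sketch, not the Beam worker's code:

```python
import sys
import threading
import traceback

def dump_all_thread_stacks():
    """Returns a text dump of every live thread's current stack,
    similar in spirit to the worker's thread-status HTTP page."""
    id_to_name = {t.ident: t.name for t in threading.enumerate()}
    out = []
    # sys._current_frames() maps thread id -> topmost frame for each thread.
    for thread_id, frame in sys._current_frames().items():
        name = id_to_name.get(thread_id, "unknown")
        out.append(f"--- Thread: {name} ({thread_id}) ---")
        out.extend(line.rstrip() for line in traceback.format_stack(frame))
    return "\n".join(out)

print(dump_all_thread_stacks())
```

Serving this string from a tiny HTTP handler (as the dockerized worker does) would make the same view available in the other execution modes the ticket mentions.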
[jira] [Resolved] (BEAM-8270) beam_fn_api: check runner type instead of hard-coded strings
[ https://issues.apache.org/jira/browse/BEAM-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kyle Weaver resolved BEAM-8270.
-------------------------------
    Fix Version/s: 2.17.0
       Resolution: Fixed

> beam_fn_api: check runner type instead of hard-coded strings
> ------------------------------------------------------------
>
>                 Key: BEAM-8270
>                 URL: https://issues.apache.org/jira/browse/BEAM-8270
>             Project: Beam
>          Issue Type: Improvement
>          Components: sdk-py-core
>            Reporter: Kyle Weaver
>            Assignee: Kyle Weaver
>            Priority: P3
>              Labels: stale-assigned
>             Fix For: 2.17.0
>
> We automatically add the beam_fn_api option, but right now this is done by
> checking a hard-coded list of strings. It would be better to instead check
> the inheritance of the actual runner instance itself, rather than the option
> string.
> https://github.com/apache/beam/blob/f0aa877b8703eed4143957b4cd212aa026238a6e/sdks/python/apache_beam/pipeline.py#L160


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Commented] (BEAM-8270) beam_fn_api: check runner type instead of hard-coded strings
[ https://issues.apache.org/jira/browse/BEAM-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123908#comment-17123908 ] Kyle Weaver commented on BEAM-8270: --- This was fixed in https://github.com/apache/beam/pull/9834. > beam_fn_api: check runner type instead of hard-coded strings > > > Key: BEAM-8270 > URL: https://issues.apache.org/jira/browse/BEAM-8270 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P3 > Labels: stale-assigned > > We automatically add the beam_fn_api option, but right now this is done by > checking a hard-coded list of strings. It would be better to instead check > the inheritance of the actual runner instance itself, rather than the option > string. > [https://github.com/apache/beam/blob/f0aa877b8703eed4143957b4cd212aa026238a6e/sdks/python/apache_beam/pipeline.py#L160] -- This message was sent by Atlassian Jira (v8.3.4#803005)
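The inheritance-based check proposed in BEAM-8270 can be sketched as follows. This is a minimal illustration, not Beam's actual class hierarchy; the class and function names below are stand-ins.

```python
# Decide whether to add the beam_fn_api option by checking the runner's
# class hierarchy instead of a hard-coded list of runner-name strings.
class PipelineRunner:
    """Stand-in for the base runner class."""

class PortableRunner(PipelineRunner):
    """Stand-in for the portable runner base class."""

class FlinkRunner(PortableRunner):
    """Stand-in for a concrete portable runner."""

def requires_fn_api(runner):
    # Any subclass of PortableRunner is covered automatically; there is
    # no string list like ['PortableRunner', 'FlinkRunner', ...] to keep
    # in sync as new runners are added.
    return isinstance(runner, PortableRunner)

assert requires_fn_api(FlinkRunner())
assert not requires_fn_api(PipelineRunner())
```

New runner classes then opt in simply by inheriting from the portable base class, which is the maintainability win the ticket describes.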
[jira] [Resolved] (BEAM-7933) Adding timeout to JobServer grpc calls
[ https://issues.apache.org/jira/browse/BEAM-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-7933. --- Fix Version/s: 2.17.0 Resolution: Fixed > Adding timeout to JobServer grpc calls > -- > > Key: BEAM-7933 > URL: https://issues.apache.org/jira/browse/BEAM-7933 > Project: Beam > Issue Type: Improvement > Components: sdk-py-core >Affects Versions: 2.14.0 >Reporter: Enrico Canzonieri >Assignee: Enrico Canzonieri >Priority: P3 > Labels: portability, stale-assigned > Fix For: 2.17.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > grpc calls to the JobServer from the Python SDK do not have timeouts. That > means that the call to pipeline.run() could hang forever if the JobServer is > not running (or failing to start). > E.g. > [https://github.com/apache/beam/blob/master/sdks/python/apache_beam/runners/portability/portable_runner.py#L307] > the call to Prepare() doesn't provide any timeout value and the same applies > to other JobServer requests. > As part of this ticket we could add a default timeout of 60 seconds for the > client. > Additionally, we could consider adding a --job-server-request-timeout to the > [PortableOptions|https://github.com/apache/beam/blob/master/sdks/python/apache_beam/options/pipeline_options.py#L805] > class to be used in the JobServer interactions inside portable_runner.py. -- This message was sent by Atlassian Jira (v8.3.4#803005)
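The fix described in BEAM-7933 can be sketched with a small wrapper that supplies a default timeout to every job-server RPC. This is an illustrative pattern, not Beam's actual implementation; `fake_prepare` and `with_default_timeout` are hypothetical names. With real gRPC, stub methods accept a `timeout` keyword argument, which is what the wrapper would forward.

```python
# Default suggested in the ticket: fail after 60s instead of hanging
# forever when the JobServer is down or failing to start.
DEFAULT_TIMEOUT_SECS = 60

def with_default_timeout(rpc, default=DEFAULT_TIMEOUT_SECS):
    # Wrap an RPC callable so callers cannot forget to pass a timeout;
    # an explicit timeout still takes precedence over the default.
    def call(*args, timeout=None, **kwargs):
        effective = default if timeout is None else timeout
        return rpc(*args, timeout=effective, **kwargs)
    return call

# Stand-in for a stub method such as JobServiceStub.Prepare:
def fake_prepare(request, timeout=None):
    return ('prepared', request, timeout)

prepare = with_default_timeout(fake_prepare)
assert prepare('req') == ('prepared', 'req', 60)
assert prepare('req', timeout=5) == ('prepared', 'req', 5)
```

The `--job-server-request-timeout` pipeline option mentioned in the ticket would feed the `default` argument of such a wrapper.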
[jira] [Resolved] (BEAM-8164) Correct document for building the python SDK harness container
[ https://issues.apache.org/jira/browse/BEAM-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-8164. --- Fix Version/s: Not applicable Resolution: Fixed > Correct document for building the python SDK harness container > -- > > Key: BEAM-8164 > URL: https://issues.apache.org/jira/browse/BEAM-8164 > Project: Beam > Issue Type: Bug > Components: website >Reporter: sunjincheng >Assignee: sunjincheng >Priority: P2 > Labels: stale-assigned > Fix For: Not applicable > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The runner document says we can use the command: > `./gradlew :sdks:python:container:docker` > to build the SDK harness container, see > [https://beam.apache.org/documentation/runners/flink/]. > However, the docker config was removed in the latest Python 3 Docker-related > commit [1], so the command fails with the following error message. > {code:java} > > Task :sdks:python:container:docker FAILED > FAILURE: Build failed with an exception. > * What went wrong: > Execution failed for task ':sdks:python:container:docker'. > > name is a required docker configuration item.{code} > I think we should either update the document to use the command `./gradlew > :sdks:python:container:py2:docker`, or add the config so that running > `:sdks:python:container:docker` builds the images for all Python versions. > > What do you think? > > [1] > [https://github.com/apache/beam/commit/47feeafb21023e2a60ae51737cc4000a2033719c#diff-1bc5883bcfcc9e883ab7df09e4dcddb0L63] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9069) portableWordCount(Flink|Spark)Runner* lack dependencies
[ https://issues.apache.org/jira/browse/BEAM-9069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-9069: -- Summary: portableWordCount(Flink|Spark)Runner* lack dependencies (was: portableWordCount(Flink|Spark)Runner* tasks not guaranteed to pass) > portableWordCount(Flink|Spark)Runner* lack dependencies > --- > > Key: BEAM-9069 > URL: https://issues.apache.org/jira/browse/BEAM-9069 > Project: Beam > Issue Type: Improvement > Components: runner-flink, runner-spark >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Labels: portability-flink, portability-spark, stale-assigned > > These tasks don't themselves depend on the shadow jar tasks > :runners:flink:1.9:job-server:shadowJar and > :runners:spark:job-server:shadowJar, but they need to in order to pass > reliably. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-9953) Beam ZetaSQL supports multiple statements in a query
[ https://issues.apache.org/jira/browse/BEAM-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121333#comment-17121333 ] Kyle Weaver commented on BEAM-9953: --- While it's true we only need to get table names from the last SELECT statement, parsing out the last SELECT statement is non-trivial. analyzeNextStatement depends on the extracted tables. We might be able to do this: 1. analyze next statement 2. if analyze succeeded, continue. 3. if analyze failed due to "table not found," extract tables and re-analyze with the extracted tables. But that seems hacky. > Beam ZetaSQL supports multiple statements in a query > > > Key: BEAM-9953 > URL: https://issues.apache.org/jira/browse/BEAM-9953 > Project: Beam > Issue Type: Task > Components: dsl-sql-zetasql >Reporter: Rui Wang >Assignee: Kyle Weaver >Priority: P2 > > One example of multiple statements query: > {code:java} > CREATE FUNCTION fun_a (param_1 INT64); SELECT fun_a(10); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
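The analyze-then-retry idea from the BEAM-9953 comment can be sketched as follows. This is a hedged Python sketch only; the real implementation would use ZetaSQL's Java Analyzer API, and every name here (`analyze_next_statement`, `TableNotFound`, the dict-based statement) is hypothetical.

```python
class TableNotFound(Exception):
    """Stand-in for the analyzer's 'table not found' failure."""

def analyze_next_statement(stmt, catalog):
    # Stand-in analyzer: succeeds only if every table the statement
    # references is already registered in the catalog.
    for table in stmt['tables']:
        if table not in catalog:
            raise TableNotFound(table)
    return ('resolved', stmt['sql'])

def analyze_with_retry(stmt, catalog, extract_tables):
    # Step 1: analyze next statement. Step 2: if it succeeded, done.
    try:
        return analyze_next_statement(stmt, catalog)
    except TableNotFound:
        # Step 3: on "table not found", extract the table names,
        # register them, and re-analyze.
        catalog.update(extract_tables(stmt['sql']))
        return analyze_next_statement(stmt, catalog)

catalog = set()
stmt = {'sql': 'SELECT * FROM t', 'tables': ['t']}
result = analyze_with_retry(stmt, catalog, lambda sql: {'t'})
assert result == ('resolved', 'SELECT * FROM t')
```

The "hacky" part the comment objects to is visible here: failure handling doubles as control flow, and distinguishing a genuinely missing table from one that merely needs extraction is left to the exception type.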
[jira] [Commented] (BEAM-10168) Add Github "publish release" to release guide
[ https://issues.apache.org/jira/browse/BEAM-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121277#comment-17121277 ] Kyle Weaver commented on BEAM-10168: Also, the 2.21.0 release is "unverified": "The email in this signature doesn’t match the committer email." > Add Github "publish release" to release guide > - > > Key: BEAM-10168 > URL: https://issues.apache.org/jira/browse/BEAM-10168 > Project: Beam > Issue Type: Improvement > Components: website >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > > Github does not recognize tags as full-fledged releases unless they are > published through the Github API/UI. We need to add this step to the release > guide. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10168) Add Github "publish release" to release guide
[ https://issues.apache.org/jira/browse/BEAM-10168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121276#comment-17121276 ] Kyle Weaver commented on BEAM-10168: For 2.21.0, I copy-pasted the blog post into the release notes, and as a result the line wrappings were a bit weird, since Github preserved the line breaks. > Add Github "publish release" to release guide > - > > Key: BEAM-10168 > URL: https://issues.apache.org/jira/browse/BEAM-10168 > Project: Beam > Issue Type: Improvement > Components: website >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > > Github does not recognize tags as full-fledged releases unless they are > published through the Github API/UI. We need to add this step to the release > guide. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10168) Add Github "publish release" to release guide
Kyle Weaver created BEAM-10168: -- Summary: Add Github "publish release" to release guide Key: BEAM-10168 URL: https://issues.apache.org/jira/browse/BEAM-10168 Project: Beam Issue Type: Improvement Components: website Reporter: Kyle Weaver Assignee: Kyle Weaver Github does not recognize tags as full-fledged releases unless they are published through the Github API/UI. We need to add this step to the release guide. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10167) Fix 2.21.0 downloads link in blog post
Kyle Weaver created BEAM-10167: -- Summary: Fix 2.21.0 downloads link in blog post Key: BEAM-10167 URL: https://issues.apache.org/jira/browse/BEAM-10167 Project: Beam Issue Type: Improvement Components: website Reporter: Kyle Weaver Assignee: Kyle Weaver Right now it goes to [https://beam.apache.org/get-started/downloads/#-], which is a valid URL, but not exactly the one we want. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Reopened] (BEAM-8278) Get output from PROCESS environment
[ https://issues.apache.org/jira/browse/BEAM-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reopened BEAM-8278: --- Re-opening because I believe the logs should be printed by default. > Get output from PROCESS environment > --- > > Key: BEAM-8278 > URL: https://issues.apache.org/jira/browse/BEAM-8278 > Project: Beam > Issue Type: Improvement > Components: java-fn-execution >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Fix For: Not applicable > > > When a worker process fails to start up, we get the exit code, but no other > information as to why it failed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10156) Fill in SQL roadmap, or remove it
Kyle Weaver created BEAM-10156: -- Summary: Fill in SQL roadmap, or remove it Key: BEAM-10156 URL: https://issues.apache.org/jira/browse/BEAM-10156 Project: Beam Issue Type: Improvement Components: website Reporter: Kyle Weaver Right now it's only a couple links, which is pretty unhelpful. [https://beam.apache.org/roadmap/sql/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10155) Remove portability from the roadmap and rewrite as documentation
Kyle Weaver created BEAM-10155: -- Summary: Remove portability from the roadmap and rewrite as documentation Key: BEAM-10155 URL: https://issues.apache.org/jira/browse/BEAM-10155 Project: Beam Issue Type: Improvement Components: website Reporter: Kyle Weaver At this point, the portability API and the runners (especially the Flink runner) that support it are relatively stable. As such, it should transition from the roadmap to regular documentation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10154) Stray version number in SQL overview
[ https://issues.apache.org/jira/browse/BEAM-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10154: --- Status: Open (was: Triage Needed) > Stray version number in SQL overview > > > Key: BEAM-10154 > URL: https://issues.apache.org/jira/browse/BEAM-10154 > Project: Beam > Issue Type: Bug > Components: website >Reporter: Kyle Weaver >Priority: P4 > Time Spent: 10m > Remaining Estimate: 0h > > Not clear what it means. Probably should just delete it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (BEAM-10154) Stray version number in SQL overview
[ https://issues.apache.org/jira/browse/BEAM-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reassigned BEAM-10154: -- Assignee: Kyle Weaver > Stray version number in SQL overview > > > Key: BEAM-10154 > URL: https://issues.apache.org/jira/browse/BEAM-10154 > Project: Beam > Issue Type: Bug > Components: website >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P4 > Time Spent: 10m > Remaining Estimate: 0h > > Not clear what it means. Probably should just delete it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10154) Stray version number in SQL overview
Kyle Weaver created BEAM-10154: -- Summary: Stray version number in SQL overview Key: BEAM-10154 URL: https://issues.apache.org/jira/browse/BEAM-10154 Project: Beam Issue Type: Bug Components: website Reporter: Kyle Weaver Not clear what it means. Probably should just delete it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (BEAM-9953) Beam ZetaSQL supports multiple statements in a query
[ https://issues.apache.org/jira/browse/BEAM-9953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reassigned BEAM-9953: - Assignee: Kyle Weaver > Beam ZetaSQL supports multiple statements in a query > > > Key: BEAM-9953 > URL: https://issues.apache.org/jira/browse/BEAM-9953 > Project: Beam > Issue Type: Task > Components: dsl-sql-zetasql >Reporter: Rui Wang >Assignee: Kyle Weaver >Priority: P2 > > One example of multiple statements query: > {code:java} > CREATE FUNCTION fun_a (param_1 INT64); SELECT fun_a(10); > {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10152) Add a diagram to the portability page
Kyle Weaver created BEAM-10152: -- Summary: Add a diagram to the portability page Key: BEAM-10152 URL: https://issues.apache.org/jira/browse/BEAM-10152 Project: Beam Issue Type: New Feature Components: website Reporter: Kyle Weaver A diagram would help a lot to explain the portability architecture. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10151) Document how to run a Java (xlang) pipeline on the Spark portable runner
Kyle Weaver created BEAM-10151: -- Summary: Document how to run a Java (xlang) pipeline on the Spark portable runner Key: BEAM-10151 URL: https://issues.apache.org/jira/browse/BEAM-10151 Project: Beam Issue Type: New Feature Components: runner-spark, website Reporter: Kyle Weaver While users will probably still prefer the classic Spark runner for most Java pipelines, the Spark portable runner enables cross-language transforms. We can start by providing instructions for plain Java, then add cross-language (Java -> Python). -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-9852) C-ares status is not ARES_SUCCESS: Misformatted domain name
[ https://issues.apache.org/jira/browse/BEAM-9852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119077#comment-17119077 ] Kyle Weaver commented on BEAM-9852: --- The error message was improved in GRPC, so hopefully we will get more debugging information in the future. https://github.com/grpc/grpc/pull/22865 In the mean time, it's difficult to ascertain the root cause. > C-ares status is not ARES_SUCCESS: Misformatted domain name > --- > > Key: BEAM-9852 > URL: https://issues.apache.org/jira/browse/BEAM-9852 > Project: Beam > Issue Type: Sub-task > Components: runner-flink, runner-spark >Reporter: Kyle Weaver >Priority: P2 > Labels: portability-flink, portability-spark > > This affects both Flink and Spark portable runners. It does not appear to > cause pipelines to fail. > Exception in thread read_grpc_client_inputs: > Traceback (most recent call last): > File "/usr/local/lib/python3.7/threading.py", line 926, in _bootstrap_inner > self.run() > File "/usr/local/lib/python3.7/threading.py", line 870, in run > self._target(*self._args, **self._kwargs) > File > "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/data_plane.py", > line 545, in > target=lambda: self._read_inputs(elements_iterator), > File > "/usr/local/lib/python3.7/site-packages/apache_beam/runners/worker/data_plane.py", > line 528, in _read_inputs > for elements in elements_iterator: > File "/usr/local/lib/python3.7/site-packages/grpc/channel.py", line 388, in > __next_ > return self._next() > File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 365, in > _next > raise self > grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: > status = StatusCode.UNAVAILABLE > details = "DNS resolution failed" > debug_error_string = > "{"created":"@1587426512.443144965","description":"Failed to pick > 
subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3876,"referenced_errors":[{"created":"@1587426512.443142363","description":"Resolver > transient > failure","file":"src/core/ext/filters/client_channel/resolving_lb_policy.cc","file_line":263,"referenced_errors":[{"created":"@1587426512.443141313","description":"DNS > resolution > failed","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/dns_resolver_ares.cc","file_line":357,"grpc_status":14,"referenced_errors":[{"created":"@1587426512.443136986","description":"C-ares > status is not ARES_SUCCESS: Misformatted domain > name","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244,"referenced_errors":[ > {"created":"@1587426512.443126564","description":"C-ares status is not > ARES_SUCCESS: Misformatted domain > name","file":"src/core/ext/filters/client_channel/resolver/dns/c_ares/grpc_ares_wrapper.cc","file_line":244} > ]}]}]}]}" > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-9082) "Socket closed" Spurious GRPC errors in Flink/Spark runner log output
[ https://issues.apache.org/jira/browse/BEAM-9082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17119056#comment-17119056 ] Kyle Weaver commented on BEAM-9082: --- Thanks Daniel. FYI I filed a separate issue for 1. (BEAM-9852). > "Socket closed" Spurious GRPC errors in Flink/Spark runner log output > - > > Key: BEAM-9082 > URL: https://issues.apache.org/jira/browse/BEAM-9082 > Project: Beam > Issue Type: Sub-task > Components: runner-flink, runner-spark >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Labels: portability-flink, portability-spark > > We often see "Socket closed" errors on job shutdown, even though the pipeline > has finished successfully. They are misleading and especially annoying at > scale. > ERROR:root:Failed to read inputs in the data plane. > ... > grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that > terminated with: > status = StatusCode.UNAVAILABLE > details = "Socket closed" > debug_error_string = > "{"created":"@1578597616.309419460","description":"Error received from peer > ipv6:[::1]:37211","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Socket > closed","grpc_status":14}" -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10146) Clean Docker images after tests
Kyle Weaver created BEAM-10146: -- Summary: Clean Docker images after tests Key: BEAM-10146 URL: https://issues.apache.org/jira/browse/BEAM-10146 Project: Beam Issue Type: Improvement Components: testing Reporter: Kyle Weaver We build Docker images for many tests, but as far as I know we never clean them up. On Jenkins, we prune Docker images every day (https://github.com/apache/beam/blob/b0844c9326841f8ff30950b526015b23e6c3af9b/.test-infra/jenkins/job_Inventory.groovy#L69). But this is an issue for developers' workstations. Over time this can result in O(100GB) disk usage. -- This message was sent by Atlassian Jira (v8.3.4#803005)
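A cleanup along the lines of BEAM-10146 could be sketched as below. This is an assumption-laden sketch for a developer workstation, not Beam tooling: the helper only builds the `docker image prune` command (mirroring the daily Jenkins prune), and the 24-hour retention is an illustrative choice.

```python
import subprocess

def prune_command(until_hours=24):
    # Remove dangling images older than the given age, matching the
    # spirit of the Jenkins inventory job's daily prune.
    return ['docker', 'image', 'prune', '-f',
            '--filter', 'until=%dh' % until_hours]

def prune(until_hours=24, run=False):
    # Dry-run by default: return the command without executing it.
    cmd = prune_command(until_hours)
    if run:
        subprocess.check_call(cmd)
    return cmd

assert prune() == ['docker', 'image', 'prune', '-f',
                   '--filter', 'until=24h']
```

Wiring something like this into the test tasks themselves (rather than a cron-style prune) would also keep individual test runs from accumulating images in the first place.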
[jira] [Resolved] (BEAM-10106) Script the deployment of artifacts to pypi
[ https://issues.apache.org/jira/browse/BEAM-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-10106. Fix Version/s: 2.22.0 Resolution: Fixed > Script the deployment of artifacts to pypi > -- > > Key: BEAM-10106 > URL: https://issues.apache.org/jira/browse/BEAM-10106 > Project: Beam > Issue Type: Improvement > Components: build-system >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Fix For: 2.22.0 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Right now there's only manual instructions, which are tedious and > error-prone. > https://beam.apache.org/contribute/release-guide/#8-finalize-the-release -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10132) Remove reference to apachebeam/*
Kyle Weaver created BEAM-10132: -- Summary: Remove reference to apachebeam/* Key: BEAM-10132 URL: https://issues.apache.org/jira/browse/BEAM-10132 Project: Beam Issue Type: Bug Components: website Reporter: Kyle Weaver Assignee: Kyle Weaver Flink runner page includes outdated reference to the old Docker hub repo (apachebeam/flink1.9_job_server:latest) https://beam.apache.org/documentation/runners/flink/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BEAM-9971) beam_PostCommit_Java_PVR_Spark_Batch flakes (no such file)
[ https://issues.apache.org/jira/browse/BEAM-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-9971. --- Resolution: Fixed > beam_PostCommit_Java_PVR_Spark_Batch flakes (no such file) > -- > > Key: BEAM-9971 > URL: https://issues.apache.org/jira/browse/BEAM-9971 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Labels: portability-spark > Fix For: 2.22.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > This happens sporadically. One time the issue affected 14 tests; another time > it affected 112 tests. > It looks like the ClassLoader is sometimes contaminated with jars from > /tmp/spark-*, which have already been deleted. > 20/05/21 13:54:27 ERROR org.apache.beam.runners.jobsubmission.JobInvocation: > Error during job invocation > pipelinetest0testidentitytransform-kcweaver-0521205426-f4de06c4_51aced77-c171-4842-be1f-6c79226872e5. > java.util.ServiceConfigurationError: > org.apache.beam.runners.core.construction.NativeTransforms$IsNativeTransform: > Error reading configuration file > at java.util.ServiceLoader.fail(ServiceLoader.java:232) > at java.util.ServiceLoader.parse(ServiceLoader.java:309) > at java.util.ServiceLoader.access$200(ServiceLoader.java:185) > at > java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357) > at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393) > at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474) > at > org.apache.beam.runners.core.construction.NativeTransforms.isNative(NativeTransforms.java:50) > at > org.apache.beam.runners.core.construction.graph.QueryablePipeline.isPrimitiveTransform(QueryablePipeline.java:189) > at > org.apache.beam.runners.core.construction.graph.QueryablePipeline.getPrimitiveTransformIds(QueryablePipeline.java:137) > at > org.apache.beam.runners.core.construction.graph.QueryablePipeline.forPrimitivesIn(QueryablePipeline.java:90) > at > 
org.apache.beam.runners.core.construction.graph.GreedyPipelineFuser.(GreedyPipelineFuser.java:67) > at > org.apache.beam.runners.core.construction.graph.GreedyPipelineFuser.fuse(GreedyPipelineFuser.java:90) > at > org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:94) > at > org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83) > at > org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) > at > org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57) > at > org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.io.FileNotFoundException: > /tmp/spark-5e8a8a9a-22d6-48d5-b398-1a4f5582d954/userFiles-ec74cac1-21b5-4127-b764-540636b733d0/beam-runners-core-construction-java-2.22.0-SNAPSHOT-tests.jar > (No such file or directory) > at java.util.zip.ZipFile.open(Native Method) > at java.util.zip.ZipFile.(ZipFile.java:230) > at java.util.zip.ZipFile.(ZipFile.java:155) > at java.util.jar.JarFile.(JarFile.java:167) > at java.util.jar.JarFile.(JarFile.java:104) > at sun.net.www.protocol.jar.URLJarFile.(URLJarFile.java:93) > at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69) > at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:84) > at > sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122) > at > sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:152) > at java.net.URL.openStream(URL.java:1045) > at 
java.util.ServiceLoader.parse(ServiceLoader.java:304) > ... 18 more -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10015) output timestamp not properly propagated through the Dataflow runner
[ https://issues.apache.org/jira/browse/BEAM-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10015: --- Fix Version/s: (was: 2.21.0) 2.22.0 > output timestamp not properly propagated through the Dataflow runner > > > Key: BEAM-10015 > URL: https://issues.apache.org/jira/browse/BEAM-10015 > Project: Beam > Issue Type: Bug > Components: runner-dataflow >Reporter: Reuven Lax >Assignee: Reuven Lax >Priority: P1 > Fix For: 2.22.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Dataflow runner does not propagate the output timestamp into timer firing, > resulting in incorrect default timestamps when outputting from a processTimer. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BEAM-9993) Add option defaults for Flink Python tests
[ https://issues.apache.org/jira/browse/BEAM-9993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-9993. --- Fix Version/s: 2.23.0 Resolution: Fixed > Add option defaults for Flink Python tests > -- > > Key: BEAM-9993 > URL: https://issues.apache.org/jira/browse/BEAM-9993 > Project: Beam > Issue Type: Improvement > Components: runner-flink >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P3 > Labels: portability-flink > Fix For: 2.23.0 > > Time Spent: 20m > Remaining Estimate: 0h > > I want to run a single Flink Python test: > python -m apache_beam.runners.portability.flink_runner_test > FlinkRunnerTest.test_metrics > But I get this error: > TypeError: expected str, bytes or os.PathLike object, not NoneType > Turns out flink_job_server_jar isn't set, and there's no default value. We > should set a default. > We should also change the default environment type to LOOPBACK for basic > testing purposes because it requires the least setup. -- This message was sent by Atlassian Jira (v8.3.4#803005)
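The option defaulting described in BEAM-9993 can be sketched with argparse. The flag names mirror those mentioned in the ticket, but this is an illustrative sketch, not the flink_runner_test's actual flag parsing; the jar default stays `None` here as a placeholder rather than inventing a path.

```python
import argparse

def parse_flink_test_args(argv=None):
    p = argparse.ArgumentParser()
    # LOOPBACK requires the least setup, so it makes a good default for
    # quick local runs of a single test.
    p.add_argument('--environment_type', default='LOOPBACK')
    # A real default here would point at the locally built job server
    # jar; None is a placeholder.
    p.add_argument('--flink_job_server_jar', default=None)
    return p.parse_args([] if argv is None else list(argv))

opts = parse_flink_test_args([])
assert opts.environment_type == 'LOOPBACK'
assert opts.flink_job_server_jar is None
```

With defaults like these, `python -m apache_beam.runners.portability.flink_runner_test FlinkRunnerTest.test_metrics` would no longer crash with `TypeError: expected str, bytes or os.PathLike object, not NoneType` merely because a flag was omitted.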
[jira] [Resolved] (BEAM-10038) Add script to mass-comment Jenkins triggers on PR
[ https://issues.apache.org/jira/browse/BEAM-10038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-10038. Fix Version/s: Not applicable Resolution: Fixed > Add script to mass-comment Jenkins triggers on PR > - > > Key: BEAM-10038 > URL: https://issues.apache.org/jira/browse/BEAM-10038 > Project: Beam > Issue Type: Improvement > Components: build-system >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Fix For: Not applicable > > Time Spent: 1h 50m > Remaining Estimate: 0h > > This is a work in progress, it just needs to be touched up and added to the > Beam repo: > https://gist.github.com/Ardagan/13e6031e8d1c9ebbd3029bf365c1a517 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BEAM-10048) Remove "manual steps" from release guide.
[ https://issues.apache.org/jira/browse/BEAM-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-10048. Fix Version/s: Not applicable Resolution: Fixed > Remove "manual steps" from release guide. > - > > Key: BEAM-10048 > URL: https://issues.apache.org/jira/browse/BEAM-10048 > Project: Beam > Issue Type: Improvement > Components: build-system, website >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Fix For: Not applicable > > Time Spent: 1h 20m > Remaining Estimate: 0h > > release-guide.md contains most of the same instructions as > build_release_candidate.sh ("(Alternative) Run all steps manually"). This is > not ideal: > - Mirroring the instructions in release-guide.md doesn't add any value. > - Every single change to the process requires two identical changes to each > file, and this makes it unnecessarily difficult to keep the two in sync. > - All the extra instructions make release-guide.md harder to read, obscuring > information that the release manager actually does need to know. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10109) Fix context classloader in Spark portable runner
Kyle Weaver created BEAM-10109: -- Summary: Fix context classloader in Spark portable runner Key: BEAM-10109 URL: https://issues.apache.org/jira/browse/BEAM-10109 Project: Beam Issue Type: Improvement Components: runner-spark Reporter: Kyle Weaver Assignee: Kyle Weaver Spark is setting the context class loader to support dynamic class loading, leading to unpredictable behavior with duplicate jars being found on the class path. We need to see if there is a way to disable this behavior so we can use the context class loader deterministically. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10108) publish_docker_images.sh has out of date Flink versions
Kyle Weaver created BEAM-10108: -- Summary: publish_docker_images.sh has out of date Flink versions Key: BEAM-10108 URL: https://issues.apache.org/jira/browse/BEAM-10108 Project: Beam Issue Type: Bug Components: build-system Reporter: Kyle Weaver Assignee: Kyle Weaver Is 1.7, 1.8, 1.9. Should be 1.8, 1.9, 1.10. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10107) beam website PR listed twice in release guide with contradictory instructions
Kyle Weaver created BEAM-10107: -- Summary: beam website PR listed twice in release guide with contradictory instructions Key: BEAM-10107 URL: https://issues.apache.org/jira/browse/BEAM-10107 Project: Beam Issue Type: Improvement Components: website Reporter: Kyle Weaver Assignee: Kyle Weaver The Beam website update PR is mentioned twice, once in 5. with the new instructions and again in 6. with the old instructions. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10106) Script the deployment of artifacts to pypi
[ https://issues.apache.org/jira/browse/BEAM-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10106: --- Description: Right now there's only manual instructions, which are tedious and error-prone. https://beam.apache.org/contribute/release-guide/#8-finalize-the-release (was: Right now there's only manual instructions, which are tedious and error-prone. http://localhost:1313/contribute/release-guide/#8-finalize-the-release) > Script the deployment of artifacts to pypi > -- > > Key: BEAM-10106 > URL: https://issues.apache.org/jira/browse/BEAM-10106 > Project: Beam > Issue Type: Improvement > Components: build-system >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > > Right now there's only manual instructions, which are tedious and > error-prone. > https://beam.apache.org/contribute/release-guide/#8-finalize-the-release -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10106) Script the deployment of artifacts to pypi
Kyle Weaver created BEAM-10106: -- Summary: Script the deployment of artifacts to pypi Key: BEAM-10106 URL: https://issues.apache.org/jira/browse/BEAM-10106 Project: Beam Issue Type: Improvement Components: build-system Reporter: Kyle Weaver Assignee: Kyle Weaver Right now there are only manual instructions, which are tedious and error-prone. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10106) Script the deployment of artifacts to pypi
[ https://issues.apache.org/jira/browse/BEAM-10106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10106: --- Description: Right now there's only manual instructions, which are tedious and error-prone. http://localhost:1313/contribute/release-guide/#8-finalize-the-release (was: Right now there's only manual instructions, which are tedious and error-prone.) > Script the deployment of artifacts to pypi > -- > > Key: BEAM-10106 > URL: https://issues.apache.org/jira/browse/BEAM-10106 > Project: Beam > Issue Type: Improvement > Components: build-system >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > > Right now there's only manual instructions, which are tedious and > error-prone. > http://localhost:1313/contribute/release-guide/#8-finalize-the-release -- This message was sent by Atlassian Jira (v8.3.4#803005)
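The manual release steps BEAM-10106 asks to script come down to invoking twine against the already-built artifacts. A minimal sketch of what such a script might assemble; the directory name is a hypothetical example, and actually running the upload would require PyPI credentials, so this only builds the command:

```python
import shlex


def build_pypi_upload_command(dist_dir: str, repository: str = "pypi") -> list:
    """Build (but do not run) a twine invocation for publishing
    previously built artifacts from dist_dir."""
    return ["twine", "upload", "--repository", repository, f"{dist_dir}/*"]


# Hypothetical staging directory, for illustration only.
cmd = build_pypi_upload_command("apache-beam-release/python/dist")
print(shlex.join(cmd))
```

Keeping the command construction pure makes the script easy to test without touching PyPI, which is exactly the kind of safeguard a scripted (rather than manual) release process can offer.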
[jira] [Created] (BEAM-10096) Spark runners are numbered 1,2,2
Kyle Weaver created BEAM-10096: -- Summary: Spark runners are numbered 1,2,2 Key: BEAM-10096 URL: https://issues.apache.org/jira/browse/BEAM-10096 Project: Beam Issue Type: Bug Components: website Reporter: Kyle Weaver Assignee: Kyle Weaver https://beam.apache.org/documentation/runners/spark/ 1. A legacy Runner... 2. An Structured Streaming Spark Runner... 2. A portable Runner... -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10095) Add hyperlinks to the beam-overview page.
Kyle Weaver created BEAM-10095: -- Summary: Add hyperlinks to the beam-overview page. Key: BEAM-10095 URL: https://issues.apache.org/jira/browse/BEAM-10095 Project: Beam Issue Type: Improvement Components: website Reporter: Kyle Weaver Assignee: Kyle Weaver - Java, Python, and Go should be hyperlinked to their respective quickstart guides. - The runners listed should be hyperlinked to their respective runner pages. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10094) Spark failing testFlattenWithDifferentInputAndOutputCoders2
Kyle Weaver created BEAM-10094: -- Summary: Spark failing testFlattenWithDifferentInputAndOutputCoders2 Key: BEAM-10094 URL: https://issues.apache.org/jira/browse/BEAM-10094 Project: Beam Issue Type: Bug Components: test-failures Reporter: Kyle Weaver Assignee: Maximilian Michels Both beam_PostCommit_Java_PVR_Flink_Batch and beam_PostCommit_Java_PVR_Flink_Streaming are failing newly added test org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) at org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) at org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) at 
org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at java.lang.Thread.run(Thread.java:748) -- This message was sent by Atlassian Jira (v8.3.4#803005)
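The ClassCastException above ("KV cannot be cast to [B") is the typical symptom of a coder mismatch: a length-prefixed byte coder is handed a composite KV value it cannot encode. The following is not Beam code, just a small Python illustration of that failure mode, with a hypothetical function name:

```python
def encode_length_prefixed_bytes(value) -> bytes:
    """Mimics the contract of a ByteArrayCoder wrapped in a
    LengthPrefixCoder: the value must already be raw bytes."""
    if not isinstance(value, bytes):
        # Analogue of the 'KV cannot be cast to [B' ClassCastException.
        raise TypeError(f"expected bytes, got {type(value).__name__}")
    # 4-byte big-endian length prefix followed by the payload.
    return len(value).to_bytes(4, "big") + value


print(encode_length_prefixed_bytes(b"abc").hex())
try:
    encode_length_prefixed_bytes(("key", "value"))  # a KV-like tuple
except TypeError as e:
    print("coder mismatch:", e)
```

The fix in such cases is to make the pipeline's declared output coder agree with the element type actually flowing through the stage, rather than to change the encoder.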
[jira] [Updated] (BEAM-10094) Spark failing testFlattenWithDifferentInputAndOutputCoders2
[ https://issues.apache.org/jira/browse/BEAM-10094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10094: --- Description: Spark portable validates runner is failing on newly added test org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) at org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) at org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) at 
org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at java.lang.Thread.run(Thread.java:748) was: Both beam_PostCommit_Java_PVR_Flink_Batch and beam_PostCommit_Java_PVR_Flink_Streaming are failing newly added test org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) at org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) at org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) at 
org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at java.lang.Thread.run(Thread.java:748) > Spark failing testFlattenWithDifferentInputAndOutputCoders2 > --- > > Key: BEAM-10094 > URL: https://issues.apache.org/jira/browse/BEAM-10094 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Maximilian Michels >Priority: P2 > > Spark portable validates runner is
[jira] [Updated] (BEAM-7587) Spark portable runner: Streaming mode
[ https://issues.apache.org/jira/browse/BEAM-7587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-7587: -- Description: So far all work on the Spark portable runner has been in batch mode. This is intended as an uber-issue for tracking progress on adding support for streaming. -It might be advantageous to wait for the structured streaming (non-portable) runner to be completed (to some reasonable extent) before undertaking this, rather than using the DStream API.- Since work on the structured streaming runner is blocked by SPARK-26655, we should implement this using DStreams instead. was: So far all work on the Spark portable runner has been in batch mode. This is intended as an uber-issue for tracking progress on adding support for streaming. It might be advantageous to wait for the structured streaming (non-portable) runner to be completed (to some reasonable extent) before undertaking this, rather than using the DStream API. > Spark portable runner: Streaming mode > - > > Key: BEAM-7587 > URL: https://issues.apache.org/jira/browse/BEAM-7587 > Project: Beam > Issue Type: Wish > Components: runner-spark >Reporter: Kyle Weaver >Priority: P2 > Labels: portability-spark > > So far all work on the Spark portable runner has been in batch mode. This is > intended as an uber-issue for tracking progress on adding support for > streaming. > -It might be advantageous to wait for the structured streaming (non-portable) > runner to be completed (to some reasonable extent) before undertaking this, > rather than using the DStream API.- Since work on the structured streaming > runner is blocked by SPARK-26655, we should implement this using DStreams > instead. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10092) Code blocks that specify a language are hidden.
[ https://issues.apache.org/jira/browse/BEAM-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17116891#comment-17116891 ] Kyle Weaver commented on BEAM-10092: It looks like toggled codeblocks now use a different syntax, though I'm not sure if that is relevant to this issue. https://github.com/apache/beam/blob/master/website/CONTRIBUTE.md#code-highlighting > Code blocks that specify a language are hidden. > --- > > Key: BEAM-10092 > URL: https://issues.apache.org/jira/browse/BEAM-10092 > Project: Beam > Issue Type: Bug > Components: website >Reporter: Kyle Weaver >Priority: P2 > > For example, if I want sql syntax highlighting: > ```sql > SELECT * FROM table; > ``` > The code block will appear empty. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (BEAM-10065) Docs - Beam "Release guide" template is broken
[ https://issues.apache.org/jira/browse/BEAM-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reassigned BEAM-10065: -- Assignee: Ashwin Ramaswami > Docs - Beam "Release guide" template is broken > -- > > Key: BEAM-10065 > URL: https://issues.apache.org/jira/browse/BEAM-10065 > Project: Beam > Issue Type: Bug > Components: website >Reporter: Ashwin Ramaswami >Assignee: Ashwin Ramaswami >Priority: P2 > Attachments: Screen Shot 2020-05-22 at 9.09.35 AM.png > > Time Spent: 10m > Remaining Estimate: 0h > > It just shows "e>" for the template. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10092) Code blocks that specify a language are hidden.
Kyle Weaver created BEAM-10092: -- Summary: Code blocks that specify a language are hidden. Key: BEAM-10092 URL: https://issues.apache.org/jira/browse/BEAM-10092 Project: Beam Issue Type: Bug Components: website Reporter: Kyle Weaver For example, if I want SQL syntax highlighting: ```sql SELECT * FROM table; ``` The code block will appear empty. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10065) Docs - Beam "Release guide" template is broken
[ https://issues.apache.org/jira/browse/BEAM-10065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10065: --- Status: Open (was: Triage Needed) > Docs - Beam "Release guide" template is broken > -- > > Key: BEAM-10065 > URL: https://issues.apache.org/jira/browse/BEAM-10065 > Project: Beam > Issue Type: Bug > Components: website >Reporter: Ashwin Ramaswami >Priority: P2 > Attachments: Screen Shot 2020-05-22 at 9.09.35 AM.png > > Time Spent: 10m > Remaining Estimate: 0h > > It just shows "e>" for the template. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-9971) beam_PostCommit_Java_PVR_Spark_Batch flakes (no such file)
[ https://issues.apache.org/jira/browse/BEAM-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-9971: -- Description: This happens sporadically. One time the issue affected 14 tests; another time it affected 112 tests. It looks like the ClassLoader is sometimes contaminated with jars from /tmp/spark-*, which have already been deleted. 20/05/21 13:54:27 ERROR org.apache.beam.runners.jobsubmission.JobInvocation: Error during job invocation pipelinetest0testidentitytransform-kcweaver-0521205426-f4de06c4_51aced77-c171-4842-be1f-6c79226872e5. java.util.ServiceConfigurationError: org.apache.beam.runners.core.construction.NativeTransforms$IsNativeTransform: Error reading configuration file at java.util.ServiceLoader.fail(ServiceLoader.java:232) at java.util.ServiceLoader.parse(ServiceLoader.java:309) at java.util.ServiceLoader.access$200(ServiceLoader.java:185) at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357) at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393) at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474) at org.apache.beam.runners.core.construction.NativeTransforms.isNative(NativeTransforms.java:50) at org.apache.beam.runners.core.construction.graph.QueryablePipeline.isPrimitiveTransform(QueryablePipeline.java:189) at org.apache.beam.runners.core.construction.graph.QueryablePipeline.getPrimitiveTransformIds(QueryablePipeline.java:137) at org.apache.beam.runners.core.construction.graph.QueryablePipeline.forPrimitivesIn(QueryablePipeline.java:90) at org.apache.beam.runners.core.construction.graph.GreedyPipelineFuser.(GreedyPipelineFuser.java:67) at org.apache.beam.runners.core.construction.graph.GreedyPipelineFuser.fuse(GreedyPipelineFuser.java:90) at org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:94) at org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83) at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57) at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.FileNotFoundException: /tmp/spark-5e8a8a9a-22d6-48d5-b398-1a4f5582d954/userFiles-ec74cac1-21b5-4127-b764-540636b733d0/beam-runners-core-construction-java-2.22.0-SNAPSHOT-tests.jar (No such file or directory) at java.util.zip.ZipFile.open(Native Method) at java.util.zip.ZipFile.(ZipFile.java:230) at java.util.zip.ZipFile.(ZipFile.java:155) at java.util.jar.JarFile.(JarFile.java:167) at java.util.jar.JarFile.(JarFile.java:104) at sun.net.www.protocol.jar.URLJarFile.(URLJarFile.java:93) at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69) at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:84) at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122) at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:152) at java.net.URL.openStream(URL.java:1045) at java.util.ServiceLoader.parse(ServiceLoader.java:304) ... 18 more was: This happens sporadically. One time the issue affected 14 tests; another time it affected 112 tests. 20/05/21 13:54:27 ERROR org.apache.beam.runners.jobsubmission.JobInvocation: Error during job invocation pipelinetest0testidentitytransform-kcweaver-0521205426-f4de06c4_51aced77-c171-4842-be1f-6c79226872e5. 
java.util.ServiceConfigurationError: org.apache.beam.runners.core.construction.NativeTransforms$IsNativeTransform: Error reading configuration file at java.util.ServiceLoader.fail(ServiceLoader.java:232) at java.util.ServiceLoader.parse(ServiceLoader.java:309) at java.util.ServiceLoader.access$200(ServiceLoader.java:185) at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357) at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393) at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474) at org.apache.beam.runners.core.construction.NativeTransforms.isNative(NativeTransforms.java:50) at
[jira] [Updated] (BEAM-9971) beam_PostCommit_Java_PVR_Spark_Batch flakes (no such file)
[ https://issues.apache.org/jira/browse/BEAM-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-9971: -- Description: This happens sporadically. One time the issue affected 14 tests; another time it affected 112 tests. 20/05/21 13:54:27 ERROR org.apache.beam.runners.jobsubmission.JobInvocation: Error during job invocation pipelinetest0testidentitytransform-kcweaver-0521205426-f4de06c4_51aced77-c171-4842-be1f-6c79226872e5. java.util.ServiceConfigurationError: org.apache.beam.runners.core.construction.NativeTransforms$IsNativeTransform: Error reading configuration file at java.util.ServiceLoader.fail(ServiceLoader.java:232) at java.util.ServiceLoader.parse(ServiceLoader.java:309) at java.util.ServiceLoader.access$200(ServiceLoader.java:185) at java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:357) at java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393) at java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474) at org.apache.beam.runners.core.construction.NativeTransforms.isNative(NativeTransforms.java:50) at org.apache.beam.runners.core.construction.graph.QueryablePipeline.isPrimitiveTransform(QueryablePipeline.java:189) at org.apache.beam.runners.core.construction.graph.QueryablePipeline.getPrimitiveTransformIds(QueryablePipeline.java:137) at org.apache.beam.runners.core.construction.graph.QueryablePipeline.forPrimitivesIn(QueryablePipeline.java:90) at org.apache.beam.runners.core.construction.graph.GreedyPipelineFuser.(GreedyPipelineFuser.java:67) at org.apache.beam.runners.core.construction.graph.GreedyPipelineFuser.fuse(GreedyPipelineFuser.java:90) at org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:94) at org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83) at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57) at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.FileNotFoundException: /tmp/spark-5e8a8a9a-22d6-48d5-b398-1a4f5582d954/userFiles-ec74cac1-21b5-4127-b764-540636b733d0/beam-runners-core-construction-java-2.22.0-SNAPSHOT-tests.jar (No such file or directory) at java.util.zip.ZipFile.open(Native Method) at java.util.zip.ZipFile.(ZipFile.java:230) at java.util.zip.ZipFile.(ZipFile.java:155) at java.util.jar.JarFile.(JarFile.java:167) at java.util.jar.JarFile.(JarFile.java:104) at sun.net.www.protocol.jar.URLJarFile.(URLJarFile.java:93) at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69) at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:84) at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122) at sun.net.www.protocol.jar.JarURLConnection.getInputStream(JarURLConnection.java:152) at java.net.URL.openStream(URL.java:1045) at java.util.ServiceLoader.parse(ServiceLoader.java:304) ... 18 more was: This happens sporadically. One time the issue affected 14 tests; another time it affected 112 tests. 
java.lang.RuntimeException: The Runner experienced the following error during execution: java.io.FileNotFoundException: /tmp/spark-0812a463-8d6b-4c97-be4b-de43baf67108/userFiles-b90ca2e1-2041-442d-ae78-c8e9c30bff49/beam-runners-spark-2.22.0-SNAPSHOT.jar (No such file or directory) at org.apache.beam.runners.portability.JobServicePipelineResult.propagateErrors(JobServicePipelineResult.java:165) at org.apache.beam.runners.portability.JobServicePipelineResult.waitUntilFinish(JobServicePipelineResult.java:110) at org.apache.beam.runners.portability.testing.TestPortableRunner.run(TestPortableRunner.java:83) at org.apache.beam.sdk.Pipeline.run(Pipeline.java:317) at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:350) at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:331) at org.apache.beam.runners.core.metrics.MetricsPusherTest.pushesUserMetrics(MetricsPusherTest.java:70) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
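The mechanism behind this flake: ServiceLoader lazily re-reads META-INF service registration files from jar URLs on the classpath, so if the /tmp/spark-* staging directory has already been cleaned up by the time the lazy read happens, the open fails. A small Python analogy of "path captured early, file deleted later, lazy read fails" (no Beam or Spark specifics, just the race):

```python
import os
import tempfile

# Write a file and keep only its path, the way a classloader keeps jar URLs.
fd, path = tempfile.mkstemp(suffix=".jar")
os.write(fd, b"fake service registry")
os.close(fd)

# Something else deletes the file while the path is still held
# (in the flake, Spark's temp-directory cleanup plays this role).
os.remove(path)

# The deferred read happens later and fails, mirroring the
# FileNotFoundException inside ServiceLoader.parse above.
try:
    with open(path, "rb") as f:
        f.read()
except FileNotFoundError as e:
    print("lazy read failed:", e)
```

This framing explains why the flake is sporadic: the failure only occurs when cleanup wins the race against the next ServiceLoader lookup.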
[jira] [Updated] (BEAM-10055) Add --region to 3 of the python examples
[ https://issues.apache.org/jira/browse/BEAM-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10055: --- Status: Open (was: Triage Needed) > Add --region to 3 of the python examples > > > Key: BEAM-10055 > URL: https://issues.apache.org/jira/browse/BEAM-10055 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Ted Romer >Assignee: Ted Romer >Priority: P3 > Original Estimate: 1h > Time Spent: 10m > Remaining Estimate: 50m > > Proposed fix: > {color:#FF}[https://github.com/tedromer/beam/compare/tedromer:ef811fe...tedromer:1f39865]{color} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-9971) beam_PostCommit_Java_PVR_Spark_Batch flakes (no such file)
[ https://issues.apache.org/jira/browse/BEAM-9971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113425#comment-17113425 ] Kyle Weaver commented on BEAM-9971: --- Thanks for pointing out the Java dependencies change Brian -- I somehow completely missed that connection. I'll fix it as soon as I can. > beam_PostCommit_Java_PVR_Spark_Batch flakes (no such file) > -- > > Key: BEAM-9971 > URL: https://issues.apache.org/jira/browse/BEAM-9971 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Labels: portability-spark > Fix For: 2.22.0 > > > This happens sporadically. One time the issue affected 14 tests; another time > it affected 112 tests. > java.lang.RuntimeException: The Runner experienced the following error during > execution: > java.io.FileNotFoundException: > /tmp/spark-0812a463-8d6b-4c97-be4b-de43baf67108/userFiles-b90ca2e1-2041-442d-ae78-c8e9c30bff49/beam-runners-spark-2.22.0-SNAPSHOT.jar > (No such file or directory) > at > org.apache.beam.runners.portability.JobServicePipelineResult.propagateErrors(JobServicePipelineResult.java:165) > at > org.apache.beam.runners.portability.JobServicePipelineResult.waitUntilFinish(JobServicePipelineResult.java:110) > at > org.apache.beam.runners.portability.testing.TestPortableRunner.run(TestPortableRunner.java:83) > at org.apache.beam.sdk.Pipeline.run(Pipeline.java:317) > at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:350) > at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:331) > at > org.apache.beam.runners.core.metrics.MetricsPusherTest.pushesUserMetrics(MetricsPusherTest.java:70) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.apache.beam.sdk.testing.TestPipeline$1.evaluate(TestPipeline.java:319) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) > at > org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) > at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328) > at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292) > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) > at org.junit.runners.ParentRunner.run(ParentRunner.java:412) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38) > at > org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62) > at > 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > at sun.reflect.GeneratedMethodAccessor161.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at >
[jira] [Commented] (BEAM-10055) Add --region to 3 of the python examples
[ https://issues.apache.org/jira/browse/BEAM-10055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113421#comment-17113421 ] Kyle Weaver commented on BEAM-10055: Sorry I missed those. Please feel free to add me as a reviewer when you have a PR ready. > Add --region to 3 of the python examples > > > Key: BEAM-10055 > URL: https://issues.apache.org/jira/browse/BEAM-10055 > Project: Beam > Issue Type: Bug > Components: sdk-py-core >Reporter: Ted Romer >Assignee: Ted Romer >Priority: P3 > Original Estimate: 1h > Remaining Estimate: 1h > > Proposed fix: > {color:#FF}[https://github.com/tedromer/beam/compare/tedromer:ef811fe...tedromer:1f39865]{color} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10016) Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2
[ https://issues.apache.org/jira/browse/BEAM-10016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113383#comment-17113383 ] Kyle Weaver commented on BEAM-10016: I'm pretty sure it's only failing for Beam 2.22.0 because it was added after the 2.21.0 release cut. I haven't tried it but I suspect the test would fail on previous Beam releases if backported. > Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2 > --- > > Key: BEAM-10016 > URL: https://issues.apache.org/jira/browse/BEAM-10016 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Maximilian Michels >Priority: P2 > Fix For: 2.22.0 > > > Both beam_PostCommit_Java_PVR_Flink_Batch and > beam_PostCommit_Java_PVR_Flink_Streaming are failing newly added test > org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. > SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, > FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map > (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: > PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) > (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be > cast to [B > at > org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) > at > org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) > at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) > at > org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) > at > 
org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) > at > org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) > at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) > at > org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) > at java.lang.Thread.run(Thread.java:748) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10025) Samza runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10025: --- Fix Version/s: (was: 2.22.0) > Samza runner failing testOutputTimestampDefault > --- > > Key: BEAM-10025 > URL: https://issues.apache.org/jira/browse/BEAM-10025 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Hai Lu >Priority: P2 > Labels: currently-failing > > This is causing postcommit to fail > java.lang.AssertionError: Expected 1 successful assertions, but found 0. > Expected: is <1L> > but: was <0L> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10025) Samza runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113376#comment-17113376 ] Kyle Weaver commented on BEAM-10025: I'm guessing the cause is similar to BEAM-10024, so no, it should not be a release blocker. > Samza runner failing testOutputTimestampDefault > --- > > Key: BEAM-10025 > URL: https://issues.apache.org/jira/browse/BEAM-10025 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Hai Lu >Priority: P2 > Labels: currently-failing > Fix For: 2.22.0 > > > This is causing postcommit to fail > java.lang.AssertionError: Expected 1 successful assertions, but found 0. > Expected: is <1L> > but: was <0L> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10024) Spark runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10024: --- Fix Version/s: (was: 2.22.0) > Spark runner failing testOutputTimestampDefault > --- > > Key: BEAM-10024 > URL: https://issues.apache.org/jira/browse/BEAM-10024 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Labels: currently-failing > Time Spent: 50m > Remaining Estimate: 0h > > This is causing postcommit to fail > java.lang.UnsupportedOperationException: Found TimerId annotations on > org.apache.beam.sdk.transforms.ParDoTest$TimerTests$12, but DoFn cannot yet > be used with timers in the SparkRunner. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10024) Spark runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113375#comment-17113375 ] Kyle Weaver commented on BEAM-10024: No. A new test was added without being properly annotated, causing it to run where it should have been skipped. > Spark runner failing testOutputTimestampDefault > --- > > Key: BEAM-10024 > URL: https://issues.apache.org/jira/browse/BEAM-10024 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Labels: currently-failing > Fix For: 2.22.0 > > Time Spent: 50m > Remaining Estimate: 0h > > This is causing postcommit to fail > java.lang.UnsupportedOperationException: Found TimerId annotations on > org.apache.beam.sdk.transforms.ParDoTest$TimerTests$12, but DoFn cannot yet > be used with timers in the SparkRunner. -- This message was sent by Atlassian Jira (v8.3.4#803005)
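[Editor's note] The comment above attributes the failure to a test "added without being properly annotated, causing it to run where it should have been skipped." Beam gates ValidatesRunner tests with JUnit category annotations that runner test suites exclude. The sketch below illustrates the mechanism with a plain-Java stand-in annotation instead of Beam's real JUnit `@Category` marker interfaces (such as `UsesTimersInParDo`), so it is an assumption-laden analogy, not Beam's actual test harness.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class CategoryFilter {
    // Hypothetical marker standing in for a Beam JUnit category like UsesTimersInParDo.
    @Retention(RetentionPolicy.RUNTIME)
    @interface UsesTimers {}

    static class Tests {
        @UsesTimers
        public void testWithTimersAnnotated() {}

        // Missing the marker: a runner that excludes timer tests will run it anyway.
        public void testOutputTimestampDefault() {}
    }

    // A runner without timer support should skip any method carrying the marker.
    static boolean shouldRun(Method m) {
        return !m.isAnnotationPresent(UsesTimers.class);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(shouldRun(Tests.class.getMethod("testWithTimersAnnotated")));
        System.out.println(shouldRun(Tests.class.getMethod("testOutputTimestampDefault")));
    }
}
```

With the annotation present the test is filtered out; without it, the exclusion filter has nothing to match, so the test runs on runners that cannot support the feature and fails with errors like the `UnsupportedOperationException` quoted above.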
[jira] [Commented] (BEAM-10016) Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2
[ https://issues.apache.org/jira/browse/BEAM-10016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113372#comment-17113372 ] Kyle Weaver commented on BEAM-10016: This test was newly added, so this is probably not a regression. It does seem like an important issue we should look into, but not sure if it should be considered a blocker. > Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2 > --- > > Key: BEAM-10016 > URL: https://issues.apache.org/jira/browse/BEAM-10016 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Maximilian Michels >Priority: P2 > Fix For: 2.22.0 > > > Both beam_PostCommit_Java_PVR_Flink_Batch and > beam_PostCommit_Java_PVR_Flink_Streaming are failing newly added test > org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. > SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, > FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map > (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: > PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) > (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be > cast to [B > at > org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) > at > org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) > at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) > at > org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) > at > 
org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) > at > org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) > at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) > at > org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) > at java.lang.Thread.run(Thread.java:748) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10049) Add licenses to Go SDK containers
[ https://issues.apache.org/jira/browse/BEAM-10049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10049: --- Component/s: build-system > Add licenses to Go SDK containers > - > > Key: BEAM-10049 > URL: https://issues.apache.org/jira/browse/BEAM-10049 > Project: Beam > Issue Type: Improvement > Components: build-system, sdk-go >Reporter: Kyle Weaver >Priority: P2 > > This will be a prerequisite to publishing Go SDK containers as part of the > release again. See BEAM-9685 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10049) Add licenses to Go SDK containers
Kyle Weaver created BEAM-10049: -- Summary: Add licenses to Go SDK containers Key: BEAM-10049 URL: https://issues.apache.org/jira/browse/BEAM-10049 Project: Beam Issue Type: Improvement Components: sdk-go Reporter: Kyle Weaver This will be a prerequisite to publishing Go SDK containers as part of the release again. See BEAM-9685 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (BEAM-9261) Add LICENSE and NOTICE to Docker images
[ https://issues.apache.org/jira/browse/BEAM-9261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver resolved BEAM-9261. --- Fix Version/s: 2.21.0 Resolution: Fixed > Add LICENSE and NOTICE to Docker images > --- > > Key: BEAM-9261 > URL: https://issues.apache.org/jira/browse/BEAM-9261 > Project: Beam > Issue Type: Improvement > Components: build-system >Reporter: Alan Myrvold >Assignee: Alan Myrvold >Priority: P3 > Fix For: 2.21.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Add LICENSE and NOTICE to Docker images, to help with the binary distribution > requirements at [http://www.apache.org/dev/licensing-howto.html] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10048) Remove "manual steps" from release guide.
Kyle Weaver created BEAM-10048: -- Summary: Remove "manual steps" from release guide. Key: BEAM-10048 URL: https://issues.apache.org/jira/browse/BEAM-10048 Project: Beam Issue Type: Improvement Components: build-system, website Reporter: Kyle Weaver Assignee: Kyle Weaver release-guide.md contains most of the same instructions as build_release_candidate.sh ("(Alternative) Run all steps manually"). This is not ideal: - Mirroring the instructions in release-guide.md doesn't add any value. - Every single change to the process requires two identical changes to each file, and this makes it unnecessarily difficult to keep the two in sync. - All the extra instructions make release-guide.md harder to read, obscuring information that the release manager actually does need to know. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10038) Add script to mass-comment Jenkins triggers on PR
Kyle Weaver created BEAM-10038: -- Summary: Add script to mass-comment Jenkins triggers on PR Key: BEAM-10038 URL: https://issues.apache.org/jira/browse/BEAM-10038 Project: Beam Issue Type: Improvement Components: build-system Reporter: Kyle Weaver Assignee: Kyle Weaver This is a work in progress, it just needs to be touched up and added to the Beam repo: https://gist.github.com/Ardagan/13e6031e8d1c9ebbd3029bf365c1a517 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (BEAM-10015) output timestamp not properly propagated through the Dataflow runner
[ https://issues.apache.org/jira/browse/BEAM-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reassigned BEAM-10015: -- Assignee: Kyle Weaver > output timestamp not properly propagated through the Dataflow runner > > > Key: BEAM-10015 > URL: https://issues.apache.org/jira/browse/BEAM-10015 > Project: Beam > Issue Type: Bug > Components: runner-dataflow >Reporter: Reuven Lax >Assignee: Kyle Weaver >Priority: P1 > Fix For: 2.21.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Dataflow runner does not propagate the output timestamp into timer firing, > resulting in incorrect default timestamps when outputting from a processTimer. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (BEAM-10015) output timestamp not properly propagated through the Dataflow runner
[ https://issues.apache.org/jira/browse/BEAM-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reassigned BEAM-10015: -- Assignee: Reuven Lax (was: Kyle Weaver) > output timestamp not properly propagated through the Dataflow runner > > > Key: BEAM-10015 > URL: https://issues.apache.org/jira/browse/BEAM-10015 > Project: Beam > Issue Type: Bug > Components: runner-dataflow >Reporter: Reuven Lax >Assignee: Reuven Lax >Priority: P1 > Fix For: 2.21.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Dataflow runner does not propagate the output timestamp into timer firing, > resulting in incorrect default timestamps when outputting from a processTimer. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10015) output timestamp not properly propagated through the Dataflow runner
[ https://issues.apache.org/jira/browse/BEAM-10015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111554#comment-17111554 ] Kyle Weaver commented on BEAM-10015: Sorry I missed this. It was pointed out on the mailing list that this issue is not fixed in 2.21.0. I invite someone who has more context on the bug to clarify its severity. I'm also curious whether or not this was a regression from previous Beam releases. > output timestamp not properly propagated through the Dataflow runner > > > Key: BEAM-10015 > URL: https://issues.apache.org/jira/browse/BEAM-10015 > Project: Beam > Issue Type: Bug > Components: runner-dataflow >Reporter: Reuven Lax >Priority: P1 > Fix For: 2.21.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Dataflow runner does not propagate the output timestamp into timer firing, > resulting in incorrect default timestamps when outputting from a processTimer. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10025) Samza runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111452#comment-17111452 ] Kyle Weaver commented on BEAM-10025: pr/11739 still fails Samza test. You will have to figure out a fix. > Samza runner failing testOutputTimestampDefault > --- > > Key: BEAM-10025 > URL: https://issues.apache.org/jira/browse/BEAM-10025 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Hai Lu >Priority: P2 > Labels: currently-failing > > This is causing postcommit to fail > java.lang.AssertionError: Expected 1 successful assertions, but found 0. > Expected: is <1L> > but: was <0L> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-9950) cannot find symbol javax.annotation.Generated
[ https://issues.apache.org/jira/browse/BEAM-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111408#comment-17111408 ] Kyle Weaver commented on BEAM-9950: --- yes > cannot find symbol javax.annotation.Generated > - > > Key: BEAM-9950 > URL: https://issues.apache.org/jira/browse/BEAM-9950 > Project: Beam > Issue Type: Bug > Components: build-system >Reporter: Kyle Weaver >Priority: P3 > Fix For: Not applicable > > > This happens when I run through Intellij but not when I run the same command > on the command line, so it is presumably an issue with my Intellij setup. I > am using Intellij 2019.2.4. > ./gradlew :runners:flink:1.10:test --tests > "org.apache.beam.runners.flink.translation.wrappers.streaming.io.UnboundedSourceWrapperTest$ParameterizedUnboundedSourceWrapperTest.testWatermarkEmission[*]" > .../apache/beam/model/pipeline/build/generated/source/proto/main/grpc/org/apache/beam/model/pipeline/v1/TestStreamServiceGrpc.java:20: > error: cannot find symbol > @javax.annotation.Generated( > ^ > symbol: class Generated > location: package javax.annotation > 1 error -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-10025) Samza runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111406#comment-17111406 ] Kyle Weaver commented on BEAM-10025: I expect https://github.com/apache/beam/pull/11739 will fix this -- running the test to make sure. > Samza runner failing testOutputTimestampDefault > --- > > Key: BEAM-10025 > URL: https://issues.apache.org/jira/browse/BEAM-10025 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Hai Lu >Priority: P2 > Labels: currently-failing > > This is causing postcommit to fail > java.lang.AssertionError: Expected 1 successful assertions, but found 0. > Expected: is <1L> > but: was <0L> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10016) Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2
[ https://issues.apache.org/jira/browse/BEAM-10016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10016: --- Description: Both beam_PostCommit_Java_PVR_Flink_Batch and beam_PostCommit_Java_PVR_Flink_Streaming are failing newly added test org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) at org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) at org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) at 
org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at java.lang.Thread.run(Thread.java:748) was: Both beam_PostCommit_Java_PVR_Flink_Batch and beam_PostCommit_Java_PVR_Flink_Streaming are failing org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) at org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) at org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) at 
org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) at org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at java.lang.Thread.run(Thread.java:748) > Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2 > --- > > Key: BEAM-10016 > URL: https://issues.apache.org/jira/browse/BEAM-10016 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Maximilian Michels >
[jira] [Assigned] (BEAM-10016) Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2
[ https://issues.apache.org/jira/browse/BEAM-10016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reassigned BEAM-10016: -- Assignee: Maximilian Michels (was: Kyle Weaver) > Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2 > --- > > Key: BEAM-10016 > URL: https://issues.apache.org/jira/browse/BEAM-10016 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Maximilian Michels >Priority: P2 > > Both beam_PostCommit_Java_PVR_Flink_Batch and > beam_PostCommit_Java_PVR_Flink_Streaming are failing > org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. > SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, > FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map > (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: > PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) > (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be > cast to [B > at > org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) > at > org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) > at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) > at > org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) > at > org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) > at > org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) > at > 
org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) > at > org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) > at > org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) > at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) > at > org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) > at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) > at java.lang.Thread.run(Thread.java:748) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10016) Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2
[ https://issues.apache.org/jira/browse/BEAM-10016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10016: --- Description: Both beam_PostCommit_Java_PVR_Flink_Batch and beam_PostCommit_Java_PVR_Flink_Streaming are failing org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. SEVERE: Error in task code: CHAIN MapPartition (MapPartition at [6]{Values, FlatMapElements, PAssert$0}) -> FlatMap (FlatMap at ExtractOutput[0]) -> Map (Key Extractor) -> GroupCombine (GroupCombine at GroupCombine: PAssert$0/GroupGlobally/GatherAllOutputs/GroupByKey) -> Map (Key Extractor) (2/2) java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B at org.apache.beam.sdk.coders.ByteArrayCoder.encode(ByteArrayCoder.java:41) at org.apache.beam.sdk.coders.LengthPrefixCoder.encode(LengthPrefixCoder.java:56) at org.apache.beam.sdk.coders.Coder.encode(Coder.java:136) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:590) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:581) at org.apache.beam.sdk.util.WindowedValue$FullWindowedValueCoder.encode(WindowedValue.java:541) at org.apache.beam.sdk.fn.data.BeamFnDataSizeBasedBufferingOutboundObserver.accept(BeamFnDataSizeBasedBufferingOutboundObserver.java:109) at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$CountingFnDataReceiver.accept(SdkHarnessClient.java:667) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.processElements(FlinkExecutableStageFunction.java:271) at org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction.mapPartition(FlinkExecutableStageFunction.java:203) at org.apache.flink.runtime.operators.MapPartitionDriver.run(MapPartitionDriver.java:103) at org.apache.flink.runtime.operators.BatchTask.run(BatchTask.java:504) at 
org.apache.flink.runtime.operators.BatchTask.invoke(BatchTask.java:369) at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705) at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530) at java.lang.Thread.run(Thread.java:748) was: Both beam_PostCommit_Java_PVR_Flink_Batch and beam_PostCommit_Java_PVR_Flink_Streaming are failing org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2. java.lang.RuntimeException: The Runner experienced the following error during execution: java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B at org.apache.beam.runners.portability.JobServicePipelineResult.propagateErrors(JobServicePipelineResult.java:165) at org.apache.beam.runners.portability.JobServicePipelineResult.waitUntilFinish(JobServicePipelineResult.java:110) at org.apache.beam.runners.portability.testing.TestPortableRunner.run(TestPortableRunner.java:83) at org.apache.beam.sdk.Pipeline.run(Pipeline.java:317) at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:350) at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:331) at org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2(FlattenTest.java:397) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.beam.sdk.testing.TestPipeline$1.evaluate(TestPipeline.java:319) at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:266) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78) at
[jira] [Closed] (BEAM-9950) cannot find symbol javax.annotation.Generated
[ https://issues.apache.org/jira/browse/BEAM-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver closed BEAM-9950. - Fix Version/s: Not applicable Resolution: Not A Bug > cannot find symbol javax.annotation.Generated > - > > Key: BEAM-9950 > URL: https://issues.apache.org/jira/browse/BEAM-9950 > Project: Beam > Issue Type: Bug > Components: build-system >Reporter: Kyle Weaver >Priority: P3 > Fix For: Not applicable > > > This happens when I run through Intellij but not when I run the same command > on the command line, so it is presumably an issue with my Intellij setup. I > am using Intellij 2019.2.4. > ./gradlew :runners:flink:1.10:test --tests > "org.apache.beam.runners.flink.translation.wrappers.streaming.io.UnboundedSourceWrapperTest$ParameterizedUnboundedSourceWrapperTest.testWatermarkEmission[*]" > .../apache/beam/model/pipeline/build/generated/source/proto/main/grpc/org/apache/beam/model/pipeline/v1/TestStreamServiceGrpc.java:20: > error: cannot find symbol > @javax.annotation.Generated( > ^ > symbol: class Generated > location: package javax.annotation > 1 error -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (BEAM-9950) cannot find symbol javax.annotation.Generated
[ https://issues.apache.org/jira/browse/BEAM-9950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110484#comment-17110484 ] Kyle Weaver commented on BEAM-9950: --- The problem is that Intellij has its own setting for the Gradle JVM, and mine was configured to Java 11. Settings -> Build, Execution, Deployment -> Build Tools -> Gradle -> Gradle projects -> beam -> Gradle JVM > cannot find symbol javax.annotation.Generated > - > > Key: BEAM-9950 > URL: https://issues.apache.org/jira/browse/BEAM-9950 > Project: Beam > Issue Type: Bug > Components: build-system >Reporter: Kyle Weaver >Priority: P3 > > This happens when I run through Intellij but not when I run the same command > on the command line, so it is presumably an issue with my Intellij setup. I > am using Intellij 2019.2.4. > ./gradlew :runners:flink:1.10:test --tests > "org.apache.beam.runners.flink.translation.wrappers.streaming.io.UnboundedSourceWrapperTest$ParameterizedUnboundedSourceWrapperTest.testWatermarkEmission[*]" > .../apache/beam/model/pipeline/build/generated/source/proto/main/grpc/org/apache/beam/model/pipeline/v1/TestStreamServiceGrpc.java:20: > error: cannot find symbol > @javax.annotation.Generated( > ^ > symbol: class Generated > location: package javax.annotation > 1 error -- This message was sent by Atlassian Jira (v8.3.4#803005)
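[Editor's note] The Gradle JVM setting matters here because `javax.annotation.Generated` shipped with the JDK through Java 8, was moved to the deprecated `java.xml.ws.annotation` module in Java 9, and was removed outright in Java 11, so generated gRPC stubs that reference it no longer compile on a Java 11 toolchain. A quick way to check which situation a given JVM is in:

```java
public class CheckGenerated {
    public static void main(String[] args) {
        try {
            // Present on Java 8; absent by default on Java 11+.
            Class.forName("javax.annotation.Generated");
            System.out.println("javax.annotation.Generated is present");
        } catch (ClassNotFoundException e) {
            System.out.println("javax.annotation.Generated is missing (expected on Java 11+)");
        }
    }
}
```

Running this under the same JVM that IntelliJ's Gradle setting points at reproduces or rules out the symptom, which is why switching the Gradle JVM back to Java 8 resolved the build error.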
[jira] [Assigned] (BEAM-10025) Samza runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver reassigned BEAM-10025: -- Assignee: Hai Lu (was: Kyle Weaver) > Samza runner failing testOutputTimestampDefault > --- > > Key: BEAM-10025 > URL: https://issues.apache.org/jira/browse/BEAM-10025 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Hai Lu >Priority: P2 > Labels: currently-failing > > This is causing postcommit to fail > java.lang.AssertionError: Expected 1 successful assertions, but found 0. > Expected: is <1L> > but: was <0L> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (BEAM-10025) Samza runner failing testOutputTimestampDefault
[ https://issues.apache.org/jira/browse/BEAM-10025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kyle Weaver updated BEAM-10025: --- Description: This is causing postcommit to fail java.lang.AssertionError: Expected 1 successful assertions, but found 0. Expected: is <1L> but: was <0L> was: This is causing postcommit to fail java.lang.UnsupportedOperationException: Found TimerId annotations on org.apache.beam.sdk.transforms.ParDoTest$TimerTests$12, but DoFn cannot yet be used with timers in the SparkRunner. > Samza runner failing testOutputTimestampDefault > --- > > Key: BEAM-10025 > URL: https://issues.apache.org/jira/browse/BEAM-10025 > Project: Beam > Issue Type: Bug > Components: test-failures >Reporter: Kyle Weaver >Assignee: Kyle Weaver >Priority: P2 > Labels: currently-failing > > This is causing postcommit to fail > java.lang.AssertionError: Expected 1 successful assertions, but found 0. > Expected: is <1L> > but: was <0L> -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10025) Samza runner failing testOutputTimestampDefault
Kyle Weaver created BEAM-10025: -- Summary: Samza runner failing testOutputTimestampDefault Key: BEAM-10025 URL: https://issues.apache.org/jira/browse/BEAM-10025 Project: Beam Issue Type: Bug Components: test-failures Reporter: Kyle Weaver Assignee: Kyle Weaver This is causing postcommit to fail java.lang.UnsupportedOperationException: Found TimerId annotations on org.apache.beam.sdk.transforms.ParDoTest$TimerTests$12, but DoFn cannot yet be used with timers in the SparkRunner. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10024) Spark runner failing testOutputTimestampDefault
Kyle Weaver created BEAM-10024: -- Summary: Spark runner failing testOutputTimestampDefault Key: BEAM-10024 URL: https://issues.apache.org/jira/browse/BEAM-10024 Project: Beam Issue Type: Bug Components: test-failures Reporter: Kyle Weaver Assignee: Kyle Weaver This is causing postcommit to fail java.lang.UnsupportedOperationException: Found TimerId annotations on org.apache.beam.sdk.transforms.ParDoTest$TimerTests$12, but DoFn cannot yet be used with timers in the SparkRunner. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (BEAM-10016) Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2
Kyle Weaver created BEAM-10016:
----------------------------------
             Summary: Flink postcommits failing testFlattenWithDifferentInputAndOutputCoders2
                 Key: BEAM-10016
                 URL: https://issues.apache.org/jira/browse/BEAM-10016
             Project: Beam
          Issue Type: Bug
          Components: test-failures
            Reporter: Kyle Weaver

Both beam_PostCommit_Java_PVR_Flink_Batch and beam_PostCommit_Java_PVR_Flink_Streaming are failing org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2.

java.lang.RuntimeException: The Runner experienced the following error during execution: java.lang.ClassCastException: org.apache.beam.sdk.values.KV cannot be cast to [B
    at org.apache.beam.runners.portability.JobServicePipelineResult.propagateErrors(JobServicePipelineResult.java:165)
    at org.apache.beam.runners.portability.JobServicePipelineResult.waitUntilFinish(JobServicePipelineResult.java:110)
    at org.apache.beam.runners.portability.testing.TestPortableRunner.run(TestPortableRunner.java:83)
    at org.apache.beam.sdk.Pipeline.run(Pipeline.java:317)
    at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:350)
    at org.apache.beam.sdk.testing.TestPipeline.run(TestPipeline.java:331)
    at org.apache.beam.sdk.transforms.FlattenTest.testFlattenWithDifferentInputAndOutputCoders2(FlattenTest.java:397)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.apache.beam.sdk.testing.TestPipeline$1.evaluate(TestPipeline.java:319)
    at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:266)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
    at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
    at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    at sun.reflect.GeneratedMethodAccessor112.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
    at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
    at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
    at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
    at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
    at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
    at sun.reflect.GeneratedMethodAccessor111.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at