See <https://builds.apache.org/job/beam_PostCommit_XVR_Spark/443/display/redirect>
Changes:
------------------------------------------
[...truncated 935.84 KB...]
  File "<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/sdks/python/apache_beam/testing/test_pipeline.py>", line 114, in run
    state = result.wait_until_finish()
  File "<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/sdks/python/apache_beam/runners/portability/portable_runner.py>", line 545, in wait_until_finish
    (self._job_id, self._state, self._last_error_message()))
RuntimeError: Pipeline BeamApp-jenkins-0325193532-38ff3a6_dcc02e9c-e578-44da-b768-3fc636f913b9 failed in state FAILED: org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
-------------------- >> begin captured logging << --------------------
root: WARNING: Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
root: INFO: Using Python SDK docker image: apache/beam_python2.7_sdk:2.21.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
apache_beam.runners.portability.fn_api_runner.translations: INFO: ==================== <function lift_combiners at 0x7fb920862320> ====================
apache_beam.runners.portability.fn_api_runner.translations: DEBUG: 35 [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
apache_beam.runners.portability.fn_api_runner.translations: DEBUG: Stages: [
  'ref_AppliedPTransform_Create/Impulse_3\n Create/Impulse:beam:transform:impulse:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/FlatMap(<lambda at core.py:2644>)_4\n Create/FlatMap(<lambda at core.py:2644>):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/AddRandomKeys_7\n Create/MaybeReshuffle/Reshuffle/AddRandomKeys:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/Map(reify_timestamps)_9\n Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/Map(reify_timestamps):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey_10\n Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey:beam:transform:group_by_key:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/FlatMap(restore_timestamps)_14\n Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/FlatMap(restore_timestamps):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/RemoveRandomKeys_15\n Create/MaybeReshuffle/Reshuffle/RemoveRandomKeys:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/Map(decode)_16\n Create/Map(decode):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'external_9_AppliedPTransform_ExternalTransform(beam:transforms:xlang:test:partition)/Partition(CallableWrapperPartitionFn)/ParDo(ApplyPartitionFnFn)/ParDo(ApplyPartitionFnFn)_5\n ExternalTransform(beam:transforms:xlang:test:partition)/Partition(CallableWrapperPartitionFn)/ParDo(ApplyPartitionFnFn)/ParDo(ApplyPartitionFnFn):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'external_9_AppliedPTransform_ExternalTransform(beam:transforms:xlang:test:partition)/Map(<lambda at expansion_service_test.py:216>)_6\n ExternalTransform(beam:transforms:xlang:test:partition)/Map(<lambda at expansion_service_test.py:216>):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'external_9_AppliedPTransform_ExternalTransform(beam:transforms:xlang:test:partition)/Map(<lambda at expansion_service_test.py:217>)_7\n ExternalTransform(beam:transforms:xlang:test:partition)/Map(<lambda at expansion_service_test.py:217>):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Create/Impulse_20\n check_even/Create/Impulse:beam:transform:impulse:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Create/FlatMap(<lambda at core.py:2644>)_21\n check_even/Create/FlatMap(<lambda at core.py:2644>):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Create/Map(decode)_23\n check_even/Create/Map(decode):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/WindowInto(WindowIntoFn)_24\n check_even/WindowInto(WindowIntoFn):beam:transform:window_into:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/ToVoidKey_25\n check_even/ToVoidKey:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Group/pair_with_0_27\n check_even/Group/pair_with_0:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Group/pair_with_1_28\n check_even/Group/pair_with_1:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Group/Flatten_29\n check_even/Group/Flatten:beam:transform:flatten:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Group/GroupByKey_30\n check_even/Group/GroupByKey:beam:transform:group_by_key:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Group/Map(_merge_tagged_vals_under_key)_34\n check_even/Group/Map(_merge_tagged_vals_under_key):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Unkey_35\n check_even/Unkey:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_even/Match_36\n check_even/Match:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Create/Impulse_39\n check_odd/Create/Impulse:beam:transform:impulse:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Create/FlatMap(<lambda at core.py:2644>)_40\n check_odd/Create/FlatMap(<lambda at core.py:2644>):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Create/Map(decode)_42\n check_odd/Create/Map(decode):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/WindowInto(WindowIntoFn)_43\n check_odd/WindowInto(WindowIntoFn):beam:transform:window_into:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/ToVoidKey_44\n check_odd/ToVoidKey:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Group/pair_with_0_46\n check_odd/Group/pair_with_0:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Group/pair_with_1_47\n check_odd/Group/pair_with_1:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Group/Flatten_48\n check_odd/Group/Flatten:beam:transform:flatten:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Group/GroupByKey_49\n check_odd/Group/GroupByKey:beam:transform:group_by_key:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Group/Map(_merge_tagged_vals_under_key)_53\n check_odd/Group/Map(_merge_tagged_vals_under_key):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Unkey_54\n check_odd/Unkey:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_check_odd/Match_55\n check_odd/Match:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>']
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'job_name' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'runner' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'temp_location' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'experiments' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'streaming' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'dataflow_kms_key' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'enable_streaming_engine' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'project' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'worker_region' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'worker_zone' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'zone' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'environment_cache_millis' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'files_to_stage' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'job_endpoint' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'output_executable_path' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'sdk_worker_parallelism' was already added
apache_beam.runners.portability.portable_runner: INFO: Job state changed to STOPPED
apache_beam.runners.portability.portable_runner: INFO: Job state changed to STARTING
apache_beam.runners.portability.portable_runner: INFO: Job state changed to RUNNING
root: DEBUG: org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
    at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2483)
    at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2479)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2479)
    at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2568)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:85)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
    at org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
    at org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
    at org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
root: ERROR: org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
apache_beam.runners.portability.portable_runner: INFO: Job state changed to FAILED
--------------------- >> end captured logging << ---------------------
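[Editor's note] The errors in this log share one root cause: the Spark job server tries to construct a second SparkContext inside a JVM that already holds one, which Spark forbids (SPARK-2243). The exception text itself names the escape hatch, spark.driver.allowMultipleContexts. Below is a minimal PySpark sketch of that flag, purely illustrative: in this build the offending context is created JVM-side by Beam's SparkContextFactory, so setting the flag in a user pipeline script would not reach the job server. App name and master are placeholder values for a local demo.

    # Illustrative sketch only; the flag name comes from the exception text above.
    from pyspark import SparkConf, SparkContext

    conf = (
        SparkConf()
        .setAppName('spark-2243-demo')   # placeholder app name
        .setMaster('local[*]')           # placeholder master for a local run
        .set('spark.driver.allowMultipleContexts', 'true')
    )
    sc = SparkContext(conf=conf)
    print(sc.applicationId)
    sc.stop()

Note that the flag only suppresses the check; a second context still competes with the first, so this is a debugging aid rather than a fix.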
======================================================================
ERROR: test_prefix (apache_beam.transforms.validate_runner_xlang_test.ValidateRunnerXlangTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/sdks/python/apache_beam/transforms/validate_runner_xlang_test.py>", line 69, in test_prefix
    assert_that(res, equal_to(['0a', '0b']))
  File "<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/sdks/python/apache_beam/pipeline.py>", line 525, in __exit__
    self.run().wait_until_finish()
  File "<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/sdks/python/apache_beam/testing/test_pipeline.py>", line 114, in run
    state = result.wait_until_finish()
  File "<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/sdks/python/apache_beam/runners/portability/portable_runner.py>", line 545, in wait_until_finish
    (self._job_id, self._state, self._last_error_message()))
RuntimeError: Pipeline BeamApp-jenkins-0325193534-a7e1aa16_be3c0378-d3a3-4924-9398-cd4179ae391a failed in state FAILED: org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
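[Editor's note] The assertion in the traceback above is Beam's standard testing idiom: build the pipeline inside a TestPipeline context manager and attach assert_that/equal_to; leaving the with-block triggers run().wait_until_finish() (the pipeline.py __exit__ frame), which raises once the job reports FAILED. A minimal runnable sketch of that idiom, with a plain Python map standing in for the external beam:transforms:xlang:test:prefix transform (add_prefix is a hypothetical stand-in; the real test expands the transform through an expansion service):

    import apache_beam as beam
    from apache_beam.testing.test_pipeline import TestPipeline
    from apache_beam.testing.util import assert_that, equal_to

    def add_prefix(element, prefix='0'):
        # Stand-in for beam:transforms:xlang:test:prefix.
        return prefix + element

    with TestPipeline() as p:
        res = p | beam.Create(['a', 'b']) | 'TestLabel' >> beam.Map(add_prefix)
        # equal_to(['0a', '0b']) is the exact expectation from the traceback.
        assert_that(res, equal_to(['0a', '0b']))

Here the failure is unrelated to the assertion itself: the pipeline never produces output because job submission dies on the SparkContext check.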
-------------------- >> begin captured logging << --------------------
root: WARNING: Make sure that locally built Python SDK docker image has Python 2.7 interpreter.
root: INFO: Using Python SDK docker image: apache/beam_python2.7_sdk:2.21.0.dev. If the image is not available at local, we will try to pull from hub.docker.com
apache_beam.runners.portability.fn_api_runner.translations: INFO: ==================== <function lift_combiners at 0x7fb920862320> ====================
apache_beam.runners.portability.fn_api_runner.translations: DEBUG: 21 [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
apache_beam.runners.portability.fn_api_runner.translations: DEBUG: Stages: [
  'ref_AppliedPTransform_Create/Impulse_3\n Create/Impulse:beam:transform:impulse:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/FlatMap(<lambda at core.py:2644>)_4\n Create/FlatMap(<lambda at core.py:2644>):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/AddRandomKeys_7\n Create/MaybeReshuffle/Reshuffle/AddRandomKeys:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/Map(reify_timestamps)_9\n Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/Map(reify_timestamps):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey_10\n Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey:beam:transform:group_by_key:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/FlatMap(restore_timestamps)_14\n Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/FlatMap(restore_timestamps):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/RemoveRandomKeys_15\n Create/MaybeReshuffle/Reshuffle/RemoveRandomKeys:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_Create/Map(decode)_16\n Create/Map(decode):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'external_10_AppliedPTransform_ExternalTransform(beam:transforms:xlang:test:prefix)/TestLabel_3\n ExternalTransform(beam:transforms:xlang:test:prefix)/TestLabel:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Create/Impulse_20\n assert_that/Create/Impulse:beam:transform:impulse:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Create/FlatMap(<lambda at core.py:2644>)_21\n assert_that/Create/FlatMap(<lambda at core.py:2644>):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Create/Map(decode)_23\n assert_that/Create/Map(decode):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/WindowInto(WindowIntoFn)_24\n assert_that/WindowInto(WindowIntoFn):beam:transform:window_into:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/ToVoidKey_25\n assert_that/ToVoidKey:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Group/pair_with_0_27\n assert_that/Group/pair_with_0:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Group/pair_with_1_28\n assert_that/Group/pair_with_1:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Group/Flatten_29\n assert_that/Group/Flatten:beam:transform:flatten:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Group/GroupByKey_30\n assert_that/Group/GroupByKey:beam:transform:group_by_key:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Group/Map(_merge_tagged_vals_under_key)_34\n assert_that/Group/Map(_merge_tagged_vals_under_key):beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Unkey_35\n assert_that/Unkey:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>',
  'ref_AppliedPTransform_assert_that/Match_36\n assert_that/Match:beam:transform:pardo:v1\n must follow: \n downstream_side_inputs: <unknown>']
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'job_name' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'runner' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'temp_location' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'experiments' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'streaming' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'dataflow_kms_key' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'enable_streaming_engine' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'project' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'worker_region' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'worker_zone' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'zone' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'environment_cache_millis' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'files_to_stage' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'job_endpoint' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'output_executable_path' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'sdk_worker_parallelism' was already added
apache_beam.runners.portability.portable_runner: INFO: Job state changed to STOPPED
apache_beam.runners.portability.portable_runner: INFO: Job state changed to STARTING
apache_beam.runners.portability.portable_runner: INFO: Job state changed to RUNNING
root: DEBUG: org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
    at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2483)
    at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2479)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2479)
    at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2568)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:85)
    at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
    at org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
    at org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
    at org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
    at org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
    at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
root: ERROR: org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:108)
org.apache.beam.runners.fnexecution.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
apache_beam.runners.portability.portable_runner: INFO: Job state changed to FAILED
--------------------- >> end captured logging << ---------------------
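[Editor's note] The "at ..." frames above show where Spark enforces its one-context rule: assertNoOtherContextIsRunning fires while SparkContextFactory.createSparkContext builds a fresh context for a new job submission in the already-occupied job-server JVM. The durable remedy is reuse, one context per process, rather than allowing multiple. A hedged Python rendering of that reuse discipline (get_or_create_spark_context is a hypothetical name; Beam's actual SparkContextFactory is Java):

    import threading

    from pyspark import SparkConf, SparkContext

    _lock = threading.Lock()
    _shared_context = None  # one SparkContext per process, mirroring the JVM rule

    def get_or_create_spark_context(app_name='beam-job-server'):
        # Create the process-wide context on first use; later submissions
        # reuse it instead of tripping assertNoOtherContextIsRunning.
        global _shared_context
        with _lock:
            if _shared_context is None:
                conf = SparkConf().setAppName(app_name).setMaster('local[*]')
                _shared_context = SparkContext(conf=conf)
            return _shared_context

PySpark itself ships SparkContext.getOrCreate(conf) with the same semantics, so in Python the helper reduces to a one-liner.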
----------------------------------------------------------------------
XML: nosetests-xlangValidateRunner.xml
----------------------------------------------------------------------
XML: <https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 10 tests in 44.858s

FAILED (errors=6)

> Task :runners:spark:job-server:validatesCrossLanguageRunnerPythonUsingPython FAILED
> Task :runners:spark:job-server:validatesCrossLanguageRunnerCleanup
> Task :runners:spark:job-server:sparkJobServerCleanup

FAILURE: Build completed with 4 failures.

1: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':runners:spark:job-server:validatesCrossLanguageRunnerJavaUsingJava'.
> There were failing tests. See the report at: file://<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/runners/spark/job-server/build/reports/tests/validatesCrossLanguageRunnerJavaUsingJava/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
==============================================================================

2: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':runners:spark:job-server:validatesCrossLanguageRunnerJavaUsingPython'.
> There were failing tests. See the report at: file://<https://builds.apache.org/job/beam_PostCommit_XVR_Spark/ws/src/runners/spark/job-server/build/reports/tests/validatesCrossLanguageRunnerJavaUsingPython/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
==============================================================================

3: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':runners:spark:job-server:validatesCrossLanguageRunnerPythonUsingJava'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
==============================================================================

4: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task ':runners:spark:job-server:validatesCrossLanguageRunnerPythonUsingPython'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
==============================================================================

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 18m 11s
104 actionable tasks: 81 executed, 21 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/mq7kax3mjqehk

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure

---------------------------------------------------------------------
To unsubscribe, e-mail: builds-unsubscr...@beam.apache.org
For additional commands, e-mail: builds-h...@beam.apache.org