See 
<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/1/display/redirect>

Changes:


------------------------------------------
[...truncated 949.31 KB...]
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'worker_zone' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'zone' 
was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'files_to_stage' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'streaming' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'experiments' was already added
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
STOPPED
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
STARTING
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
RUNNING
root: DEBUG: org.apache.spark.SparkException: Only one SparkContext may be 
running in this JVM (see SPARK-2243). To ignore this error, set 
spark.driver.allowMultipleContexts = true. The currently running SparkContext 
was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:109)
org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
        at 
org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2483)
        at 
org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2479)
        at scala.Option.foreach(Option.scala:257)
        at 
org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2479)
        at 
org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2568)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:85)
        at 
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at 
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
        at 
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
        at 
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:109)
        at 
org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

root: ERROR: org.apache.spark.SparkException: Only one SparkContext may be 
running in this JVM (see SPARK-2243). To ignore this error, set 
spark.driver.allowMultipleContexts = true. The currently running SparkContext 
was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:109)
org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
FAILED
--------------------- >> end captured logging << ---------------------
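
For context on the SparkException captured above: the message itself points at the Spark property spark.driver.allowMultipleContexts. Purely as an illustration of that property (a hedged sketch, not the fix applied in this CI job, and not taken from the truncated log), setting it on a PySpark driver would look like:

    # Hypothetical sketch only: demonstrates the Spark property named in the
    # error message above. It does not reflect how the Beam Spark job server
    # in this build configures its context.
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("allow-multiple-contexts-demo")  # assumed app name
            .set("spark.driver.allowMultipleContexts", "true"))
    sc = SparkContext(conf=conf)

Note that this flag only suppresses the assertion quoted in the log; it does not address why a second SparkContext was being created in the same JVM.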

======================================================================
ERROR: test_filter (apache_beam.transforms.sql_test.SqlTransformTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/sdks/python/apache_beam/transforms/sql_test.py>", line 92, in test_filter
    assert_that(out, equal_to([(2, "bar", 1.414)]))
  File "<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/sdks/python/apache_beam/pipeline.py>", line 547, in __exit__
    self.run().wait_until_finish()
  File "<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/sdks/python/apache_beam/testing/test_pipeline.py>", line 114, in run
    state = result.wait_until_finish()
  File "<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/sdks/python/apache_beam/runners/portability/portable_runner.py>", line 583, in wait_until_finish
    raise self._runtime_exception
RuntimeError: Pipeline 
BeamApp-jenkins-0608233723-1f9c8164_445b31be-0c13-4d93-a642-ade65748b274 failed 
in state FAILED: org.apache.spark.SparkException: Only one SparkContext may be 
running in this JVM (see SPARK-2243). To ignore this error, set 
spark.driver.allowMultipleContexts = true. The currently running SparkContext 
was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:109)
org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
-------------------- >> begin captured logging << --------------------
apache_beam.utils.subprocess_server: INFO: Using pre-built snapshot at 
<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/sdks/java/extensions/sql/expansion-service/build/libs/beam-sdks-java-extensions-sql-expansion-service-2.23.0-SNAPSHOT.jar>
apache_beam.utils.subprocess_server: INFO: Starting service with ['java' '-jar' 
'<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/sdks/java/extensions/sql/expansion-service/build/libs/beam-sdks-java-extensions-sql-expansion-service-2.23.0-SNAPSHOT.jar>'
 '44533']
root: DEBUG: Waiting for grpc channel to be ready at localhost:44533.
apache_beam.utils.subprocess_server: INFO: Starting expansion service at 
localhost:44533
apache_beam.utils.subprocess_server: INFO:      beam:external:java:sql:v1: 
org.apache.beam.sdk.expansion.service.ExpansionService$ExternalTransformRegistrarLoader$$Lambda$1/1327763628@593634ad
apache_beam.utils.subprocess_server: INFO:      
beam:external:java:generate_sequence:v1: 
org.apache.beam.sdk.expansion.service.ExpansionService$ExternalTransformRegistrarLoader$$Lambda$1/1327763628@20fa23c1
root: DEBUG: Waiting for grpc channel to be ready at localhost:44533.
root: DEBUG: Waiting for grpc channel to be ready at localhost:44533.
root: DEBUG: Waiting for grpc channel to be ready at localhost:44533.
root: DEBUG: Waiting for grpc channel to be ready at localhost:44533.
root: DEBUG: Waiting for grpc channel to be ready at localhost:44533.
apache_beam.utils.subprocess_server: INFO: Jun 08, 2020 11:37:16 PM 
org.apache.beam.sdk.expansion.service.ExpansionService expand
apache_beam.utils.subprocess_server: INFO: INFO: Expanding 
'SqlTransform(beam:external:java:sql:v1)' with URN 'beam:external:java:sql:v1'
apache_beam.utils.subprocess_server: INFO: Jun 08, 2020 11:37:18 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
apache_beam.utils.subprocess_server: INFO: INFO: SQL:
apache_beam.utils.subprocess_server: INFO: SELECT `PCOLLECTION`.`id`, 
`PCOLLECTION`.`str`, `PCOLLECTION`.`flt`
apache_beam.utils.subprocess_server: INFO: FROM `beam`.`PCOLLECTION` AS 
`PCOLLECTION`
apache_beam.utils.subprocess_server: INFO: WHERE `PCOLLECTION`.`str` = 'bar'
apache_beam.utils.subprocess_server: INFO: Jun 08, 2020 11:37:18 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
apache_beam.utils.subprocess_server: INFO: INFO: SQLPlan>
apache_beam.utils.subprocess_server: INFO: LogicalProject(id=[$0], str=[$1], 
flt=[$2])
apache_beam.utils.subprocess_server: INFO:   LogicalFilter(condition=[=($1, 
'bar')])
apache_beam.utils.subprocess_server: INFO:     BeamIOSourceRel(table=[[beam, 
PCOLLECTION]])
apache_beam.utils.subprocess_server: INFO: 
apache_beam.utils.subprocess_server: INFO: Jun 08, 2020 11:37:18 PM 
org.apache.beam.sdk.extensions.sql.impl.CalciteQueryPlanner convertToBeamRel
apache_beam.utils.subprocess_server: INFO: INFO: BEAMPlan>
apache_beam.utils.subprocess_server: INFO: BeamCalcRel(expr#0..2=[{inputs}], 
expr#3=['bar':VARCHAR], expr#4=[=($t1, $t3)], proj#0..2=[{exprs}], 
$condition=[$t4])
apache_beam.utils.subprocess_server: INFO:   BeamIOSourceRel(table=[[beam, 
PCOLLECTION]])
apache_beam.utils.subprocess_server: INFO: 
root: DEBUG: Sending SIGINT to job_server
root: WARNING: Make sure that locally built Python SDK docker image has Python 
2.7 interpreter.
root: INFO: Using Python SDK docker image: 
apache/beam_python2.7_sdk:2.23.0.dev. If the image is not available at local, 
we will try to pull from hub.docker.com
root: WARNING: Make sure that locally built Python SDK docker image has Python 
2.7 interpreter.
root: INFO: Using Python SDK docker image: 
apache/beam_python2.7_sdk:2.23.0.dev. If the image is not available at local, 
we will try to pull from hub.docker.com
apache_beam.runners.portability.fn_api_runner.translations: INFO: 
==================== <function lift_combiners at 0x7f0bd86bd320> 
====================
apache_beam.runners.portability.fn_api_runner.translations: DEBUG: 22 [1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
apache_beam.runners.portability.fn_api_runner.translations: DEBUG: Stages: 
['ref_AppliedPTransform_Create/Impulse_3\n  
Create/Impulse:beam:transform:impulse:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_Create/FlatMap(<lambda at core.py:2623>)_4\n  
Create/FlatMap(<lambda at core.py:2623>):beam:transform:pardo:v1\n  must 
follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/AddRandomKeys_7\n  
Create/MaybeReshuffle/Reshuffle/AddRandomKeys:beam:transform:pardo:v1\n  must 
follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/Map(reify_timestamps)_9\n
  
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/Map(reify_timestamps):beam:transform:pardo:v1\n
  must follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey_10\n
  
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/GroupByKey:beam:transform:group_by_key:v1\n
  must follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/FlatMap(restore_timestamps)_11\n
  
Create/MaybeReshuffle/Reshuffle/ReshufflePerKey/FlatMap(restore_timestamps):beam:transform:pardo:v1\n
  must follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_Create/MaybeReshuffle/Reshuffle/RemoveRandomKeys_12\n  
Create/MaybeReshuffle/Reshuffle/RemoveRandomKeys:beam:transform:pardo:v1\n  
must follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_Create/Map(decode)_13\n  
Create/Map(decode):beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'external_2SqlTransform(beam:external:java:sql:v1)/BeamIOSourceRel_3/Convert.ConvertTransform/ParDo(Anonymous)/ParMultiDo(Anonymous)\n
  
SqlTransform(beam:external:java:sql:v1)/BeamIOSourceRel_3/Convert.ConvertTransform/ParDo(Anonymous)/ParMultiDo(Anonymous):beam:transform:pardo:v1\n
  must follow: \n  downstream_side_inputs: <unknown>', 
'external_2SqlTransform(beam:external:java:sql:v1)/BeamCalcRel_30/ParDo(Calc)/ParMultiDo(Calc)\n
  
SqlTransform(beam:external:java:sql:v1)/BeamCalcRel_30/ParDo(Calc)/ParMultiDo(Calc):beam:transform:pardo:v1\n
  must follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Create/Impulse_17\n  
assert_that/Create/Impulse:beam:transform:impulse:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Create/FlatMap(<lambda at 
core.py:2623>)_18\n  assert_that/Create/FlatMap(<lambda at 
core.py:2623>):beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Create/Map(decode)_20\n  
assert_that/Create/Map(decode):beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/WindowInto(WindowIntoFn)_21\n  
assert_that/WindowInto(WindowIntoFn):beam:transform:window_into:v1\n  must 
follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/ToVoidKey_22\n  
assert_that/ToVoidKey:beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Group/pair_with_0_24\n  
assert_that/Group/pair_with_0:beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Group/pair_with_1_25\n  
assert_that/Group/pair_with_1:beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Group/Flatten_26\n  
assert_that/Group/Flatten:beam:transform:flatten:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Group/GroupByKey_27\n  
assert_that/Group/GroupByKey:beam:transform:group_by_key:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Group/Map(_merge_tagged_vals_under_key)_28\n 
 assert_that/Group/Map(_merge_tagged_vals_under_key):beam:transform:pardo:v1\n  
must follow: \n  downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Unkey_29\n  
assert_that/Unkey:beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>', 
'ref_AppliedPTransform_assert_that/Match_30\n  
assert_that/Match:beam:transform:pardo:v1\n  must follow: \n  
downstream_side_inputs: <unknown>']
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'job_name' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'runner' 
was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'temp_location' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'environment_cache_millis' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'job_endpoint' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'output_executable_path' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'sdk_worker_parallelism' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'dataflow_kms_key' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'enable_streaming_engine' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'project' 
was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'worker_region' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'worker_zone' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 'zone' 
was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'files_to_stage' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'streaming' was already added
apache_beam.runners.portability.portable_runner: DEBUG: Runner option 
'experiments' was already added
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
STOPPED
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
STARTING
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
RUNNING
root: DEBUG: org.apache.spark.SparkException: Only one SparkContext may be 
running in this JVM (see SPARK-2243). To ignore this error, set 
spark.driver.allowMultipleContexts = true. The currently running SparkContext 
was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:109)
org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
        at 
org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2483)
        at 
org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$2.apply(SparkContext.scala:2479)
        at scala.Option.foreach(Option.scala:257)
        at 
org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2479)
        at 
org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:2568)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:85)
        at 
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
        at 
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
        at 
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
        at 
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:109)
        at 
org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
        at 
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

root: ERROR: org.apache.spark.SparkException: Only one SparkContext may be 
running in this JVM (see SPARK-2243). To ignore this error, set 
spark.driver.allowMultipleContexts = true. The currently running SparkContext 
was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:58)
org.apache.beam.runners.spark.translation.SparkContextFactory.createSparkContext(SparkContextFactory.java:98)
org.apache.beam.runners.spark.translation.SparkContextFactory.getSparkContext(SparkContextFactory.java:64)
org.apache.beam.runners.spark.SparkPipelineRunner.run(SparkPipelineRunner.java:109)
org.apache.beam.runners.jobsubmission.JobInvocation.runPipeline(JobInvocation.java:83)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)
apache_beam.runners.portability.portable_runner: INFO: Job state changed to 
FAILED
--------------------- >> end captured logging << ---------------------
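
For anyone reproducing test_filter locally, the traceback and the expansion-service SQL captured above correspond roughly to the following pipeline. This is a hedged sketch rather than the exact test source: the SimpleRow schema type, the pipeline options, and the job endpoint are assumptions filled in around what the log shows.

    # Approximate reconstruction of the failing test_filter scenario.
    # Assumptions (not taken verbatim from this log): the SimpleRow schema,
    # the PortableRunner options, and the job_endpoint address.
    import typing

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.testing.util import assert_that, equal_to
    from apache_beam.transforms.sql import SqlTransform

    SimpleRow = typing.NamedTuple(
        'SimpleRow', [('id', int), ('str', str), ('flt', float)])
    beam.coders.registry.register_coder(SimpleRow, beam.coders.RowCoder)

    options = PipelineOptions([
        '--runner=PortableRunner',
        '--job_endpoint=localhost:8099',  # assumed local Flink job server
        '--environment_type=LOOPBACK',
    ])

    with beam.Pipeline(options=options) as p:
        out = (
            p
            | beam.Create([
                SimpleRow(1, 'foo', 3.14),
                SimpleRow(2, 'bar', 1.414)]).with_output_types(SimpleRow)
            | SqlTransform(
                "SELECT id, str, flt FROM PCOLLECTION WHERE str = 'bar'"))
        # Matches the assertion shown at sql_test.py line 92 in the traceback.
        assert_that(out, equal_to([(2, 'bar', 1.414)]))

The failure itself comes from the job server side (the SparkContext collision above), not from this pipeline shape.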

----------------------------------------------------------------------
XML: nosetests-xlangSqlValidateRunner.xml
----------------------------------------------------------------------
XML: 
<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/sdks/python/nosetests.xml>
----------------------------------------------------------------------
Ran 6 tests in 106.714s

FAILED (errors=2)

> Task :runners:flink:1.10:job-server:validatesCrossLanguageRunnerPythonUsingSql FAILED

> Task :runners:flink:1.10:job-server:flinkJobServerCleanup
Stopping job server pid: 814.

FAILURE: Build completed with 4 failures.

1: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task 
':runners:flink:1.10:job-server:validatesCrossLanguageRunnerJavaUsingJava'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/runners/flink/1.10/job-server/build/reports/tests/validatesCrossLanguageRunnerJavaUsingJava/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.
==============================================================================

2: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task 
':runners:flink:1.10:job-server:validatesCrossLanguageRunnerJavaUsingPython'.
> There were failing tests. See the report at: file://<https://ci-beam.apache.org/job/beam_PostCommit_XVR_Flink/ws/src/runners/flink/1.10/job-server/build/reports/tests/validatesCrossLanguageRunnerJavaUsingPython/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.
==============================================================================

3: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task 
':runners:flink:1.10:job-server:validatesCrossLanguageRunnerPythonUsingJava'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.
==============================================================================

4: Task failed with an exception.
-----------
* What went wrong:
Execution failed for task 
':runners:flink:1.10:job-server:validatesCrossLanguageRunnerPythonUsingSql'.
> Process 'command 'sh'' finished with non-zero exit value 1

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.
==============================================================================

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See 
https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 22m 47s
139 actionable tasks: 104 executed, 33 from cache, 2 up-to-date

Publishing build scan...
https://gradle.com/s/ihfj3xl4zor7g

Build step 'Invoke Gradle script' changed build result to FAILURE
Build step 'Invoke Gradle script' marked build as failure
