[
https://issues.apache.org/jira/browse/BEAM-10083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17157745#comment-17157745
]
Ismaël Mejía commented on BEAM-10083:
-------------------------------------
Sorry, I missed this message. Spark 2.x does NOT support Java 11, so we cannot be
Java 11 compatible until Spark 3 support is merged in.
Ongoing work on Java 11 and Spark 3 support is happening in BEAM-7093.
Current status: if users provide their own Spark 3 dependency, things do work OK
(with some minor test errors) for both the Classic and Portable runners. However,
the Structured Streaming runner has compatibility issues because it relies on
unstable Spark APIs and should have a different implementation for Spark 3.
I have ongoing work on splitting the Spark runner module into two (2.x and 3.x);
with that in place we could include the 3.x module only in the Java 11
precommit. I will bring the info back here when that happens.
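For context on the failure in the log below: "Unsupported class file major version
55" means Java 11 bytecode (class file major version 55; 52 = Java 8) is being fed
to the ASM 6 that Spark 2.x shades under org.apache.xbean.asm6, which predates
final Java 11 support. Spark's ClosureCleaner runs every closure through that
ClassReader, so any class compiled by JDK 11 makes it throw. A minimal sketch that
reproduces the same exception outside Spark, assuming a plain (unshaded) ASM 6.x
on the classpath and an illustrative class name:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.objectweb.asm.ClassReader;

public class Asm6VersionCheck {
  public static void main(String[] args) throws IOException {
    // Load this class's own bytecode; when compiled with JDK 11 its class
    // file major version is 55.
    try (InputStream in =
        Asm6VersionCheck.class.getResourceAsStream("Asm6VersionCheck.class")) {
      // ASM 6.x validates the class file version in the constructor and throws
      // java.lang.IllegalArgumentException: Unsupported class file major version 55
      new ClassReader(in);
    }
  }
}
{code}
Spark 3.x ships a newer shaded ASM that accepts Java 11 class files, which is why
the Spark 3 work above is the path to fixing this.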
> Spark Runner Tests failing [Java 11]
> ------------------------------------
>
> Key: BEAM-10083
> URL: https://issues.apache.org/jira/browse/BEAM-10083
> Project: Beam
> Issue Type: Sub-task
> Components: runner-spark
> Reporter: Pawel Pasterz
> Priority: P2
>
> Gradle task *_:runners:spark:test_* fails during the Java 11 Precommit job
>
> Example stack trace:
> {code:java}
> > Task :runners:spark:test
> 20/05/26 07:26:31 INFO org.apache.beam.runners.spark.metrics.MetricsAccumulator: Instantiated metrics accumulator: {
>   "metrics": {
>   }
> }
> org.apache.beam.runners.spark.structuredstreaming.StructuredStreamingPipelineStateTest > testBatchPipelineRunningState STANDARD_ERROR
> 20/05/26 07:26:32 INFO org.apache.beam.runners.spark.structuredstreaming.SparkStructuredStreamingRunner: *** SparkStructuredStreamingRunner is based on spark structured streaming framework and is no more based on RDD/DStream API. See https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
> It is still experimental, its coverage of the Beam model is partial. ***
> org.apache.beam.runners.spark.SparkPortableExecutionTest > testExecution STANDARD_ERROR
> 20/05/26 07:26:33 WARN org.apache.beam.runners.spark.translation.GroupNonMergingWindowsFunctions: Either coder LengthPrefixCoder(ByteArrayCoder) or GlobalWindow$Coder is not consistent with equals. That might cause issues on some runners.
> org.apache.beam.runners.spark.structuredstreaming.translation.batch.FlattenTest > testFlatten STANDARD_ERROR
> 20/05/26 07:26:34 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> org.apache.beam.runners.spark.SparkPortableExecutionTest > testExecution STANDARD_ERROR
> 20/05/26 07:26:34 ERROR org.apache.beam.runners.jobsubmission.JobInvocation: Error during job invocation fakeId.
> java.lang.IllegalArgumentException: Unsupported class file major version 55
>     at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:166)
>     at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:148)
>     at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:136)
>     at org.apache.xbean.asm6.ClassReader.<init>(ClassReader.java:237)
>     at org.apache.spark.util.ClosureCleaner$.getClassReader(ClosureCleaner.scala:49)
>     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:517)
>     at org.apache.spark.util.FieldAccessFinder$$anon$3$$anonfun$visitMethodInsn$2.apply(ClosureCleaner.scala:500)
>     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
>     at scala.collection.mutable.HashMap$$anon$1$$anonfun$foreach$2.apply(HashMap.scala:134)
>     at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:236)
>     at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>     at scala.collection.mutable.HashMap$$anon$1.foreach(HashMap.scala:134)
>     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>     at org.apache.spark.util.FieldAccessFinder$$anon$3.visitMethodInsn(ClosureCleaner.scala:500)
>     at org.apache.xbean.asm6.ClassReader.readCode(ClassReader.java:2175)
>     at org.apache.xbean.asm6.ClassReader.readMethod(ClassReader.java:1238)
>     at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:631)
>     at org.apache.xbean.asm6.ClassReader.accept(ClassReader.java:355)
>     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:307)
>     at org.apache.spark.util.ClosureCleaner$$anonfun$org$apache$spark$util$ClosureCleaner$$clean$14.apply(ClosureCleaner.scala:306)
>     at scala.collection.immutable.List.foreach(List.scala:392)
>     at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:306)
>     at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:162)
>     at org.apache.spark.SparkContext.clean(SparkContext.scala:2326)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2100)
>     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
>     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:990)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>     at org.apache.spark.rdd.RDD.withScope(RDD.scala:385)
>     at org.apache.spark.rdd.RDD.collect(RDD.scala:989)
>     at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:361)
>     at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
>     at org.apache.beam.runners.spark.translation.BoundedDataset.getBytes(BoundedDataset.java:76)
>     at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.broadcastSideInput(SparkBatchPortablePipelineTranslator.java:354)
>     at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.broadcastSideInputs(SparkBatchPortablePipelineTranslator.java:338)
>     at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.translateExecutableStage(SparkBatchPortablePipelineTranslator.java:216)
>     at org.apache.beam.runners.spark.translation.SparkBatchPortablePipelineTranslator.translate(SparkBatchPortablePipelineTranslator.java:138)
>     at org.apache.beam.runners.spark.SparkPipelineRunner.lambda$run$1(SparkPipelineRunner.java:122)
>     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> org.apache.beam.runners.spark.SparkPortableExecutionTest > testExecution FAILED
>     java.lang.AssertionError: expected:<DONE> but was:<FAILED>
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.junit.Assert.failNotEquals(Assert.java:835)
>     at org.junit.Assert.assertEquals(Assert.java:120)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at org.apache.beam.runners.spark.SparkPortableExecutionTest.testExecution(SparkPortableExecutionTest.java:159)
> org.apache.beam.runners.spark.SparkPortableExecutionTest > testExecStageWithMultipleOutputs STANDARD_ERROR
> 20/05/26 07:26:35 INFO org.apache.beam.runners.jobsubmission.JobInvocation: Starting job invocation testExecStageWithMultipleOutputs
> 20/05/26 07:26:36 INFO org.apache.beam.runners.spark.SparkPipelineRunner: PipelineOptions.filesToStage was not specified. Defaulting to files from the classpath
> 20/05/26 07:26:36 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Will stage 289 files. (Enable logging at DEBUG level to see which files will be staged.)
> 20/05/26 07:26:36 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Running job testExecStageWithMultipleOutputs on Spark master local[4]
> 20/05/26 07:26:37 INFO org.apache.beam.runners.spark.SparkPipelineRunner: Job testExecStageWithMultipleOutputs: Pipeline translated successfully. Computing outputs
> Gradle Test Executor 114 started executing tests.
> {code}
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)