Hello there,

I didn't get any response on the user list for the following question, so
I thought people on the dev list might be able to help:

I upgraded from Spark 2.2.1 to Spark 2.3.1, ran my streaming workload, and
got an error (java.lang.AbstractMethodError) I had never seen before; see
the exception stack in (a) below.

Does anyone know whether Spark 2.3.1 works well with the Kafka
spark-streaming-kafka-0-10 connector?

The Spark Kafka integration page doesn't mention any such limitation:
https://spark.apache.org/docs/2.3.1/streaming-kafka-integration.html

But this discussion suggests there is indeed an issue when upgrading to
Spark 2.3.1:
https://stackoverflow.com/questions/49180931/abstractmethoderror-creating-kafka-stream

I also rebuilt the workload against the Spark 2.3.1 jars (see (b) below),
but it doesn't seem to help.
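
For reference, the workload sets up the stream via structured streaming,
roughly like the sketch below (simplified; the app name, broker, and topic
are placeholders, not my actual config). This is the code path that ends up
in KafkaSourceProvider.createSource in the stack in (a):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("kafka-stream-repro")  // placeholder name
  .getOrCreate()

// format("kafka") resolves to org.apache.spark.sql.kafka010.KafkaSourceProvider,
// which is where the AbstractMethodError in (a) is thrown
val df = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")  // placeholder
  .option("subscribe", "input-topic")                 // placeholder
  .load()

val query = df.writeStream
  .format("console")
  .outputMode("append")
  .start()

query.awaitTermination()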

It would be great if anyone could share any insights here.

Thanks!

Peter

(a) the exception
Exception in thread "stream execution thread for [id = 5adae836-268a-4ebf-adc4-e3cc9fbe5acf, runId = 70e78d5c-665e-4c6f-a0cc-41a56e488e30]" java.lang.AbstractMethodError
        at org.apache.spark.internal.Logging$class.initializeLogIfNecessary(Logging.scala:99)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.initializeLogIfNecessary(KafkaSourceProvider.scala:369)
        at org.apache.spark.internal.Logging$class.log(Logging.scala:46)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.log(KafkaSourceProvider.scala:369)
        at org.apache.spark.internal.Logging$class.logDebug(Logging.scala:58)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.logDebug(KafkaSourceProvider.scala:369)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$ConfigUpdater.set(KafkaSourceProvider.scala:439)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider$.kafkaParamsForDriver(KafkaSourceProvider.scala:394)
        at org.apache.spark.sql.kafka010.KafkaSourceProvider.createSource(KafkaSourceProvider.scala:90)
        at org.apache.spark.sql.execution.datasources.DataSource.createSource(DataSource.scala:277)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:80)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1$$anonfun$applyOrElse$1.apply(MicroBatchExecution.scala:77)
        at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:194)
        at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:80)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:77)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:75)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
        at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:267)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:266)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:256)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:75)
        at org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:61)
        at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:265)
        at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
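
One diagnostic I plan to try (a sketch, assuming spark-shell runs with the
same classpath as the workload): ask the JVM which jar the provider class
is actually loaded from, to spot a stale 2.2.1 connector jar:

// paste into spark-shell; prints the jar backing KafkaSourceProvider
// (getCodeSource can be null for bootstrap classes, but shouldn't be here)
val cls = Class.forName("org.apache.spark.sql.kafka010.KafkaSourceProvider")
println(cls.getProtectionDomain.getCodeSource.getLocation)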

(b) the build script update:

[pgl@datanode20 SparkStreamingBenchmark-RemoteConsumer-Spk231]$ diff build.sbt spk211-build.sbt.original
10,11c10,11
< libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.3.1"
< libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.3.1"
---
> libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.2.1"
> libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.1"
[pgl@datanode20 SparkStreamingBenchmark-RemoteConsumer-Spk231]$
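
If the root cause is a jar mismatch (which the stack in (a) hints at, since
it goes through org.apache.spark.sql.kafka010.KafkaSourceProvider, i.e. the
spark-sql-kafka-0-10 artifact), I assume the Kafka connector dependency
needs to move to 2.3.1 as well, not just spark-sql and spark-core. A sketch
of what I believe the aligned dependency block would look like (unverified):

libraryDependencies += "org.apache.spark" % "spark-sql_2.11" % "2.3.1"
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.3.1"
// Kafka source for structured streaming, kept at the same Spark version
libraryDependencies += "org.apache.spark" % "spark-sql-kafka-0-10_2.11" % "2.3.1"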
