OK, that is good.

Yours is basically simple streaming with Kafka (the publishing topic) and
Spark Streaming. Use the following as a blueprint:

// Create a local StreamingContext with two working threads and a batch
// interval of 2 seconds
val sparkConf = new SparkConf().
             setAppName("CEP_streaming").
             setMaster("local[2]").
             set("spark.executor.memory", "4G").
             set("spark.cores.max", "2").
             set("spark.streaming.concurrentJobs", "2").
             set("spark.driver.allowMultipleContexts", "true").
             set("spark.hadoop.validateOutputSpecs", "false")
val ssc = new StreamingContext(sparkConf, Seconds(2))
ssc.checkpoint("checkpoint")
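// note: checkpointing is required for the windowed count further down; the
// argument to ssc.checkpoint() is a directory path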
val kafkaParams = Map[String, String](
             "bootstrap.servers" -> "rhes564:9092",
             "schema.registry.url" -> "http://rhes564:8081",
             "zookeeper.connect" -> "rhes564:2181",
             "group.id" -> "CEP_streaming")
val topics = Set("newtopic")
val dstream = KafkaUtils.createDirectStream[String, String, StringDecoder,
             StringDecoder](ssc, kafkaParams, topics)
dstream.cache()

val lines = dstream.map(_._2)
val price = lines.map(_.split(',').view(2)).map(_.toFloat)
// window length - the duration of the window. This must be a multiple of the
// batch interval n used in StreamingContext(sparkConf, Seconds(n))
val windowLength = 4
// sliding interval - the interval at which the window operation is performed,
// i.e. how often the data collected over the window is aggregated
val slidingInterval = 2  // keep this the same as the batch interval for
                         // continuous streaming; you are aggregating the data
                         // that you collect over the batch window
val countByValueAndWindow = price.filter(_ > 95.0).
             countByValueAndWindow(Seconds(windowLength), Seconds(slidingInterval))
countByValueAndWindow.print()
//
ssc.start()
ssc.awaitTermination()
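
As a side note, the split(',') above assumes each Kafka message value is a
comma-separated line whose third field is the price. A minimal sketch of that
assumption (the sample line and field layout here are hypothetical; only the
position of the price matters):

// hypothetical message layout: ticker, timestamp, price
val sample = "IBM,2016-06-07 10:58:00,96.15"
val samplePrice = sample.split(',').view(2).toFloat  // 96.15, same extraction as lines above
// a value above 95.0 would be counted by countByValueAndWindow in the 4-second window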

HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 7 June 2016 at 10:58, Dominik Safaric <dominiksafa...@gmail.com> wrote:

> Dear Mich,
>
> Thank you for the reply.
>
> By running the following command in the command line:
>
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic
> <topic_name> --from-beginning
>
> I do indeed retrieve all messages of a topic.
>
> Any indication as to what might cause the issue?
>
> An important note: I’m using the default configurations of both Kafka and
> Zookeeper.
>
> On 07 Jun 2016, at 11:39, Mich Talebzadeh <mich.talebza...@gmail.com>
> wrote:
>
> I assume your Zookeeper is up and running.
>
> Can you confirm that you are getting topics from Kafka independently, for
> example on the command line:
>
> ${KAFKA_HOME}/bin/kafka-console-consumer.sh --zookeeper rhes564:2181
> --from-beginning --topic newtopic
>
>
>
>
>
> Dr Mich Talebzadeh
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 7 June 2016 at 10:06, Dominik Safaric <dominiksafa...@gmail.com> wrote:
>
>> While trying to integrate Kafka into Spark, the following exception
>> occurs:
>>
>> org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
>> org.apache.spark.SparkException: Couldn't find leader offsets for
>> Set([*<topicName>*,0])
>>         at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
>>         at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
>>         at scala.util.Either.fold(Either.scala:97)
>>         at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:365)
>>         at org.apache.spark.streaming.kafka.KafkaUtils$.getFromOffsets(KafkaUtils.scala:222)
>>         at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:484)
>>         at org.mediasoft.spark.Driver$.main(Driver.scala:42)
>>         at .<init>(<console>:11)
>>         at .<clinit>(<console>)
>>         at .<init>(<console>:7)
>>         at .<clinit>(<console>)
>>         at $print(<console>)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:483)
>>         at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:734)
>>         at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:983)
>>         at scala.tools.nsc.interpreter.IMain.loadAndRunReq$1(IMain.scala:573)
>>         at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:604)
>>         at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:568)
>>         at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:760)
>>         at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:805)
>>         at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:717)
>>         at scala.tools.nsc.interpreter.ILoop.processLine$1(ILoop.scala:581)
>>         at scala.tools.nsc.interpreter.ILoop.innerLoop$1(ILoop.scala:588)
>>         at scala.tools.nsc.interpreter.ILoop.loop(ILoop.scala:591)
>>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:882)
>>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
>>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:837)
>>         at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
>>         at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:837)
>>         at scala.tools.nsc.interpreter.ILoop.main(ILoop.scala:904)
>>         at org.jetbrains.plugins.scala.compiler.rt.ConsoleRunner.main(ConsoleRunner.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:483)
>>         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
>>
>> As for the Spark configuration:
>>
>>     val conf: SparkConf = new SparkConf().setAppName("AppName").setMaster("local[2]")
>>
>>     val confParams: Map[String, String] = Map(
>>       "metadata.broker.list" -> "<IP_ADDRESS>:9092",
>>       "auto.offset.reset" -> "largest"
>>     )
>>
>>     val topics: Set[String] = Set("<topic_name>")
>>
>>     val context: StreamingContext = new StreamingContext(conf, Seconds(1))
>>     val kafkaStream = KafkaUtils.createDirectStream(context, confParams, topics)
>>
>>     kafkaStream.foreachRDD(rdd => {
>>       rdd.collect().foreach(println)
>>     })
>>
>>     context.awaitTermination()
>>     context.start()
>>
>> The Kafka topic does exist, the Kafka server is up and running, and I am
>> able to produce messages to that particular topic using the Confluent REST
>> API.
>>
>> What might the problem actually be?
>>
>>
>>
>>
>>
>>
>
>
