You cannot have two active SparkContexts in the same JVM at the same time.
Just create one SparkContext and then use it for both purposes.
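
For example, something like this (a minimal sketch reusing the conf,
batch_interval, and checkpoint path from your code, not a drop-in
replacement):

SparkConf conf = new SparkConf().setAppName("streamer").setMaster("local[2]");

// One JavaSparkContext for the whole application; batch work such as
// NaiveBayes.train(sc.parallelize(training_set).rdd()) uses this context.
JavaSparkContext sc = new JavaSparkContext(conf);

// Build the streaming context on top of the same SparkContext instead of
// constructing a second context from another SparkConf.
JavaStreamingContext ssc = new JavaStreamingContext(sc, new Duration(batch_interval));
ssc.checkpoint("/tmp/spark/checkpoint");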

TD

On Fri, Feb 6, 2015 at 8:49 PM, VISHNU SUBRAMANIAN <johnfedrickena...@gmail.com> wrote:

> Can you try creating just a single SparkContext and then running your code?
> If you want to use it for streaming, pass the same SparkContext object
> instead of the conf.
>
> Note: Instead of replying just to me, please use reply-all so that the
> post is visible to the community. That way you can expect quicker
> responses.
>
>
> On Fri, Feb 6, 2015 at 6:09 AM, aanilpala <aanilp...@gmail.com> wrote:
>
>> I have the following code:
>>
>> SparkConf conf = new SparkConf().setAppName("streamer").setMaster("local[2]");
>> conf.set("spark.driver.allowMultipleContexts", "true");
>> JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(batch_interval));
>> ssc.checkpoint("/tmp/spark/checkpoint");
>>
>> SparkConf conf2 = new SparkConf().setAppName("classifier").setMaster("local[1]");
>> conf2.set("spark.driver.allowMultipleContexts", "true");
>> JavaSparkContext sc = new JavaSparkContext(conf);
>>
>> JavaReceiverInputDStream<String> stream = ssc.socketTextStream("localhost", 9999);
>>
>> // String to Tuple3 conversion
>> JavaDStream<Tuple3<Long, String, String>> tuple_stream =
>>         stream.map(new Function<String, Tuple3<Long, String, String>>() {
>>                 ... });
>>
>> JavaPairDStream<Integer, DictionaryEntry> raw_dictionary_stream =
>>         tuple_stream.filter(new Function<Tuple3<Long, String, String>, Boolean>() {
>>
>>                 @Override
>>                 public Boolean call(Tuple3<Long, String, String> tuple) throws Exception {
>>                         if ((tuple._1() / Time.scaling_factor % training_interval) > training_dur)
>>                                 NaiveBayes.train(sc.parallelize(training_set).rdd());
>>
>>                         return true;
>>                 }
>>
>>         }).
>>
>> I am working on a text mining project and I want to use MLlib's NaiveBayes
>> classifier to classify some stream items. So I have two Spark contexts, one
>> of which is a streaming context. The call to NaiveBayes.train causes the
>> following exception.
>>
>> Any ideas?
>>
>>
>> Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.ClassCastException: org.apache.spark.SparkContext$$anonfun$runJob$4 cannot be cast to org.apache.spark.ShuffleDependency
>>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:60)
>>         at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>>         at org.apache.spark.scheduler.Task.run(Task.scala:56)
>>         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:745)
>>
>> Driver stacktrace:
>>         at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
>>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
>>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
>>         at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>>         at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
>>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
>>         at scala.Option.foreach(Option.scala:236)
>>         at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
>>         at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
>>         at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
>>         at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
>>         at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>>         at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>>         at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>>         at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>>         at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
>>         at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>         at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>>         at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>         at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>>
>
