Hi,

If I delete the Spark libraries and copy my Deep Spark assembly into the 
interpreter/spark folder, the Spark interpreter does not work at all; the 
interpreter itself does not even start for the Spark shell. And if I keep the 
Spark libraries and copy the Deep Spark assembly alongside them, I hit the same 
serialization error I mentioned earlier.
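
For reference, here is roughly what I ran in each attempt; <deep-spark-assembly>.jar is a placeholder for my actual Deep Spark assembly jar:

# attempt 1: replace Zeppelin's Spark jars with the Deep assembly (interpreter does not start)
rm zeppelin/interpreter/spark/*spark*
cp /path/to/<deep-spark-assembly>.jar zeppelin/interpreter/spark/

# attempt 2: keep Zeppelin's Spark jars and add the Deep assembly (same old serialization error)
cp /path/to/<deep-spark-assembly>.jar zeppelin/interpreter/spark/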


Thanks,

Maruthi.


Maruthi Donthi
Java Developer
250 Parkway Drive Suite 150
Lincolnshire, Illinois 60069
203-218-6949(M)
[email protected]
http://www.aeverie.com/
________________________________
From: Kevin Kim (Sangwoo) <[email protected]>
Sent: Wednesday, February 25, 2015 5:48 AM
To: [email protected]
Subject: Re: Zeppelin with Stratio DeepSpark

Hi Maruthi,

What happens if you delete all Spark-related dependencies in Zeppelin with
rm zeppelin/interpreter/spark/*spark*
and copy 'your' Spark assembly jar into zeppelin/interpreter/spark/?
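Something like this, assuming the assembly sits under your spark.home (/opt/stratio/deep); the lib/ subdirectory and jar name pattern are guesses, so adjust them to your Deep distribution:

rm zeppelin/interpreter/spark/*spark*
# path and jar pattern below are assumptions based on a typical Spark layout
cp /opt/stratio/deep/lib/spark-assembly-*.jar zeppelin/interpreter/spark/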
I think JL is right: the version, or more exactly the build, might be the 
problem.

Kevin

On Wed, Feb 25, 2015 at 2:09 AM [email protected] <[email protected]> wrote:

Hi,

I am running an external Spark cluster provided by Stratio Deep Spark. My Spark 
cluster details are:

spark.master = spark://averie001-edt-loc:7077

SPARK_EXECUTOR_URI="/root/deep_spark1.1.1_alljars/spark-deep-distribution-0.6.3.tgz"

spark.home = /opt/stratio/deep

spark.repl.class.uri=http://10.0.9.13:55276
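
For completeness, the relevant part of my conf/zeppelin-env.sh looks roughly like this (a sketch; the variable names follow the Zeppelin template, and the values are the ones above):

# sketch of conf/zeppelin-env.sh; variable names per the Zeppelin template
export MASTER=spark://averie001-edt-loc:7077   # spark.master above
export SPARK_HOME=/opt/stratio/deep            # spark.home above
export SPARK_EXECUTOR_URI="/root/deep_spark1.1.1_alljars/spark-deep-distribution-0.6.3.tgz"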


I am running a Spark cluster of version 1.1.1, so I built Zeppelin with the 
same version, and I still get that serialization error.


Any help on this issue would be appreciated.



Thanks,


Maruthi Donthi
Java Developer
250 Parkway Drive Suite 150
Lincolnshire, Illinois 60069
203-218-6949(M)
[email protected]
http://www.aeverie.com/
________________________________
From: Jongyoul Lee <[email protected]>
Sent: Monday, February 23, 2015 7:09 PM
To: [email protected]
Subject: Re: Zeppelin with Stratio DeepSpark

Hi,

Do you use an external cluster? What kind? I ran into a similar serialization 
error when I tested Spark on a Mesos cluster. My problem was a version issue: 
the versions of the Spark driver and of the executors, which I set via 
spark.executor.uri, were different. Could you please let me know your cluster 
environment?
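
A quick way to compare the two sides is to look at the assembly jar names, since the Spark version is encoded in them. This is only a sketch; the paths below are guesses based on a typical Spark layout:

# driver side: the assembly that the driver (Zeppelin) loads
ls $SPARK_HOME/lib/ | grep -i assembly
# executor side: what the spark.executor.uri tarball actually ships
tar -tzf /path/to/your-spark-distribution.tgz | grep -i assembly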

Regards,
JL

On Tue, Feb 24, 2015 at 7:07 AM, [email protected] <[email protected]> wrote:

Hi,

I am using Zeppelin to integrate with DeepSparkContext. I was able to build 
Zeppelin against an independent Spark cluster of version 1.1.1 and set the 
Spark master URL in conf/zeppelin-env.sh. Using the same procedure, I am trying 
to integrate Zeppelin with DeepSparkContext, where Stratio's DeepSparkContext 
internally provides the SparkContext and creates the Spark cluster. I have set 
Spark's master URL in zeppelin-env.sh, the build succeeds, the notebook comes 
up, and sc.version works: if I open a notebook and type sc.version, I get 
1.1.1. So Scala is working, but if I run any RDD or Spark operations like the 
following, I run into trouble.


val bankText99 = sc.textFile("/home/dev004/try/Zeppelin_dev/bank/bank-full.csv")

bankText99.count


Here are my logs:


bankText99: org.apache.spark.rdd.RDD[String] = /home/dev004/try/Zeppelin_dev/bank/bank-full.csv MappedRDD[3] at textFile at <console>:19
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 5, averie001-edt-loc): java.lang.IllegalStateException: unread block data
        java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2421)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1382)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
        org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:160)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


I don't know what is happening. I tried to replace the SparkContext with a 
DeepSparkContext in the code, but then I get a lot of other errors. Please give 
me some help on this; I have been stuck on it for a month.


Looking forward to your quick support.



Maruthi Donthi
Java Developer
250 Parkway Drive Suite 150
Lincolnshire, Illinois 60069
203-218-6949(M)
[email protected]
http://www.aeverie.com/



--
이종열, Jongyoul Lee, 李宗烈
http://madeng.net
