Hi,
I installed Zeppelin some time ago, but it always failed on my server
cluster. Then I came across z-management and managed to install it
successfully on my server. But when I want to read a file from HDFS,
like:

sc.textFile("hdfs://llscluster/tmp/jzyresult/part-04093").count()


it throws this error on my cluster:

Job aborted due to stage failure: Task 15 in stage 6.0 failed 4 times,
most recent failure: Lost task 15.3 in stage 6.0 (TID 386, lls7):
java.io.EOFException

When I switch the interpreter to local mode, it reads the HDFS file
successfully. My cluster runs Spark 1.3.0 on Hadoop 2.0.0-CDH4.5.0, but
the install options only offer Spark 1.3.0 with Hadoop 2.0.0-CDH4.7.0.
Could this version mismatch be the cause of the HDFS read failure?
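
In case it helps, here is a quick sanity check I could run in a
notebook paragraph to see which versions the interpreter actually
loads (just a sketch, assuming sc is exposed as usual by the Zeppelin
Spark interpreter):

// Spark version the interpreter is running, e.g. 1.3.0
println(sc.version)
// Hadoop client version bundled with Zeppelin; an EOFException on HDFS
// reads often points at this differing from the cluster's Hadoop version
println(org.apache.hadoop.util.VersionInfo.getVersion)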
Looking forward to your reply!
Thank you!
JZY
