Hi,

        Installing Zeppelin with Z-Manager has not worked out for me, and my previous
attempts to install Zeppelin manually have all failed; I have tried many times.
My cluster runs Spark 1.3.0 and Hadoop 2.0.0-cdh4.5.0, and the deployment mode is standalone.
        I am going to try the manual installation again now, so could you please check my
steps:

1. git clone the repository from GitHub.
2. mvn clean package
3. mvn install -DskipTests -Dspark.version=1.3.0 -Dhadoop.version=2.0.0-cdh4.5.0
(Does Zeppelin support CDH 4.5.0? And do I need to point to a custom-built Spark, e.g. -Dspark.version=1.1.0-Custom?)
4. Set my master to spark://...:7077 (the exact commands I plan to run are sketched below this list).
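
To be concrete, this is what I plan to run for steps 2-4. It is only my reading of the install page, so please correct me if any flag or file is wrong; the master URL below is a placeholder since I have left out the real host:

    # build against my cluster's versions (same flags as step 3 above)
    mvn clean package -DskipTests -Dspark.version=1.3.0 -Dhadoop.version=2.0.0-cdh4.5.0

    # step 4: I assume the master is set in conf/zeppelin-env.sh
    # (copied from conf/zeppelin-env.sh.template)
    export MASTER=spark://...:7077   # my standalone master, host elided

    # then start the daemon
    bin/zeppelin-daemon.sh start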
Is that everything, or have I missed something? Please let me know.
thanks
jzy

> On Jul 21, 2015, at 5:48 PM, Alexander Bezzubov <abezzu...@nflabs.com> wrote:
> 
> Hi,
> 
> thank you for your interest in the project!
> 
> It seems like the best way to get Zeppelin up and running in your case
> would be to build it manually with relevant Spark/Hadoop options as
> described here http://zeppelin.incubator.apache.org/docs/install/install.html
> 
> Please, let me know if that helps.
> 
> --
> BR,
> Alex
> 
> On Tue, Jul 21, 2015 at 11:35 AM, 江之源 <jiangzhiy...@liulishuo.com> wrote:
>> hi
>> I installed Zeppelin some time ago, but it always failed on my server
>> cluster. Then I came across Z-Manager by chance, and installing with it on
>> my server succeeded. But when I want to read an HDFS file like:
>> 
>> sc.textFile("hdfs://llscluster/tmp/jzyresult/part-04093").count()
>> 
>> 
>> it throws this error on my cluster: Job aborted due to stage failure: Task 15
>> in stage 6.0 failed 4 times, most recent failure: Lost task 15.3 in stage
>> 6.0 (TID 386, lls7): java.io.EOFException
>> 
>> When I switch it to local mode, it can read the HDFS file successfully.
>> My cluster is Spark 1.3.0 and Hadoop 2.0.0-CDH4.5.0, but the install options
>> only offer Spark 1.3.0 and Hadoop 2.0.0-CDH-4.7.0. Is this the reason reading
>> the HDFS file fails?
>> Look forward to your reply!
>> THANK YOU!
>> JZY
> 
> 
> 
> -- 
> --
> Kind regards,
> Alexander.
