Also, if you are using Maven 3.3 you can add -pl '!flink' (or exclude any other
interpreter you don't need). It will reduce both the build time and the size.
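A rough sketch of what that invocation could look like (the module name 'flink' and the combination with the other documented flags are assumptions; the '!' exclusion syntax requires Maven 3.2.1 or newer):

```shell
# Build the distribution while skipping the Flink interpreter module.
# '-pl !flink' deselects that module from the reactor; quote it so the
# shell does not interpret the '!'.
mvn clean package -DskipTests -P build-distr -pl '!flink'
```

Multiple modules can be excluded the same way by listing them comma-separated, e.g. -pl '!flink,!lens'.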

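For the approach Alexander describes below (building with Spark/Hadoop in provided scope and pointing Zeppelin at an existing install), the environment setup might look like this; the paths are illustrative, not prescribed:

```shell
# In conf/zeppelin-env.sh (or the shell that launches Zeppelin):
# point Zeppelin at an existing Spark and Hadoop installation instead of
# bundled ones. Adjust the paths to match your own installation.
export SPARK_HOME=/opt/spark-1.4.1
export HADOOP_HOME=/opt/hadoop-2.4.0
```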
On Wed, Sep 2, 2015, 18:48 Alexander Bezzubov <b...@apache.org> wrote:

> Hi,
>
> thank you for your interest in the Zeppelin project!
>
> Yes, by default the build you ran includes many different interpreters
> (Spark, Flink, Lens, etc.), which is why the archive is quite
> large.
>
> In case you are going to use an existing Spark/Hadoop install: as of
> https://issues.apache.org/jira/browse/ZEPPELIN-160 there is now an
> option to build Zeppelin with those dependencies in provided scope
> (so they are not included in the final archive).
> Then you just need to set SPARK_HOME and HADOOP_HOME to point at the
> existing Spark/Hadoop.
>
> Please, let me know if that helps!
>
> On Thu, Sep 3, 2015 at 12:38 AM, MrAsanjar . <afsan...@gmail.com> wrote:
>
>> I built Zeppelin with the following options, as documented:
>> mvn clean package -Pspark-1.4 -Dspark.version=1.4.1
>> -Dhadoop.version=2.4.0 -Phadoop-2.4 -Pyarn -DskipTests -P build-distr
>>
>> However, the generated tar file at
>> zeppelin-distribution/target/zeppelin-0.6.0-incubating-SNAPSHOT.tar.gz is
>> 414 MB. Is that correct?
>> I also noticed it includes Spark, Hadoop, and other tar files. Do I
>> need them if I am using an existing Hadoop & Spark client that is already
>> configured and functioning?
>>
>
>