> 2) some users could have Spark 1 and Spark 2, and they have customized
> with symlinks to switch between versions easily and to be sure which
> version they were running, instead of inspecting the SPARK_HOME variable
> and PATH.

I am under the impression that spark-submit needs SPARK_HOME to run. Can we
call spark-submit without setting SPARK_HOME?
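For what it's worth, the current `sparkHome().map { ... }.get` pattern does throw if SPARK_HOME is unset, since `.get` on an empty `Option` fails. A minimal sketch of the resolution logic being discussed, with an explicit override falling back to the SPARK_HOME-derived path (the object and parameter names here are illustrative only, not the actual Livy API):

```scala
import java.io.File

// Hypothetical sketch: resolve the spark-submit path from an explicit
// override first, then fall back to SPARK_HOME/bin/spark-submit.
// Names are illustrative, not taken from LivyConf.
object SparkSubmitResolver {
  def resolve(sparkSubmitOverride: Option[String],
              sparkHome: Option[String]): String = {
    sparkSubmitOverride.getOrElse {
      sparkHome
        .map(_ + File.separator + "bin" + File.separator + "spark-submit")
        .getOrElse(throw new IllegalStateException(
          "Neither a spark-submit override nor SPARK_HOME is set"))
    }
  }
}
```

With this shape, `resolve(Some("/usr/bin/spark2-submit"), None)` would honor a distribution's symlinked binary, while `resolve(None, Some("/opt/spark"))` would keep today's SPARK_HOME-based behavior.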

Thanks,
Meisam

On Thu, Mar 8, 2018 at 3:26 AM Matteo Durighetto <[email protected]>
wrote:

> Hello Livy Developers,
>           I am new to the Livy community, and I am starting to understand
> how it works under the hood. I see in the Livy source code
> ( server/src/main/scala/org/apache/livy/LivyConf.scala )
> that the command used to submit a request to the Spark cluster is
> "spark-submit", built in the function sparkSubmit:
>
>   /** Return the path to the spark-submit executable. */
>   def sparkSubmit(): String = {
>     sparkHome().map { _ + File.separator + "bin" + File.separator + "spark-submit" }.get
>   }
>
> I think it would be a good idea to use a configurable variable instead of
> a literal, for several reasons:
> 1) some Hadoop/Spark distributions use symlinks to spark-submit, such as
> spark2-submit and so on
> 2) some users could have Spark 1 and Spark 2, and they have customized
> with symlinks to switch between versions easily and to be sure which
> version they were running, instead of inspecting the SPARK_HOME variable
> and PATH
> 3) in the future it is possible the name of the command will change
>
> What do you think? If you agree, could I open a feature request about
> this on Jira
>
> https://issues.apache.org/jira/projects/LIVY/issues/LIVY-449?filter=allopenissues
> ?
>
> I tried to write a small patch based on the Apache Livy branch on GitHub.
> I read on https://livy.incubator.apache.org/community/ that to propose a
> patch I have to include tests.
> I tried to find a document about how to write tests correctly, but I
> could not find one.
>
>
> Could you help me?
>
> Kind Regards
>
> Matteo Durighetto
>
