[ https://issues.apache.org/jira/browse/HIVE-16484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15980076#comment-15980076 ]

Sahil Takiar commented on HIVE-16484:
-------------------------------------

The only related failure is {{TestSparkClient.testRemoteClient}}.

The issue is what happens when {{SPARK_HOME}} is not set. {{SparkClientImpl}} 
has code to handle that case: if {{SPARK_HOME}} isn't set, it runs {{bin/java 
org.apache.spark.deploy.SparkSubmit}} directly instead of {{bin/spark-submit}}.
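That fallback can be sketched roughly as follows. This is a simplified, hypothetical reconstruction of the command-building logic, not the actual {{SparkClientImpl}} code; the {{buildLaunchCommand}} helper and the paths are illustrative only:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class LaunchCommandSketch {

    // Hypothetical sketch: pick between bin/spark-submit and a direct
    // bin/java org.apache.spark.deploy.SparkSubmit invocation, depending
    // on whether SPARK_HOME is available.
    static List<String> buildLaunchCommand(String sparkHome, String javaHome) {
        List<String> cmd = new ArrayList<>();
        if (sparkHome != null) {
            // Normal case: delegate to the spark-submit script.
            cmd.add(sparkHome + File.separator + "bin" + File.separator + "spark-submit");
        } else {
            // Fallback: run SparkSubmit's main class directly on the JVM,
            // relying on Spark classes already being on the classpath.
            cmd.add(javaHome + File.separator + "bin" + File.separator + "java");
            cmd.add("-cp");
            cmd.add(System.getProperty("java.class.path"));
            cmd.add("org.apache.spark.deploy.SparkSubmit");
        }
        return cmd;
    }

    public static void main(String[] args) {
        System.out.println(buildLaunchCommand("/opt/spark", "/usr/lib/jvm/java"));
        System.out.println(buildLaunchCommand(null, "/usr/lib/jvm/java"));
    }
}
```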

This patch deleted the code in {{SparkClientImpl}} that invokes {{bin/java 
org.apache.spark.deploy.SparkSubmit}}, since {{SparkLauncher}} is now used for 
all Spark job submissions. Only one test actually exercised that code path 
({{TestSparkClient.testRemoteClient}}).

{{SparkLauncher}} requires {{SPARK_HOME}} to be set because it calls 
{{bin/spark-submit}}; it doesn't fall back to invoking 
{{org.apache.spark.deploy.SparkSubmit}} directly when {{SPARK_HOME}} is absent.
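The check behaves roughly like the sketch below. This is a hypothetical mirror of the behavior, not {{SparkLauncher}}'s actual source: the home directory can come from an explicit setter (as with {{SparkLauncher#setSparkHome}}) or from the environment, and launching fails if neither is present:

```java
import java.util.HashMap;
import java.util.Map;

public class SparkHomeCheckSketch {

    // Hypothetical mirror of the launcher's requirement: SPARK_HOME may be
    // supplied explicitly or via the environment; with neither, the launch
    // fails rather than falling back to org.apache.spark.deploy.SparkSubmit.
    static String resolveSparkHome(String explicitHome, Map<String, String> env) {
        if (explicitHome != null) {
            return explicitHome;
        }
        String fromEnv = env.get("SPARK_HOME");
        if (fromEnv != null) {
            return fromEnv;
        }
        throw new IllegalStateException(
            "Spark home not found; set it explicitly or via SPARK_HOME.");
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("SPARK_HOME", "/opt/spark");
        System.out.println(resolveSparkHome(null, env));
    }
}
```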

So we could (1) modify {{SparkLauncher}} to not require {{sparkHome}} to be 
set, (2) modify this test so that {{SPARK_HOME}} is set, or (3) refactor the 
code so that it can still directly invoke {{bin/java 
org.apache.spark.deploy.SparkSubmit}} if {{SPARK_HOME}} isn't set.

I'm leaning towards approach 2. [~vanzin], the code to run {{bin/java 
org.apache.spark.deploy.SparkSubmit}} when {{SPARK_HOME}} isn't set was added 
in HIVE-8528. Is there a use case for launching Spark jobs without 
{{SPARK_HOME}} being set, or was it added just for testing?
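For approach 2, one option is to have the test fabricate a throwaway {{SPARK_HOME}} layout rather than require a real installation. This is a hypothetical test-setup sketch ({{createStubSparkHome}} is an invented helper, and a real test would make the stub do something useful):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class StubSparkHomeSketch {

    // Hypothetical test-setup helper: create a throwaway SPARK_HOME layout
    // containing an executable bin/spark-submit stub, so the launcher's
    // SPARK_HOME check passes without a real Spark installation.
    static Path createStubSparkHome() throws IOException {
        Path home = Files.createTempDirectory("stub-spark-home");
        Path bin = Files.createDirectories(home.resolve("bin"));
        Path submit = bin.resolve("spark-submit");
        // A no-op script; a real test would point this at a fake submitter.
        Files.write(submit, "#!/bin/sh\nexit 0\n".getBytes());
        submit.toFile().setExecutable(true);
        return home;
    }

    public static void main(String[] args) throws IOException {
        Path home = createStubSparkHome();
        System.out.println(home.resolve("bin").resolve("spark-submit"));
    }
}
```

The test would then hand this directory to the launcher (e.g. via {{SparkLauncher#setSparkHome}}) or export it as {{SPARK_HOME}} for the forked test JVM.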

> Investigate SparkLauncher for HoS as alternative to bin/spark-submit
> --------------------------------------------------------------------
>
>                 Key: HIVE-16484
>                 URL: https://issues.apache.org/jira/browse/HIVE-16484
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Sahil Takiar
>         Attachments: HIVE-16484.1.patch, HIVE-16484.2.patch, 
> HIVE-16484.3.patch
>
>
> The {{SparkClientImpl#startDriver}} currently looks for the {{SPARK_HOME}} 
> directory and invokes the {{bin/spark-submit}} script, which spawns a 
> separate process to run the Spark application.
> {{SparkLauncher}} was added in SPARK-4924 and is a programatic way to launch 
> Spark applications.
> I see a few advantages:
> * No need to spawn a separate process to launch a HoS job --> lower startup time
> * Simplifies the code in {{SparkClientImpl}} --> easier to debug
> * {{SparkLauncher#startApplication}} returns a {{SparkAppHandle}} which 
> contains some useful utilities for querying the state of the Spark job
> ** It also allows the launcher to specify a list of job listeners
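The handle-plus-listener pattern described in the issue above can be sketched as follows. This is a deliberately simplified, hypothetical model of the idea, not the actual {{SparkAppHandle}} API (the real interface exposes more states and callbacks):

```java
import java.util.ArrayList;
import java.util.List;

public class AppHandleSketch {

    // Hypothetical, simplified mirror of the SparkAppHandle idea: the
    // launcher hands back a handle whose state can be queried, and any
    // registered listeners are notified on every state change.
    enum State { UNKNOWN, SUBMITTED, RUNNING, FINISHED, FAILED }

    interface Listener {
        void stateChanged(State newState);
    }

    private State state = State.UNKNOWN;
    private final List<Listener> listeners = new ArrayList<>();

    void addListener(Listener l) {
        listeners.add(l);
    }

    State getState() {
        return state;
    }

    void setState(State newState) {
        state = newState;
        for (Listener l : listeners) {
            l.stateChanged(newState);
        }
    }

    public static void main(String[] args) {
        AppHandleSketch handle = new AppHandleSketch();
        handle.addListener(s -> System.out.println("state -> " + s));
        handle.setState(State.SUBMITTED);
        handle.setState(State.RUNNING);
        handle.setState(State.FINISHED);
    }
}
```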



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
