RE: Disable queuing of spark job on Mesos cluster if sufficient resources are not found

2017-05-29 Thread Mevada, Vatsal
Is there any configurable timeout which controls queuing of the driver in Mesos 
cluster mode, or will the driver remain in the queue indefinitely until it finds 
resources on the cluster?
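
One possible client-side workaround, sketched below, is to poll the dispatcher's 
REST submission API for the driver's state and kill the submission if it is still 
queued after a deadline. The dispatcher address (mesos-dispatcher:7077), the 
/v1/submissions/... paths and the driverState check are assumptions based on 
Spark's standalone REST submission protocol and should be verified against your 
dispatcher:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class DriverQueueTimeout {

    // Placeholder dispatcher address; the /v1/submissions/* paths follow the
    // standalone REST submission protocol, which the Mesos dispatcher also serves.
    private static final String DISPATCHER = "http://mesos-dispatcher:7077";

    public static void main(String[] args) throws Exception {
        String submissionId = args[0];            // the driver id reported by spark-submit
        long deadline = System.currentTimeMillis() + 10 * 60 * 1000L; // give up after 10 minutes

        while (System.currentTimeMillis() < deadline) {
            String status = httpGet(DISPATCHER + "/v1/submissions/status/" + submissionId);
            // Crude check: verify the exact driverState strings against your dispatcher.
            if (!status.contains("QUEUED")) {
                return;                           // driver left the queue; nothing to do
            }
            Thread.sleep(15_000);
        }
        // Still queued after the deadline: kill it so the submission fails explicitly
        // instead of waiting indefinitely for resources.
        httpPost(DISPATCHER + "/v1/submissions/kill/" + submissionId);
    }

    private static String httpGet(String url) throws IOException {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = c.getInputStream();
             Scanner s = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }

    private static void httpPost(String url) throws IOException {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.setRequestMethod("POST");
        c.getResponseCode(); // fire the request and wait for the dispatcher's reply
    }
}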

From: Michael Gummelt [mailto:mgumm...@mesosphere.io]
Sent: Friday, May 26, 2017 11:33 PM
To: Mevada, Vatsal 
Cc: user@spark.apache.org
Subject: Re: Disable queuing of spark job on Mesos cluster if sufficient 
resources are not found

Nope, sorry.

On Fri, May 26, 2017 at 4:38 AM, Mevada, Vatsal <mev...@sky.optymyze.com> wrote:

Hello,

I am using Mesos with cluster deployment mode to submit my jobs.

When sufficient resources are not available on the Mesos cluster, I can see that my 
jobs are queuing up on the Mesos dispatcher UI.

Is it possible to tweak some configuration so that my job submission fails 
gracefully (instead of queuing up) if sufficient resources are not found on the 
Mesos cluster?
Regards,
Vatsal



--
Michael Gummelt
Software Engineer
Mesosphere


Disable queuing of spark job on Mesos cluster if sufficient resources are not found

2017-05-26 Thread Mevada, Vatsal
Hello,

I am using Mesos with cluster deployment mode to submit my jobs.

When sufficient resources are not available on the Mesos cluster, I can see that my 
jobs are queuing up on the Mesos dispatcher UI.

Is it possible to tweak some configuration so that my job submission fails 
gracefully (instead of queuing up) if sufficient resources are not found on the 
Mesos cluster?
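
One way to approximate fail-fast behaviour from the client side is to ask the Mesos 
master how much capacity is free before submitting, and abort when it is below what 
the job needs. A minimal sketch, assuming a master at mesos-master:5050 and the 
standard master/cpus_* and master/mem_* keys from /metrics/snapshot; the check is 
inherently racy, since resources can be claimed between the check and the submit:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MesosCapacityCheck {

    // Placeholder Mesos master address; adjust for your cluster.
    private static final String MASTER = "http://mesos-master:5050";

    public static void main(String[] args) throws Exception {
        String metrics = fetch(MASTER + "/metrics/snapshot");

        double cpusFree = value(metrics, "master/cpus_total") - value(metrics, "master/cpus_used");
        double memFree  = value(metrics, "master/mem_total")  - value(metrics, "master/mem_used");

        // Resources this job needs (example values).
        double wantCpus = 4.0;
        double wantMemMb = 8192.0;

        if (cpusFree < wantCpus || memFree < wantMemMb) {
            // Fail fast instead of letting the driver sit in the dispatcher queue.
            throw new IllegalStateException(
                    "Not enough free resources: cpus=" + cpusFree + ", memMB=" + memFree);
        }
        // ...otherwise go ahead and spark-submit / SparkLauncher as usual.
    }

    private static String fetch(String url) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = c.getInputStream();
             Scanner s = new Scanner(in, StandardCharsets.UTF_8.name()).useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }

    private static double value(String json, String key) {
        Matcher m = Pattern.compile("\"" + Pattern.quote(key) + "\"\\s*:\\s*([0-9.]+)").matcher(json);
        if (!m.find()) {
            throw new IllegalStateException("Metric not found: " + key);
        }
        return Double.parseDouble(m.group(1));
    }
}
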
Regards,
Vatsal


SparkAppHandle returns unknown state forever

2017-01-03 Thread Mevada, Vatsal
Hi,

I am launching a Spark job from a Java application using SparkLauncher. My code 
is as follows:

import java.io.IOException;

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

SparkAppHandle jobHandle;
try {
    jobHandle = new SparkLauncher()
            .setSparkHome("C:\\spark-2.0.0-bin-hadoop2.7")
            .setAppResource("hdfs://server/inputs/test.jar")
            .setMainClass("com.test.TestJob")
            .setMaster("spark://server:6066")
            .setVerbose(true)
            .setDeployMode("cluster")
            .addAppArgs("abc")
            .startApplication();
} catch (IOException e) {
    throw new RuntimeException(e);
}

// Busy-wait until the launcher reports a final state (FINISHED, FAILED or KILLED).
while (!jobHandle.getState().isFinal());


I can see my job running on the Spark UI and it finishes without any errors.

However, my Java application never terminates since jobHandle.getState() always 
returns the UNKNOWN state. What am I missing here? My Spark API version is 2.0.0. 
One more detail that might be relevant: my launcher application is running on 
Windows.
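
If the launcher does receive updates from the application, attaching a 
SparkAppHandle.Listener avoids the busy-wait and logs every state transition, which 
makes it easier to see whether the handle ever leaves UNKNOWN. (One possible 
explanation for staying in UNKNOWN is that in cluster deploy mode the driver runs on 
a remote node and may never connect back to the launcher process, in which case no 
updates arrive at all.) A minimal sketch, assuming the same launcher settings as 
above; the latch-based wait and the class name LauncherWithListener are illustrative:

import java.util.concurrent.CountDownLatch;

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class LauncherWithListener {
    public static void main(String[] args) throws Exception {
        CountDownLatch done = new CountDownLatch(1);

        SparkAppHandle handle = new SparkLauncher()
                .setSparkHome("C:\\spark-2.0.0-bin-hadoop2.7")
                .setAppResource("hdfs://server/inputs/test.jar")
                .setMainClass("com.test.TestJob")
                .setMaster("spark://server:6066")
                .setDeployMode("cluster")
                .setVerbose(true)
                .addAppArgs("abc")
                // The listener is invoked on every state transition, so we can
                // block on a latch instead of spinning on getState().
                .startApplication(new SparkAppHandle.Listener() {
                    @Override
                    public void stateChanged(SparkAppHandle h) {
                        System.out.println("State: " + h.getState());
                        if (h.getState().isFinal()) {
                            done.countDown();
                        }
                    }

                    @Override
                    public void infoChanged(SparkAppHandle h) {
                        // no-op
                    }
                });

        done.await(); // returns once a final state (FINISHED/FAILED/KILLED) is reported
        System.out.println("Final state: " + handle.getState());
    }
}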

Regards,
Vatsal