This is possible with YARN. You also need to think about preemption for the
case where one web service starts using the cluster and, after a while,
another web service needs to run as well.
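
A rough sketch of the YARN side, assuming the Fair Scheduler (the queue name,
shares and timeout below are illustrative, not taken from your setup):

  <!-- yarn-site.xml: use the Fair Scheduler and turn preemption on -->
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.scheduler.fair.preemption</name>
    <value>true</value>
  </property>

  <!-- fair-scheduler.xml: a queue with a minimum share that a web service can
       preempt back once it becomes busy again -->
  <allocations>
    <queue name="webservices">
      <minResources>8192 mb, 8 vcores</minResources>
      <minSharePreemptionTimeout>30</minSharePreemptionTimeout>
    </queue>
  </allocations>

With preemption on, a queue that falls below its minimum share can take
containers back from the others instead of waiting for them to finish.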

> On 13 Feb 2016, at 17:40, Eugene Morozov <evgeny.a.moro...@gmail.com> wrote:
> 
> Hi,
> 
> I have several instances of the same web service that run some ML 
> algorithms on Spark (both training and prediction) and also do some 
> Spark-unrelated work. Each web-service instance creates its own 
> JavaSparkContext, so they're seen as separate applications by Spark and are 
> therefore configured with separate resource limits, such as cores (I'm not 
> as concerned about memory as about cores).
> 
> With this setup, say 3 web-service instances, each of them gets just 1/3 of 
> the cores. But it might happen that only one instance is using Spark, while 
> the others are busy with Spark-unrelated work. In that case I'd like all 
> Spark cores to be available to the one that needs them.
> 
> Ideally, the Spark cores would just be available as a shared pool, and the 
> first app that needs them would take as many as required from whatever is 
> free at the moment. Is that possible? I believe Mesos is able to release 
> resources when they're not in use. Is the same possible with YARN?
> 
> I'd appreciate it if you could share your thoughts or experience on the subject.
> 
> Thanks.
> --
> Be well!
> Jean Morozov
