Re: multi tenant setup

2014-08-04 Thread Niklas Nielsen
Sorry for the tardy reply. Spark seems to be holding on to those resources,
then - are you running Spark in coarse-grained mode?
I am not too Spark savvy, but running in fine-grained mode should allocate
and free resources on demand rather than holding a static partition of the
cluster up front.
Maybe Tim Chen can chime in here.
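If I understand the Spark docs right (and I may be off here), fine-grained
mode is the default in Spark 1.x and coarse-grained mode is toggled with the
spark.mesos.coarse property. A rough sketch, with a made-up master URL and
app name:

    // Illustrative only: keep Spark in fine-grained mode on Mesos so executors
    // give resources back to Mesos between tasks instead of holding them.
    import org.apache.spark.{SparkConf, SparkContext}

    object FineGrainedJob {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setMaster("mesos://zk://zk1.example.com:2181/mesos") // hypothetical master URL
          .setAppName("multi-tenant-job")
          .set("spark.mesos.coarse", "false")  // fine-grained: allocate/free per task
          // .set("spark.cores.max", "8")      // or cap total cores if staying coarse-grained
        val sc = new SparkContext(conf)
        // ... run the job ...
        sc.stop()
      }
    }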

Niklas





Re: multi tenant setup

2014-08-01 Thread Gurvinder Singh
Hi Niklas,

I am using Apache Spark with Mesos 0.19.1. I have limited resources, and
when I submit a job it takes all of them. This is fine when no one else is
using the cluster, but when one of my colleagues submits his job, I would
like Mesos to assign part of the resources to his job as parts of my job
finish. Currently, however, it seems to wait until my whole job is finished
before starting the other job. Is this due to Mesos, or do you think Spark
is the one blocking the other job?

- Gurvinder



Re: multi tenant setup

2014-07-31 Thread Niklas Nielsen
Hi Gurvinder,

The frameworks competing for resources will get their (weighted) fair share
of the cluster. The allocator in the master uses the Dominant Resource
Fairness algorithm to do this (
http://static.usenix.org/event/nsdi11/tech/full_papers/Ghodsi.pdf).
Regarding FIFO, are you referring to 'local' scheduler policies? How tasks
are dispatched is up to the individual framework.
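To illustrate the idea (this is just a toy sketch, not the actual allocator
code in the master): each framework's dominant share is the largest fraction
it holds of any single resource type, divided by its weight, and the
framework with the lowest dominant share is offered resources next. If I
remember right, per-role weights can be configured on the master via its
--weights flag.

    // Toy DRF illustration; framework names, weights and cluster sizes are made up.
    object DrfSketch {
      case class Framework(name: String, weight: Double,
                           allocCpus: Double, allocMemGB: Double)

      // Weighted dominant share: max allocated fraction across resource types.
      def dominantShare(f: Framework, totalCpus: Double, totalMemGB: Double): Double =
        math.max(f.allocCpus / totalCpus, f.allocMemGB / totalMemGB) / f.weight

      def main(args: Array[String]): Unit = {
        val (totalCpus, totalMemGB) = (64.0, 256.0)
        val frameworks = Seq(
          Framework("spark-gurvinder", weight = 1.0, allocCpus = 48.0, allocMemGB = 128.0),
          Framework("spark-colleague", weight = 1.0, allocCpus = 4.0, allocMemGB = 16.0))

        // The framework with the lowest weighted dominant share gets the next offer.
        val next = frameworks.minBy(f => dominantShare(f, totalCpus, totalMemGB))
        println(s"Next resource offer goes to ${next.name}")
      }
    }

Note that this fairness applies to resource offers; whether a framework
actually gives resources back between tasks (so offers can go to someone
else) is up to the framework itself.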

Cheers,
Niklas


On 31 July 2014 07:28, Gurvinder Singh gurvinder.si...@uninett.no wrote:

 Hi,

 I am wondering how Mesos handles task scheduling when resources are
 limited and multiple users want to access them at the same time. Is there
 any kind of fair scheduling? Currently I mostly see FIFO behavior. If
 there is, how can I specify it?

 Thanks,
 Gurvinder