What you want here are dynamic reservations and persistent volumes. Take a look 
at our docs for these features. 
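To make that concrete, here is a minimal sketch of the idea, assuming the v1 scheduler HTTP API and placeholder framework/offer IDs, role, and principal (none of these values are real): an ACCEPT call that dynamically RESERVEs resources and CREATEs a persistent volume on the offering agent, so the framework gets the same resources, with its on-disk state intact, back on the same host.

```python
# Hedged sketch: builds the JSON body of a v1 scheduler ACCEPT call that
# dynamically RESERVEs resources and CREATEs a persistent volume on the
# agent that made the offer. The framework ID, offer ID, role, and
# principal are hypothetical placeholders.
def accept_with_reservation(framework_id, offer_id, role, principal):
    reserved_cpus = {
        "name": "cpus", "type": "SCALAR", "scalar": {"value": 1.0},
        "role": role,
        "reservation": {"principal": principal},  # dynamic reservation
    }
    volume = {
        "name": "disk", "type": "SCALAR", "scalar": {"value": 2048.0},
        "role": role,
        "reservation": {"principal": principal},
        "disk": {  # persistent volume: survives task restarts on this agent
            "persistence": {"id": "samza-state-vol"},
            "volume": {"container_path": "state", "mode": "RW"},
        },
    }
    return {
        "framework_id": {"value": framework_id},
        "type": "ACCEPT",
        "accept": {
            "offer_ids": [{"value": offer_id}],
            "operations": [
                {"type": "RESERVE", "reserve": {"resources": [reserved_cpus]}},
                {"type": "CREATE", "create": {"volumes": [volume]}},
            ],
        },
    }
```

Once reserved, offers for those resources come back only to frameworks in that role, which addresses the "restart on the same host" requirement without needing requestResources.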

@vinodkone
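The offer-filtering approach Guangya describes further down the thread can be sketched like this (illustrative Python, not the actual Mesos scheduler bindings; offers are modeled here as plain dicts with a "hostname" field):

```python
def partition_offers(offers, preferred_hosts):
    """Split incoming offers into those on preferred hosts (to accept)
    and the rest (to decline, so other frameworks can receive them)."""
    use, decline = [], []
    for offer in offers:
        (use if offer["hostname"] in preferred_hosts else decline).append(offer)
    return use, decline
```

Declining non-preferred offers promptly (with a short refuse filter) is what keeps them circulating to other frameworks.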

> On Feb 6, 2016, at 10:31 AM, Jagadish Venkatraman <jagadish1...@gmail.com> 
> wrote:
> 
> Hi Guangya,
> 
> Thanks for the response! Let me provide more background to this request. 
> 
> Background:
> I work on Apache Samza, a distributed stream processing framework. Currently 
> Samza supports only YARN as a resource manager (there have been requests to 
> run Samza with Mesos). A cluster (roughly 200 nodes) runs many Samza jobs 
> (about 3,500). Each Samza job has its own framework that requests resources 
> (containers) for the job to run. Each such container uses GBs of local state. 
> When such a container (resource) is started on a different host by the 
> framework, the local state must be re-bootstrapped. (This results in a long 
> bootstrap time, which is essentially downtime.)
> 
> The same is true for Apache Kafka, a distributed pub-sub logging system. 
> When a Kafka broker must be restarted by the framework, it should ideally be 
> restarted on the same host. (Otherwise, each broker has to re-bootstrap 
> several GBs of logs from its peers before it can start serving requests.)
> 
> I'm sure many stateful services have similar requirements.
> 
> >> Is it possible to update your framework logic as follows:
> >> 1) the framework gets resource offers from the Mesos master
> >> 2) the framework filters the resource offers based on its preferences
> 
> I can certainly do that. But here's my concern:
> Are resource offers made to frameworks round-robin across the available 
> pool of hosts? I want to ensure that the wait time for a resource is 
> bounded.
> Are there tunables we can set to be more 'fair' (in terms of the variety of 
> hosts) when offers are made? For example, every framework would receive 
> at least some offers for every host (where resources are available), or all 
> available offers would be broadcast to all frameworks.
> Are there alternatives that I can use to support this use case and ensure 
> that the wait time for an available resource is limited (say, about a minute 
> or two)? It can still be a best-effort guarantee and not a strict one.
> 
> 
> 
> Thanks again,
> Jagadish 
> 
> --
> Jagadish
> 
> 
> 
>> On Fri, Feb 5, 2016 at 6:46 PM, Guangya Liu <gyliu...@gmail.com> wrote:
>> Hi Jagadish,
>> 
>> Even though Mesos has the "requestResources" interface, it is not 
>> implemented in the built-in allocator at the moment, so calling 
>> driver.requestResources(resources) will not work.
>> 
>> Is it possible to update your framework logic as follows:
>> 1) the framework gets resource offers from the Mesos master
>> 2) the framework filters the resource offers based on its preferences
>> 
>> The problem with this solution is that the framework may sometimes not 
>> get its preferred resources if they were already offered to other 
>> frameworks.
>> 
>> Can you please file a JIRA ticket requesting implementation of the 
>> "requestResources" API? It would be great if you could include some 
>> background for your request so that the community can evaluate how to 
>> move this forward.
>> 
>> Thanks,
>> 
>> Guangya
>> 
>> 
>>> On Sat, Feb 6, 2016 at 6:45 AM, Jagadish Venkatraman 
>>> <jagadish1...@gmail.com> wrote:
>>> I have a fair amount of experience writing frameworks on YARN. In the
>>> YARN world, the amClient supports a method where I can specify the
>>> preferred host with the resource request.
>>> 
>>> Is there a way to specify a preferred host with the resource request in
>>> Mesos?
>>> 
>>> I currently do:
>>> 
>>> driver.requestResources(resources);
>>> 
>>> I can't find a way to associate a preferred hostname with a resource
>>> request. A code sample would be really helpful. (For example, I want 1 GB
>>> of memory and 1 CPU core, preferably on host xyz.aws.com.)
>>> 
>>> Thanks,
>>> Jagadish
>>> 
>>> --
>>> Jagadish V,
>>> Graduate Student,
>>> Department of Computer Science,
>>> Stanford University
> 
> 
> 
> -- 
> Jagadish V,
> Graduate Student,
> Department of Computer Science,
> Stanford University
