Hi Guangya and Klaus,

Thanks for the helpful pointers to the docs. I really appreciate it!

Consider this scenario:

   - A framework F with role 'test' reserves resources, creates 3
   persistent volumes on 3 separate hosts (call them h1, h2, h3), and
   launches tasks. Everything runs fine.
   - Now the framework (including its scheduler) is restarted (for
   maintenance or an upgrade).
   - When the scheduler comes back up after the restart (with the same role
   'test' and the same principal), will it get *Offers* corresponding to
   the volumes it created on h1, h2, h3 in its previous run (ahead of
   Offers on hosts it does not care about)?

More precisely, I'm trying to ensure that the time spent in this 'if'
check from the example framework is bounded:
https://github.com/apache/mesos/blob/master/src/examples/persistent_volume_framework.cpp

case Shard::WAITING:
  if (offered.contains(shard.resources)) {
    CHECK_EQ(shard.volume.slave, offer.slave_id().value());
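
Put differently, here is a sketch of what I want resourceOffers to do
(the decline timeout and the makeTask helper are my own assumptions,
not from the example framework):

#include <vector>

#include <glog/logging.h>  // for CHECK_EQ

#include <mesos/resources.hpp>
#include <mesos/scheduler.hpp>

using namespace mesos;

// Sketch only: launch on the offer that carries the shard's persistent
// volume; decline everything else with a long filter so the allocator
// can re-offer those resources elsewhere.
void handleOffers(SchedulerDriver* driver,
                  const Shard& shard,  // struct as in the example framework
                  const std::vector<Offer>& offers)
{
  for (const Offer& offer : offers) {
    Resources offered = offer.resources();

    if (offered.contains(shard.resources)) {
      // This offer can only come from the host where the volume lives.
      CHECK_EQ(shard.volume.slave, offer.slave_id().value());
      driver->launchTasks(offer.id(), {makeTask(shard, offer)});  // hypothetical helper
    } else {
      // Not interesting; hand the resources back for a long while.
      Filters filters;
      filters.set_refuse_seconds(600);
      driver->declineOffer(offer.id(), filters);
    }
  }
}

The question above is really about how long a shard can sit in the else
branch before its volume's host shows up in an offer.
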
Thanks,
Jagadish

On Sat, Feb 6, 2016 at 6:49 PM, Guangya Liu <gyliu...@gmail.com> wrote:

> Hi Jagadish,
>
> Yes, you can take a look at whether dynamic reservations and persistent
> volumes can help; here is an example framework for persistent volumes:
> https://github.com/apache/mesos/blob/master/src/examples/persistent_volume_framework.cpp
>
> @Vinod,
>
> I think that if we implement the requestResources API, this case would
> be more straightforward and easier for end users to implement,
> especially for someone who wants to migrate from YARN to Mesos. What do
> you say?
>
> Thanks,
>
> Guangya
>
> On Sun, Feb 7, 2016 at 8:00 AM, Vinod Kone <vinodk...@gmail.com> wrote:
>
>> What you want here are dynamic reservations and persistent volumes. Take
>> a look at our docs for these features.
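>>
>> For example, given an offer inside resourceOffers, roughly this (a
>> sketch, not tested; the principal, volume id, sizes, and container
>> path below are made up, and error handling is omitted):
>>
>> // Reserve disk for role 'test' and carve a persistent volume out of
>> // it, both as operations on a single offer.
>> Resource disk = Resources::parse("disk", "2048", "test").get();
>> disk.mutable_reservation()->set_principal("my-principal");
>>
>> Resource volume = disk;
>> volume.mutable_disk()->mutable_persistence()->set_id("state-shard-1");
>> volume.mutable_disk()->mutable_volume()->set_container_path("state");
>> volume.mutable_disk()->mutable_volume()->set_mode(Volume::RW);
>>
>> Offer::Operation reserve;
>> reserve.set_type(Offer::Operation::RESERVE);
>> reserve.mutable_reserve()->add_resources()->CopyFrom(disk);
>>
>> Offer::Operation create;
>> create.set_type(Offer::Operation::CREATE);
>> create.mutable_create()->add_volumes()->CopyFrom(volume);
>>
>> driver->acceptOffers({offer.id()}, {reserve, create});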
>>
>> @vinodkone
>>
>> On Feb 6, 2016, at 10:31 AM, Jagadish Venkatraman <jagadish1...@gmail.com>
>> wrote:
>>
>> Hi Guangya,
>>
>> Thanks for the response! Let me provide more background to this request.
>>
>> *Background:*
>> I work on Apache Samza <http://samza.apache.org>, a distributed stream
>> processing framework. Currently Samza supports only YARN as a resource
>> manager (there have been requests to run Samza on Mesos). A cluster
>> (about 200 nodes) runs many Samza jobs (about 3500). Each Samza job has
>> its own framework that requests resources (containers) for the job to
>> run. Each such container uses GBs of local state
>> <http://radar.oreilly.com/2014/07/why-local-state-is-a-fundamental-primitive-in-stream-processing.html>.
>> When such a container (resource) is started on a different host by the
>> framework, the local state must be re-bootstrapped. (This results in a
>> long bootstrap time, which is essentially downtime.)
>>
>> The same is true for Apache Kafka <http://kafka.apache.org/>, a
>> distributed pub-sub logging system. When a Kafka broker must be
>> restarted by the framework, it should ideally be restarted on the same
>> host. (Otherwise, each broker has to re-bootstrap several GBs of logs
>> from its peers before it can start to service requests.)
>>
>> I'm sure many stateful services have similar requirements.
>>
>> >> Is it possible for you to update your framework logic like this:
>> >> 1) the framework gets resource offers from the Mesos master
>> >> 2) the framework filters the resource offers based on its preferences
>>
>> I can certainly do that. But here are my concerns:
>>
>>    - Are Offers of resources to frameworks made 'round robin' across
>>    the available pool of hosts? I want to ensure that the wait time for
>>    a resource is bounded.
>>    - Are there tunables we can set to be more 'fair' (in terms of the
>>    variety of hosts) when Offers are made? For example, every framework
>>    would receive at least some offers for *every* host (where resources
>>    are available). Or, all available offers would be broadcast to all
>>    frameworks.
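>>
>> One mitigation I could do on my end (a sketch; I'm assuming the driver
>> in our Mesos version has these calls) is to stop taking offers entirely
>> while the job is fully placed, so other frameworks see more of the pool:
>>
>> // All containers are placed; tell the allocator to stop sending offers.
>> driver->suppressOffers();
>>
>> // Later, when a container is lost and we need resources again:
>> driver->reviveOffers();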
>>
>> Are there alternatives that I can use to support this use case and
>> ensure that the wait time for an available resource is limited (say,
>> about a minute or two)? It can still be a best-effort guarantee rather
>> than a strict one.
>>
>>
>>
>> Thanks again,
>> Jagadish
>>
>> --
>> Jagadish
>>
>>
>>
>> On Fri, Feb 5, 2016 at 6:46 PM, Guangya Liu <gyliu...@gmail.com> wrote:
>>
>>> Hi Jagadish,
>>>
>>> Even though Mesos has the "requestResources" interface, it is not
>>> implemented in the built-in allocator at the moment, so the call
>>> "driver.requestResources(resources);" will not work.
>>>
>>> Is it possible for you to update your framework logic like this:
>>> 1) the framework gets resource offers from the Mesos master
>>> 2) the framework filters the resource offers based on its preferences
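>>>
>>> For step 2, something like this in resourceOffers (a rough sketch;
>>> "preferredHosts" is a hypothetical set your framework would maintain,
>>> and launchOnPreferredHost a hypothetical helper):
>>>
>>> for (const Offer& offer : offers) {
>>>   if (preferredHosts.count(offer.hostname()) > 0) {
>>>     // The offer is on a host we care about; try to place work here.
>>>     launchOnPreferredHost(driver, offer);
>>>   } else {
>>>     // Not interesting; hand it back right away.
>>>     driver->declineOffer(offer.id());
>>>   }
>>> }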
>>>
>>> The problem with such a solution is that the framework may sometimes
>>> not get its preferred resources, because they were offered to other
>>> frameworks instead.
>>>
>>> Can you please file a JIRA ticket requesting that the "requestResources"
>>> API be implemented? It would be great if you could add some background
>>> to the request so that the community can evaluate how to move this
>>> forward.
>>>
>>> Thanks,
>>>
>>> Guangya
>>>
>>>
>>> On Sat, Feb 6, 2016 at 6:45 AM, Jagadish Venkatraman <
>>> jagadish1...@gmail.com> wrote:
>>>
>>>> I have fair experience writing frameworks on YARN. In the YARN world,
>>>> the amClient supports a method where I can specify a preferred host
>>>> with the resource request.
>>>>
>>>> Is there a way to specify a preferred host with a resource request in
>>>> Mesos?
>>>>
>>>> I currently do:
>>>>
>>>> driver.requestResources(resources);
>>>>
>>>> I don't see a way to associate a preferred hostname with a resource
>>>> request. A code sample would be really helpful. (For example: I want
>>>> 1 GB of memory and 1 CPU core, preferably on host xyz.aws.com.)
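>>>>
>>>> Here is roughly what I would like to express (a sketch; the id
>>>> placeholder is made up, and while the Request message has a slave_id
>>>> field, I see no way to name a host directly):
>>>>
>>>> Request request;
>>>> request.mutable_slave_id()->set_value("<id of slave on xyz.aws.com>");
>>>>
>>>> request.add_resources()->CopyFrom(
>>>>     Resources::parse("cpus", "1", "*").get());
>>>> request.add_resources()->CopyFrom(
>>>>     Resources::parse("mem", "1024", "*").get());
>>>>
>>>> driver.requestResources({request});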
>>>>
>>>> Thanks,
>>>> Jagadish
>>>>
>>>> --
>>>> Jagadish V,
>>>> Graduate Student,
>>>> Department of Computer Science,
>>>> Stanford University
>>>>
>>>
>>>
>>
>>
>> --
>> Jagadish V,
>> Graduate Student,
>> Department of Computer Science,
>> Stanford University
>>
>>
>


-- 
Jagadish V,
Graduate Student,
Department of Computer Science,
Stanford University
