Hi Bill,

Thanks for the pointer. I will have a look at Aurora again, but last time I
looked the docs were pretty scarce, so I pretty much gave up. Maybe it
deserves some more time on my part and some digging through the source.

* Maybe some of the Aurora devs here might want to publish more
tutorials/docs on the nifty features in Aurora :-)
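
For anyone else following along, here is my rough (untested) reading of
what the 'dedicated' constraint Bill mentioned might look like in an
Aurora job description. All of the names (cluster, role, task) are made
up for illustration, so treat this as a sketch to be checked against the
Aurora source, not a working config:

    jobs = [
      Job(
        cluster = 'example',
        role = 'mongodb',
        environment = 'prod',
        name = 'mongod_shard0',
        task = mongod_task,  # hypothetical Task defined elsewhere
        instances = 3,
        # Pin instances to slaves that advertise a matching
        # 'dedicated' attribute, so replicas land on the same
        # hosts every time instead of rolling the dice.
        constraints = {'dedicated': 'mongodb/mongod_shard0'},
      )
    ]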

-- Ankur

On Tuesday, September 2, 2014, Bill Farner <[email protected]> wrote:

> Another alternative is for the scheduler to reliably place your instances
> on the same hosts every time.  This comes with its own pitfalls, but isn't
> rolling the dice as much as hoping a whole replica set is not moved.
>  Aurora, for example, implements this with a 'dedicated' scheduling
> constraint specified in the job description.
>
> -=Bill
>
>
> On Tue, Sep 2, 2014 at 9:01 PM, Ankur Chauhan <[email protected]> wrote:
>
>> Hi Vinod,
>>
>> Thanks for your reply. You raise very good points, and I realize that
>> Mesos is ephemeral. So far I am making the assumption that Mesos (based on
>> the constraints set) would be responsible for keeping enough replicas alive
>> that data loss does not happen, and that when a task is killed from a
>> replica set, it has enough (or at least one) replicas to recover from. The
>> only way to get around this problem completely is to have MESOS-1554
>> resolved, or to use some form of underlying volume that is available to all
>> tasks/slaves so that replacement nodes can be started.
>>
>>
>> On Tue, Sep 2, 2014 at 5:35 PM, Vinod Kone <[email protected]> wrote:
>>
>>> I'm not aware of any ports of MongoDB to Mesos, but the one gotcha to
>>> keep in mind when porting database frameworks is that the task/executor
>>> sandbox in Mesos is ephemeral. In other words, when an executor exits, the
>>> sandbox gets cleaned up (not immediately, but after a certain time, based
>>> on the garbage collection algorithm). So if a MongoDB executor writes its
>>> state/db to the sandbox, that data is irrecoverable if the task/executor
>>> terminates (e.g., is LOST).  Having said that, follow
>>> https://issues.apache.org/jira/browse/MESOS-1554 for future work in
>>> this area.
>>>
>>>
>>> On Tue, Sep 2, 2014 at 2:50 PM, Ankur Chauhan <[email protected]> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I apologise for the repost but wanted to get some responses. I am
>>>> evaluating Mesos and wanted to know if anyone has successfully used
>>>> Mesos (with or without Docker) as a means of managing a MongoDB
>>>> deployment. Currently, I have 8 shards, where each shard is a 3x
>>>> replicated replica set containing > 500 GB of data.
>>>>
>>>>
>>>> I was wondering if anyone has tried to deploy and maintain a MongoDB
>>>> deployment using Mesos (or could tell me why this is not advisable),
>>>> and about any gotchas and the preferred methodology for deployment.
>>>>
>>>> -- Ankur
>>>>
>>>>
>>>
>>
>
