Thanks. I’m now trying to figure out how to get Proxmox to pass the “-o 
mds_namespace=otherfs” option when it mounts the filesystem, but that’s a 
bit out of scope for this list (though if anyone has done this, please let me 
know!).
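For anyone searching the archives later: outside of Proxmox, selecting a non-default CephFS by name with the kernel client looks roughly like the sketch below. The monitor address, mountpoint, and secret file path are placeholders, and I haven't found where (or whether) Proxmox exposes extra mount options for its CephFS storage.

```shell
# Mount a specific CephFS filesystem ("otherfs") rather than the default one.
# Nautilus-era kernel clients use the mds_namespace= option; newer clients
# accept fs= instead. Monitor address and paths below are placeholders.
mount -t ceph 192.168.1.10:6789:/ /mnt/otherfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,mds_namespace=otherfs
```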

> On Mar 31, 2020, at 2:15 PM, Nathan Fish <[email protected]> wrote:
> 
> Yes, standby (as opposed to standby-replay) MDS' form a shared pool
> from which the mons will promote an MDS to the required role.
> 
> On Tue, Mar 31, 2020 at 12:52 PM Jarett DeAngelis <[email protected]> wrote:
>> 
>> So, for the record, this doesn’t appear to work in Nautilus.
>> 
>> 
>> 
>> Does this mean that I should just count on my standby MDS to “step in” when 
>> a new FS is created?
>> 
>>> On Mar 31, 2020, at 3:19 AM, Eugen Block <[email protected]> wrote:
>>> 
>>>> This has changed in Octopus. The above config variables are removed.
>>>> Instead, follow this procedure:
>>>> 
>>>> https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
>>> 
>>> Thanks for the clarification. IIRC I had trouble applying the mds_standby 
>>> settings in Nautilus already, but I hadn't verified that yet, so I didn't 
>>> mention it in my response. I'll take another look at it.
>>> 
>>> 
>>> Zitat von Patrick Donnelly <[email protected]>:
>>> 
>>>> On Mon, Mar 30, 2020 at 11:57 PM Eugen Block <[email protected]> wrote:
>>>>> For the standby daemon you have to be aware of this:
>>>>> 
>>>>>> By default, if none of these settings are used, all MDS daemons
>>>>>> which do not hold a rank will
>>>>>> be used as 'standbys' for any rank.
>>>>>> [...]
>>>>>> When a daemon has entered the standby replay state, it will only be
>>>>>> used as a standby for
>>>>>> the rank that it is following. If another rank fails, this standby
>>>>>> replay daemon will not be
>>>>>> used as a replacement, even if no other standbys are available.
>>>>> 
>>>>> Some of the mentioned settings are for example:
>>>>> 
>>>>> mds_standby_for_rank
>>>>> mds_standby_for_name
>>>>> mds_standby_for_fscid
>>>>> 
>>>>> The easiest way is to have one standby daemon per CephFS and let them
>>>>> handle the failover.
>>>> 
>>>> This has changed in Octopus. The above config variables are removed.
>>>> Instead, follow this procedure:
>>>> 
>>>> https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
>>>> 
>>>> --
>>>> Patrick Donnelly, Ph.D.
>>>> He / Him / His
>>>> Senior Software Engineer
>>>> Red Hat Sunnyvale, CA
>>>> GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
>>> 
>>> 
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]