On Wed, Mar 7, 2018 at 2:02 PM, Dan van der Ster <d...@vanderster.com> wrote:
> On Wed, Mar 7, 2018 at 2:29 PM, John Spray <jsp...@redhat.com> wrote:
>> On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster <d...@vanderster.com> 
>> wrote:
>>> Hi all,
>>> What is the purpose of
>>>    ceph mds set max_mds <int>
>>> ?
>>> We just used that by mistake on a cephfs cluster when attempting to
>>> decrease from 2 to 1 active mds's.
>>> The correct command to do this is of course
>>>   ceph fs set <fsname> max_mds <int>
>>> So, is `ceph mds set max_mds` useful for something? If not, should it
>>> be removed from the CLI?
>> It's the legacy version of the command from before we had multiple
>> filesystems.  Those commands are marked as obsolete internally so that
>> they're not included in the --help output,
> Ahhh! It is indeed omitted from --help but I hadn't noticed because it
> is still rather helpful if you go ahead and run the command:
> # ceph mds set
> Invalid command:  missing required parameter
> var(max_mds|max_file_size|allow_new_snaps|inline_data|allow_multimds|allow_dirfrags)
> mds set 
> max_mds|max_file_size|allow_new_snaps|inline_data|allow_multimds|allow_dirfrags
> <val> {<confirm>} :  set mds parameter <var> to <val>
> Error EINVAL: invalid command
> I suppose we just need a new generation of operators that would never
> even try these old deprecated commands ;)
>> but they're still handled
>> (applied to the "default" filesystem) if called.
> Hmm... does it apply if we never set the default fs (though we only have one)?
> (How do we even see/get the default fs?)

It'll automatically be set to the first filesystem created.

Now that I go look for the setting, I remember it's actually got the
slightly esoteric internal name of "legacy_client_fscid" (because it's
the filesystem ID that will get mounted by a legacy client that
doesn't know which filesystem it wants).  You set it with "ceph fs
set-default", but it looks like it got left out of FSMap::dump, so
there's no easy way to peek at it.
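
[For readers of the archive: a hedged sketch of the commands being discussed, assuming a Luminous-era cluster; the filesystem name `cephfs_a` is a placeholder.]

```shell
# Set the default filesystem -- the one a legacy client (which doesn't
# name a filesystem) will mount. "cephfs_a" is a placeholder name.
ceph fs set-default cephfs_a

# The FSMap can be inspected with "ceph fs dump", though as noted above
# the legacy_client_fscid field may be missing from the output on
# releases predating the linked fix.
ceph fs dump
```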

Created https://github.com/ceph/ceph/pull/20780
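
[Again for the archive, a sketch of the per-filesystem form of the workflow discussed in this thread, assuming Luminous-era syntax; `cephfs_a` is a placeholder filesystem name, and the `mds deactivate` role syntax may differ on other releases.]

```shell
# Reduce max_mds at the filesystem scope first (not "ceph mds set"),
# so the fs-scope setting and the rank count agree...
ceph fs set cephfs_a max_mds 1

# ...then deactivate the surplus rank. Ranks are numbered from 0,
# so with two active MDSs the rank to stop is rank 1.
ceph mds deactivate cephfs_a:1
```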


> What happened in our case is that I did `ceph mds set max_mds 1` then
> deactivated rank 2. This caused some sort of outage which deadlocked
> the mds's (they recovered after restarting). I assume the outage
> happened because I deactivated rank 2 while we still had max_mds=2 at
> the fs scope (and we had no standbys -- due to the v12.2.2->4 upgrade
> breakage).
> Thanks John!
> Dan
>> The multi-fs stuff went in for Jewel, so maybe we should think about
>> removing the old commands in Mimic: any thoughts Patrick?
>> John
>>> Cheers, Dan
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com