[ceph-users] Re: Doing SAML2 Auth With Containerized mgrs

2021-11-02 Thread Edward R Huyer
Glad to help! You don’t need the -m (unless I’m misunderstanding your intent). I used “cephadm shell --name mgr.” to get a shell in an environment that mimics the daemon’s container, and it does appear to physically share the mounts. That’s how I was able to figure out what parts of the
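A hedged example of the command pattern being discussed (the daemon name below is a placeholder, not the one from the original message; real daemon names can be listed with "ceph orch ps"):

  # open a shell whose container environment mimics that of a specific mgr daemon
  cephadm shell --name mgr.host1.abcdef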

[ceph-users] Re: Best way to add multiple nodes to a cluster?

2021-11-02 Thread DHilsbos
Zakhar; When adding nodes I usually set the following:
noin (OSDs register as up, but stay out)
norebalance (new placement shouldn't be calculated when the cluster layout changes; I've been bit by this not working as expected, so I also set below)
nobackfill (PGs don't move)
I then remove noin,
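In command form, a minimal sketch of the flag dance described above (illustrative only; the exact order and timing of unsetting the flags is a judgment call):

  ceph osd set noin         # new OSDs register as up but stay out
  ceph osd set norebalance  # don't recalculate placement while the layout changes
  ceph osd set nobackfill   # don't move PG data yet
  # ... add the new nodes/OSDs ...
  ceph osd unset noin
  ceph osd unset nobackfill
  ceph osd unset norebalance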

[ceph-users] Re: OSD spend too much time on "waiting for readable" -> slow ops -> laggy pg -> rgw stop -> worst case osd restart

2021-11-02 Thread Peter Lieven
On 02.11.21 at 15:02, Sage Weil wrote: On Tue, Nov 2, 2021 at 8:29 AM Manuel Lausch wrote: Hi Sage, The "osd_fast_shutdown" is set to "false". As we upgraded to Luminous I also had blocked IO issues with this enabled. Some weeks ago I tried out the options "osd_fast_shutdown" and

[ceph-users] Re: OSD spend too much time on "waiting for readable" -> slow ops -> laggy pg -> rgw stop -> worst case osd restart

2021-11-02 Thread Sage Weil
On Tue, Nov 2, 2021 at 8:29 AM Manuel Lausch wrote: > Hi Sage, > > The "osd_fast_shutdown" is set to "false" > As we upgraded to Luminous I also had blocked IO issues with this > enabled. > > Some weeks ago I tried out the options "osd_fast_shutdown" and > "osd_fast_shutdown_notify_mon" and

[ceph-users] Re: OSD spend too much time on "waiting for readable" -> slow ops -> laggy pg -> rgw stop -> worst case osd restart

2021-11-02 Thread Manuel Lausch
Hi Sage, The "osd_fast_shutdown" is set to "false". As we upgraded to Luminous I also had blocked IO issues with this enabled. Some weeks ago I tried out the options "osd_fast_shutdown" and "osd_fast_shutdown_notify_mon" and also got slow ops while stopping/starting OSDs. But I didn't check if
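For anyone following along, these settings can be inspected and toggled with the config commands (a hedged example; the values shown are illustrative, not a recommendation from this thread):

  ceph config get osd osd_fast_shutdown
  ceph config set osd osd_fast_shutdown false
  ceph config set osd osd_fast_shutdown_notify_mon true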

[ceph-users] Re: Single ceph client usage with multiple ceph cluster

2021-11-02 Thread Markus Baier
Hello, yes, you can use a single server to operate multiple clusters. I have a configuration running with two independent Ceph clusters on the same node (of course, multiple nodes for the two clusters). The trick is to work with multiple ceph.conf files; I use two separate ceph.conf files
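A minimal sketch of what that looks like from the client side (file names and paths are illustrative, assuming one conf/keyring pair per cluster under /etc/ceph):

  # point the client at a specific cluster explicitly
  ceph --conf /etc/ceph/cluster-a.conf --keyring /etc/ceph/cluster-a.client.admin.keyring -s
  ceph --conf /etc/ceph/cluster-b.conf --keyring /etc/ceph/cluster-b.client.admin.keyring -s
  # or rely on the --cluster naming convention (/etc/ceph/<name>.conf, /etc/ceph/<name>.client.admin.keyring)
  ceph --cluster cluster-a -s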

[ceph-users] Re: How can user home directory quotas be automatically set on CephFS?

2021-11-02 Thread Magnus HAGDORN
Hi Artur, we did write a script (in fact a series of scripts) that we use to manage our users and their quotas. Our script adds a new user to our LDAP and sets the default quotas for various storage areas. Quota information is kept in the LDAP. Another script periodically scans the LDAP for
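A very rough sketch of the periodic step described here (the LDAP filter, directory layout and quota value are made up for illustration; the real scripts will differ):

  # set a default 10 GiB quota on each LDAP user's CephFS home directory
  for user in $(ldapsearch -LLL -x '(objectClass=posixAccount)' uid | awk '/^uid:/{print $2}'); do
      setfattr -n ceph.quota.max_bytes -v 10737418240 "/cephfs/home/$user"
  done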

[ceph-users] Single ceph client usage with multiple ceph cluster

2021-11-02 Thread Mosharaf Hossain
Hi Users, We have two Ceph clusters in our lab. We are experimenting with using a single server as a client for both clusters. Can we use the same client server and store the keyrings for the different clusters in the ceph.conf file? Regards Mosharaf Hossain

[ceph-users] Re: Pg autoscaling and device_health_metrics pool pg sizing

2021-11-02 Thread David Orman
I suggest continuing with manual PG sizing for now. With 16.2.6 we have seen the autoscaler scale the device_health_metrics pool up to 16000+ PGs on brand new clusters, which we know is incorrect. It's on our company backlog to investigate, but far down the backlog. It's bitten us enough times in the
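If you want to pin that pool manually in the meantime, something along these lines should work (a hedged sketch; the pool name matches the subject of this thread, the pg_num value is illustrative):

  ceph osd pool set device_health_metrics pg_autoscale_mode off
  ceph osd pool set device_health_metrics pg_num 1
  # or turn the autoscaler off by default for new pools
  ceph config set global osd_pool_default_pg_autoscale_mode off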

[ceph-users] How can user home directory quotas be automatically set on CephFS?

2021-11-02 Thread Artur Kerge
Hello! As I understand it, CephFS user max file and byte quotas (ceph.quota.max_{files,bytes}) can be set on an MDS (or a CephFS client) via the setfattr command (https://docs.ceph.com/en/octopus/cephfs/quota/). My question is, how can the quotas be set automatically for every new user's home directory?
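For reference, the manual form from the quota documentation linked above looks like this (path and values are illustrative):

  setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/home/newuser   # 100 GiB
  setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/home/newuser
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/home/newuser                   # verify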

[ceph-users] Re: Doing SAML2 Auth With Containerized mgrs

2021-11-02 Thread Ernesto Puerta
Thanks a lot, Edward, for sharing this thorough description! I filed a tracker issue to record your findings and improve this set-up process (https://tracker.ceph.com/issues/53127). Additionally, did you try with "cephadm shell -n mgr. -m "? If I'm not mistaken, that should give you a shell where

[ceph-users] Re: Best way to add multiple nodes to a cluster?

2021-11-02 Thread Zakhar Kirpichenko
No issue at all, this is the advice I was looking for :-) Seems that 'norebalance' will do the trick. Thanks! /Z On Tue, Nov 2, 2021 at 11:24 AM Szabo, Istvan (Agoda) <istvan.sz...@agoda.com> wrote: > What's the issue with adding all osd with noout and norebalance and once > all of them up,

[ceph-users] Re: Upgrade to 16.2.6 and osd+mds crash after bluestore_fsck_quick_fix_on_mount true

2021-11-02 Thread Beard Lionel
Hi, It is quite an old cluster (hopefully not the production one); it was created in Luminous if I remember well. Regards, Lionel BEARD, CLS - IT & Operations

[ceph-users] Re: Best way to add multiple nodes to a cluster?

2021-11-02 Thread Etienne Menguy
Hi, I see two ways: add your OSDs with 0 weight and slowly increase their weight, or add OSDs one by one. It's easy but "stupid", as some PGs will move many times. Check https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/OKCWC5KNQF2FD3V4WI2IGMQBGOYY2LL2/
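A hedged sketch of the first approach (the OSD id and weights are illustrative; the final weight usually matches the device's capacity in TiB):

  # have new OSDs start with zero CRUSH weight
  ceph config set global osd_crush_initial_weight 0
  # then raise the weight in small steps, letting the cluster settle in between
  ceph osd crush reweight osd.33 0.5
  ceph osd crush reweight osd.33 1.0
  ceph osd crush reweight osd.33 1.82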

[ceph-users] Best way to add multiple nodes to a cluster?

2021-11-02 Thread Zakhar Kirpichenko
Hi! I have a 3-node 16.2.6 cluster with 33 OSDs, and plan to add another 3 nodes of the same configuration to it. What is the best way to add the new nodes and OSDs so that I can avoid a massive rebalance and performance hit until all new nodes and OSDs are in place and operational? I would very