[ceph-users] Feature/Change Request: Don't send alert emails for --sticky muted WARN conditions

2023-05-25 Thread Edward R Huyer
I recently upgraded to Quincy, and toggled on the BULK flag of a few pools. As a result, my cluster has been spending the last several days shuffling data while growing the pools' PG counts. That in turn has resulted in a steadily increasing number of PGs being flagged PG_NOT_DEEP_SCRUBBED.
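(For reference, a sketch of the commands involved — not from the thread; the pool name is illustrative:)

    # Quincy+: mark a pool as bulk so the autoscaler targets a higher PG count
    ceph osd pool set mypool bulk true
    # The behavior being requested: a sticky mute that survives the PG count growing
    ceph health mute PG_NOT_DEEP_SCRUBBED --sticky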

[ceph-users] Re: SMB and ceph question

2022-10-27 Thread Edward R Huyer
There do exist vfs_ceph and vfs_ceph_snapshots modules for Samba, at least in theory. https://www.samba.org/samba/docs/current/man-html/vfs_ceph.8.html https://www.samba.org/samba/docs/current/man-html/vfs_ceph_snapshots.8.html However, they don't exist in, for instance, the version of Samba in
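(A minimal, untested smb.conf sketch for a vfs_ceph share, per the man pages above; the CephX user "samba" and the share name are assumptions:)

    [cephfs]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no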

[ceph-users] Re: MDS_CLIENT_LATE_RELEASE after setting up scheduled CephFS snapshots

2022-10-21 Thread Edward R Huyer
Great, thank you both for the confirmation! -Original Message- From: Xiubo Li Sent: Friday, October 21, 2022 8:43 AM To: Rishabh Dave ; Edward R Huyer Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Re: MDS_CLIENT_LATE_RELEASE after setting up scheduled CephFS snapshots On 21/10

[ceph-users] MDS_CLIENT_LATE_RELEASE after setting up scheduled CephFS snapshots

2022-10-19 Thread Edward R Huyer
I recently set up scheduled snapshots on my CephFS filesystem, and ever since the cluster has been intermittently going into HEALTH_WARN with an MDS_CLIENT_LATE_RELEASE notification. Specifically: [WARN] MDS_CLIENT_LATE_RELEASE: 1 clients failing to respond to capability release
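(A sketch of the usual triage for this warning; the MDS name and client id are placeholders:)

    ceph health detail                     # names the client failing to release caps
    ceph tell mds.0 client ls              # match the client id to a host/mount
    ceph tell mds.0 client evict id=12345  # last resort: evict the stuck session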

[ceph-users] Re: Replacing OSD with DB on shared NVMe

2022-05-25 Thread Edward R Huyer
Sent: Wednesday, May 25, 2022 5:03 PM To: Edward R Huyer Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Replacing OSD with DB on shared NVMe In your example, you can login to the server in question with the OSD, and run "ceph-volume lvm zap --osd-id --destroy" and it will purge
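(With the elided arguments filled in by a hypothetical OSD id, the suggested command reads:)

    # Purge OSD 12's LVs, including its DB/WAL LV on the shared NVMe
    ceph-volume lvm zap --osd-id 12 --destroy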

[ceph-users] Replacing OSD with DB on shared NVMe

2022-05-25 Thread Edward R Huyer
Ok, I'm not sure if I'm missing something or if this is a gap in ceph orch functionality, or what: On a given host all the OSDs share a single large NVMe drive for DB/WAL storage and were set up using a simple ceph orch spec file. I'm replacing some of the OSDs. After they've been removed
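(A sketch of the kind of simple ceph orch spec file described, with illustrative filters:)

    service_type: osd
    service_id: hdd-with-nvme-db
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0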

[ceph-users] Re: Infinite Dashboard 404 Loop On Failed SAML Authentication

2022-01-11 Thread Edward R Huyer
Actually, one other question occurred to me: Was your testing environment bare metal or a cephadm containerized install? It shouldn't matter, and I don't know that it does matter, but my environment is containerized. -- Edward Huyer -Original Message- From: Edward R Huyer

[ceph-users] Re: Infinite Dashboard 404 Loop On Failed SAML Authentication

2022-01-11 Thread Edward R Huyer
persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and destroy any copies of this information. From: Ernesto Puerta [mailto:epuer...@redhat.com] Sent: Tuesday, January 11, 2022 11:25 AM To: Edward R Huyer Cc: ceph-users@ceph.io S

[ceph-users] Infinite Dashboard 404 Loop On Failed SAML Authentication

2022-01-06 Thread Edward R Huyer
Ok, I think I've nearly got the dashboard working with SAML/Shibboleth authentication, except for one thing: If a user authenticates via SAML, but a corresponding dashboard user hasn't been created, it triggers a loop where the browser gets redirected to a nonexistent dashboard unauthorized
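(The workaround implied here is pre-creating a matching dashboard account before the SAML user first logs in; a sketch with a hypothetical username and role:)

    ceph dashboard ac-user-create jdoe -i password.txt
    ceph dashboard ac-user-set-roles jdoe read-only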

[ceph-users] Re: Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Edward R Huyer
> -Original Message- > From: Carlos Mogas da Silva > Sent: Wednesday, December 8, 2021 1:26 PM > To: Edward R Huyer ; Marc ; > ceph-users@ceph.io > Subject: Re: [ceph-users] Re: Migration from CentOS7/Nautilus to CentOS > Stream/Pacific > > On Wed, 2021-12-

[ceph-users] Re: Migration from CentOS7/Nautilus to CentOS Stream/Pacific

2021-12-08 Thread Edward R Huyer
> On Wed, 2021-12-08 at 16:06 +, Marc wrote: > > > > > > It isn't possible to upgrade from CentOS 7 to anything... At least > > > without required massive hacks that may or may not work (and most > > > likely won't). > > > > I meant wipe the os disk, install whatever, install nautilus and put

[ceph-users] Re: Doing SAML2 Auth With Containerized mgrs

2021-11-02 Thread Edward R Huyer
of the host’s filesystem were visible inside the container, and how the container’s and host’s paths differed. From: Ernesto Puerta Sent: Tuesday, November 2, 2021 6:38 AM To: Edward R Huyer ; Sebastian Wagner Cc: Yury Kirsanov ; ceph-users@ceph.io Subject: Re: [ceph-users] Re: Doing SAML2 Auth

[ceph-users] Re: Doing SAML2 Auth With Containerized mgrs

2021-10-29 Thread Edward R Huyer
From: Edward R Huyer Sent: Wednesday, October 27, 2021 9:31 AM To: 'Ernesto Puerta' Cc: Yury Kirsanov ; ceph-users@ceph.io Subject: RE: [ceph-users] Re: Doing SAML2 Auth With Containerized mgrs Thank you for the reply. Even if there’s a good reason for the CLI tool to not send the contents

[ceph-users] Re: Doing SAML2 Auth With Containerized mgrs

2021-10-27 Thread Edward R Huyer
specific suggestions as to how to approach this? I’m not familiar enough with the details of the cephadm-deploy containers specifically, or containers in general, to know where to start. From: Ernesto Puerta Sent: Wednesday, October 27, 2021 6:53 AM To: Edward R Huyer Cc: Yury Kirsanov ; ceph-users

[ceph-users] Re: Doing SAML2 Auth With Containerized mgrs

2021-10-25 Thread Edward R Huyer
No worries. It's a pretty specific problem, and the documentation could be better. -Original Message- From: Yury Kirsanov Sent: Monday, October 25, 2021 12:17 PM To: Edward R Huyer Cc: ceph-users@ceph.io Subject: [ceph-users] Re: Doing SAML2 Auth With Containerized mgrs Hi Edward

[ceph-users] Re: Doing SAML2 Auth With Containerized mgrs

2021-10-25 Thread Edward R Huyer
ceph config-key set mgr/cephadm/grafana_crt -i .crt ceph config-key set mgr/cephadm/grafana_key -i .key ceph orch reconfig grafana ceph mgr module enable dashboard Hope this helps! Regards, Yury. On Tue, Oct 26, 2021 at 2:45 AM Edward R Huyer mailto:erh...@rit.edu>> wrote: Continuing my containerized Ceph adve
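(Yury's sequence, restated on separate lines with hypothetical filenames:)

    ceph config-key set mgr/cephadm/grafana_crt -i grafana.crt
    ceph config-key set mgr/cephadm/grafana_key -i grafana.key
    ceph orch reconfig grafana
    ceph mgr module enable dashboard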

[ceph-users] Doing SAML2 Auth With Containerized mgrs

2021-10-25 Thread Edward R Huyer
Continuing my containerized Ceph adventures I'm trying to set up SAML2 auth for the dashboard (specifically pointing at the institute's Shibboleth service). The service requires the use of x509 certificates. Following the instructions in the documentation (
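(The documented command being followed is roughly the below; the arguments after the dashboard URL and IdP metadata are optional, and every value here is a placeholder:)

    ceph dashboard sso setup saml2 \
        https://ceph-dashboard.example.edu:8443 \
        https://shibboleth.example.edu/idp/metadata \
        uid \
        https://shibboleth.example.edu/idp/shibboleth \
        /etc/ceph/sp.crt /etc/ceph/sp.key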

[ceph-users] Re: Daemon Version Mismatch (But Not Really?) After Deleting/Recreating OSDs

2021-10-05 Thread Edward R Huyer
Gotcha. Thanks for the input regardless. I suppose I'll continue what I'm doing, and plan on doing an upgrade via quay.io in the near future. -Original Message- From: Gregory Farnum Sent: Monday, October 4, 2021 7:14 PM To: Edward R Huyer Cc: ceph-users@ceph.io Subject: Re: [ceph
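(The planned upgrade step would presumably look like this, with an illustrative image tag:)

    ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.6
    ceph orch upgrade status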

[ceph-users] Re: Daemon Version Mismatch (But Not Really?) After Deleting/Recreating OSDs

2021-10-04 Thread Edward R Huyer
October 4, 2021 2:33 PM To: Edward R Huyer Cc: ceph-users@ceph.io Subject: Re: [ceph-users] Daemon Version Mismatch (But Not Really?) After Deleting/Recreating OSDs On Mon, Oct 4, 2021 at 7:57 AM Edward R Huyer wrote: > > Over the summer, I upgraded my cluster from Nautilus to Pacific, and

[ceph-users] Daemon Version Mismatch (But Not Really?) After Deleting/Recreating OSDs

2021-10-04 Thread Edward R Huyer
Over the summer, I upgraded my cluster from Nautilus to Pacific, and converted to use cephadm after doing so. Over the past couple weeks, I've been converting my OSDs to use NVMe drives for db+wal storage. Schedule a node's worth of OSDs to be removed, wait for that to happen, delete the PVs
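(The removal step described maps to something like the following; the OSD ids are hypothetical:)

    ceph orch osd rm 10 11 12     # drain, then remove
    ceph orch osd rm status      # watch progress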

[ceph-users] Re: Orchestrator is internally ignoring applying a spec against SSDs, apparently determining they're rotational.

2021-09-27 Thread Edward R Huyer
I also just ran into what seems to be the same problem Chris did. Despite all indicators visible to me saying my NVMe drive is non-rotational (including /sys/block/nvme0n1/queue/rotational ), the Orchestrator would not touch it until I specified it by model. -Original Message- From:
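(The check and the workaround, sketched; the model string is hypothetical:)

    cat /sys/block/nvme0n1/queue/rotational   # 0 = non-rotational
    # Workaround in the OSD spec: select the NVMe by model instead of the rotational flag
    #   db_devices:
    #     model: 'SAMSUNG MZ1LB960'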

[ceph-users] Re: OSD Service Advanced Specification db_slots

2021-09-27 Thread Edward R Huyer
08G So the limit filter works as expected here. If I don't specify it, I wouldn't get any OSDs because ceph-volume can't fit three DBs of size 3 GB onto the 8 GB disk. Does that help? Regards, Eugen Zitat von Edward R Huyer : > I recently upgraded my existing cluster to Pacific and
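(Eugen's test, reconstructed as a spec sketch: cap the data devices with limit so the fixed-size DBs fit the 8 GB device:)

    spec:
      data_devices:
        rotational: 1
        limit: 2          # only two OSDs, so two 3 GB DBs fit the 8 GB device
      db_devices:
        size: '8G'
      block_db_size: 3G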

[ceph-users] OSD Service Advanced Specification db_slots

2021-09-10 Thread Edward R Huyer
I recently upgraded my existing cluster to Pacific and cephadm, and need to reconfigure all the (rotational) OSDs to use NVMe drives for db storage. I think I have a reasonably good idea how that's going to work, but the use of db_slots and limit in the OSD service specification have me
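(For reference, where these fields sit in a spec — a sketch only, since the exact semantics of db_slots are the open question here:)

    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0
      db_slots: 12    # intended: carve the shared NVMe into 12 DB slots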