[ceph-users] Re: MDS internal op exportdir despite ephemeral pinning

2022-11-18 Thread Patrick Donnelly
On Fri, Nov 18, 2022 at 2:32 PM Frank Schilder wrote:
> Hi Patrick,
>
> we plan to upgrade next year. Can't do any faster. However, distributed
> ephemeral pinning was introduced with octopus. It was one of the major new
> features and is explained in the octopus documentation in detail.

[ceph-users] Issues upgrading cephadm cluster from Octopus.

2022-11-18 Thread Seth T Graham
We have a cluster running Octopus (15.2.17) that I need to get updated, and I am getting cephadm failures when updating the managers; I have tried both Pacific and Quincy with the same results. The cluster was deployed with cephadm on CentOS Stream 8 using podman, and due to network isolation of
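For reference, upgrades of a cephadm-deployed cluster are driven through the orchestrator. A minimal sketch, assuming the managers are still responsive; the target version and the registry name below are placeholders only:

    # sanity checks before starting
    ceph -s
    ceph orch ps
    # staged upgrade to a specific release
    ceph orch upgrade start --ceph-version 16.2.10
    # or, for an isolated network, point at a locally mirrored image
    ceph orch upgrade start --image registry.example.local/ceph/ceph:v16.2.10
    # follow progress and errors
    ceph orch upgrade status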

[ceph-users] Re: MDS internal op exportdir despite ephemeral pinning

2022-11-18 Thread Frank Schilder
Hi Patrick, we plan to upgrade next year. Can't do any faster. However, distributed ephemeral pinning was introduced with octopus. It was one of the major new features and is explained in the octopus documentation in detail. Are you saying that it is actually not implemented? If so, how much

[ceph-users] Re: MDS internal op exportdir despite ephemeral pinning

2022-11-18 Thread Patrick Donnelly
On Fri, Nov 18, 2022 at 2:11 PM Frank Schilder wrote:
> Hi Patrick,
>
> thanks for your super fast answer.
>
> > I assume you mean "distributed ephemeral pinning"?
>
> Yes. Just to remove any potential for a misunderstanding from my side, I
> enabled it with (copy-paste from the command

[ceph-users] Re: MDS internal op exportdir despite ephemeral pinning

2022-11-18 Thread Frank Schilder
Hi Patrick,

thanks for your super fast answer.

> I assume you mean "distributed ephemeral pinning"?

Yes. Just to remove any potential for a misunderstanding from my side, I enabled it with (copy-paste from the command history, /mnt/admin/cephfs/ is the mount point of "/" with all possible
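For context, the distributed ephemeral pin is enabled by setting an extended attribute on the parent directory. A minimal sketch of what such a command looks like, assuming /mnt/admin/cephfs is the mount point of "/" as described above (the exact path below is illustrative):

    # spread the immediate children of this directory across the active MDS ranks
    setfattr -n ceph.dir.pin.distributed -v 1 /mnt/admin/cephfs/hpc/home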

[ceph-users] Re: Any concerns using EC with CLAY in Quincy (or Pacific)?

2022-11-18 Thread Jeremy Austin
Hi Sean, My use of EC is specifically for slow, bulk storage. I did test jerasure some years ago, but I don't think I kept my results. I'm having issues today with arxiv.org which had papers… I wanted to reduce disk usage primarily, and network IO secondarily. In my case, I preferred the reduced
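For anyone reproducing such a comparison, a CLAY profile is created like any other erasure-code profile; a sketch with k/m/d and pg counts chosen purely for illustration (d must satisfy k+1 <= d <= k+m-1):

    ceph osd erasure-code-profile set clay_test plugin=clay k=4 m=2 d=5 crush-failure-domain=host
    ceph osd pool create ec_bulk 64 64 erasure clay_test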

[ceph-users] Re: MDS internal op exportdir despite ephemeral pinning

2022-11-18 Thread Patrick Donnelly
On Fri, Nov 18, 2022 at 12:51 PM Frank Schilder wrote:
> Hi Patrick,
>
> thanks! I did the following but don't know how to interpret the result. The
> three directories we have ephemeral pinning set are:
>
> /shares
> /hpc/home
> /hpc/groups

I assume you mean "distributed ephemeral pinning"?

[ceph-users] Re: MDS internal op exportdir despite ephemeral pinning

2022-11-18 Thread Frank Schilder
Hi Patrick,

thanks! I did the following but don't know how to interpret the result. The three directories we have ephemeral pinning set are:

/shares
/hpc/home
/hpc/groups

If I understand the documentation correctly, everything under /hpc/home/user should be on the same MDS. Trying it out I
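One way to check is to dump the subtree map of each active MDS and to read the xattr back; a hedged sketch, with the rank and mount point as placeholders:

    # each subtree entry shows the directory path, its authoritative rank and its export pin
    ceph tell mds.0 get subtrees
    # confirm the policy is actually set on the directory
    getfattr -n ceph.dir.pin.distributed /mnt/admin/cephfs/hpc/home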

[ceph-users] Re: Disable legacy msgr v1

2022-11-18 Thread Murilo Morais
Have you tried setting ms_bind_msgr1 to false?

On Fri, Nov 18, 2022 at 14:35, Oleksiy Stashok wrote:
> Hey guys,
>
> Is there a way to disable the legacy msgr v1 protocol for all ceph
> services?
>
> Thank you.
> Oleksiy
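For completeness, a sketch of applying that cluster-wide through the central config (daemons typically need a restart before they stop binding to the v1 port):

    ceph config set global ms_bind_msgr1 false
    ceph config get mon ms_bind_msgr1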

[ceph-users] Disable legacy msgr v1

2022-11-18 Thread Oleksiy Stashok
Hey guys, Is there a way to disable the legacy msgr v1 protocol for all ceph services? Thank you. Oleksiy

[ceph-users] Re: MDS internal op exportdir despite ephemeral pinning

2022-11-18 Thread Patrick Donnelly
On Thu, Nov 17, 2022 at 4:45 AM Frank Schilder wrote:
> Hi Patrick,
>
> thanks for your explanation. Is there a way to check which directory is
> exported? For example, is the inode contained in the messages somewhere? A
> readdir would usually happen on log-in and the number of slow exports
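The export operations currently in flight (including a description of the directory being exported) can usually be inspected through the MDS admin socket; a sketch, with the MDS id as a placeholder:

    ceph daemon mds.<id> dump_ops_in_flight
    ceph daemon mds.<id> dump_historic_ops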

[ceph-users] Scheduled RBD volume snapshots without mirroring (-schedule)

2022-11-18 Thread Tobias Bossert
Dear List, I'm searching for a way to automate the snapshot creation/cleanup of RBD volumes. Ideally, there would be something like the "Snapshot Scheduler for cephfs"[1], but I understand this is not as "easy" with RBD devices since Ceph has no idea of the overlying filesystem. So what I
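In the absence of a built-in scheduler for plain (non-mirror) RBD snapshots, a common workaround is a small cron-driven wrapper around rbd snap create; a minimal sketch, with pool name and retention as placeholders, and snapshots being crash-consistent only:

    #!/bin/bash
    # snapshot every image in the pool, keep only the newest $KEEP "auto-" snapshots
    POOL=rbd
    KEEP=7
    for IMG in $(rbd ls "$POOL"); do
        rbd snap create "$POOL/$IMG@auto-$(date +%Y%m%d-%H%M)"
        # prune: snapshots are listed oldest first, drop everything but the last $KEEP
        rbd snap ls "$POOL/$IMG" | awk '/auto-/{print $2}' | head -n -"$KEEP" \
            | xargs -r -I{} rbd snap rm "$POOL/$IMG@{}"
    done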

[ceph-users] Re: LVM osds lose connection to disk

2022-11-18 Thread Frank Schilder
Hi Dan and Igor, looks very much like BFQ is indeed the culprit. I rolled back everything to none (high-performance SAS SSDs) and mq-deadline (low-medium performance SATA SSDs) and started a full speed data movement from the slow to the fast disks. The cluster operates as good as in the past
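For reference, the scheduler switch described above is done per block device; a sketch of both the runtime change and a udev rule to make it persistent (the device name and the SSD-only match are examples):

    # runtime only, lost on reboot
    echo none > /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/scheduler
    # persistent, e.g. in /etc/udev/rules.d/60-ioscheduler.rules
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"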

[ceph-users] Re: LVM osds lose connection to disk

2022-11-18 Thread Dan van der Ster
Hi Frank, bfq was definitely broken, deadlocking io for a few CentOS Stream 8 kernels between EL 8.5 and 8.6 -- we also hit that in production and switched over to `none`. I don't recall exactly when the upstream kernel was also broken but apparently this was the fix:

[ceph-users] Re: lost all monitors at the same time

2022-11-18 Thread Eugen Block
I still find it strange that a power outage can break a cluster; we've had multiple outages this year and the cluster recovered successfully every time. Although I should add that it's not containerized yet, it's still running on Nautilus. Anyway, did you verify that all directories are there
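A quick way to do that check; the paths differ between package-based and cephadm/containerized layouts, both shown here as assumptions:

    # package-based deployment (e.g. Nautilus)
    ls -l /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
    # cephadm/containerized deployment
    ls -l /var/lib/ceph/<fsid>/mon.$(hostname -s)/store.db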

[ceph-users] Re: failed to decode CephXAuthenticate / handle_auth_bad_method server allowed_methods [2] but i only support [2]

2022-11-18 Thread Eugen Block
Hi, I wonder if it's because you try to start it with the admin keyring instead of the rgw client keyring. Have you tried it? Zitat von Marcus Müller : Hi all, I try to install a new rgw node. After trying to execute this command: /usr/bin/radosgw -f --cluster ceph --name
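What is being suggested, roughly: give the gateway its own client identity and keyring instead of client.admin; a hedged sketch with placeholder names and typical capabilities:

    # create a dedicated keyring for the gateway (the identity name is an example)
    ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
        -o /etc/ceph/ceph.client.rgw.gw1.keyring
    # start the daemon with that identity
    /usr/bin/radosgw -f --cluster ceph --name client.rgw.gw1 \
        --keyring /etc/ceph/ceph.client.rgw.gw1.keyring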