[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Torkil Svensgaard
On 12/15/21 14:18, Arthur Outhenin-Chalandre wrote: > On 12/15/21 13:50, Torkil Svensgaard wrote: >> Ah, so as long as I don't run the mirror daemons on site-a there is no risk of overwriting production data there? > To be perfectly clear there should be no risk whatsoever (as Ilya also said). I sugg

[ceph-users] Re: CephFS Metadata Pool bandwidth usage

2021-12-15 Thread Andras Sali
Hi Xiubo, Thanks very much for looking into this; that does sound like what might be happening in our case. Is this something that can be improved somehow - would disabling pinning or some config change help? Or could this be addressed in a future release? It seems somehow excessive to write so
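If pinning does turn out to be the trigger, a pin can be cleared per directory; a minimal sketch, with a hypothetical CephFS mount path:

  # Clear the export pin on a previously pinned directory (-1 removes the pin)
  setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/some/dir
  # Check the current pin value
  getfattr -n ceph.dir.pin /mnt/cephfs/some/dir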

[ceph-users] NFS-ganesha .debs not on download.ceph.com

2021-12-15 Thread Richard Zak
I've got Ceph running on Ubuntu 20.04 using Ceph-ansible, and I noticed that the .deb files for NFS-ganesha aren't on download.ceph.com. It seems the files should be here: https://download.ceph.com/nfs-ganesha/deb-V3.5-stable/pacific but "deb-V3.5-stable" doesn't exist. Poking around, I can see the
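For reference, the APT entry being described would look roughly like the sketch below; the path is taken from the message and is exactly what is currently missing on download.ceph.com, so this is illustrative only:

  # Expected (but currently non-existent) repository line for Ubuntu 20.04
  deb https://download.ceph.com/nfs-ganesha/deb-V3.5-stable/pacific focal main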

[ceph-users] Re: Experience reducing size 3 to 2 on production cluster?

2021-12-15 Thread Marco Pizzolo
Thanks Linh Vu, so it sounds like I should be prepared to bounce the OSDs and/or hosts, but I haven't heard anyone say yet that it won't work, so I guess there's that... On Tue, Dec 14, 2021 at 7:48 PM Linh Vu wrote: > I haven't tested this in Nautilus 14.2.22 (or any nautilus) but in > Luminous
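For context, the change under discussion is applied per pool; a minimal sketch with a hypothetical pool name (not a recommendation, which is the point of the thread):

  # Reduce replication on one pool; min_size usually has to drop too,
  # otherwise I/O can block while only one copy is available.
  ceph osd pool set mypool size 2
  ceph osd pool set mypool min_size 1
  ceph -s   # watch PG states / data movement afterwards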

[ceph-users] Re: Large latency for single thread

2021-12-15 Thread Mark Nelson
FWIW, we ran single OSD, iodepth=1 O_DSYNC write tests against classic and crimson bluestore OSDs in our Q3 crimson slide deck. You can see the results starting on slide 32 here: https://docs.google.com/presentation/d/1eydyAFKRea8n-VniQzXKW8qkKM9GLVMJt2uDjipJjQA/edit#slide=id.gf880cf6296_1_73

[ceph-users] Re: Large latency for single thread

2021-12-15 Thread Marc
Is this not just inherent to SDS? And wait for the new OSD code; I think they are working on it. https://yourcmc.ru/wiki/Ceph_performance > > m-seqwr-004k-001q-001j: (groupid=0, jobs=1): err= 0: pid=46: Wed Dec 15 > 14:05:32 2021 >   write: IOPS=794, BW=3177KiB/s (3254kB/s)(559MiB/180002msec)

[ceph-users] Re: ceph-mon pacific doesn't enter to quorum of nautilus cluster

2021-12-15 Thread Robert Sander
On 15.12.21 05:59, Linh Vu wrote: > May not be directly related to your error, but they slap a DO NOT UPGRADE FROM AN OLDER VERSION label on the Pacific release notes for a reason... > https://docs.ceph.com/en/latest/releases/pacific/ This is an unrelated issue (bluestore_fsck_quick_fix_on_mount)
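The option Robert names is an OSD setting; a hedged sketch of the commonly cited precaution (disabling the on-mount conversion before restarting OSDs), which is an assumption here rather than something spelled out in the message:

  # Keep the quick-fix/OMAP conversion from running automatically at OSD start
  ceph config set osd bluestore_fsck_quick_fix_on_mount false
  ceph config get osd bluestore_fsck_quick_fix_on_mount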

[ceph-users] Large latency for single thread

2021-12-15 Thread norman.kern
I created an RBD pool using only two SATA SSDs (one for data, the other for the database/WAL) and set the replica size to 1. After that, I set up a fio test on the same host where the OSD is placed. I found the latency is hundreds of microseconds (sixty microseconds for the raw SATA SSD). The fio output: m-seqw
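A minimal sketch of the kind of fio job being described, assuming a mapped RBD block device (the device path is hypothetical; the original job file is not shown in full):

  # Hypothetical job approximating the test: 4k sequential writes,
  # queue depth 1, a single job, direct I/O against a mapped RBD device.
  [global]
  filename=/dev/rbd0
  ioengine=libaio
  direct=1
  iodepth=1
  numjobs=1
  time_based=1
  runtime=180

  [m-seqwr-004k-001q-001j]
  rw=write
  bs=4k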

[ceph-users] Re: ceph-mon pacific doesn't enter to quorum of nautilus cluster

2021-12-15 Thread Gregory Farnum
Hmm that ticket came from the slightly unusual scenario where you were deploying a *new* Pacific monitor against an Octopus cluster. Michael, is your cluster deployed with cephadm? And is this a new or previously-existing monitor? On Wed, Dec 15, 2021 at 12:09 AM Michael Uleysky wrote: > > Thank

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Arthur Outhenin-Chalandre
On 12/15/21 13:50, Torkil Svensgaard wrote: > Ah, so as long as I don't run the mirror daemons on site-a there is no > risk of overwriting production data there? To be perfectly clear there should be no risk whatsoever (as Ilya also said). I suggested to not run rbd-mirror on site-a so that repli

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Torkil Svensgaard
On 15/12/2021 13.58, Ilya Dryomov wrote: > Hi Torkil, Hi Ilya > I would recommend sticking to rx-tx to make potential failback back to the primary cluster easier. There shouldn't be any issue with running rbd-mirror daemons at both sites either -- it doesn't start replicating until it is instruc

[ceph-users] Re: Snapshot mirroring problem

2021-12-15 Thread Torkil Svensgaard
On 15/12/2021 10.17, Arthur Outhenin-Chalandre wrote: > Hi Torkil, Hi Arthur > On 12/15/21 09:45, Torkil Svensgaard wrote: >> I'm having trouble getting snapshot replication to work. I have 2 clusters, 714-ceph on RHEL/16.2.0-146.el8cp and dcn-ceph on CentOS Stream 8/16.2.6. I'm trying to enable on

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Ilya Dryomov
Hi Torkil, I would recommend sticking to rx-tx to make potential failback back to the primary cluster easier. There shouldn't be any issue with running rbd-mirror daemons at both sites either -- it doesn't start replicating until it is instructed to, either per-pool or per-image. Thanks,
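In practice the rx-tx vs rx-only choice is made when the bootstrap token is imported on the secondary cluster; a minimal sketch using the site-a/site-b naming from this thread (pool name and token path hypothetical):

  # On site-b: import the token that was created on site-a.
  # --direction rx-tx (the default) keeps failback to site-a straightforward;
  # rx-only would make site-b a pure receiver.
  rbd --cluster site-b mirror pool peer bootstrap import \
      --site-name site-b --direction rx-tx mypool /tmp/bootstrap_token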

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Torkil Svensgaard
On 15/12/2021 13.44, Arthur Outhenin-Chalandre wrote: > Hi Torkil, Hi Arthur > On 12/15/21 13:24, Torkil Svensgaard wrote: >> I'm confused by the direction parameter in the documentation[1]. If I have my data at site-a and want one-way replication to site-b, should the mirroring be configured as the

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Arthur Outhenin-Chalandre
Hi Torkil, On 12/15/21 13:24, Torkil Svensgaard wrote: > I'm confused by the direction parameter in the documentation[1]. If I > have my data at site-a and want one way replication to site-b should the > mirroring be configured as the documentation example, directionwise? What you are describin

[ceph-users] Re: what does "Message has implicit destination" mean

2021-12-15 Thread Janne Johansson
On Wed, 15 Dec 2021 at 09:35, Marc wrote: > The message is being held because: > > Message has implicit destination Usually stuff like "the mailing list wasn't in the To: field, but only CC: or BCC:" -- May the most significant bit of your life be positive.

[ceph-users] RBD mirroring bootstrap peers - direction

2021-12-15 Thread Torkil Svensgaard
Hi, I'm confused by the direction parameter in the documentation[1]. If I have my data at site-a and want one-way replication to site-b, should the mirroring be configured as in the documentation example, direction-wise? E.g. rbd --cluster site-a mirror pool peer bootstrap create --site-name site
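For reference, the create side of the documentation example looks roughly like this sketch (pool name and output file are hypothetical):

  # On site-a, where the primary data lives: generate a bootstrap token
  # for the pool and hand it to site-b out of band.
  rbd --cluster site-a mirror pool peer bootstrap create \
      --site-name site-a mypool > /tmp/bootstrap_token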

[ceph-users] Re: MAX AVAIL capacity mismatch || mimic(13.2)

2021-12-15 Thread Md. Hejbul Tawhid MUNNA
Hi, Our total number of HDD OSDs is 40. 40 x 5.5TB = 220TB. We are using 3 replicas for every pool, so "MAX AVAIL" should show 220/3 = 73.3TB. Am I right? What is the meaning of "variance 1.x"? I think we might have a wrong configuration, but need to find it. We have some more SSD OSDs, yeah, total capaci
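Writing the figures out, with the caveat that MAX AVAIL is not a plain raw/replica division (it also accounts for the most-full OSD and the full ratio, which is what the VARIANCE column reflects):

  # Rough expectation from the numbers above:
  #   40 OSDs x 5.5 TB = 220 TB raw; 220 / 3 replicas ~= 73 TB usable
  # VARIANCE in 'ceph osd df' is each OSD's utilisation relative to the
  # cluster average (1.0 = average); an unbalanced OSD lowers MAX AVAIL.
  ceph osd df        # per-OSD usage and VARIANCE
  ceph df detail     # per-pool MAX AVAIL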

[ceph-users] Re: Snapshot mirroring problem

2021-12-15 Thread Arthur Outhenin-Chalandre
Hi Torkil, On 12/15/21 09:45, Torkil Svensgaard wrote: > I'm having trouble getting snapshot replication to work. I have 2 > clusters, 714-ceph on RHEL/16.2.0-146.el8cp and dcn-ceph on CentOS > Stream 8/16.2.6. I'm trying to enable one-way replication from 714-ceph -> > dcn-ceph. I didn't try th

[ceph-users] Snapshot mirroring problem

2021-12-15 Thread Torkil Svensgaard
Hi, I'm having trouble getting snapshot replication to work. I have 2 clusters, 714-ceph on RHEL/16.2.0-146.el8cp and dcn-ceph on CentOS Stream 8/16.2.6. I'm trying to enable one-way replication from 714-ceph -> dcn-ceph. Adding peer: " # rbd mirror pool info Mode: image Site Name: dcn-ceph P
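For comparison, a minimal sketch of the usual steps for snapshot-based mirroring, with a hypothetical pool and image; this is the generic recipe, not a diagnosis of the problem above:

  # Enable image-mode mirroring on the pool (on both clusters).
  rbd --cluster 714-ceph mirror pool enable mypool image
  rbd --cluster dcn-ceph mirror pool enable mypool image
  # Enable snapshot-based mirroring on an image on the primary cluster.
  rbd --cluster 714-ceph mirror image enable mypool/myimage snapshot
  # Create mirror snapshots on a schedule, e.g. hourly.
  rbd --cluster 714-ceph mirror snapshot schedule add --pool mypool 1h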

[ceph-users] what does "Message has implicit destination" mean

2021-12-15 Thread Marc
The message is being held because: Message has implicit destination

[ceph-users] How to clean up data in OSDs

2021-12-15 Thread Nagaraj Akkina
Hello Team, After testing our cluster we removed and recreated all Ceph pools, which actually cleaned up all users and buckets, but we can still see data on the disks. Is there an easy way to clean up all OSDs without actually removing and reconfiguring them? What can be the best way to solve this
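One hedged way to see where the remaining usage comes from before touching the OSDs themselves (space from deleted pools is normally reclaimed asynchronously, so per-OSD usage can lag behind the deletion):

  rados df           # per-pool object counts and usage
  ceph osd df tree   # per-OSD raw usage
  ceph df detail     # cluster-wide and per-pool view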

[ceph-users] Re: ceph-mon pacific doesn't enter to quorum of nautilus cluster

2021-12-15 Thread Michael Uleysky
Thanks! As far as I can see, this is the same problem as mine. On Wed, 15 Dec 2021 at 16:49, Chris Dunlop wrote: > On Wed, Dec 15, 2021 at 02:05:05PM +1000, Michael Uleysky wrote: > > I try to upgrade three-node nautilus cluster to pacific. I am updating > ceph > > on one node and restarting daemons.