[ceph-users] Re: RBDMAP clients rendering themselves as "Jewel" in "Luminous" ceph cluster

2021-12-07 Thread Konstantin Shalygin
These are userland packages. If you use krbd you should update the kernels, then reboot and remap. k > On 8 Dec 2021, at 10:12, Kamil Kuramshin wrote: > I understand that I should update something. The question is what I have to update to reach the desired result? > All ceph related stuff on
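A quick way to check which side needs the update (a sketch; the "jewel" label in this situation usually comes from old kernel clients advertising old feature bits, not from the installed ceph packages):

    # on a monitor/admin node: which release each connected client group advertises
    ceph features
    # on the rbdmap client: the kernel that provides the krbd module
    uname -r
    # after installing a newer kernel on the client and rebooting, the rbdmap unit
    # remaps the images; verify and re-check the advertised features
    rbd showmapped
    ceph features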

[ceph-users] Re: Local NTP servers on monitor nodes.

2021-12-07 Thread Robert Sander
On 08.12.21 at 02:34, mhnx wrote: - Sometimes NTP servers can respond but systemd-timesyncd cannot sync the time without manual help. Just my 2¢: do not use systemd-timesyncd. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin
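For reference, a minimal sketch of switching a node from systemd-timesyncd to chrony (package and unit names vary slightly between Debian- and RHEL-style distributions):

    systemctl disable --now systemd-timesyncd
    apt install chrony             # or: dnf install chrony
    systemctl enable --now chrony  # the unit is called chronyd on RHEL-style systems
    chronyc tracking               # verify the clock is actually being disciplined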

[ceph-users] How to move RBD parent images without breaking child links.

2021-12-07 Thread mhnx
Hello. I have 2 different RBD pools in my cluster. Pool 1 = NVMe pool; Pool 2 = SAS HDD pool with WAL+DB on SSD. I want to destroy the NVMe pool and re-create it with "4 OSDs per NVMe", but I have parent images on the NVMe pool and the children are on the SAS pool. After the re-creation I want to move
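One way to make the children independent of the NVMe parents before destroying that pool is to flatten them; this does remove the clone relationship, but the data stays intact on the SAS pool. A sketch with placeholder pool/image/snapshot names:

    rbd children nvme/parent@base      # list clones that still reference the parent snapshot
    rbd flatten sas/child              # copy the parent data into the child, breaking the dependency
    rbd snap unprotect nvme/parent@base
    rbd snap rm nvme/parent@base       # only once no children reference it any more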

[ceph-users] Local NTP servers on monitor nodes.

2021-12-07 Thread mhnx
Hello. I've been building Ceph clusters since 2014, and the most annoying and worst failures are NTP server faults and Ceph nodes ending up with different times. I've fixed a few clusters because of NTP failures. - Sometimes NTP servers can be unavailable, - Sometimes NTP servers can go crazy. -
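A chrony configuration along these lines is one common way to keep the monitors in agreement even when upstream NTP misbehaves; a sketch for /etc/chrony.conf on a monitor node, with placeholder hostnames and networks:

    server 0.pool.ntp.org iburst
    server 1.pool.ntp.org iburst
    # peer the monitors with each other so they converge on the same time
    peer mon2.example.com
    peer mon3.example.com
    # act as a low-priority local reference if every upstream becomes unreachable
    local stratum 10
    allow 10.0.0.0/24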

[ceph-users] Re: Ganesha + cephfs - multiple exports

2021-12-07 Thread Patrick Donnelly
On Mon, Dec 6, 2021 at 11:45 AM Andras Pataki wrote: > Hi, We have some nodes that need NFS exports of cephfs - and I am trying to find a way to efficiently export multiple directories. So far I've been creating an 'EXPORT' block with an 'FSAL { Name=CEPH; }' inside it for each
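For context, a sketch of one such per-directory block in ganesha.conf (export IDs, paths and the cephx user are placeholders):

    EXPORT {
        Export_Id = 100;
        Path = /volumes/group1/dir1;
        Pseudo = /dir1;
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL {
            Name = CEPH;
            User_Id = "ganesha";
        }
    }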

[ceph-users] v16.2.7 Pacific released

2021-12-07 Thread David Galloway
We're happy to announce the 7th backport release in the Pacific series. We recommend all users upgrade to this release. Notable Changes: * Critical bug in OMAP format upgrade is fixed. This could cause data corruption (improperly formatted OMAP keys) after pre-Pacific cluster
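For cephadm-managed clusters the upgrade itself is usually a one-liner (a sketch; the image location is assumed to be the official quay.io build):

    ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.7
    ceph orch upgrade status      # follow progress; daemons are restarted in a safe order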

[ceph-users] Re: Bug in RGW header x-amz-date parsing

2021-12-07 Thread Casey Bodley
Hi Subu, On Tue, Dec 7, 2021 at 12:10 PM Subu Sankara Subramanian wrote: > Folks, Is there a bug in ceph RGW date parsing? https://github.com/ceph/ceph/blob/master/src/rgw/rgw_auth_s3.cc#L223 - this line parses the date in x-amz-date as RFC 2616. BUT the format specified by Amazon S3

[ceph-users] Bug in RGW header x-amz-date parsing

2021-12-07 Thread Subu Sankara Subramanian
Folks, Is there a bug in ceph RGW date parsing? https://github.com/ceph/ceph/blob/master/src/rgw/rgw_auth_s3.cc#L223 - this line parses the date in x-amz-date as RFC 2616. BUT the format specified by Amazon S3 is ISO 8601 basic - YYYYMMDDTHHMMSSZ (
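The difference between the two formats, illustrated with GNU date (the RFC 2616/2822-style form versus the ISO 8601 basic form that SigV4 puts in x-amz-date):

    date -u -R                   # e.g. "Tue, 07 Dec 2021 17:10:00 +0000"
    date -u +%Y%m%dT%H%M%SZ      # e.g. "20211207T171000Z"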

[ceph-users] Re: 50% IOPS performance drop after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

2021-12-07 Thread ceph
Hi Frank, thanks for the input. I'm still a bit sceptical, to be honest, that this is all, since a) our bench values are pretty stable over time (Nautilus times and Octopus times) with a variance of maybe 20%, which I would put down to normal cluster load. Furthermore the HDD pool also halved its
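For reference, a sketch of the kind of quick before/after benchmark being compared here (pool name and parameters are placeholders; --no-cleanup keeps the objects around for the read test):

    rados bench -p testbench 30 write -b 4096 -t 16 --no-cleanup
    rados bench -p testbench 30 rand -t 16
    rados -p testbench cleanup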

[ceph-users] Re: available space seems low

2021-12-07 Thread Seth Galitzer
After letting the balancer run all night, I have recovered 35TB of additional available space. Average used space on all OSDs is still 63%, but now with a range of 61-64%, so much better. The client is reporting 144TB total space, which is closer to the 168TB I would expect (504TB total raw
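A sketch of the commands typically watched while the balancer converges:

    ceph balancer status
    ceph osd df      # the spread of the %USE column is what the balancer narrows
    ceph df          # MAX AVAIL per pool grows as the fullest OSDs are relieved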

[ceph-users] Ceph OSD spurious read errors and PG autorepair

2021-12-07 Thread Denis Polom
Hi, I'm observing the following behavior on our Ceph clusters: on the Ceph cluster where I have enabled osd_scrub_auto_repair = true I can observe "spurious read errors" warnings. On other Ceph clusters where this option is set to false I don't see this warning. But on the other hand I often have scrub
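For reference, a sketch of checking the option and of the manual counterpart on clusters where it is disabled:

    ceph config get osd osd_scrub_auto_repair
    ceph health detail        # lists inconsistent PGs / scrub errors
    ceph pg repair <pgid>     # manual repair of a PG reported as inconsistent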

[ceph-users] Re: snapshot based rbd-mirror in production

2021-12-07 Thread Eugen Block
Hi, thanks a lot, I appreciate your comforting response. Quoting Arthur Outhenin-Chalandre: Hi Eugen, On 12/6/21 10:31, Eugen Block wrote: I'm curious if anyone is using this relatively new feature (I believe since Octopus?) in production. I haven't read too much about it in this list,
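A sketch of how snapshot-based mirroring is typically enabled (pool/image names and the schedule interval are placeholders):

    rbd mirror pool enable mypool image                  # per-image mirroring on the pool
    rbd mirror image enable mypool/myimage snapshot      # snapshot mode instead of journal mode
    rbd mirror snapshot schedule add --pool mypool 30m   # periodic mirror snapshots
    rbd mirror image status mypool/myimage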

[ceph-users] Re: 50% IOPS performance drop after upgrade from Nautilus 14.2.22 to Octopus 15.2.15

2021-12-07 Thread ceph
Hi Dan, Josh, thanks for the input. bluefs_buffered_io with true and false: no real difference to be seen (hard to say in a production cluster, maybe a few percent). We have now disabled the write cache on our SSDs and see a "felt" increase of the performance, up to 17k IOPS with 4k blocks
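Common ways to turn off the volatile write cache on SATA and SAS SSDs (a sketch; device names are placeholders, and the setting may need to be reapplied after power cycles, e.g. via udev or hdparm.conf):

    smartctl -g wcache /dev/sdX     # show the current write-cache state
    hdparm -W 0 /dev/sdX            # SATA: disable the write cache
    sdparm --clear WCE /dev/sdX     # SAS: clear the Write Cache Enable bit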

[ceph-users] Re: CentOS 7 and CentOS 8 Stream dependencies for diskprediction module

2021-12-07 Thread Michal Strnad
Did anyone have the same problem? We come across this on every cluster. Thank you -- Michal Strnad On 10/11/21 4:23 PM, Michal Strnad wrote: Hi, Did anyone get the diskprediction-local plugin working on CentOS 7.9 or CentOS 8 Stream? We have the same problem under both versions of CentOS.
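A sketch of reproducing the failure so the missing Python dependency shows up in the health output (module name as shipped with the mgr):

    ceph mgr module enable diskprediction_local
    ceph health detail       # a failed module typically surfaces here with the import error
    ceph crash ls            # recent mgr crashes, if the module took the daemon down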

[ceph-users] Re: mount.ceph ipv4 fails on dual-stack ceph

2021-12-07 Thread Andrej Filipcic
On 07/12/2021 10:56, Stefan Kooman wrote: On 12/7/21 09:52, Andrej Filipcic wrote: Hi, I am trying to mount cephfs over IPv4, where ceph is in dual-stack mode, but it fails with: [1692264.203560] libceph: wrong peer, want (1)153.5.68.28:6789/0, got (1)[2001:1470:ff94:d:153:5:68:28]:6789/0

[ceph-users] mount.ceph ipv4 fails on dual-stack ceph

2021-12-07 Thread Andrej Filipcic
Hi, I am trying to mount cephfs over IPv4, where ceph is in dual-stack mode, but it fails with: [1692264.203560] libceph: wrong peer, want (1)153.5.68.28:6789/0, got (1)[2001:1470:ff94:d:153:5:68:28]:6789/0 [1692264.213297] libceph: mon2 (1)153.5.68.28:6789 wrong peer at address
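A sketch of the settings worth checking on the cluster side, since the monitors are evidently advertising their IPv6 addresses to this client:

    ceph config get mon ms_bind_ipv4
    ceph config get mon ms_bind_ipv6
    ceph mon dump         # shows which addresses the monitors actually publish in the monmap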

[ceph-users] Re: Can I pause an ongoing rebalance process?

2021-12-07 Thread Janne Johansson
On Tue, 7 Dec 2021 at 09:16, José H. Freidhof wrote: > Hello all. Question: I repaired some OSDs and now the rebalance process is running; we are now suffering performance problems. Can I pause the ongoing rebalance job and continue it at night? Yes, "ceph osd set norebalance" should
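A sketch of the full pause/resume cycle (nobackfill additionally pauses backfill that is already in flight):

    ceph osd set norebalance
    ceph osd set nobackfill      # optional, stronger pause
    # ... later, at night:
    ceph osd unset nobackfill
    ceph osd unset norebalance
    ceph -s                      # watch the misplaced objects drain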

[ceph-users] Can I pause an ongoing rebalance process?

2021-12-07 Thread José H . Freidhof
Hello all. Question: I repaired some OSDs and now the rebalance process is running. We are now suffering performance problems. Can I pause the ongoing rebalance job and continue it at night? Thanks in advance.