These are userland packages. If you use krbd you should update the kernel. Then
reboot and remap.
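A minimal sketch of what that looks like on one client node (kernel package
and pool/image names here are only placeholders, adjust for your distro):

# update the kernel (example for a Debian/Ubuntu-style host)
apt update && apt install -y linux-image-generic
reboot
# after the reboot nothing is mapped any more, so map the image again
# and the new kernel rbd client will be used
rbd map mypool/myimage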
k
> On 8 Dec 2021, at 10:12, Kamil Kuramshin wrote:
>
> I understand that I should update something. The question is what I have to
> update to reach desired result?
>
> All ceph related stuff on
On 08.12.21 at 02:34, mhnx wrote:
- Sometimes NTP servers can respond but systemd-timesyncd cannot sync
the time without manual help.
Just my 2¢: Do not use systemd-timesyncd.
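A minimal sketch of moving a node over to chrony instead (assuming a
Debian/Ubuntu-style host, adjust package names for your distribution):

systemctl disable --now systemd-timesyncd
apt install -y chrony
systemctl enable --now chrony
chronyc tracking    # verify the clock is actually being disciplined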
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
Hello.
I have 2 different RBD pools in my cluster.
Pool 1 = NVMe pool
Pool 2 = SAS HDD pool with WAL+DB on SSD
I want to destroy the NVMe pool and re-create it with "4 OSDs / 1 NVMe",
but I have parent images on the NVMe pool and the children are on the
SAS pool.
After the re-creation I want to move
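One possible way to detach the SAS children from their NVMe parents before
destroying the pool would be to flatten them first; a rough sketch with
placeholder pool/image/snapshot names:

# list the clones that still depend on a parent image snapshot
rbd children nvme-pool/parent-image@base-snap
# flatten each child so it no longer references the parent
rbd flatten sas-pool/child-image
# once no children remain, the protected snapshot can be removed
rbd snap unprotect nvme-pool/parent-image@base-snap
rbd snap rm nvme-pool/parent-image@base-snap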
Hello.
I've been building Ceph clusters since 2014, and the most annoying and
worst failures are NTP server faults and having different times on the
Ceph nodes.
I've fixed a few clusters because of NTP failures.
- Sometimes NTP servers can be unavailable,
- Sometimes NTP servers can go crazy.
-
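When one of those happens, the skew can be checked from Ceph's side and from
the node's own NTP daemon, for example (a sketch, assuming chrony on the node):

ceph status | grep -i clock    # look for a clock skew health warning
ceph time-sync-status          # per-mon time offsets as seen by the leader
chronyc sources -v             # what the node's own NTP daemon thinks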
On Mon, Dec 6, 2021 at 11:45 AM Andras Pataki
wrote:
>
> Hi,
>
> We have some nodes that need NFS exports of cephfs - and I am trying to
> find a way to efficiently export multiple directories. So far I've been
> creating an 'EXPORT' block with an 'FSAL { Name=CEPH; }' inside it for
> each
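For reference, one such block looks roughly like this (a sketch; Export_Id,
paths and the cephx user are placeholders):

EXPORT {
    Export_Id = 101;
    Path = "/volumes/group/dir1";
    Pseudo = "/dir1";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "nfs.dir1";
    }
}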
We're happy to announce the 7th backport release in the Pacific series.
We recommend all users upgrade to this release.
Notable Changes
---------------
* Critical bug in OMAP format upgrade is fixed. This could cause data
corruption (improperly formatted OMAP keys) after pre-Pacific cluster
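A hedged sketch of how to check and disable the automatic quick-fix conversion
until the cluster is on the fixed release (please verify against the official
release notes before changing anything):

ceph config get osd bluestore_fsck_quick_fix_on_mount
ceph config set osd bluestore_fsck_quick_fix_on_mount false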
Hi Subu,
On Tue, Dec 7, 2021 at 12:10 PM Subu Sankara Subramanian
wrote:
>
> Folks,
>
> Is there a bug in ceph RGW date parsing?
> https://github.com/ceph/ceph/blob/master/src/rgw/rgw_auth_s3.cc#L223 - this
> line parses the date in x-amz-date as RFC 2616. BUT the format specified by
> Amazon S3
Folks,
Is there a bug in ceph RGW date parsing?
https://github.com/ceph/ceph/blob/master/src/rgw/rgw_auth_s3.cc#L223 - this
line parses the date in x-amz-date as RFC 2616. BUT the format specified by
Amazon S3 is ISO 8601 basic - YYYYMMDDTHHMMSSZ (
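For comparison, the two formats look like this (quick illustration with the
shell's date command):

date -u +"%Y%m%dT%H%M%SZ"         # ISO 8601 basic, e.g. 20211207T121000Z
date -u +"%a, %d %b %Y %T GMT"    # RFC 2616 / RFC 1123 style, e.g. Tue, 07 Dec 2021 12:10:00 GMT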
Hi Frank, thanks for the input. I'm still a bit sceptical, to be honest, that this
is all, since a) our bench values are pretty stable over time (Nautilus times
and Octopus times) with a variance of maybe 20%, which I would put down to normal
cluster load.
Furthermore, the HDD pool also halved its
After letting the balancer run all night, I have recovered 35TB of
additional available space. Average used space on all OSDs is still 63%,
but now with a range of 61-64%, so much better. The client is reporting
144TB total space, which is closer to the 168TB I would expect (504TB
total raw
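For anyone following along, the balancer state and the per-OSD spread can be
checked with something like (a sketch):

ceph balancer status    # mode and whether it is active
ceph osd df tree        # per-OSD %USE, to see how tight the spread is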
Hi,
I'm observing the following behavior on our Ceph clusters:
On the Ceph cluster where I have enabled
osd_scrub_auto_repair = true
I can observe "spurious read errors" warnings. On other Ceph clusters
where this option is set to false I don't see this warning. But on the other
hand I often have scrub
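A small sketch of comparing and toggling the setting across clusters (option
name as above):

ceph config get osd osd_scrub_auto_repair
ceph config set osd osd_scrub_auto_repair true    # or false, depending on the cluster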
Hi,
thanks a lot, I appreciate your comforting response.
Quoting Arthur Outhenin-Chalandre:
Hi Eugen,
On 12/6/21 10:31, Eugen Block wrote:
I'm curious if anyone is using this relatively new feature (I believe
since Octopus?) in production. I haven't read too much about it in
this list,
Hi Dan, Josh,
thanks for the input. bluefs_buffered_io with true and false makes no real
difference (hard to tell in a production cluster, maybe a few
percent).
We have now disabled the write cache on our SSDs and see a "felt" increase in
performance, up to 17k IOPS with 4k blocks
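In case it helps someone reproduce this, a rough sketch of the two knobs we
touched (device name is a placeholder; behaviour depends on drive and
controller):

ceph config set osd bluefs_buffered_io false   # or true, to compare
hdparm -W 0 /dev/sdX                           # disable the volatile write cache on a SATA device
hdparm -W /dev/sdX                             # verify the current setting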
Has anyone had the same problem? We run into this on every cluster.
Thank you
--
Michal Strnad
On 10/11/21 4:23 PM, Michal Strnad wrote:
Hi,
Did anyone get the diskprediction-local plugin working on CentOS 7.9 or
CentOS 8 Stream? We have the same problem under both versions of CentOS.
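For reference, a minimal sketch of enabling and checking the module (module
name as in the upstream docs; <devid> is a placeholder from the device
listing):

ceph mgr module enable diskprediction_local
ceph mgr module ls | grep -i diskprediction
ceph device ls                           # devices known to the health module
ceph device get-health-metrics <devid>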
On 07/12/2021 10:56, Stefan Kooman wrote:
On 12/7/21 09:52, Andrej Filipcic wrote:
Hi,
I am trying to mount CephFS over IPv4, where Ceph is in dual-stack
mode, but it fails with:
[1692264.203560] libceph: wrong peer, want (1)153.5.68.28:6789/0, got
(1)[2001:1470:ff94:d:153:5:68:28]:6789/0
Hi,
I am trying to mount CephFS over IPv4, where Ceph is in dual-stack mode,
but it fails with:
[1692264.203560] libceph: wrong peer, want (1)153.5.68.28:6789/0, got
(1)[2001:1470:ff94:d:153:5:68:28]:6789/0
[1692264.213297] libceph: mon2 (1)153.5.68.28:6789 wrong peer at address
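For reference, the relevant pieces can be inspected with something like this
(a sketch; the monitor address is taken from the error above, the mount
options are placeholders):

ceph mon dump        # which v4/v6 addresses the mons registered in the monmap
mount -t ceph 153.5.68.28:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret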
On Tue, 7 Dec 2021 at 09:16, José H. Freidhof
wrote:
>
> Hello everyone
>
> Question: I repaired some OSDs and now the rebalance process is running. We
> now suffer performance problems. Can I pause the ongoing rebalance job and
> continue it at night?
Yes, "ceph osd set norebalance" should
Hello everyone
Question: I repaired some OSDs and now the rebalance process is running. We
now suffer performance problems. Can I pause the ongoing rebalance job and
continue it at night?
thx in advance