[ceph-users] Re: rados df vs ls

2022-07-12 Thread stuart.anderson
> On Jul 6, 2022, at 10:30 AM, stuart.anderson wrote: > I am wondering if it is safe to delete the following pool that rados ls reports is empty, but rados df indicates it has a few thousand objects? Please excuse reposting, but as a Ceph newbie I would really appreciate advice from
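
A common reason for this kind of mismatch is that rados ls only lists objects in the default namespace, while rados df counts objects in every namespace of the pool. A quick way to check (a sketch, assuming the pool is called mypool):

    # list objects across all namespaces, not just the default one
    rados -p mypool ls --all
    # compare against the per-pool object count
    rados df | grep mypool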

[ceph-users] Re: ceph-fs crashes on getfattr

2022-07-12 Thread Gregory Farnum
On Tue, Jul 12, 2022 at 1:46 PM Andras Pataki wrote: > We also had a full MDS crash a couple of weeks ago due to what seems to be another back-ported feature going wrong. As soon as I deployed the 16.2.9 ceph-fuse client, someone asked for an unknown ceph xattr, which crashed the MDS and
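
For readers following along, this is the kind of request being described (a hedged illustration with a hypothetical mount path; the exact xattr name that triggered the crash is not in the preview):

    # query a CephFS virtual xattr through a ceph-fuse mount
    getfattr -n ceph.dir.rbytes /mnt/cephfs/somedir
    # per the report, asking for an unknown ceph.* xattr name is what tripped the MDS
    getfattr -n ceph.dir.does_not_exist /mnt/cephfs/somedir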

[ceph-users] RGW error Couldn't init storage provider (RADOS)

2022-07-12 Thread Robert Reihs
Hi, We have a problem with deploying radosgw via cephadm. We have a Ceph cluster with 3 nodes deployed via cephadm. Pool creation, cephfs and block storage are working. ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy (stable) The service spec for the rgw is like this: ---
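
The spec itself is cut off in the preview above. For reference, a minimal cephadm RGW service spec usually looks something like the sketch below (service id, hostname and port are placeholders, not the poster's actual values):

    service_type: rgw
    service_id: myrgw
    placement:
      count: 1
      hosts:
        - node1
    spec:
      rgw_frontend_port: 8080

and would be applied with: ceph orch apply -i rgw.yml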

[ceph-users] Re: Moving MGR from a node to another

2022-07-12 Thread Malte Stroem
Hello Aristide, you cannot migrate an MGR. You remove it on the node where it exists and redeploy it on another. Create an mgr.yml: service_type: mgr placement: hosts: - : - : Just edit the mon and the ip entries and run: ceph orch apply -i mgr.yml Best, Malte On 12.07.22 at 12:54
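
The spec fragment above is flattened by the preview; filled out with placeholder hostnames it would look roughly like this (the hostnames are assumptions):

    service_type: mgr
    placement:
      hosts:
        - node1
        - node2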

[ceph-users] Re: Quincy recovery load

2022-07-12 Thread Sridhar Seshasayee
Hi Chris, While we look into this, I have a couple of questions: 1. Did the recovery rate stay at 1 object/sec throughout? In our tests we have seen that the rate is higher during the starting phase of recovery and eventually tapers off due to throttling by mclock. 2. Can you try
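
For anyone wanting to experiment with the mclock throttling mentioned here, the relevant knob is the osd_mclock_profile option (a sketch; the osd id and the choice of profile are assumptions, not Sridhar's recommendation):

    # see which mclock profile an OSD is currently using
    ceph config get osd.0 osd_mclock_profile
    # temporarily favour recovery traffic over client I/O
    ceph config set osd osd_mclock_profile high_recovery_ops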

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-12 Thread Dan van der Ster
Hi Igor, Thank you for the reply and information. I confirm that `ceph config set osd bluestore_prefer_deferred_size_hdd 65537` correctly defers writes in my clusters. Best regards, Dan On Tue, Jul 12, 2022 at 1:16 PM Igor Fedotov wrote: > Hi Dan, I can confirm this is a regression
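
To reproduce the check on another cluster, something along these lines should work (a sketch; the osd id is a placeholder, and 65537, one byte above 64 KiB, presumably matters because of the strict comparison Igor describes further down):

    # apply the workaround Dan confirms above
    ceph config set osd bluestore_prefer_deferred_size_hdd 65537
    # watch the deferred-write counters move on an HDD OSD
    ceph daemon osd.0 perf dump | grep -i deferred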

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-12 Thread Igor Fedotov
yep! Thanks and sorry for the confusion. On 7/12/2022 2:23 PM, Konstantin Shalygin wrote: Hi Igor, On 12 Jul 2022, at 14:16, Igor Fedotov wrote: Meanwhile you can adjust bluestore_min_alloc_size_hdd indeed but I'd prefer not to raise it as high as 128K to avoid too many writes being

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-12 Thread Konstantin Shalygin
Hi Igor, > On 12 Jul 2022, at 14:16, Igor Fedotov wrote: > Meanwhile you can adjust bluestore_min_alloc_size_hdd indeed but I'd prefer not to raise it as high as 128K to avoid too many writes being deferred (and hence DB overburden). For clarification, perhaps you mean
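
Konstantin's question is cut off in the preview, but both options mentioned in this thread can be inspected per OSD if you want to see what your cluster is actually running with (the osd id is a placeholder):

    ceph config get osd.0 bluestore_prefer_deferred_size_hdd
    ceph config get osd.0 bluestore_min_alloc_size_hdd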

[ceph-users] Re: Quincy recovery load

2022-07-12 Thread Chris Palmer
I've created tracker https://tracker.ceph.com/issues/56530 for this, including info on replicating it on another cluster. On 11/07/2022 17:41, Chris Palmer wrote: Correction - it is the Acting OSDs that are consuming CPU, not the UP ones. On 11/07/2022 16:17, Chris Palmer wrote: I'm seeing a
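
To see which OSDs are in the acting set versus the up set for the busy PGs (the distinction made in the correction above), something like this works (the pg id is a placeholder):

    # per-PG up and acting sets
    ceph pg dump pgs_brief
    # or for a single PG
    ceph pg map 2.1a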

[ceph-users] Re: pacific doesn't defer small writes for pre-pacific hdd osds

2022-07-12 Thread Igor Fedotov
Hi Dan, I can confirm this is a regression introduced by https://github.com/ceph/ceph/pull/42725. Indeed the strict comparison is a key point in your specific case, but generally it looks like this piece of code needs more redesign to better handle fragmented allocations (and issue deferred

[ceph-users] Moving MGR from a node to another

2022-07-12 Thread Aristide Bekroundjo
Hi dear all, I have a cluster running 16.2 deployed with Cephadm. I would like to move/migrate my MGR daemon from a node to another node. Please, how can I proceed? BR,

[ceph-users] Re: Status occurring several times a day: CEPHADM_REFRESH_FAILED

2022-07-12 Thread E Taka
Yes, "_admin" is set. After some restarting and redeploying the problem seemed to disappear. Thanks anyway. Erich On Fri, 8 Jul 2022 at 14:18, Adam King wrote: > Hello, Does the MGR node have an "_admin" label on it? Thanks, - Adam King On Fri, Jul 8, 2022 at 4:23 AM E
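
For reference, the label Adam asks about is added and checked like this (a minimal sketch; the hostname is a placeholder):

    # give the host the _admin label so cephadm keeps an admin keyring and ceph.conf there
    ceph orch host label add node1 _admin
    # verify host labels
    ceph orch host ls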