[ceph-users] Re: Change max backfills

2021-09-24 Thread David Orman
With recent releases, 'ceph config' is probably a better option; do keep in mind this sets things cluster-wide. If you just want to target specific daemons, then 'ceph tell' may be better for your use case.

  # get current value
  ceph config get osd osd_max_backfills
  # set new value to 2, for
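For reference, a rough sketch of the two approaches (the osd.3 id below is only a placeholder, and the runtime change via tell does not persist across daemon restarts):

  # cluster-wide, stored in the config database
  ceph config set osd osd_max_backfills 2

  # a single running daemon only
  ceph tell osd.3 config set osd_max_backfills 2

  # or all running OSDs at once
  ceph tell osd.* config set osd_max_backfills 2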

[ceph-users] Re: Corruption on cluster

2021-09-24 Thread David Schulz
Thanks Everyone! Updating the clients to 4.18.0.305.19.1 did indeed fix the issue. -Dave On 2021-09-21 11:42 a.m., Dan van der Ster wrote: > It's this: >

[ceph-users] Re: Orchestrator is internally ignoring applying a spec against SSDs, apparently determining they're rotational.

2021-09-24 Thread Chris
Awesome! I had no idea that's where this was pulling it from! However... Both of the SSDs do have rotational set to 0 :(

  root@ceph05:/sys/block# cat sd{r,s}/queue/rotational
  0
  0

I found a line in cephadm.log that also agrees; this one is from docker: "sys_api": { "removable": "0",
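To cross-check what ceph-volume itself reports for those disks, something like this may help (a sketch only; it assumes jq is installed and that the inventory JSON uses the sys_api layout quoted above):

  cephadm ceph-volume -- inventory --format=json \
    | jq '.[] | {path: .path, rotational: .sys_api.rotational}'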

[ceph-users] Re: 16.2.6 CEPHADM_REFRESH_FAILED New Cluster

2021-09-24 Thread Adam King
It looks like the output from a ceph-volume command was too long to handle. If you run "cephadm ceph-volume -- inventory --format=json" (add "--with-lsm" if you've turned on device_enhanced_scan) manually on each host, do any of them fail in a similar fashion? On Fri, Sep 24, 2021 at 1:37 PM Marco
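A quick way to run that check across all hosts could look like this (a sketch; the host names are placeholders, adjust for your environment and add --with-lsm if needed):

  for h in host1 host2 host3; do
    echo "== $h =="
    ssh "$h" 'cephadm ceph-volume -- inventory --format=json >/dev/null' \
      || echo "inventory FAILED on $h"
  done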

[ceph-users] Re: Successful Upgrade from 14.2.22 to 15.2.14

2021-09-24 Thread Stefan Kooman
On 9/24/21 08:33, Rainer Krienke wrote: Hello Dan, I am also running a production 14.2.22 cluster with 144 HDD OSDs and I am wondering whether I should stay with this release or upgrade to Octopus. So your info is very valuable... One more question: You described that OSDs do an expected fsck

[ceph-users] 16.2.6 CEPHADM_REFRESH_FAILED New Cluster

2021-09-24 Thread Marco Pizzolo
Hello Everyone, If you have any suggestions on the cause, or what we can do, I'd certainly appreciate it. I'm seeing the following on a newly stood-up cluster using Podman on Ubuntu 20.04.3 HWE. Thank you very much, Marco

  Sep 24, 2021, 1:24:30 PM [ERR] cephadm exited with an error code: 1,
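To pull the full traceback behind a CEPHADM_REFRESH_FAILED warning, these are usually the first things to look at (a sketch; how much detail the log holds depends on the mgr/cephadm log level):

  ceph health detail
  ceph log last cephadm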

[ceph-users] How you loadbalance your rgw endpoints?

2021-09-24 Thread Szabo, Istvan (Agoda)
Hi, I wonder how you all do it, since there will always be a limit on the network bandwidth of the load balancer. Or, if there is no balancer, what should be monitored to tell whether one RGW is maxed out? I'm using 15 RGWs. Ty
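For the "is one RGW maxed out" part, one thing that can be watched per daemon is its perf counters via the admin socket (a sketch; client.rgw.foo is a placeholder daemon name and the exact counter names can vary between releases):

  ceph daemon client.rgw.foo perf dump rgw
  # watch req, qlen and qactive over time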

[ceph-users] Re: Successful Upgrade from 14.2.22 to 15.2.14

2021-09-24 Thread Dan van der Ster
Hi Rainer, On Fri, Sep 24, 2021 at 8:33 AM Rainer Krienke wrote: > Hello Dan, > I am also running a production 14.2.22 cluster with 144 HDD OSDs and I > am wondering whether I should stay with this release or upgrade to Octopus. So > your info is very valuable... > One more question: You

[ceph-users] Re: Successful Upgrade from 14.2.22 to 15.2.14

2021-09-24 Thread Rainer Krienke
Hello Dan, I am also running a production 14.2.22 cluster with 144 HDD OSDs and I am wondering whether I should stay with this release or upgrade to Octopus. So your info is very valuable... One more question: You described that OSDs do an expected fsck and that this took roughly 10 min. I guess
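If I recall correctly, the fsck/omap conversion on the first OSD start after the upgrade is driven by the options below, so they are worth checking beforehand; treat this as a sketch and double-check the defaults for your target release:

  ceph config get osd bluestore_fsck_quick_fix_on_mount
  ceph config get osd bluestore_fsck_on_mount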

[ceph-users] Re: Orchestrator is internally ignoring applying a spec against SSDs, apparently determining they're rotational.

2021-09-24 Thread Eugen Block
Hi, as a workaround you could just set the rotational flag yourself:

  echo 0 > /sys/block/sd[X]/queue/rotational

That's the flag ceph-volume is looking for, and it should at least enable you to deploy the rest of the OSDs. Of course, you'll need to figure out why the rotational flag is
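Note that the sysfs value does not survive a reboot; a udev rule along these lines can pin it (a sketch only: the kernel names are placeholders, and matching on something stable like the WWN or serial would be safer than sdX names):

  # /etc/udev/rules.d/99-force-ssd-rotational.rules
  ACTION=="add|change", KERNEL=="sdr", ATTR{queue/rotational}="0"
  ACTION=="add|change", KERNEL=="sds", ATTR{queue/rotational}="0"

  # then reload and re-trigger udev
  udevadm control --reload-rules && udevadm trigger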