With recent releases, 'ceph config' is probably the better option; just
keep in mind that it sets things cluster-wide. If you only want to
target specific daemons, then 'ceph tell' may be better for your use case.
# get current value
ceph config get osd osd_max_backfills
# set new value to 2, for
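(For reference, the usual pair of commands would look something like the
below; osd.0 is just an example target, adjust as needed.)
# cluster-wide: persists in the mon config database
ceph config set osd osd_max_backfills 2
# single daemon only: runtime override that does not survive a restart
ceph tell osd.0 config set osd_max_backfills 2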
Thanks Everyone! Updating the clients to 4.18.0.305.19.1 did indeed fix
the issue.
-Dave
On 2021-09-21 11:42 a.m., Dan van der Ster wrote:
> It's this:
>
Awesome! I had no idea that's where this was pulling it from! However...
Both of the SSDs do have rotational set to 0 :(
root@ceph05:/sys/block# cat sd{r,s}/queue/rotational
0
0
I found a line in cephadm.log that also agrees; this one is from docker:
"sys_api": {
"removable": "0",
It looks like the output from a ceph-volume command was too long to handle.
If you run "cephadm ceph-volume -- inventory --format=json" (add
"--with-lsm" if you've turned on device_enhanced_scan) manually on each
host, do any of them fail in a similar fashion?
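Something along these lines might speed up checking each host; the host
names are placeholders, and jq plus the exact JSON field names (path,
sys_api.rotational) are assumptions, so adjust to what your version prints:
for host in ceph01 ceph02 ceph05; do
  echo "== $host =="
  # if the inventory output is truncated or not valid JSON, the jq step fails
  ssh "$host" cephadm ceph-volume -- inventory --format=json \
    | jq -r '.[] | [.path, .sys_api.rotational] | @tsv' \
    || echo "inventory failed on $host"
done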
On Fri, Sep 24, 2021 at 1:37 PM Marco
On 9/24/21 08:33, Rainer Krienke wrote:
Hello Dan,
I am also running a production 14.2.22 cluster with 144 HDD OSDs and am
wondering whether I should stay on this release or upgrade to Octopus, so
your info is very valuable...
One more question: You described that OSDs do an expected fsck
Hello Everyone,
If you have any suggestions on the cause, or what we can do, I'd certainly
appreciate it.
I'm seeing the following on a newly stood up cluster using Podman on Ubuntu
20.04.3 HWE:
Thank you very much
Marco
Sep 24, 2021, 1:24:30 PM [ERR] cephadm exited with an error code: 1,
Hi,
I wonder how you all handle this, since we will always be limited by the
network bandwidth of the load balancer.
Or, with no load balancer, what should we monitor to tell whether a single
RGW is maxed out? I'm using 15 RGWs.
Thanks
Hi Rainer,
On Fri, Sep 24, 2021 at 8:33 AM Rainer Krienke wrote:
>
> Hello Dan,
>
> I am also running a production 14.2.22 cluster with 144 HDD OSDs and am
> wondering whether I should stay on this release or upgrade to Octopus, so
> your info is very valuable...
>
> One more question: You
Hello Dan,
I am also running a production 14.2.22 cluster with 144 HDD OSDs and am
wondering whether I should stay on this release or upgrade to Octopus, so
your info is very valuable...
One more question: You described that OSDs do an expected fsck and that
this took roughly 10min. I guess
Hi,
as a workaround you could just set the rotational flag yourself:
echo 0 > /sys/block/sd[X]/queue/rotational
That's the flag ceph-volume looks at, and it should at least
enable you to deploy the rest of the OSDs. Of course, you'll still need to
figure out why the rotational flag is
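Also worth noting: an echo into /sys won't survive a reboot. If the flag
keeps coming back wrong, a udev rule is one way to pin it (sdX/sdY below
are placeholders, match on whatever reliably identifies your SSDs):
# /etc/udev/rules.d/99-ssd-rotational.rules -- example only, adjust the match
ACTION=="add|change", KERNEL=="sdX", ATTR{queue/rotational}="0"
ACTION=="add|change", KERNEL=="sdY", ATTR{queue/rotational}="0"
and then "udevadm control --reload-rules && udevadm trigger" to apply it
without rebooting.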