Hi again,
and change the value with something like this:
ceph tell osd.* injectargs '--mon_osd_full_ratio 0.96'
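Be aware that injectargs only changes the value in the running daemons; on Hammer the cluster-wide full ratio also lives in the PG map, so you may need to set it there as well. A rough sketch (the 0.96 value just mirrors the command above, adjust before using):
ceph pg set_full_ratio 0.96
ceph tell mon.* injectargs '--mon_osd_full_ratio 0.96'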
Udo
On 01.11.2016 21:16, Udo Lembke wrote:
> Hi Marcus,
>
> for quick help you could perhaps increase the mon_osd_full_ratio?
>
> What values do you have?
> Please post the output of
Hi Marcus,
for quick help you could perhaps increase the mon_osd_full_ratio?
What values do you have?
Please post the output of the following (on host ceph1, because of osd.0.asok):
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep full_ratio
After that it would be helpful to run it on all hosts
m
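To collect those values from every OSD, a small loop over the admin sockets on each host should work (assuming the default /var/run/ceph socket path):
for sock in /var/run/ceph/ceph-osd.*.asok; do
    echo "== $sock =="
    ceph --admin-daemon "$sock" config show | grep full_ratio
done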
Subject: [ceph-users] Need help! Ceph backfill_toofull and recovery_wait+degraded
Hi all,
I have a big problem and I really hope someone can help me!
We have been running a Ceph cluster for a year now. The version is 0.94.7 (Hammer).
Here is some info:
Our osd map is:
ID WEIGHT   TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 26.67998 root default
if you have the default crushmap and osd pool default size = 3, then
Ceph creates 3 copies of each object and stores them on 3 separate nodes.
So the best way to solve your space problems is to try to even out the
space between your hosts, either by adding disks to ceph1, ceph2 and ceph3,
or by
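To see how uneven the hosts really are and how many replicas the pool keeps, something like this helps (the pool name rbd and the OSD id/weight are only examples):
ceph osd df                   # per-OSD utilisation, available since Hammer
ceph osd pool get rbd size    # replication factor of the pool
ceph osd reweight 3 0.90      # temporarily lower the weight of an overfull OSD
Reweighting only shifts data around; it does not add capacity, so adding disks remains the proper fix.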