Good morning Joao,
Thanks for your feedback! We do actually have three managers running:
  cluster:
    id:     26c0c5a8-d7ce-49ac-b5a7-bfd9d0ba81ab
    health: HEALTH_WARN
            1/3 mons down, quorum server5,server3

  services:
    mon: 3 daemons, quorum server5,server3, out of quorum: server2
    mgr: 0(active), standbys: 1, 2
    osd: 57 osds: 57 up, 57 in

  data:
    pools:   3 pools, 1344 pgs
    objects: 580k objects, 2256 GB
    usage:   6778 GB used, 30276 GB / 37054 GB avail
    pgs:     1344 active+clean

  io:
    client: 17705 B/s rd, 14586 kB/s wr, 21 op/s rd, 70 op/s wr
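
For completeness, this is roughly how we verify the mgr daemons on our side (the daemon id "0" below matches the active mgr shown above; adjust per host):

    # detailed mgr map: shows the active mgr and the standbys
    ceph mgr dump

    # on each mgr host, confirm the systemd unit is actually running
    # ("0" is the mgr id on that host in our setup)
    systemctl status ceph-mgr@0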
Joao Eduardo Luis <[email protected]> writes:
> This looks a lot like a bug I fixed a week or so ago, but for which I
> currently don't recall the ticket off the top of my head. It was basically
> a crash each time "ceph osd df" was called if a mgr was not available after
> having set the luminous osd require flag. I will check the log in the
> morning to figure out whether you need to upgrade to a newer version or if
> this is a corner case the fix missed. In the meantime, check if you have
> ceph-mgr running, because that's the easy workaround (assuming it's the
> same bug).
>
> -Joao
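
Following your suggestion, we will double-check the require flag and retry
the command that triggered the crash while a mgr is active, roughly like
this (nothing special assumed beyond the Luminous cluster above):

    # confirm the luminous osd require flag mentioned above is set
    ceph osd dump | grep require_osd_release

    # with an active mgr, retry the command that used to crash
    ceph osd df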
--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch