I get this too, since I last rebooted a server (one of three).
ceph -s says:
  cluster:
    id:     a8c34694-a172-4418-a7dd-dd8a642eb545
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum box1,box2,box3
    mgr: box3(active), standbys: box1, box2
    osd: N osds: N up, N in
    rgw: 3 daemons active
mgr dashboard says:
Overall status: HEALTH_WARN
MON_DOWN: 1/3 mons down, quorum box1,box3
I wasn't going to worry too much. I'll check the logs and restart an mgr then.
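For the archives: on a stock Luminous / CentOS 7 install with systemd-managed daemons, that would look something like the following (the mgr name box3 and mon name box1 are taken from the status above; the exact unit and log file names are assumptions, so adjust them to your deployment):

    # look for errors around the mon->mgr health sync in the mon log
    less /var/log/ceph/ceph-mon.box1.log

    # restart the active mgr; a standby (box1 or box2) should take over
    systemctl restart ceph-mgr@box3

    # then check that ceph -s and the dashboard agree again
    ceph -s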
Sean
On Fri, 4 May 2018, John Spray said:
> On Fri, May 4, 2018 at 7:21 AM, Tracy Reed <[email protected]> wrote:
> > My ceph status says:
> >
> >   cluster:
> >     id:     b2b00aae-f00d-41b4-a29b-58859aa41375
> >     health: HEALTH_OK
> >
> >   services:
> >     mon: 3 daemons, quorum ceph01,ceph03,ceph07
> >     mgr: ceph01(active), standbys: ceph-ceph07, ceph03
> >     osd: 78 osds: 78 up, 78 in
> >
> >   data:
> >     pools:   4 pools, 3240 pgs
> >     objects: 4384k objects, 17533 GB
> >     usage:   53141 GB used, 27311 GB / 80452 GB avail
> >     pgs:     3240 active+clean
> >
> >   io:
> >     client: 4108 kB/s rd, 10071 kB/s wr, 27 op/s rd, 331 op/s wr
> >
> > but my mgr dashboard web interface says:
> >
> >
> > Health
> > Overall status: HEALTH_WARN
> >
> > PG_AVAILABILITY: Reduced data availability: 2563 pgs inactive
> >
> >
> > Anyone know why the discrepancy? Hopefully the dashboard is very
> > mistaken! Everything seems to be operating normally. If I had 2/3 of my
> > pgs inactive, I'm sure all of the RBDs backing my VMs would be blocked, etc.
>
> A situation like this probably indicates that something is going wrong
> with the mon->mgr synchronisation of health state (it's all calculated
> in one place and the mon updates the mgr every few seconds).
>
> 1. Look for errors in your monitor logs
> 2. You'll probably find that everything gets back in sync if you
> restart a mgr daemon
>
> John
>
> > I'm running ceph-12.2.4-0.el7.x86_64 on CentOS 7. Almost all filestore,
> > except for one OSD which recently had to be replaced and which I made
> > bluestore. I plan to slowly migrate everything over to bluestore over
> > the course of the next month.
> >
> > Thanks!
> >
> > --
> > Tracy Reed
> > http://tracyreed.org
> > Digital signature attached for your safety.
> >
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com