On Fri, Sep 22, 2017 at 6:48 PM, Michael Kuriger <[email protected]> wrote:
> I have a few running ceph clusters.  I built a new cluster using luminous,
> and I also upgraded a cluster running hammer to luminous.  In both cases, I
> have a HEALTH_WARN that I can't figure out.  The cluster appears healthy
> except for the HEALTH_WARN in overall status.  For now, I’m monitoring
> health from the “status” instead of “overall_status” until I can find out
> what the issue is.
>
> Any ideas?  Thanks!

There is a setting called mon_health_preluminous_compat_warning (true
by default) that forces the old overall_status field to HEALTH_WARN,
to make you aware that your monitoring script is still reading the
old health output.
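
Once your monitoring is updated, you can silence the warning by
disabling that option. A minimal sketch (untested; either via
ceph.conf plus a mon restart, or injectargs for a live change):

    [mon]
    mon health preluminous compat warning = false

    # or, injected at runtime without a restart:
    ceph tell mon.* injectargs '--mon-health-preluminous-compat-warning=false'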

If you do a "ceph health detail -f json" you'll see an explanatory message.
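
A check that reads the new top-level "status" field instead of
"overall_status" would look something like this (assuming jq is
installed; any JSON parser will do):

    $ ceph health detail -f json | jq -r '.status'
    HEALTH_OK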

We should probably have made that explanation visible in "status" too
(or wherever we output the overall_status as a warning like this) -
https://github.com/ceph/ceph/pull/17930

John

>
> # ceph health detail
> HEALTH_OK
>
> # ceph -s
>   cluster:
>     id:     11d436c2-1ae3-4ea4-9f11-97343e5c673b
>     health: HEALTH_OK
>
> # ceph -s --format json-pretty
> {
>     "fsid": "11d436c2-1ae3-4ea4-9f11-97343e5c673b",
>     "health": {
>         "checks": {},
>         "status": "HEALTH_OK",
>         "overall_status": "HEALTH_WARN"
>
> <snip>
>
> Mike Kuriger
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
