On Tue, May 28, 2019 at 11:50:01AM -0700, Gregory Farnum wrote:
  Yours is the second report I’ve seen of this, and while it’s confusing,
  you should be able to resolve it by restarting your active manager
  daemon.
Maybe this is related? http://tracker.ceph.com/issues/40011
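
For reference, failing over the active mgr can be done roughly like this
(the daemon name is only a placeholder; use whichever mgr "ceph -s" reports
as active):

$ sudo ceph -s | grep mgr                    # note the active mgr's name
$ sudo ceph mgr fail <mgr-name>              # hand over to a standby mgr
# or restart the daemon directly on its host (unit name assumes a stock
# systemd/package deployment):
$ sudo systemctl restart ceph-mgr@<mgr-name>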

  On Sun, May 26, 2019 at 11:52 PM Lars Täuber <[email protected]>
  wrote:

    Fri, 24 May 2019 21:41:33 +0200
    Michel Raabe <[email protected]> ==> Lars Täuber
    <[email protected]>, [email protected] :
    >
    > You can also try
    >
    > $ rados lspools
    > $ ceph osd pool ls
    >
    > and verify that with the pgs
    >
    > $ ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid' | cut -d. -f1 | uniq
    >
    Yes, now I know, but I still get this:
    $ sudo ceph -s
    […]
      data:
        pools:   5 pools, 1153 pgs
    […]
    and with all other means I get:
    $ sudo ceph osd lspools | wc -l
    3
    Which is what I expect, because all the other pools have been removed.
    But since this has no bad side effects, I can live with it.
    Cheers,
    Lars
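
To cross-check which pool ids the PG map still references against the pools
that actually exist, something along these lines should do (the jq part is
Michel's command from above; the awk filter on the plain-text pool listing is
just one way to pull out the pool ids):

$ sudo ceph osd pool ls detail | awk '/^pool/ {print $2}' | sort -n
$ sudo ceph pg ls --format=json-pretty | jq -r '.pg_stats[].pgid' | cut -d. -f1 | sort -nu

Any id that turns up only in the second list would belong to a pool that no
longer shows up in the pool listing.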


--
Jan Fajerski
Engineer Enterprise Storage
SUSE Linux GmbH, GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
