Hello,
on a freshly set up Ceph cluster I see a strange difference between the
number of pools reported by ceph -s and what I know should actually be
there: no pools at all.
I set up a fresh Nautilus cluster with 144 OSDs on 9 hosts. Just to play
around, I created a pool named rbd with:
$ ceph osd pool create rbd 512 512 replicated
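Just to be sure the pool really got the PG count I asked for, I also
checked it (standard commands; both reported 512 in my case):
$ ceph osd pool get rbd pg_num
$ ceph osd pool get rbd pgp_num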
In ceph -s I saw the pool, but also a warning:
cluster:
id: a-b-c-d-e
health: HEALTH_WARN
too few PGs per OSD (21 < min 30)
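If I understand the health check correctly, it roughly computes the sum
of pg_num times the replica size over all pools, divided by the number
of OSDs, and warns when that value falls below mon_pg_warn_min_per_osd
(default 30). The actual per-OSD distribution shows up in the PGS
column of:
$ ceph osd df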
So I experimented a bit: I removed the pool (ceph osd pool rm), checked
that it was gone in ceph osd lspools, created a new one with more PGs,
and repeated this a few times with larger pg_num values. By now the
output of ceph -s claims that 4 pools exist:
cluster:
id: a-b-c-d-e
health: HEALTH_OK
services:
mon: 3 daemons, quorum c2,c5,c8 (age 8h)
mgr: c2(active, since 8h)
osd: 144 osds: 144 up (since 8h), 144 in (since 8h)
data:
pools: 4 pools, 0 pgs
objects: 0 objects, 0 B
usage: 155 GiB used, 524 TiB / 524 TiB avail
pgs:
but:
$ ceph osd lspools
<empty>
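As far as I can tell the osdmap agrees with lspools rather than with
ceph -s: dumping the pools from the osdmap, which should be the
authoritative list, also comes back empty:
$ ceph osd dump | grep pool
<empty>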
Since I deleted every pool I created, 0 pools would be the correct answer.
I can add another "ghost" pool by creating a pool named rbd with only
512 PGs and deleting it again right away; ceph -s then shows 5 pools.
This is how I got from 3 to 4 "ghost pools".
This does not seem to happen if I use 2048 PGs for the new pool and
delete it right afterwards. In that case the pool is created, ceph -s
shows one pool more (5), and when I delete the pool again the counter
in ceph -s drops back to 4.
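So the reproduction boils down to this (assuming mon_allow_pool_delete
is set to true, otherwise the rm is refused):
$ ceph osd pool create rbd 512 512 replicated
$ ceph osd pool rm rbd rbd --yes-i-really-really-mean-it
$ ceph -s | grep pools     # counter goes up by one and stays there
$ ceph osd lspools         # empty
whereas with 2048 PGs the counter comes back down:
$ ceph osd pool create rbd 2048 2048 replicated
$ ceph osd pool rm rbd rbd --yes-i-really-really-mean-it
$ ceph -s | grep pools     # counter drops back to its previous value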
How can I fix the system so that ceph -s also understands that there
are actually no pools? There must be some inconsistency somewhere. Any
ideas?
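One idea I have not tried yet: as far as I know the numbers in the
data: section of ceph -s come from the mgr rather than from the mons,
so maybe the active mgr is serving a stale cache and failing it over
would refresh the count:
$ ceph mgr fail c2    # force failover of the active mgr (untested here)
But I would still like to understand where the inconsistency comes from.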
Thanks
Rainer
--
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 1001312