All;
I have a test and demonstration cluster running (3 hosts, MON, MGR, 2x OSD per
host), and I'm trying to add a 4th host for gateway purposes.
The radosgw process keeps dying with:
2019-06-07 15:59:50.700 7fc4ef273780  0 ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable), process radosgw, pid 17588
2019-06-07 15:59:51.358 7fc4ef273780  0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2019-06-07 15:59:51.396 7fc4ef273780 -1 Couldn't init storage provider (RADOS)
The .rgw.root pool already exists.
ceph status returns:
  cluster:
    id:     1a8a1693-fa54-4cb3-89d2-7951d4cee6a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum S700028,S700029,S700030 (age 30m)
    mgr: S700028(active, since 47h), standbys: S700030, S700029
    osd: 6 osds: 6 up (since 2d), 6 in (since 3d)

  data:
    pools:   5 pools, 448 pgs
    objects: 12 objects, 1.2 KiB
    usage:   722 GiB used, 65 TiB / 66 TiB avail
    pgs:     448 active+clean
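For reference, a back-of-envelope check of the per-OSD PG load from the numbers above (this is a sketch; it assumes all five pools are replicated with size 3, and uses Nautilus's default mon_max_pg_per_osd of 250 — check your actual values with `ceph osd pool get <pool> size` and `ceph config get mon mon_max_pg_per_osd`):

```python
# Estimate PG replicas per OSD from the "ceph status" output above.
total_pgs = 448           # from ceph status
replica_size = 3          # assumption: all pools replicated, size 3
num_osds = 6              # from ceph status
mon_max_pg_per_osd = 250  # Nautilus default, may differ in your config

pgs_per_osd = total_pgs * replica_size / num_osds
print(pgs_per_osd)  # 224.0
# Already close to the 250 ceiling, so the new rgw pools radosgw tries to
# create on startup could push it over, matching the ERANGE from pool_create.
```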
and ceph osd tree returns:
ID CLASS WEIGHT   TYPE NAME        STATUS REWEIGHT PRI-AFF
-1       66.17697 root default
-5       22.05899     host S700029
 2   hdd 11.02950         osd.2         up  1.00000 1.00000
 3   hdd 11.02950         osd.3         up  1.00000 1.00000
-7       22.05899     host S700030
 4   hdd 11.02950         osd.4         up  1.00000 1.00000
 5   hdd 11.02950         osd.5         up  1.00000 1.00000
-3       22.05899     host s700028
 0   hdd 11.02950         osd.0         up  1.00000 1.00000
 1   hdd 11.02950         osd.1         up  1.00000 1.00000
Any thoughts on what I'm missing?
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
[email protected]
www.PerformAir.com
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com