Certainly.
Output of ceph osd df:
ID CLASS WEIGHT   REWEIGHT SIZE   RAW USE DATA   OMAP META  AVAIL  %USE VAR  PGS STATUS
 2   hdd 11.02950  1.00000 11 TiB 120 GiB 51 MiB  0 B 1 GiB 11 TiB 1.07 1.00 227     up
 3   hdd 11.02950  1.00000 11 TiB 120 GiB 51 MiB  0 B 1 GiB 11 TiB 1.07 1.00 221     up
 4   hdd 11.02950  1.00000 11 TiB 120 GiB 51 MiB  0 B 1 GiB 11 TiB 1.07 1.00 226     up
 5   hdd 11.02950  1.00000 11 TiB 120 GiB 51 MiB  0 B 1 GiB 11 TiB 1.07 1.00 222     up
 0   hdd 11.02950  1.00000 11 TiB 120 GiB 51 MiB  0 B 1 GiB 11 TiB 1.07 1.00 217     up
 1   hdd 11.02950  1.00000 11 TiB 120 GiB 51 MiB  0 B 1 GiB 11 TiB 1.07 1.00 231     up
                     TOTAL 66 TiB 722 GiB 306 MiB 0 B 6 GiB 65 TiB 1.07
MIN/MAX VAR: 1.00/1.00  STDDEV: 0
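Those PGS numbers put every OSD at 217-231 placement groups, and that is what the mons compare (projected forward for any new pool) against mon_max_pg_per_osd when a pool is created. As a sketch, assuming Nautilus's centralized config commands, I would read back the ceiling and, if a new rgw pool would push past it, either pre-create the rgw pools with a small pg_num or raise the limit:

  ceph config get mon mon_max_pg_per_osd                    # current per-OSD PG ceiling
  ceph osd pool create default.rgw.control 8 8 replicated   # illustrative pool name and pg count
  ceph config set mon mon_max_pg_per_osd 300                # or raise the ceiling instead

(The pool name and numbers above are placeholders for illustration, not values taken from this cluster.)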
Thank you,
Dominic Hilsbos
Perform Air International Inc.
________________________________________
From:
Sent: Saturday, June 08, 2019 3:35 AM
To: Dominic Hilsbos
Subject: Re: [ceph-users] radosgw dying
Can you post this?
ceph osd df
On Fri, Jun 7, 2019 at 7:31 PM
<[email protected]> wrote:
All;
I have a test and demonstration cluster running (3 hosts, each running a MON, a MGR, and 2 OSDs), and I'm trying to add a 4th host for gateway purposes.
The radosgw process keeps dying with:
2019-06-07 15:59:50.700 7fc4ef273780  0 ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable), process radosgw, pid 17588
2019-06-07 15:59:51.358 7fc4ef273780  0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2019-06-07 15:59:51.396 7fc4ef273780 -1 Couldn't init storage provider (RADOS)
The .rgw.root pool already exists.
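Since the error names two possible causes, a useful first check (just a sketch, assuming the standard Nautilus CLI) is to list the existing pools with their pg_num/pgp_num values:

  ceph osd pool ls detail   # shows pg_num and pgp_num for every pool

On a first start radosgw typically also tries to create the remaining default zone pools (e.g. default.rgw.control, default.rgw.meta, default.rgw.log), so the failing pool_create is presumably one of those rather than the pre-existing .rgw.root; that is inferred from default rgw behavior, not from this log.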
ceph status returns:
  cluster:
    id:     1a8a1693-fa54-4cb3-89d2-7951d4cee6a3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum S700028,S700029,S700030 (age 30m)
    mgr: S700028(active, since 47h), standbys: S700030, S700029
    osd: 6 osds: 6 up (since 2d), 6 in (since 3d)

  data:
    pools:   5 pools, 448 pgs
    objects: 12 objects, 1.2 KiB
    usage:   722 GiB used, 65 TiB / 66 TiB avail
    pgs:     448 active+clean
and ceph osd tree returns:
ID CLASS WEIGHT   TYPE NAME         STATUS REWEIGHT PRI-AFF
-1       66.17697 root default
-5       22.05899     host S700029
 2   hdd 11.02950         osd.2         up  1.00000 1.00000
 3   hdd 11.02950         osd.3         up  1.00000 1.00000
-7       22.05899     host S700030
 4   hdd 11.02950         osd.4         up  1.00000 1.00000
 5   hdd 11.02950         osd.5         up  1.00000 1.00000
-3       22.05899     host s700028
 0   hdd 11.02950         osd.0         up  1.00000 1.00000
 1   hdd 11.02950         osd.1         up  1.00000 1.00000
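For scale, assuming all five pools are replicated with size 3: 448 PGs x 3 replicas / 6 OSDs ≈ 224 PGs per OSD, which matches what ceph osd df reports per OSD.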
Any thoughts on what I'm missing?
Thank you,
Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
[email protected]
www.PerformAir.com
--
Shawn Iverson, CETL
Director of Technology
Rush County Schools
765-932-3901 option 7
[email protected]
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com