All;

Thank you to all who assisted; this was the problem!

My default PGs per pool was too high for my total OSD count, so radosgw was
unable to create all of its pools.
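
For anyone who hits the same thing, the budget works out roughly as huang jun
lays out below:

  mon_max_pg_per_osd (default 250) * 6 OSDs = 1500 PG instances
  1500 / 3 (replicated pools, size=3)       = ~500 PGs total
  500 - 448 already in use                  = 52 PGs left for new pools

A couple of commands to check this on Nautilus (just a sketch, not re-run on
this cluster):

  ceph config get mon mon_max_pg_per_osd
  ceph osd pool ls detail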

I removed the other pools I had created and reduced the default PGs per pool;
radosgw was then able to create all of its default pools and is now running
properly.
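
In command form, the fix was roughly the following (a sketch; the pool names,
pg_num values, and rgw unit name are placeholders, not the exact ones used
here):

  # remove the unused pools (requires mon_allow_pool_delete=true)
  ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it

  # lower the defaults that new pools are created with
  ceph config set global osd_pool_default_pg_num 32
  ceph config set global osd_pool_default_pgp_num 32

  # restart radosgw so it can retry creating its default pools
  systemctl restart ceph-radosgw@rgw.<name>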

Thank you,

Dominic L. Hilsbos, MBA
Director - Information Technology
Perform Air International Inc.
dhils...@performair.com
www.PerformAir.com
________________________________________
From: Torben Hørup [tor...@t-hoerup.dk]
Sent: Sunday, June 09, 2019 11:12 AM
To: Paul Emmerich
Cc: Dominic Hilsbos; Ceph Users
Subject: Re: [ceph-users] radosgw dying

For just the core rgw services it will need these 4:

.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log

When creating buckets and uploading data, RGW will need an additional 3:

default.rgw.buckets.index
default.rgw.buckets.non-ec
default.rgw.buckets.data
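
Once rgw is up, a quick way to confirm they were all created (just a sketch,
assuming the default zone naming above):

  ceph osd pool ls | grep rgw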

/Torben


On 09.06.2019 19:34, Paul Emmerich wrote:

> rgw uses more than one pool. (5 or 6 IIRC)
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Sun, Jun 9, 2019 at 7:00 PM <dhils...@performair.com> wrote:
>
> Huang;
>
> I get that, but the pool already exists; why is radosgw trying to
> create one?
>
> Dominic Hilsbos
>
> Get Outlook for Android
>
> On Sat, Jun 8, 2019 at 2:55 AM -0700, "huang jun" <hjwsm1...@gmail.com>
> wrote:
>
> From the error message, I'm inclined to think that 'mon_max_pg_per_osd' was
> exceeded.
> You can check its value; the default is 250, so you can have at most
> 1500 PG instances (250 * 6 OSDs),
> and for replicated pools with size=3 that means 500 PGs across all
> pools.
> You already have 448 PGs, so the next pool can be created with at most
> 500 - 448 = 52 PGs.
>
> <dhils...@performair.com> wrote on Sat, Jun 8, 2019 at 2:41 PM:
>>
>> All;
>>
>> I have a test and demonstration cluster running (3 hosts, MON, MGR, 2x
>> OSD per host), and I'm trying to add a 4th host for gateway purposes.
>>
>> The radosgw process keeps dying with:
>> 2019-06-07 15:59:50.700 7fc4ef273780  0 ceph version 14.2.1
>> (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable), process
>> radosgw, pid 17588
>> 2019-06-07 15:59:51.358 7fc4ef273780  0 rgw_init_ioctx ERROR:
>> librados::Rados::pool_create returned (34) Numerical result out of
>> range (this can be due to a pool or placement group misconfiguration,
>> e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
>> 2019-06-07 15:59:51.396 7fc4ef273780 -1 Couldn't init storage provider
>> (RADOS)
>>
>> The .rgw.root pool already exists.
>>
>> ceph status returns:
>>   cluster:
>>     id:     1a8a1693-fa54-4cb3-89d2-7951d4cee6a3
>>     health: HEALTH_OK
>>
>>   services:
>>     mon: 3 daemons, quorum S700028,S700029,S700030 (age 30m)
>>     mgr: S700028(active, since 47h), standbys: S700030, S700029
>>     osd: 6 osds: 6 up (since 2d), 6 in (since 3d)
>>
>>   data:
>>     pools:   5 pools, 448 pgs
>>     objects: 12 objects, 1.2 KiB
>>     usage:   722 GiB used, 65 TiB / 66 TiB avail
>>     pgs:     448 active+clean
>>
>> and ceph osd tree returns:
>> ID CLASS WEIGHT   TYPE NAME        STATUS REWEIGHT PRI-AFF
>> -1       66.17697 root default
>> -5       22.05899     host S700029
>>  2   hdd 11.02950         osd.2        up  1.00000 1.00000
>>  3   hdd 11.02950         osd.3        up  1.00000 1.00000
>> -7       22.05899     host S700030
>>  4   hdd 11.02950         osd.4        up  1.00000 1.00000
>>  5   hdd 11.02950         osd.5        up  1.00000 1.00000
>> -3       22.05899     host s700028
>>  0   hdd 11.02950         osd.0        up  1.00000 1.00000
>>  1   hdd 11.02950         osd.1        up  1.00000 1.00000
>>
>> Any thoughts on what I'm missing?
>>
>> Thank you,
>>
>> Dominic L. Hilsbos, MBA
>> Director - Information Technology
>> Perform Air International Inc.
>> dhils...@performair.com
>> www.PerformAir.com
>>
>>
>>
>
> --
> Thank you!
> HuangJun
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
