Re: [ceph-users] Increasing time to save RGW objects

2016-02-10 Thread Saverio Proto
What kind of authentication do you use against the Rados Gateway? We had a similar problem authenticating against our Keystone server. If the Keystone server is overloaded, the time to read/write RGW objects increases. You will not see anything wrong on the Ceph side. Saverio 2016-02-08 17:49
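
For reference, the Keystone integration is configured on the gateway itself, and RGW keeps a token cache so it does not round-trip to Keystone on every request. A minimal ceph.conf sketch (hostnames and values are made up for illustration; the option names are the Hammer-era ones):

    [client.radosgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = SECRET_ADMIN_TOKEN
    rgw keystone accepted roles = Member, admin
    # Each cached token avoids one validation call to Keystone; a larger
    # cache helps when Keystone itself is the bottleneck.
    rgw keystone token cache size = 10000

If the cache is too small (or tokens are short-lived), every S3/Swift request can turn into a Keystone validation, so an overloaded Keystone shows up directly as RGW read/write latency.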

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
On 09/02/2016 20:07, Kris Jurka wrote: > > > On 2/9/2016 10:11 AM, Lionel Bouton wrote: > >> Actually if I understand correctly how PG splitting works the next spike >> should be ~N times smaller and spread over ~N times the period (where N >> is the number of subdirectories created during each

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Samuel Just
There was a patch at some point to pre-split on pg creation (merged in ad6a2be402665215a19708f55b719112096da3f4). More generally, bluestore is the answer to this. -Sam On Tue, Feb 9, 2016 at 11:34 AM, Lionel Bouton wrote: > Le 09/02/2016 20:18, Lionel Bouton a
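
For anyone who wants to try that pre-split path: once the patch is in your release, ceph osd pool create accepts an expected object count so filestore can create the directory fan-out up front instead of splitting under load later. A sketch (pool name, PG counts and ruleset are placeholders, not recommendations):

    # Pre-split collections for a pool expected to hold ~100M objects.
    ceph osd pool create rgw-buckets-data 2048 2048 replicated \
        replicated_ruleset 100000000

Whether a given Hammer build supports this depends on whether commit ad6a2be4 has been backported to it.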

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Kris Jurka
On 2/9/2016 10:11 AM, Lionel Bouton wrote: Actually if I understand correctly how PG splitting works the next spike should be ~N times smaller and spread over ~N times the period (where N is the number of subdirectories created during each split which seems to be 15 according to OSDs' directory

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
On 09/02/2016 20:18, Lionel Bouton wrote: > On 09/02/2016 20:07, Kris Jurka wrote: >> >> On 2/9/2016 10:11 AM, Lionel Bouton wrote: >> >>> Actually if I understand correctly how PG splitting works the next spike >>> should be ~N times smaller and spread over ~N times the period (where N >>> is

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Wade Holler
Hi there, What is the best way to "look at the rgw admin socket" to see what operations are taking a long time? Best Regards Wade On Mon, Feb 8, 2016 at 12:16 PM Gregory Farnum wrote: > On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: > > > > I've been
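
In case it helps: each RGW instance exposes a Unix admin socket, and you can query it with the ceph CLI. Assuming a default-ish socket path (the real path depends on your client name and "admin socket" setting), something like:

    # List the commands this socket supports.
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok help

    # Latency counters for gateway ops (sum/avgcount pairs, so
    # sum divided by avgcount gives average latency per op type).
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump

    # RADOS requests the gateway currently has in flight, with their ages.
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok objecter_requests

The objecter_requests output is the quickest way to see whether slow RGW requests are stuck waiting on specific OSDs.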

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Kris Jurka
On 2/8/2016 9:16 AM, Gregory Farnum wrote: On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: I've been testing the performance of ceph by storing objects through RGW. This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW instances. Initially the storage
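
A simple way to reproduce this kind of measurement (bucket name and object size are arbitrary; assumes s3cmd is already configured against one of the gateways):

    # Write numbered objects and record wall-clock seconds per PUT,
    # so a gradual slowdown shows up as a trend in put-times.log.
    dd if=/dev/urandom of=/tmp/obj bs=64k count=1 2>/dev/null
    for i in $(seq 1 100000); do
        /usr/bin/time -f "%e" \
            s3cmd put /tmp/obj s3://testbucket/obj-$i >/dev/null 2>>put-times.log
    done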

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
Hi, On 09/02/2016 17:07, Kris Jurka wrote: > > > On 2/8/2016 9:16 AM, Gregory Farnum wrote: >> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: >>> >>> I've been testing the performance of ceph by storing objects through >>> RGW. >>> This is on Debian with Hammer using 40

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Lionel Bouton
On 09/02/2016 19:11, Lionel Bouton wrote: > Actually if I understand correctly how PG splitting works the next spike > should be ~N times smaller and spread over ~N times the period (where N > is the number of subdirectories created during each split which > seems to be 15 typo: 16 > according to
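
For reference, each split fans a collection directory out into 16 hash subdirectories (0-f), and with filestore the split point is governed by two tunables: a directory splits once it holds more than roughly filestore_split_multiple * abs(filestore_merge_threshold) * 16 files. A ceph.conf sketch (the values here are examples, not recommendations; larger values defer the split storm at the cost of bigger directories):

    [osd]
    # Defaults are merge threshold 10 and split multiple 2,
    # i.e. directories split at ~320 files.
    filestore merge threshold = 40
    filestore split multiple = 8

Changing these only affects splits that have not happened yet, so it is best done before the pool fills up.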

Re: [ceph-users] Increasing time to save RGW objects

2016-02-09 Thread Gregory Farnum
On Tue, Feb 9, 2016 at 8:07 AM, Kris Jurka wrote: > > > On 2/8/2016 9:16 AM, Gregory Farnum wrote: >> >> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: >>> >>> >>> I've been testing the performance of ceph by storing objects through RGW. >>> This is on

Re: [ceph-users] Increasing time to save RGW objects

2016-02-08 Thread Gregory Farnum
On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote: > > I've been testing the performance of ceph by storing objects through RGW. > This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW > instances. Initially the storage time was holding reasonably steady, but