What kind of authentication do you use against the Rados Gateway?
We had a similar problem authenticating against our Keystone server. If
the Keystone server is overloaded, the time to read/write RGW objects
increases. You will not see anything wrong on the Ceph side.
Saverio
2016-02-08 17:49
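For reference, RGW's Keystone integration is driven by the rgw keystone options in ceph.conf. A minimal sketch, assuming a Hammer-era setup (the client section name, URL, token, and roles below are placeholders):

    [client.radosgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = ADMIN_TOKEN_PLACEHOLDER
    rgw keystone accepted roles = Member, admin
    rgw keystone token cache size = 10000
    rgw s3 auth use keystone = true

If Keystone is the bottleneck, a larger token cache reduces how often RGW has to round-trip to Keystone to validate tokens, which is exactly where an overloaded Keystone shows up as slow RGW requests.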
On 09/02/2016 20:07, Kris Jurka wrote:
>
>
> On 2/9/2016 10:11 AM, Lionel Bouton wrote:
>
>> Actually if I understand correctly how PG splitting works the next spike
>> should be <n> times smaller and spread over <n> times the period (where
>> <n> is the number of subdirectories created during each
There was a patch at some point to pre-split on pg creation (merged in
ad6a2be402665215a19708f55b719112096da3f4). More generally, bluestore
is the answer to this.
-Sam
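For filestore users who can't wait for BlueStore, the pre-split behaviour can be requested when the pool is created: declare an expected object count and keep the merge threshold negative so the directories created up front never get merged back. A rough sketch, to be checked against your release's ceph osd pool create help (pool name, PG count, ruleset name and object count are placeholders):

    # ceph.conf on the OSDs, before creating the pool
    [osd]
    filestore merge threshold = -10

    # create the pool declaring how many objects it is expected to hold
    ceph osd pool create mypool 1024 1024 replicated replicated_ruleset 500000000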
On Tue, Feb 9, 2016 at 11:34 AM, Lionel Bouton
wrote:
> On 09/02/2016 20:18, Lionel Bouton wrote:
On 2/9/2016 10:11 AM, Lionel Bouton wrote:
Actually if I understand correctly how PG splitting works the next spike
should be <n> times smaller and spread over <n> times the period (where
<n> is the number of subdirectories created during each split which
seems to be 15 according to OSDs' directory
On 09/02/2016 20:18, Lionel Bouton wrote:
> On 09/02/2016 20:07, Kris Jurka wrote:
>>
>> On 2/9/2016 10:11 AM, Lionel Bouton wrote:
>>
>>> Actually if I understand correctly how PG splitting works the next spike
>>> should be <n> times smaller and spread over <n> times the period (where
>>> <n> is
Hi there,
What is the best way to "look at the rgw admin socket" to see what
operations are taking a long time?
Best Regards
Wade
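A hedged sketch of what poking at the admin socket usually looks like (the socket path and client name below depend on how your gateways are named):

    # see which commands the socket supports
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok help

    # requests the gateway currently has in flight against the OSDs
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok objecter_requests

    # perf counters, including per-operation latencies
    ceph --admin-daemon /var/run/ceph/ceph-client.radosgw.gateway.asok perf dump

Long-lived entries in objecter_requests point at slow OSD operations rather than a problem inside RGW itself.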
On Mon, Feb 8, 2016 at 12:16 PM Gregory Farnum wrote:
> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote:
> >
> > I've been
On 2/8/2016 9:16 AM, Gregory Farnum wrote:
On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote:
I've been testing the performance of ceph by storing objects through RGW.
This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW
instances. Initially the storage
Hi,
On 09/02/2016 17:07, Kris Jurka wrote:
>
>
> On 2/8/2016 9:16 AM, Gregory Farnum wrote:
>> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote:
>>>
>>> I've been testing the performance of ceph by storing objects through
>>> RGW.
>>> This is on Debian with Hammer using 40
On 09/02/2016 19:11, Lionel Bouton wrote:
> Actually if I understand correctly how PG splitting works the next spike
> should be <n> times smaller and spread over <n> times the period (where
> <n> is the number of subdirectories created during each split which
> seems to be 15
typo : 16
> according to
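For a rough sense of scale, as I understand the filestore tunables, a PG directory splits once it holds about

    16 * filestore_split_multiple * abs(filestore_merge_threshold)
    = 16 * 2 * 10 = 320 files   (with the defaults)

and each split fans the directory out into 16 hashed subdirectories (one per hex digit), so after a round of splitting it takes roughly 16 times as many new objects in that PG before the next round starts, which is consistent with the "16 times the period" estimate above. The 320-file figure assumes the default filestore_split_multiple = 2 and filestore_merge_threshold = 10; tuned clusters will differ.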
On Tue, Feb 9, 2016 at 8:07 AM, Kris Jurka wrote:
>
>
> On 2/8/2016 9:16 AM, Gregory Farnum wrote:
>>
>> On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote:
>>>
>>>
>>> I've been testing the performance of ceph by storing objects through RGW.
>>> This is on
On Mon, Feb 8, 2016 at 8:49 AM, Kris Jurka wrote:
>
> I've been testing the performance of ceph by storing objects through RGW.
> This is on Debian with Hammer using 40 magnetic OSDs, 5 mons, and 4 RGW
> instances. Initially the storage time was holding reasonably steady, but
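For anyone wanting to reproduce this kind of measurement, a rough shell sketch of timing sequential PUTs through RGW (bucket name, object size, and count are placeholders; s3cmd is just one convenient client and must already be configured with keys for the gateway):

    # one small test object
    dd if=/dev/urandom of=testfile bs=64k count=1

    # time each upload and log it so the trend over time is visible
    for i in $(seq 1 10000); do
        start=$(date +%s.%N)
        s3cmd --no-progress put testfile s3://testbucket/obj-$i > /dev/null
        end=$(date +%s.%N)
        echo "$i $(echo "$end - $start" | bc)" >> put_times.log
    done

Plotting put_times.log against the object count is enough to make the kind of split-related latency spikes discussed in this thread stand out.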