> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
> ow...@vger.kernel.org] On Behalf Of Curley, Matthew
> Sent: Thursday, October 01, 2015 5:33 PM
>
> We've been trying to reproduce the allocator performance impact on 4K
> random reads seen in the Hackathon
I was able to reproduce it many times with this config: 4k randread,
Intel S3610 SSD, testing with a small rbd which can be kept in buffer memory.
At around 150-200k iops per osd, I was able to trigger it easily.
auth_cluster_required = none
auth_service_required = none
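Something like this should approximate the setup above (pool/image names are
made up, and the image size and queue depth would need tuning for your
hardware):

    # create a small image that fits in buffer memory
    rbd create bench/small-img --size 2048

    # prefill it so the random reads hit allocated data
    fio --name=prefill --ioengine=rbd --pool=bench --rbdname=small-img \
        --clientname=admin --rw=write --bs=4M --iodepth=16

    # the 4k random read workload
    fio --name=randread-4k --ioengine=rbd --pool=bench --rbdname=small-img \
        --clientname=admin --rw=randread --bs=4k --iodepth=64 \
        --runtime=60 --time_based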
>>also - more clients would be better (or worse, depending on how you look at
>>it).
It's quite possible; if I remember correctly, I could trigger it more easily
with fio with a lot of numjobs (30-40) - see the sketch below.
- Original Message -
From: "Dałek, Piotr"
To: "Curley, Matthew"
On Fri, 2 Oct 2015, Tom Nakamura wrote:
> Hi Sage,
> Thank you for the reply.
>
> On Thu, Oct 1, 2015, at 05:33 AM, Sage Weil wrote:
> > > - Each message is an object with some unique ID. Use omap to store all
> > > its features in the same object.
> > > - For each time period (which will have to be pre-specified to, say, an
> > > hour), we have an
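The per-message layout sketched above maps directly onto omap; a quick
illustration with the rados CLI (pool name, object naming scheme, and keys
are made up):

    # one object per message, named by its unique ID
    rados -p msgpool create msg.0001

    # each feature of the message becomes an omap key/value on that object
    rados -p msgpool setomapval msg.0001 sender "alice@example.com"
    rados -p msgpool setomapval msg.0001 subject "test message"

    # read the features back
    rados -p msgpool listomapvals msg.0001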
> -Original Message-
> From: Alexandre DERUMIER [mailto:aderum...@odiso.com]
> Sent: Friday, October 02, 2015 1:26 PM
> To: Dałek, Piotr
> Cc: Curley, Matthew; ceph-devel
> Subject: Re: Reproducing allocator performance differences
>
> >>I use rados bench -t 128 for that :)
>
I'm not sure it's exactly the same.
In my rados bench tests, I need to launch multiple "rados bench" processes in
parallel to reach high throughput (like fio -numjobs, which creates multiple
fio processes) - see the sketch below.
- Original Message -
From: "Dałek, Piotr"
Hi Josh,
The next hammer release as found at https://github.com/ceph/ceph/tree/hammer
passed the rbd suite (http://tracker.ceph.com/issues/12701#note-61). Do you
think the hammer branch is ready for QE to start their own round of testing?
Cheers
P.S.
Hi Yehuda,
The next hammer release as found at https://github.com/ceph/ceph/tree/hammer
passed the rgw suite (http://tracker.ceph.com/issues/12701#note-58).
Do you think the hammer branch is ready for QE to start their own round of
testing?
Cheers
P.S.
On Fri, Oct 2, 2015 at 9:48 PM, Nicholas Krause wrote:
> This removes unused goto labels from the crush map decoding functions in
> osdmap.c; they relate to error paths but are never used on any error path
> in these particular functions.
>
> Signed-off-by: Nicholas