Stephen,
It's a single preconditioned RBD volume of 2 TB, driven from one physical client box.
The fio-rbd script is run with 10 jobs, each at a queue depth of 64.
For the mixed workload it is QD = 8 with num_jobs = 1 and 10; a rough sketch of the job file is below.
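
For reference, a minimal fio job file along these lines might look roughly as follows. The pool name, image name, cephx client, block size, and runtime are placeholders for illustration, not the exact script used in the tests:

# Sketch only: pool/image/client names, bs, and runtime are assumed values.
[global]
ioengine=rbd
clientname=admin
pool=rbd
# 2 TB preconditioned RBD image (name is a placeholder)
rbdname=testimg
invalidate=0
bs=4k
time_based=1
runtime=300

[randwrite-qd64]
rw=randwrite
iodepth=64
numjobs=10

# The mixed-workload runs use the same file with rw=randrw,
# iodepth=8, and numjobs=1 or 10.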


Thanks & Regards
Somnath

-----Original Message-----
From: Blinick, Stephen L [mailto:stephen.l.blin...@intel.com] 
Sent: Thursday, September 03, 2015 1:02 PM
To: Somnath Roy
Cc: ceph-devel
Subject: RE: Ceph Write Path Improvement

Somnath -- thanks for publishing all the data; it will be great to look at it
offline.  I didn't find this info: how many RBD volumes, and of what size, did
you use for your mixed tests?  Was it just one RBD w/ num_jobs=1 & 10?  Also,
how many client systems were necessary to drive the workload on the 4 storage
nodes?

I saw the same behavior quite a while back when playing with a ramdisk journal...
not a lot of improvement.

Thanks,

Stephen

-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Somnath Roy
Sent: Thursday, September 3, 2015 10:42 AM
To: Mark Nelson; Robert LeBlanc
Cc: ceph-devel
Subject: RE: Ceph Write Path Improvement

Yes, as Mark said, I will collect all the data and hopefully can present it at
the next performance meeting.
BTW, I initially tested with the Hammer code base + NVRAM journal, but that
performance was very spiky, with at most a ~10% gain, so I saw no point in
collecting more data with that config.
That's why I have introduced a new throttling scheme that should benefit all
the scenarios.
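
(For anyone trying to reproduce the NVRAM-journal setup: only the journal
placement changes, e.g. something like the following in ceph.conf. The device
path and size below are examples, not the actual hardware used, and the new
throttling scheme itself is a code change, not a config option.)

[osd]
# Journal on the NVRAM-backed device instead of co-located on the data SSD.
# /dev/pmem0 and the size are example values only.
osd journal = /dev/pmem0
osd journal size = 10240
journal dio = true
journal aio = true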

Thanks & Regards
Somnath

-----Original Message-----
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: Thursday, September 03, 2015 9:42 AM
To: Robert LeBlanc; Somnath Roy
Cc: ceph-devel
Subject: Re: Ceph Write Path Improvement

On 09/03/2015 11:23 AM, Robert LeBlanc wrote:
>
> Somnath,
>
> I'm having a hard time with your slide deck. Am I understanding
> correctly that the default Hammer install was tested on SSDs with
> co-located journals, while the optimized code was tested on the same
> SSDs but with the journal in NVRAM? If so, I'm having a hard time
> understanding how these tests can be comparable. I really like the
> performance gains you are seeing, but I'm trying to understand how
> much the optimized code alone helps performance.

Hi Robert,

We talked about this a bit at the weekly performance meeting.  I think Somnath 
just hasn't gotten a chance to do those tests yet and is planning on doing them 
in the coming weeks.  I believe he started out with hammer on the SSDs and then 
tried to figure out how to tweak things to make the NVRAM configuration perform 
better.  Now he has to go back and retest the original configuration but with 
the new code.

Mark

