Hi,
Just a heads-up: I hope you are aware of this tool:
http://ceph.com/pgcalc/
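For reference, a minimal sketch (in Python) of the sizing guideline that tool is built around: roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two. The calculator applies more refinements than this, and the defaults below are assumptions, so treat it only as an approximation.

def suggested_pg_count(num_osds, replica_size=3, target_pgs_per_osd=100):
    # Rule-of-thumb total PG count for a pool spanning num_osds OSDs.
    raw = num_osds * target_pgs_per_osd / replica_size
    # Round up to the next power of two, per the common guideline.
    power = 1
    while power < raw:
        power *= 2
    return power

# Example: a 36-OSD pool with 3x replication.
print(suggested_pg_count(36))   # -> 2048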
Regards,
Vikhyat
On 02/11/2015 09:11 AM, Sumit Gaur wrote:
Hi,
I am not sure why PG numbers have not been given much importance in
the Ceph documents; I am seeing huge variation in performance numbers
when changing the PG count.
Just an example:

Without SSD:
36 HDD OSDs => PG count 2048 gives me random write (1024K block size)
performance of 550 MBps

With SSD:
6 SSDs for journals + 24 HDD OSDs => PG count 2048 gives me random write
(1024K block size) performance of 250 MBps

If I change it to:
6 SSDs for journals + 24 HDD OSDs => PG count 512 gives me random write
(1024K block size) performance of 700 MBps
This variation with PG count makes the SSD setup look bad by comparison.
I am a bit confused by this behaviour.
Thanks
sumit
On Mon, Feb 9, 2015 at 11:36 AM, Gregory Farnum <[email protected]> wrote:
On Sun, Feb 8, 2015 at 6:00 PM, Sumit Gaur <[email protected]> wrote:
> Hi
> I have installed a 6-node Ceph cluster and am running a performance
> benchmark for it using Nova VMs. What I have observed is that FIO random
> write reports around 250 MBps for 1M block size with 4096 PGs, and
> 650 MBps for 1M block size with 2048 PGs. Can somebody let me know if I
> am missing some Ceph architecture point here? As per my understanding,
> PG numbers are mainly involved in calculating the hash and should not
> affect performance so much.
PGs are also serialization points within the codebase, so depending on
how you're testing you can run into contention if you have multiple
objects within a single PG that you're trying to write to at once.
This isn't normally a problem, but for a single benchmark run the
random collisions can become noticeable.
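To make that concrete, here is a rough simulation of how often concurrently
written objects land in the same PG for different pg_num values. It uses an
md5 hash and a plain modulo as a stand-in for Ceph's actual rjenkins hash and
stable_mod/CRUSH mapping, so it only shows the shape of the effect, not real
placement.

import random
from collections import Counter
from hashlib import md5

def pg_for(object_name, pg_num):
    # Simplified stand-in for the object -> PG mapping: hash the object
    # name and reduce it into the pool's PG range.
    h = int(md5(object_name.encode()).hexdigest(), 16)
    return h % pg_num

def worst_collision(in_flight, pg_num, trials=200):
    # For a batch of concurrently written objects, count the largest
    # number that fall into a single PG (writes that would serialize).
    worst = 0
    for _ in range(trials):
        names = ["obj-%d" % random.randrange(10**9) for _ in range(in_flight)]
        per_pg = Counter(pg_for(n, pg_num) for n in names)
        worst = max(worst, max(per_pg.values()))
    return worst

for pg_num in (512, 2048, 4096):
    print(pg_num, worst_collision(in_flight=64, pg_num=pg_num))

With fewer PGs the worst-case pile-up per PG is larger, which is the
serialization effect described above; whether it shows up in a given run
depends on how the benchmark issues its writes.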
-Greg
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com