I have some notes about sharing performance results on mailing lists
like ceph-users.  Not directly related to the topic, but I think it's
worth mentioning.

I suggest we provide more supporting materials when posting
performance data, whenever possible.  It may seem lengthy and boring,
but it really helps others interpret the data.

* List hardware configurations for all involved components;
* List software versions and configurations;
* Describe the testing methodology (e.g. how many VM instances and
workloads, what was done to avoid page cache influence, why a
specific IO size was chosen...);
* Give your own interpretation of the data.
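Much of the above can be gathered mechanically.  A minimal sketch of a
collection script, assuming a typical Linux box (device names such as
sda are placeholders; adjust for your setup):

```shell
#!/bin/sh
# Collect basic supporting details for a performance report.
# Assumes Linux with /proc and /sys mounted; sda is an example device.

echo "== Hardware =="
grep -m1 'model name' /proc/cpuinfo      # CPU model
grep MemTotal /proc/meminfo              # total memory

echo "== Software versions =="
uname -r                                 # kernel version
command -v ceph >/dev/null 2>&1 && ceph --version || echo "ceph: not installed"
command -v fio  >/dev/null 2>&1 && fio --version  || echo "fio: not installed"

echo "== Tunings =="
cat /sys/block/sda/queue/read_ahead_kb 2>/dev/null   # readahead (sda assumed)
sysctl net.ipv4.tcp_rmem 2>/dev/null                 # TCP receive buffers
ulimit -n                                            # open-file limit
```

Pasting the output of something like this alongside the numbers saves a
round of follow-up questions on the list.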

Here's one example:
--------------------------------------
 5 x Ceph Storage Nodes:
   Processors: 2 x Intel Xeon E5-2670 @ 2.6GHz (HT on)
   Mem: 128 GB (8 x 16 GB DDR3)
   HBA: LSI 9205 (JBOD)
   Disks:
       OSDs: 24 x Seagate ST91000640NS 2TB connected to LSI HBA
       Journal: 6 x Samsung 840 EVO 500GB, connected to SATA interface
on motherboard
   NICs: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2

2 x Ceph clients:
   Processors: 2 x Intel Xeon E5-2670 @ 2.6GHz (HT on)
   Mem: 128 GB (8 x 16 GB DDR3)
   Disks: 2 x Samsung 840 EVO 500GB
   NICs: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2

1 ceph-mon is running on one of the storage nodes; clients and storage
nodes are connected to the same 24-port 10GbE switch.

Ceph 0.67.7 Dumpling (link to ceph.conf) with in-house patch (link to
patch or git branch)
FIO 2.0.8
 ... detailed fio parameters
OS: Ubuntu 12.10
Kernel: 3.5.0-22
KVM (if used)

List any other software tunings applied to the system, such as file
system readahead size, specific XFS mkfs/mount parameters, TCP/IP
stack tuning, ulimit settings...
------------------------------------
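On the "detailed fio parameters" point above: posting the full job
file removes all guesswork.  A hypothetical example (the job name,
values, and device are illustrative, not the ones from the test
above):

```ini
; example.fio -- illustrative only
[global]
ioengine=libaio
direct=1            ; bypass the page cache
runtime=300
time_based
group_reporting

[rand-read-4k]
rw=randread
bs=4k
iodepth=32
numjobs=4
filename=/dev/rbd0  ; assumed RBD device under test
```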


-- 
Regards
Huang Zhiteng
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
