Hi!

As far as I know:

C60x onboard SAS (SCU) = 3 Gbps
LSI SAS 2308 = 6 Gbps
Onboard SATA3 = 6 Gbps (usually only 2 ports)
Onboard SATA2 = 3 Gbps (4-6 ports)
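
To double-check what a given drive actually negotiated, something along these
lines works on Linux (a sketch; the device name and sysfs paths are examples
and vary by kernel and controller):

    # Negotiated link rate per SAS phy (exposed by mpt2sas and the C60x SCU)
    grep . /sys/class/sas_phy/phy-*/negotiated_linkrate

    # Link speed reported by a SATA drive itself
    smartctl -a /dev/sda | grep -i 'SATA Version'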

We use Intel S2600 motherboards and R2224GZ4 platforms in our Hammer
evaluation instance. The C60x is connected to the 4-drive 2.5" bay: 2 small
SAS drives for the OS. 2x S3700 200GB SSDs for journals are mounted in the
internal bays (on the plastic airflow shroud) and connected to the SATA3
ports. The remaining 20x 2.5" front drive bays are connected to the integrated
RMS25CB080 RAID controller (IR mode, LSI2208) and exported as single-drive
RAID0 OSDs (10k SAS drives).
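
On the LSI2208 in IR mode, exporting each front drive usually means creating a
one-disk RAID0 logical drive per physical disk; a rough MegaCli sketch (the
enclosure:slot ID and cache flags below are placeholders, check -PDList
first):

    # Find the Enclosure Device ID / Slot Number of each physical drive
    MegaCli64 -PDList -aALL | egrep 'Enclosure Device ID|Slot Number'

    # Create a single-drive RAID0 logical drive for one OSD disk
    MegaCli64 -CfgLdAdd -r0 [32:4] WB RA Direct -a0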



Megov Igor
CIO, Yuterra



________________________________________
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Jan Schermer
<j...@schermer.cz>
Sent: 16 June 2015 15:09
To: ceph-users
Subject: [ceph-users] Slightly OT question - LSI SAS 2308 / 9207-8i
performance

I apologise for a slightly OT question, but this controller is on the Inktank
recommended hardware list, so someone might have an idea.

I have 3 different controllers for my OSDs in the cluster:

1) LSI SAS 2308 aka 9207-8i in IT mode (the target is to have this one 
everywhere)
2) a few Intel integrated C606 SAS controllers
3) 1-2 Intel integrated SATA controllers

What I am seeing is that any SSD that normally does 30K IOPS on the Intel HBA
achieves at most 100 IOPS on the LSI SAS 2308 (maybe 200, since I'm testing
fio on a filesystem). Those writes are synchronous, tested with fio
--direct=1 --sync=1.
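
For reference, a minimal fio job along those lines would look roughly like
this (the filename, size and runtime are placeholders, not the exact
invocation used):

    fio --name=syncwrite --filename=/mnt/test/fio.tmp --size=1G \
        --rw=randwrite --bs=4k --ioengine=sync --direct=1 --sync=1 \
        --runtime=60 --time_based --group_reporting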

This seems abysmal and I have no idea what is causing it. There is no
configuration utility (?) and no settings for the HBA in IT
(initiator/target) mode that I know of; write cache is enabled (disabling it
makes no difference).
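
For what it's worth, the per-drive write cache can be inspected and toggled
from the OS regardless of the HBA (a sketch; /dev/sdb is a placeholder):

    # SAS/SCSI path: query or change the Write Cache Enable (WCE) bit
    sdparm --get=WCE /dev/sdb
    sdparm --set=WCE /dev/sdb      # --clear=WCE to disable

    # SATA path: show / enable the drive write cache
    hdparm -W /dev/sdb
    hdparm -W1 /dev/sdb            # -W0 to disable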

I am seeing this on all hosts that have this card, with different firmwares
(17/19) and with different drives. The only common factors are the mpt2sas
driver, which is the same everywhere, and the brand of SSDs - but those SSDs
perform much better when put in a different HBA.
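
To compare firmware and driver versions across hosts, something like this
should do (the sysfs attribute names are what mpt2sas usually exposes, so
verify on your kernel):

    # Driver version
    modinfo mpt2sas | grep -i ^version

    # Controller firmware / BIOS as seen by the driver
    cat /sys/class/scsi_host/host*/version_fw \
        /sys/class/scsi_host/host*/version_bios 2>/dev/null
    dmesg | grep -i mpt2sas | grep -iE 'fwversion|bios'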

Has anybody seen this? Real workloads seem to be unaffected so far, but it 
could quickly become a bottleneck once we upgrade to Giant.

Thanks

Jan


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
