On Wednesday, June 26, 2013, Oliver Fuckner wrote:
Hi,
I am fairly new to ceph and just built my first 4 systems.
I use:
Supermicro X9SCL board with an E3-1240 CPU (4 x 3.4 GHz) and 32 GB RAM
LSI 9211-4i SAS HBA with 24 SATA disks and 2 SSDs (Intel 3700, 100 GB), all
connected through a 6 Gbit SAS expander
CentOS 6.4 with Kernel 2.6.32-358.11.1, 64bit
ceph 0.61.4
Intel 10GigEthernet NICs are used to connect the nodes together
XFS is used on both the journal and the OSDs
The SSDs are configured in a mdadm raid1 and used for journals.
Each SSD can write 400 MBytes/sec, but the combined write rate of all disks
is exactly half of that, 200 MBytes/sec.
So are there two journal writes for every write to the OSD?
No.
Is this expected behaviour? Why?
No, but at a guess your expanders aren't behaving properly. Alternatively,
your SSDs don't handle twelve write streams so well -- that's quite a lot
of oversubscription.
I would test the write behavior of your disks independently of Ceph (but
simultaneously!) and see what happens.
-Greg
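Greg's suggestion, testing the disks simultaneously but outside Ceph, could be sketched with parallel dd streams along these lines (a sketch only; TARGETS and the stream size are assumptions, and should point at the mount points of the SSDs and a few OSD disks — writing to the raw devices instead is destructive):

```shell
#!/bin/sh
# Sketch: write to several disks at once, outside Ceph, and compare the
# per-stream throughput with the single-disk numbers.
# TARGETS is an assumption -- substitute the real mount points.
TARGETS="/tmp/ssd1 /tmp/ssd2"

mkdir -p $TARGETS

for t in $TARGETS; do
    # conv=fdatasync flushes to disk before dd reports, so the MB/s
    # figure on the last line of dd.log reflects the device, not the
    # page cache.
    dd if=/dev/zero of="$t/ddtest" bs=1M count=64 conv=fdatasync \
        > "$t/dd.log" 2>&1 &
done
wait    # all streams run simultaneously; wait for every one to finish

for t in $TARGETS; do
    echo "== $t =="
    tail -n 1 "$t/dd.log"
done
```

Running iostat -xm 2 alongside shows the per-device write rates; with the RAID1 journal, both SSDs should report the same figure.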
This can be seen both under real load and with rados bench write -t 64
Details:
SSDs:
mdadm creation:
mdadm --create /dev/md2 --run --raid-devices=2 --level=raid1 --name=ssd /dev/sdc /dev/sdd
mkfs.xfs -f -i size=2048 /dev/md2
mount options in /etc/fstab:
/dev/md2 /data/journal xfs rw,noatime,discard 1 2
OSDs:
mkfs.xfs -i size=2048 /dev/<disk>   (repeated for each of the 24 disks)
ceph-osd -i $ID --mkfs --mkkey --mkjournal --osd-data /data/osd.slotx --osd-journal /data/journal/slotx
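The per-disk setup above could be scripted roughly as follows. This is a dry-run sketch that only prints the commands (remove the leading echo to actually run them); the device list and the slot/ID numbering are assumptions:

```shell
#!/bin/sh
# Dry-run sketch: format each disk and initialize one OSD per disk.
# The device list and ID numbering are assumptions -- adapt to the
# real layout. Each command is echoed instead of executed.
ID=0
for dev in /dev/sde /dev/sdf; do        # one entry per OSD disk
    echo mkfs.xfs -f -i size=2048 "$dev"
    echo mount "$dev" "/data/osd.slot$ID"
    echo ceph-osd -i "$ID" --mkfs --mkkey --mkjournal \
        --osd-data "/data/osd.slot$ID" \
        --osd-journal "/data/journal/slot$ID"
    ID=$((ID + 1))
done
```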
The journal is limited to 2 GBytes per OSD via ceph.conf
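For reference, that limit corresponds to a ceph.conf entry along these lines (osd journal size is given in megabytes):

```ini
[osd]
    ; journal size in MB, i.e. 2 GByte per OSD
    osd journal size = 2048
```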
Thanks,
Oliver
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Software Engineer #42 @ http://inktank.com | http://ceph.com