> Try to create e.g. 20 (small) rbd devices, putting them all in an LVM VG,
> creating a logical volume (RAID0) with 20 stripes and e.g. a stripe size of
> 1MB (better bandwidth) or 4KB (better IO) - or use md-raid0 (it's maybe
> 10% faster - but not that flexible):
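For illustration, a minimal sketch of that procedure - pool, image, VG and LV
names are placeholders, and the /dev/rbd0..19 device names assume the images
are mapped in order on an otherwise empty host:

  # create and map 20 small rbd images (40GB each)
  for i in $(seq 0 19); do
      rbd create --size 40960 rbdpool/stripe$i
      rbd map rbdpool/stripe$i
  done

  # put all mapped devices into one volume group
  pvcreate /dev/rbd{0..19}
  vgcreate vg_rbd /dev/rbd{0..19}

  # striped LV: 20 stripes, 1MB stripe size (use -I 4 for the 4KB variant)
  lvcreate -i 20 -I 1024 -l 100%FREE -n lv_stripe vg_rbd

  # md-raid0 alternative (1MB chunk):
  # mdadm --create /dev/md0 --level=0 --raid-devices=20 --chunk=1024 /dev/rbd{0..19}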
BTW - we use this approach for VMware:
- one LVM LV (RAID0: 20 stripes, 1MB stripe size) LUN, based on
- one VG containing 20 RBDs (each 40GB), based on
- a Ceph pool with 24 OSDs and 3 replicas, inside our
- Ceph cluster: 3 nodes x 8 x 4TB OSDs, 2 x 10GBit,
- published by SCST (Fibre Channel, 4GBit QLA) to vSphere ESX (rough config sketch below).
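From memory, the corresponding /etc/scst.conf stanza would look roughly like
the following - device name, WWN, and LV path are placeholders, so check the
SCST documentation for the exact syntax:

  # export the striped LV as a block device via the QLogic FC target driver
  HANDLER vdisk_blockio {
      DEVICE vmware_lun0 {
          filename /dev/vg_rbd/lv_stripe
      }
  }

  TARGET_DRIVER qla2x00t {
      TARGET 50:01:43:80:xx:xx:xx:xx {
          enabled 1
          LUN 0 vmware_lun0
      }
  }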
IOmeter (one worker, one disk) inside a w2k8r2 VM on ESX tells me:
iometer: 270/360 MB/sec write/read (1MByte block size, 4 outstanding IOs)
And - importantly - other VMs share the bandwidth of the 20 RBD volumes, so now
our 4GBit Fibre Channel is the bottleneck - not the (one) RBD volume anymore.
Also, we will add a flashcache in front of the RAID0 LV to boost the 4k IOs -
at the moment, 4k is terribly slow:
iometer: 4/14 MB/sec write/read (4k block size, 8 outstanding IOs)
With a 10GByte flashcache, it's about:
iometer: 14/60 MB/sec write/read (4k block size, 8 outstanding IOs)
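For reference, setting up such a 10GB writeback flashcache in front of the LV
would look something like this (the SSD partition /dev/sdX1 and the cachedev
name are placeholders; syntax per Facebook's flashcache_create tool):

  # 10GB writeback cache on an SSD partition in front of the striped LV
  flashcache_create -p back -s 10g cachedev /dev/sdX1 /dev/vg_rbd/lv_stripe
  # the cached device then appears as /dev/mapper/cachedev and replaces
  # the plain LV as the SCST backing device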
regards
Danny
