Hi David,
Yes, I meant there are no separate partitions for WAL and DB.
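
For comparison, a separate DB would have been given at OSD creation time,
e.g. with ceph-volume on Luminous (device names below are just placeholders,
not my real layout):

  # BlueStore data on the HDD, RocksDB (and with it the WAL) on the SSD
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sda1

Here the OSDs were created without that, so block.db and the WAL stay on the
data device.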
I am using 2 x 10 Gb bonded (BONDING_OPTS="mode=4 miimon=100
xmit_hash_policy=1 lacp_rate=1") for the cluster network and 1 x 1 Gb for
the public network.
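
That BONDING_OPTS line comes from the usual RHEL-style ifcfg files; roughly
the following (interface names and addresses are placeholders):

  # /etc/sysconfig/network-scripts/ifcfg-bond0  (cluster network)
  DEVICE=bond0
  TYPE=Bond
  BONDING_MASTER=yes
  # mode=4 is 802.3ad (LACP)
  BONDING_OPTS="mode=4 miimon=100 xmit_hash_policy=1 lacp_rate=1"
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=10.0.0.11
  PREFIX=24

  # and one of these per 10 Gb slave, e.g. ifcfg-ens1f0
  DEVICE=ens1f0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes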
Disks are:
Vendor Id : TOSHIBA
Product Id : PX05SMB040Y
State : Online
Disk Type : SAS,Solid State Device
Capacity : 372.0 GB
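
Since you asked for more specifics: write_256.fio (used in the runs quoted
below) is roughly the following. The rw/bs/iodepth/engine values are visible
in the fio output; clientname, pool/rbdname and size are reconstructed
guesses and may differ slightly:

  [write-4M]
  ioengine=rbd
  clientname=admin
  ; pool=ssdpool / rbdname=image2 for the SSD runs
  pool=scbench256
  rbdname=image1
  ; rw=randread for the read runs
  rw=write
  bs=4k
  iodepth=32
  ; size is a guess, matching the "256" in the file name
  size=256m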
On 22 January 2018 at 11:24, David Turner <[email protected]> wrote:
> Disk models and other hardware information, including CPU and network
> config? You say you're using Luminous, but then say the journal is on the
> same device. I'm assuming you mean that you just have the BlueStore OSD
> configured without a separate WAL or DB partition? Any more specifics you
> can give will be helpful.
>
> On Mon, Jan 22, 2018 at 11:20 AM Steven Vacaroaia <[email protected]>
> wrote:
>
>> Hi,
>>
>> I'd appreciate it if you could provide some guidance / suggestions
>> regarding performance issues on a test cluster (3 x DELL R620, 1
>> enterprise SSD, 3 x 600 GB enterprise HDDs, 8 cores, 64 GB RAM)
>>
>> I created 2 pools (replication factor 2), one with only SSDs and the
>> other with only HDDs
>> (journal on the same disk for both)
>>
>> The performance is quite similar, although I was expecting the SSD pool
>> to be at least 5 times faster.
>> No issues noticed using atop.
>>
>> What should I check / tune?
>>
>> Many thanks
>> Steven
>>
>>
>>
>> HDD-based pool (journal on the same disk)
>>
>> ceph osd pool get scbench256 all
>>
>> size: 2
>> min_size: 1
>> crash_replay_interval: 0
>> pg_num: 256
>> pgp_num: 256
>> crush_rule: replicated_rule
>> hashpspool: true
>> nodelete: false
>> nopgchange: false
>> nosizechange: false
>> write_fadvise_dontneed: false
>> noscrub: false
>> nodeep-scrub: false
>> use_gmt_hitset: 1
>> auid: 0
>> fast_read: 0
>>
>>
>> rbd bench --io-type write image1 --pool=scbench256
>> bench type write io_size 4096 io_threads 16 bytes 1073741824 pattern
>> sequential
>> SEC OPS OPS/SEC BYTES/SEC
>> 1 46816 46836.46 191842139.78
>> 2 90658 45339.11 185709011.80
>> 3 133671 44540.80 182439126.08
>> 4 177341 44340.36 181618100.14
>> 5 217300 43464.04 178028704.54
>> 6 259595 42555.85 174308767.05
>> elapsed: 6 ops: 262144 ops/sec: 42694.50 bytes/sec: 174876688.23
>>
>> fio /home/cephuser/write_256.fio
>> write-4M: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
>> iodepth=32
>> fio-2.2.8
>> Starting 1 process
>> rbd engine: RBD version: 1.12.0
>> Jobs: 1 (f=1): [r(1)] [100.0% done] [66284KB/0KB/0KB /s] [16.6K/0/0 iops]
>> [eta 00m:00s]
>>
>>
>> fio /home/cephuser/write_256.fio
>> write-4M: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
>> fio-2.2.8
>> Starting 1 process
>> rbd engine: RBD version: 1.12.0
>> Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/14464KB/0KB /s] [0/3616/0 iops]
>> [eta 00m:00s]
>>
>>
>> SSD-based pool
>>
>>
>> ceph osd pool get ssdpool all
>>
>> size: 2
>> min_size: 1
>> crash_replay_interval: 0
>> pg_num: 128
>> pgp_num: 128
>> crush_rule: ssdpool
>> hashpspool: true
>> nodelete: false
>> nopgchange: false
>> nosizechange: false
>> write_fadvise_dontneed: false
>> noscrub: false
>> nodeep-scrub: false
>> use_gmt_hitset: 1
>> auid: 0
>> fast_read: 0
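>>
>> Note: ssdpool uses its own crush rule restricting it to the SSD OSDs;
>> with Luminous device classes, a rule like that is created along these
>> lines:
>>
>>   ceph osd crush rule create-replicated ssdpool default host ssd
>>   ceph osd pool set ssdpool crush_rule ssdpool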
>>
>> rbd -p ssdpool create --size 52100 image2
>>
>> rbd bench --io-type write image2 --pool=ssdpool
>> bench type write io_size 4096 io_threads 16 bytes 1073741824 pattern
>> sequential
>> SEC OPS OPS/SEC BYTES/SEC
>> 1 42412 41867.57 171489557.93
>> 2 78343 39180.86 160484805.88
>> 3 118082 39076.48 160057256.16
>> 4 155164 38683.98 158449572.38
>> 5 192825 38307.59 156907885.84
>> 6 230701 37716.95 154488608.16
>> elapsed: 7 ops: 262144 ops/sec: 36862.89 bytes/sec: 150990387.29
>>
>>
>> [root@osd01 ~]# fio /home/cephuser/write_256.fio
>> write-4M: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
>> fio-2.2.8
>> Starting 1 process
>> rbd engine: RBD version: 1.12.0
>> Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/20224KB/0KB /s] [0/5056/0 iops]
>> [eta 00m:00s]
>>
>>
>> fio /home/cephuser/write_256.fio
>> write-4M: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
>> iodepth=32
>> fio-2.2.8
>> Starting 1 process
>> rbd engine: RBD version: 1.12.0
>> Jobs: 1 (f=1): [r(1)] [100.0% done] [76096KB/0KB/0KB /s] [19.3K/0/0 iops]
>> [eta 00m:00s]
>>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com