I understand that my SSD is not suitable for a journal. I want to test Ceph using 
existing components before buying a more expensive SSD (such as an Intel DC S3700).
I ran fio with these options:
[global]
ioengine=libaio
invalidate=1
ramp_time=5
iodepth=1
runtime=300
time_based
direct=1 
bs=4k
size=1m
filename=/mnt/test.file
sync=1
fsync=1
[seq-write]
stonewall
rw=write
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/16232KB/0KB /s] [0/4058/0 iops] [eta 00m:00s]
seq-write: (groupid=0, jobs=1): err= 0: pid=338872: Fri Oct 23 19:59:38 2015
write: io=4955.1MB, bw=16916KB/s, iops=4229, runt=299999msec
slat (usec): min=8, max=270, avg=14.62, stdev= 3.56
clat (usec): min=42, max=7673, avg=198.72, stdev=60.81
lat (usec): min=101, max=7689, avg=213.85, stdev=62.71
clat percentiles (usec):
| 1.00th=[ 137], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 165],
| 30.00th=[ 181], 40.00th=[ 189], 50.00th=[ 193], 60.00th=[ 197],
| 70.00th=[ 203], 80.00th=[ 209], 90.00th=[ 227], 95.00th=[ 334],
| 99.00th=[ 386], 99.50th=[ 402], 99.90th=[ 486], 99.95th=[ 524],
| 99.99th=[ 796]
lat (usec) : 50=0.01%, 100=0.01%, 250=92.61%, 500=7.31%, 750=0.07%
lat (usec) : 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
cpu : usr=6.21%, sys=19.22%, ctx=2580951, majf=0, minf=239
IO depths : 1=101.7%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=1268728/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=4955.1MB, aggrb=16916KB/s, minb=16916KB/s, maxb=16916KB/s, mint=299999msec, maxt=299999msec
Disk stats (read/write):
sdh: ios=0/3869292, merge=0/3, ticks=0/210080, in_queue=208900, util=68.54%
Not so bad, but that is only 16 MB/sec with sequential 4k blocks.
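As a quick sanity check on these numbers (a sketch, using the figures from the fio output above): at iodepth=1 the bandwidth is simply iops times block size, and iops is bounded above by the average completion latency:

```python
# Sanity check on the fio results above (iodepth=1, bs=4k).

iops = 4229                      # from the "write:" line
bs_kib = 4                       # bs=4k
print(iops * bs_kib)             # 16916 -> matches bw=16916KB/s

# With a single outstanding IO, iops cannot exceed 1 / average latency.
avg_lat_us = 213.85              # "lat (usec): ... avg=213.85"
ceiling = 1_000_000 / avg_lat_us
print(round(ceiling))            # ~4676 iops upper bound; 4229 measured is consistent
```

So the ~16 MB/s figure is exactly what the per-IO latency allows at queue depth 1; it is not a throughput limit of the drive.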


>Friday, 23 October 2015, 16:35 +02:00, from Jan Schermer <[email protected]>:
>
>The drive you have is not suitable at all for journal. Horrible, actually.
>
>"test with fio (qd=32,128,256, bs=4k) show very good performance of SSD disk 
>(10-30k write io)."
>
>This is not realistic. Try:
>
>fio --sync=1 --fsync=1 --direct=1 --iodepth=1 --ioengine=aio ....
>
>Jan
>
>On 23 Oct 2015, at 16:31, K K <[email protected]> wrote:
>Hello.
>Some strange things happen with my Ceph installation after I moved the journal 
>to an SSD disk.
>OS: Ubuntu 15.04 with ceph version 0.94.2-0ubuntu0.15.04.1
>server: dell r510 with PERC H700 Integrated 512MB RAID cache
>my cluster has:
>1 monitor node
>2 OSD nodes with 6 OSD daemons on each server (3 TB 7200 rpm SATA HDDs with 
>XFS). 
>network: 1Gbit to hypervisor and 1 Gbit among all ceph nodes
>ceph.conf:
>[global]
>public network = 10.12.0.0/16
>cluster network = 192.168.133.0/24
>auth cluster required = cephx
>auth service required = cephx
>auth client required = cephx
>filestore xattr use omap = true
>filestore max sync interval = 10
>filestore min sync interval = 1
>filestore queue max ops = 500
>#filestore queue max bytes = 16 MiB
>#filestore queue committing max ops = 4096
>#filestore queue committing max bytes = 16 MiB
>filestore op threads = 20
>filestore flusher = false
>filestore journal parallel = false
>filestore journal writeahead = true
>#filestore fsync flushes journal data = true
>journal dio = true
>journal aio = true
>osd pool default size = 2 # Write an object n times.
>osd pool default min size = 1 # Allow writing n copy in a degraded state.
>osd pool default pg num = 333
>osd pool default pgp num = 333
>osd crush chooseleaf type = 1
>
>[client]
>rbd cache = true
>rbd cache size = 1024000000
>rbd cache max dirty = 128000000
>[osd]
>osd journal size = 5200
>#osd journal = /dev/disk/by-partlabel/journal-$id
>Without the SSD as a journal I get ~112 MB/sec throughput.
>After I added a 64 GB ADATA SSD as the journal disk and created 6 raw 
>partitions, I get very slow bandwidth with rados bench:
>Total time run: 302.350730
>Total writes made: 1146
>Write size: 4194304
>Bandwidth (MB/sec): 15.161
>Stddev Bandwidth: 11.5658
>Max bandwidth (MB/sec): 52
>Min bandwidth (MB/sec): 0
>Average Latency: 4.21521
>Stddev Latency: 1.25742
>Max latency: 8.32535
>Min latency: 0.277449
>
>iostat shows few write IOs (no more than 200):
>
>Device: rrqm/s wrqm/s r/s  w/s    rkB/s wkB/s    avgrq-sz avgqu-sz await   r_await w_await svctm  %util
>sdh     0.00   0.00   0.00 8.00   0.00  1024.00  256.00   129.48   2120.50 0.00    2120.50 124.50 99.60
>sdh     0.00   0.00   0.00 124.00 0.00  14744.00 237.81   148.44   1723.81 0.00    1723.81 8.10   100.40
>sdh     0.00   0.00   0.00 114.00 0.00  13508.00 236.98   144.27   1394.91 0.00    1394.91 8.77   100.00
>sdh     0.00   0.00   0.00 122.00 0.00  13964.00 228.92   122.99   1439.74 0.00    1439.74 8.20   100.00
>sdh     0.00   0.00   0.00 161.00 0.00  19640.00 243.98   154.98   1251.16 0.00    1251.16 6.21   100.00
>sdh     0.00   0.00   0.00 11.00  0.00  1408.00  256.00   152.68   717.09  0.00    717.09  90.91  100.00
>sdh     0.00   0.00   0.00 154.00 0.00  18696.00 242.81   142.09   1278.65 0.00    1278.65 6.49   100.00
>Tests with fio (qd=32, 128, 256, bs=4k) show very good performance from the SSD 
>(10-30k write IOPS).
>Can anybody help me? Has anyone faced a similar problem? 
>_______________________________________________
>ceph-users mailing list
>[email protected]
>http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
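As a cross-check of the quoted numbers (a sketch; all figures are copied from the rados bench and iostat output above): the bench bandwidth follows directly from writes × object size ÷ elapsed time, and iostat's avgrq-sz, which is reported in 512-byte sectors, from wkB/s ÷ w/s:

```python
# Cross-check of the quoted benchmark figures (values copied from the output above).

# rados bench: Bandwidth (MB/sec) = total writes * write size / total time
writes = 1146
write_size_mb = 4194304 / (1024 * 1024)   # "Write size: 4194304" bytes = 4 MiB
elapsed_s = 302.350730
bench_bw = writes * write_size_mb / elapsed_s
print(round(bench_bw, 3))                 # 15.161 -> matches "Bandwidth (MB/sec): 15.161"

# iostat: avgrq-sz is in 512-byte sectors, i.e. (wkB/s / w/s) * 2
wkb_per_s, w_per_s = 14744.00, 124.00     # second sdh sample
avgrq_sz = wkb_per_s / w_per_s * 2
print(round(avgrq_sz, 2))                 # 237.81 -> matches the reported avgrq-sz
```

The numbers are internally consistent, which suggests the ~15 MB/s is a real limit of the journal SSD under sync writes rather than a measurement artifact.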
