Yes, reads are affected significantly in mixed read/write scenarios because Ceph 
serializes ops on a PG. The write path is inefficient, and that slows reads 
down in turn.
I hope you have already applied the config settings (shards/threads, PG counts, 
etc.) discussed in the community.
You may want to try a bigger queue depth (QD) and see whether it helps.
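For example, you could rerun the same fio workload from your mail with only the 
queue depth raised (a sketch of the workload definition; the device path and all 
other parameters are taken unchanged from your quoted command):

```shell
# Same workload as the quoted test, with iodepth raised from 64 to 256.
# /dev/rbd2 and every other parameter come from the original command.
fio -filename=/dev/rbd2 -direct=1 -iodepth 256 -thread -rw=randrw -rwmixread=70 \
    -ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=1000 -group_reporting \
    -name=mytest1
```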
BTW, try Jewel or the latest master (if you are not already on it); you should 
see mixed read/write performance improve, since write performance has improved 
in Jewel.

Thanks & Regards
Somnath

From: ceph-users [mailto:[email protected]] On Behalf Of min fang
Sent: Monday, May 02, 2016 8:18 PM
To: ceph-users
Subject: [ceph-users] performance drop a lot when running fio mix read/write

Hi, I ran a random fio workload with rwmixread=70 and got 707 read IOPS and 303 
write IOPS (output below). Both values are lower than the pure random results: 
4K random write is 529 IOPS and 4K random read is 11343 IOPS. Apart from the rw 
type, all other parameters are the same. I do not understand why mixing reads 
and writes has such a huge impact on performance; all IOs are random. Thanks.


fio -filename=/dev/rbd2 -direct=1 -iodepth 64 -thread -rw=randrw -rwmixread=70 
-ioengine=libaio -bs=4k -size=100G -numjobs=1 -runtime=1000 -group_reporting 
-name=mytest1
mytest1: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.8
time     7423  cycles_start=1062103697308843
Starting 1 thread
Jobs: 1 (f=1): [m(1)] [100.0% done] [2144KB/760KB/0KB /s] [536/190/0 iops] [eta 00m:00s]
mytest1: (groupid=0, jobs=1): err= 0: pid=7425: Sat Apr 30 08:55:14 2016
  read : io=2765.2MB, bw=2830.5KB/s, iops=707, runt=1000393msec
    slat (usec): min=2, max=268, avg= 8.93, stdev= 4.17
    clat (usec): min=203, max=1939.9K, avg=34039.43, stdev=93674.48
     lat (usec): min=207, max=1939.9K, avg=34048.93, stdev=93674.50
    clat percentiles (usec):
     |  1.00th=[  516],  5.00th=[  836], 10.00th=[ 1112], 20.00th=[ 1448],
     | 30.00th=[ 1736], 40.00th=[ 6944], 50.00th=[13376], 60.00th=[17280],
     | 70.00th=[21888], 80.00th=[30848], 90.00th=[49920], 95.00th=[103936],
     | 99.00th=[552960], 99.50th=[675840], 99.90th=[880640], 99.95th=[954368],
     | 99.99th=[1105920]
    bw (KB  /s): min=  350, max= 5944, per=100.00%, avg=2837.77, stdev=1272.84
  write: io=1184.8MB, bw=1212.8KB/s, iops=303, runt=1000393msec
    slat (usec): min=2, max=310, avg= 9.35, stdev= 4.50
    clat (msec): min=5, max=2210, avg=131.60, stdev=226.47
     lat (msec): min=5, max=2210, avg=131.61, stdev=226.47
    clat percentiles (msec):
     |  1.00th=[    9],  5.00th=[   13], 10.00th=[   15], 20.00th=[   20],
     | 30.00th=[   25], 40.00th=[   34], 50.00th=[   44], 60.00th=[   61],
     | 70.00th=[   84], 80.00th=[  125], 90.00th=[  449], 95.00th=[  709],
     | 99.00th=[ 1037], 99.50th=[ 1139], 99.90th=[ 1369], 99.95th=[ 1450],
     | 99.99th=[ 1663]
    bw (KB  /s): min=   40, max= 2562, per=100.00%, avg=1215.62, stdev=564.19
    lat (usec) : 250=0.01%, 500=0.60%, 750=1.94%, 1000=2.95%
    lat (msec) : 2=18.69%, 4=2.46%, 10=4.21%, 20=22.05%, 50=26.40%
    lat (msec) : 100=9.65%, 250=4.64%, 500=2.76%, 750=2.13%, 1000=1.11%
    lat (msec) : 2000=0.39%, >=2000=0.01%
  cpu          : usr=0.83%, sys=1.47%, ctx=971080, majf=0, minf=1
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=707885/w=303294/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=2765.2MB, aggrb=2830KB/s, minb=2830KB/s, maxb=2830KB/s, mint=1000393msec, maxt=1000393msec
  WRITE: io=1184.8MB, aggrb=1212KB/s, minb=1212KB/s, maxb=1212KB/s, mint=1000393msec, maxt=1000393msec

Disk stats (read/write):
  rbd2: ios=707885/303293, merge=0/0, ticks=24085792/39904840, in_queue=64045864, util=100.00%


_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com