Hi,

I am trying to compare the performance of a locally attached block device (SSD)
with a network-attached block device (SSD), and I am seeing results for
sequential reads and writes that I cannot explain. The other results (random
reads, writes, etc.) are as expected, i.e. local is better than remote.

Here is my setup:
- Two machines connected back-to-back by a 10G link
- Running RHEL 6.4 (Santiago), 2.6.32-358.6.1.el6.x86_64
- Running nbd v2.9.20 (http://nbd.sourceforge.net/)
- Running fio v2.1.2
- Using identical SSD on both machines - Samsung 840 PRO, 128G
- all 128G exported as rw volume
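For completeness, the export/attach was along these lines (a sketch only; the
port, hostname, and device paths here are placeholders, not my exact
invocation):

```shell
# On the server: export the whole SSD read-write on TCP port 10809
# (port and path are illustrative)
nbd-server 10809 /dev/sdb

# On the client: attach the export as /dev/nbd0
nbd-client server-host 10809 /dev/nbd0
```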

My fio commands and their output (relevant portions only) are below. I cannot
understand how the network device can have higher throughput than the local
device. When I use smaller block sizes to measure IOPS, the numbers are as
expected (local > remote).
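As a sanity check that the wire itself could carry these numbers (my
back-of-envelope arithmetic, not fio output): a 10G link tops out well above
both measured bandwidths, so the link is not the ceiling here.

```shell
# 10 Gbit/s link rate expressed in KB/s (decimal), vs the measured
# sequential-write bandwidths from the fio runs below.
LINK_KBPS=$((10 * 1000 * 1000 / 8))   # = 1250000 KB/s, about 1.25 GB/s
NBD_WRITE_KBPS=487036                 # from the nbd0 write run
LOCAL_WRITE_KBPS=252024               # from the sdb write run
echo "link=${LINK_KBPS} nbd=${NBD_WRITE_KBPS} local=${LOCAL_WRITE_KBPS}"
```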
Has anyone tried fio on nbd? Does fio count a transaction as done as soon as it
sees the block-I/O request handed off to the virtual device, assuming TCP will
take care of completing the transaction? I can see that it might do so for
posted operations such as writes, but reads?
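One thing I could try to rule out completion-accounting and caching effects (a
sketch; I have not verified that nbd 2.9.20 propagates flushes end to end):

```shell
# Repeat the nbd write test with an fsync after every write, so a write
# is only counted once the backing store has flushed it:
fio --name=writebw_sync --filename=/dev/nbd0 --direct=1 --rw=write --bs=1m \
    --numjobs=4 --iodepth=32 --ioengine=libaio --runtime=60 --time_based \
    --fsync=1 --group_reporting

# Drop the server's page cache between the write and read tests, so the
# read test cannot be served from the server's RAM (run on the server, as root):
sync && echo 3 > /proc/sys/vm/drop_caches
```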

Any clues?
thx,
Kishore


-------------------------------------------------------------------------------------------------------------

#### Sequential Write Bandwidth test: Locally attached SSD
fio --name=writebw --filename=/dev/sdb --direct=1 --rw=write --bs=1m \
    --numjobs=4 --iodepth=32 --iodepth_batch=16 --iodepth_batch_complete=16 \
    --runtime=300 --ramp_time=5 --norandommap --time_based \
    --ioengine=libaio --group_reporting > sdb_seqwrite_bw.out

---- output----
writebw: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
writebw: (groupid=0, jobs=4): err= 0: pid=18689: Tue Nov 19 10:27:08 2013
  write: io=73932MB, bw=252024KB/s, iops=245, runt=300393msec
    slat (msec): min=1, max=262, avg=96.57, stdev=62.94
    clat (msec): min=74, max=808, avg=423.21, stdev=90.93
     lat (msec): min=194, max=912, avg=519.78, stdev=68.03
...
#### Sequential Write Bandwidth test: Network attached SSD (nbd0 is exposed as 
a NetworkBlockDevice)
fio --name=writebw --filename=/dev/nbd0 --direct=1 --rw=write --bs=1m \
    --numjobs=4 --iodepth=32 --iodepth_batch=16 --iodepth_batch_complete=16 \
    --runtime=300 --ramp_time=5 --norandommap --time_based \
    --ioengine=libaio --group_reporting > nbd0_seqwrite_bw.out

---- output----
writebw: (g=0): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
writebw: (groupid=0, jobs=4): err= 0: pid=8570: Mon Nov 18 18:18:34 2013
  write: io=142764MB, bw=487036KB/s, iops=475, runt=300163msec
    slat (msec): min=1, max=865, avg=134.08, stdev=70.20
    clat (msec): min=35, max=865, avg=134.61, stdev=70.20
     lat (msec): min=148, max=1029, avg=268.65, stdev=103.63
...
#-----------------------------------------------------------------------------
#### Sequential Read Bandwidth test: Locally attached SSD

fio --name=readbw --filename=/dev/sdb --direct=1 --rw=read --bs=1m \
    --numjobs=4 --iodepth=32 --iodepth_batch=16 --iodepth_batch_complete=16 \
    --runtime=300 --ramp_time=5 --norandommap --time_based \
    --ioengine=libaio --group_reporting > sdb_seqread_bw.out

---- output----
readbw: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
readbw: (groupid=0, jobs=4): err= 0: pid=18943: Tue Nov 19 10:47:32 2013
  read : io=81596MB, bw=278285KB/s, iops=271, runt=300247msec
    slat (msec): min=2, max=181, avg=96.16, stdev=48.97
    clat (msec): min=58, max=1161, avg=375.17, stdev=192.42
     lat (msec): min=223, max=1335, avg=471.37, stdev=197.63
...
#### Sequential Read Bandwidth test: Network attached SSD (nbd0 is exposed as a 
NetworkBlockDevice)

fio --name=readbw --filename=/dev/nbd0 --direct=1 --rw=read --bs=1m \
    --numjobs=4 --iodepth=32 --iodepth_batch=16 --iodepth_batch_complete=16 \
    --runtime=300 --ramp_time=5 --norandommap --time_based \
    --ioengine=libaio --group_reporting > nbd0_seqread_bw.out

---- output----
readbw: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=32
...
readbw: (groupid=0, jobs=4): err= 0: pid=8781: Mon Nov 18 18:38:56 2013
  read : io=115692MB, bw=394691KB/s, iops=385, runt=300155msec
    slat (msec): min=77, max=405, avg=165.67, stdev=25.84
    clat (msec): min=39, max=405, avg=166.11, stdev=25.58
     lat (msec): min=168, max=621, avg=332.26, stdev=28.49
...
#-----------------------------------------------------------------------------