Hi, I am not running iostat on the network interface. I am running it on my storage device (NVMe SSD card): iostat /dev/nvme0n1 1
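For reference, the exact invocation I can use (device name from my setup; iostat is from sysstat), with -x/-m adding extended per-device statistics so reads hitting the device show up in the r/s and rMB/s columns:

    iostat -xm /dev/nvme0n1 1

Before a run it may also be worth dropping the kernel page cache on the server side, to rule out reads being served from memory rather than the device (an assumption worth checking, since performance.io-cache only controls the GlusterFS io-cache translator, not the kernel's page cache on the brick):

    sync; echo 3 > /proc/sys/vm/drop_caches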
Sateesh

On Tue, Nov 10, 2015 at 1:47 AM, Piotr Rybicki <[email protected]> wrote:

> On 2015-11-10 at 04:01, satish kondapalli wrote:
>
>> Hi,
>>
>> I am running a performance test comparing fuse vs libgfapi. I have a
>> single node; client and server are running on the same node. I have an
>> NVMe SSD device as storage.
>>
>> My volume info:
>>
>> [root@sys04 ~]# gluster vol info
>>
>> Volume Name: vol1
>> Type: Distribute
>> Volume ID: 9f60ceaf-3643-4325-855a-455974e36cc7
>> Status: Started
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 172.16.71.19:/mnt_nvme/brick1
>> Options Reconfigured:
>> performance.cache-size: 0
>> performance.write-behind: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.strict-o-direct: on
>>
>> fio job file:
>>
>> [global]
>> direct=1
>> runtime=20
>> time_based
>> ioengine=gfapi
>> iodepth=1
>> volume=vol1
>> brick=172.16.71.19
>> rw=read
>> size=128g
>> bs=32k
>> group_reporting
>> numjobs=1
>> filename=128g.bar
>>
>> While doing a sequential read test, I am not seeing any data transfer
>> on the device with the iostat tool. It looks like the gfapi engine is
>> reading from a cache, because I am reading from the same file with
>> different block sizes.
>>
>> But I disabled io-cache for my volume. Can someone help me figure out
>> where fio is reading the data from?
>
> Hi.
>
> It is normal not to see traffic on the ethernet interface when using
> the native RDMA protocol (not TCP via IPoIB).
>
> Try perfquery -x to see the traffic counters increase on the RDMA
> interface.
>
> Regards
> Piotr Rybicki
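For reference, a minimal standalone gfapi reader, roughly equivalent to one 32k read of the fio job above (a sketch only: volume name, host, and file name are taken from the job file; whether O_DIRECT is honoured end-to-end depends on the translator stack):

#define _GNU_SOURCE             /* for O_DIRECT */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/types.h>
#include <glusterfs/api/glfs.h>

int main(void)
{
    /* Volume and host from the fio job file above. */
    glfs_t *fs = glfs_new("vol1");
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "172.16.71.19", 24007);
    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        return 1;
    }

    /* O_DIRECT is what direct=1 asks for in the fio job. */
    glfs_fd_t *fd = glfs_open(fs, "/128g.bar", O_RDONLY | O_DIRECT);
    if (!fd) {
        fprintf(stderr, "glfs_open failed\n");
        glfs_fini(fs);
        return 1;
    }

    /* bs=32k; aligned buffer, as O_DIRECT usually requires. */
    void *buf;
    if (posix_memalign(&buf, 4096, 32768) != 0) {
        glfs_close(fd);
        glfs_fini(fs);
        return 1;
    }

    ssize_t n = glfs_read(fd, buf, 32768, 0);
    printf("read %zd bytes\n", n);

    free(buf);
    glfs_close(fd);
    glfs_fini(fs);
    return 0;
}

Built with: gcc gfapi_read.c -o gfapi_read $(pkg-config --cflags --libs glusterfs-api). If iostat shows device reads for this but not for the fio job, the caching would be on the fio/gfapi side; if neither shows reads, the data is likely coming from the server-side page cache.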
