If you use direct=1, you should configure ganesha to use the O_SYNC option; that
way you save some round-trip time.

------------------------------------------
Sven Oehme
Scalable Storage Research
email: [email protected]
Phone: +1 (408) 824-8904
IBM Almaden Research Lab
------------------------------------------



From:   Deepak Naidu <[email protected]>
To:     Malahal Naineni <[email protected]>, "Kaleb S. KEITHLEY"
            <[email protected]>
Cc:     "[email protected]"
            <[email protected]>,
            "[email protected]"
            <[email protected]>,
            "[email protected]"
            <[email protected]>
Date:   08/16/2016 06:11 PM
Subject:        Re: [Nfs-ganesha-devel] [Nfs-ganesha-support] NFS-Ganesha
            performance seems very low compared with kernel.nfs



>> Since you reported slower 4K but quite good 1M results, is it possible
that ganesha's latency is higher
Yes, I think so. Because even if I try 32K IO, it is still considerably slower
than kernel.nfs. But as I increase the IO block size, i.e. 125K or 1M, the
performance gets somewhere near (though still lower than) kernel.nfs.
I have "nb_worker = 256" per Kaleb S. KEITHLEY ([email protected]) on
another thread; that didn't help either. One thing clearly visible is that the
NFS server load increases when using NFS-Ganesha, so it looks like it's purely
NFS-Ganesha application latency.


>> Did you try a sequential 4K workload?
Yes, it's not as bad as the 4K random write. But as I said in the comment
above, the load increases when using NFS-Ganesha.

>> I'm not very familiar with fio; can you increase the fio parallel threads
and see if that makes any difference?
Nothing; it's the same, it just takes longer to complete the job. With 45
threads it says 14 hrs to complete. This is the randwrite case.

Based on the various IO workloads I tried, random writes in particular are
the worst performer when using the FIO tool. I haven't tried other
benchmark tools.


--
Deepak


From: Malahal Naineni [mailto:[email protected]]
Sent: Tuesday, August 16, 2016 7:21 AM
To: Deepak Naidu
Cc: [email protected]
Subject: Re: [Nfs-ganesha-devel] [Nfs-ganesha-support] NFS-Ganesha
performance seems very low compared with kernel.nfs

Deepak, V2.3.2 and older use net.core.rmem_max/wmem_max. By default this
value is low on many systems. V2.3.3 should be using net.ipv4.tcp_rmem/wmem
entries which are usually tuned by default on many systems. You probably
configured the former manually. I don't see an issue with your network
settings.

Since you reported slower 4K but quite good 1M results, is it possible that
ganesha's latency is higher and this is the reason for the slow 4K random
workload? Did you try a sequential 4K workload? I imagine sequential 4K
might be coalesced into bigger NFS writes, resulting in OK performance.
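As a concrete sketch, the random-write fio command used elsewhere in this
thread becomes a sequential-write test by changing only --rw (the job name
here is illustrative; run it from the NFS mount under test):

```shell
# Sequential 4K writes: identical to the earlier randwrite job
# except --rw=write (fio writes the file front to back)
fio --name=seq --ioengine=libaio --iodepth=32 --rw=write --bs=4k \
    --direct=1 --size=2g --numjobs=32 --group_reporting
```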

I'm not very familiar with fio; can you increase the fio parallel threads and
see if that makes any difference?

To see the latency, you could do "tcpdump -i <iface> -w <filename> port
nfs" and then run "tshark -q -z rpc,srt,100003,3 -r <filename>". This
tshark command will give the response time for each type of NFSv3 op. You may
need a different command for NFSv4. "mountstats" on the client also gives
pretty good statistics. See how the mountstats statistics differ between
ganesha and kNFS workloads.
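Put together, the capture-and-analyze steps read as follows (a sketch; the
interface name, capture path, and mount point are placeholder assumptions,
substitute your own):

```shell
# 1) Capture NFS traffic while the fio job runs
#    (eth0 and /tmp/nfs.pcap are example values)
tcpdump -i eth0 -w /tmp/nfs.pcap port nfs

# 2) RPC service response time (SRT) report for NFSv3
#    (RPC program 100003, version 3)
tshark -q -z rpc,srt,100003,3 -r /tmp/nfs.pcap

# 3) On the client, per-op NFS statistics for the mount point
mountstats /mnt/nfs
```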

Regards, Malahal.






On Tue, Aug 16, 2016 at 2:34 AM, Deepak Naidu <[email protected]> wrote:
Thanks Malahal for your note.

>> What version are you using?
I am using NFS-Ganesha Release = V2.3.2. I can try to install Release =
V2.3.3 & see if that helps.

>> Most likely due to network settings
What settings are expected, and where (which config file)? Can you provide
some specifics?

Below are my /etc/sysctl.conf settings for network, for both client &
server.

# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.core.rmem_default = 16777216
net.core.rmem_max = 16777216

# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.core.wmem_default = 16777216
net.core.wmem_max = 16777216

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 65535

# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 300000

#Enable auto tuning
net.ipv4.tcp_moderate_rcvbuf = 1

#To increase TCP speed, disable timestamps
net.ipv4.tcp_timestamps=0

# Enable IP spoofing protection, turn on source route verification
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Turn on the tcp_window_scaling
net.ipv4.tcp_window_scaling = 1

# Increase the tcp-time-wait buckets pool size to prevent simple DOS
attacks
net.ipv4.tcp_max_tw_buckets = 1440000
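For reference, these settings can be loaded and spot-checked with the
standard sysctl tool (applying them requires root):

```shell
# Reload /etc/sysctl.conf so the settings above take effect
sysctl -p

# Spot-check the socket buffer limits relevant to NFS throughput
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
sysctl net.core.rmem_max net.core.wmem_max
```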



From: Malahal Naineni [mailto:[email protected]]
Sent: Monday, August 15, 2016 11:58 PM
To: Deepak Naidu
Subject: Re: [Nfs-ganesha-devel] [Nfs-ganesha-support] NFS-Ganesha
performance seems very low compared with kernel.nfs

Most likely due to network settings. What version are you using? Use V2.3.3
that has a fix for letting kernel tune the socket buffers (commit
58883960f9b4d2f0dcd506ea883343ad75881656).
Regards, Malahal.

On Mon, Aug 15, 2016 at 3:49 PM, Deepak Naidu <[email protected]> wrote:
Thanks Frank.

--
Deepak

From: Frank Filz [mailto:[email protected]]
Sent: Monday, August 15, 2016 1:13 PM
To: Deepak Naidu; [email protected]
Cc: 'nfs-ganesha-devel'
Subject: RE: [Nfs-ganesha-support] NFS-Ganesha performance seems very low
compared with kernel.nfs

Forwarding to the nfs-ganesha-devel list.

Most folks do not follow the support list.

Frank

From: Deepak Naidu [mailto:[email protected]]
Sent: Monday, August 15, 2016 9:48 AM
To: [email protected]
Subject: Re: [Nfs-ganesha-support] NFS-Ganesha performance seems very low
compared with kernel.nfs

I tried setting nb_worker to 40 and cache = yes, with no change in
behavior; the throughput is still not the same as kernel.nfs.

Is this the best NFS-Ganesha can perform? Has anyone seen a similar
issue?

--
Deepak

On Aug 12, 2016, at 4:33 PM, Deepak Naidu <[email protected]> wrote:
      Hello,

      I am trying to POC scalable NFS without compromising NFS
      throughput.

      I tried comparing NFS-Ganesha & kernel.nfs performance, but I see
      NFS-Ganesha performance to be very slow. Not sure if it's something
      to do with my config or this is what is expected.
      I am still hopeful that I can get NFS-Ganesha to perform better, as
      I see it literally stuck in the 500-550 IOPS range. Any help is
      appreciated.

      NFS server spec:  40Gig link, 256 GB RAM, 32 CPUs, RAID-5 SSD
      providing 7TB of space. In short, a very decent server for NFS.
      NFS client spec:  10Gig link, 2GB RAM, 1 vCPU on a VM.

      With the above elements being same for NFS-Ganesha & kernel.nfs I see
      very large performance difference.

      ======kernel.nfs========

      IO command using the FIO tool:
      fio --name=rand --ioengine=libaio --iodepth=32 --rw=randwrite --bs=4k
      --direct=1 --size=2g --numjobs=32 --group_reporting
      Performance:
      Jobs: 32 (f=32): [w(32)] [2.7% done] [0KB/21728KB/0KB /s]
      [0/5432/0 iops] [eta 50m:15s]
      NFS mount options (below):

      [root@GClient0 ~]# nfsstat -m
      /mnt/nfs from 1.2.3.4:/mnt/nfs_server
      Flags:
      
rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=1.2.3.55,local_lock=none,addr=1.2.3.4

      [root@GClient0 ~]#

      Kernel.nfs config (below):

      cat /etc/exports

      /mnt/nfs_server   *(rw,no_root_squash)



      ======ganesha.nfs========

      IO command using the FIO tool:
      fio --name=rand --ioengine=libaio --iodepth=32 --rw=randwrite --bs=4k
      --direct=1 --size=2g --numjobs=32 --group_reporting
      Performance:
      Jobs: 32 (f=32): [w(32)] [0.6% done] [0KB/2116KB/0KB /s]
      [0/529/0 iops] [eta 09h:00m:46s]
      NFS mount options (below):

      [root@GClient0 ~]# nfsstat -m
      /mnt/nfs from 1.2.3.4:/mnt/nfs_server
      Flags:
      
rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=1.2.3.55,local_lock=none,addr=1.2.3.4

      NFS-Ganesha config (below):

      cat /etc/ganesha/ganesha.conf

      EXPORT_DEFAULTS
      {
              SecType = sys;

              # Allow NFSv4 and NFSv3 unless otherwise specified
              Protocols = 4,3;
      }


      EXPORT
      {
              # Export Id (mandatory, each EXPORT must have a unique
      Export_Id)
              Export_Id = 1;

              # Exported path (mandatory)
              Path = /mnt/nfs_server;

              # Pseudo Path (required for NFS v4)
              Pseudo = /mnt/nfs_server;

              # Required for access (default is None)
              # Could use CLIENT blocks instead
              Access_Type = RW;
              Squash = No_Root_Squash;

              # Exporting FSAL
              FSAL {
                      Name = XFS;
              }

      }

      =========nfs-ganesha config end=============









------------------------------------------------------------------------------

_______________________________________________
Nfs-ganesha-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel




