On Sun, Nov 27, 2005 at 01:27:38AM -0800, Mike Eubanks wrote:
> On Sat, 2005-11-26 at 21:49 -0500, Chuck Swiger wrote:
> > Mike Eubanks wrote:
> > > As soon as I mount my NFS file systems, the network load increases to a
> > > constant 80%-90% of network bandwidth, even when the file systems are
> > > not in use.  NFS stats on the client machine (nfsstat -c) produce the
> > > following:
> > [ ... ]
> > > Fsstat and Requests are increasing very rapidly.  Both the client and
> > > server are i386 5.4-STABLE machines.  Is this behaviour normal?
> > 
> > Sort of.  Some fancy parts of X, like file-manager/explorer
> > applications, tend to call fstat() a lot, but it's probably tunable,
> > and enabling NFS attribute caching will help a lot.
> 
>   Thank you for the reply, Chuck.  It seems it has something to do
> with Gnome.  I haven't upgraded to 2.12 yet, but the change did happen
> when I refreshed my user configuration to remove any stale config
> files.  Using the "top -mio" command I get the following:
> 
> VCSW  IVCSW   READ  WRITE  FAULT  TOTAL PERCENT COMMAND
>   38     56      0      0      0      0   0.00% libgtop_server
>   94     16      0      0      0      0   0.00% Xorg
>    4      0      0      0      0      0   0.00% top
>    0      0      0      0      0      0   0.00% mozilla-bin
>  115     40      0      0      0      0   0.00% multiload-appl
>   42      1      0      0      0      0   0.00% anjuta-bin
>    0      0      0      0      0      0   0.00% evolution-2.2
>  130      9      0      0      0      0   0.00% gnome-terminal
>   15     10      0      0      0      0   0.00% clock-applet
>   42      0      0      0      0      0   0.00% mixer_applet2
>   10      0      0      0      0      0   0.00% metacity
>    3      0      0      0      0      0   0.00% nautilus
>    4      0      0      0      0      0   0.00% wnck-applet

That doesn't look like it's showing a problem to me; in particular,
it indicates zero I/O.

>                                      +---- file-manager/explorer?
>                                      |
> client.220312819 > server.nfs: 96 fsstat [|nfs]
> server.nfs > client.220312819: reply ok 168 fsstat POST: DIR 755 ids
> 1001/0 [|nfs]
> client.220312820 > server.nfs: 96 fsstat [|nfs]
> server.nfs > client.220312820: reply ok 168 fsstat POST: DIR 755 ids
> 1001/0 [|nfs]
> client.220312821 > server.nfs: 96 fsstat [|nfs]
> server.nfs > client.220312821: reply ok 168 fsstat POST: DIR 755 ids 0/0
> [|nfs]
> client.220312822 > server.nfs: 96 fsstat [|nfs]
> server.nfs > client.220312822: reply ok 168 fsstat POST: DIR 755 ids 0/0
> [|nfs]
> client.220312823 > server.nfs: 96 fsstat [|nfs]
> server.nfs > client.220312823: reply ok 168 fsstat POST: DIR 755 ids 0/0
> [|nfs]
> 
> If this is enough evidence for the file-manager/explorer,

It's evidence that something is performing NFS I/O, but it doesn't show
what.  Perhaps you needed to also use the top -S flag, or to sort the
output by typing 'o' followed by 'total'.
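The suggestion above might look like this in practice (a sketch; check
top(1) on your system, since flag handling varies between versions):

```shell
# I/O mode, including system processes (-S), so kernel NFS activity
# shows up in the listing:
top -m io -S
# Then press 'o' and enter "total" at the prompt to sort the display
# by the TOTAL I/O column.
```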

> I'll just have
> to accept it for now.  I can't find anything about tuning them.  As
> for attribute caching, do you mean the `-o ac*' options to mount_nfs?
> I also noticed two sysctl values, although I left them unmodified.
> 
> vfs.nfs.access_cache_timeout: 2
> vfs.nfs4.access_cache_timeout: 60

Increase the former (you're not using nfs4).  Try 60 seconds, for
example.  The downside is that you'll have to wait up to a minute for
access changes on the server to be visible to the client, but that's
usually not a big deal unless you're accessing a lot of dynamically
created and destroyed files.
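A sketch of that tuning, assuming a FreeBSD 5.x client; the 60-second
value is the example above, and the mount_nfs attribute-cache timeouts
at the end are illustrative, not recommendations:

```shell
# Raise the NFSv3 access-cache timeout to 60 seconds (takes effect
# immediately; run as root on the client):
sysctl vfs.nfs.access_cache_timeout=60

# Make the setting persistent across reboots:
echo 'vfs.nfs.access_cache_timeout=60' >> /etc/sysctl.conf

# Attribute caching can also be tuned per-mount with the -o ac*
# options to mount_nfs (min/max timeouts for regular files and
# directories; values here are illustrative):
mount_nfs -o acregmin=3,acregmax=60,acdirmin=30,acdirmax=60 \
    server:/export /mnt
```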

Kris
