On Thu, Jul 03, 2008 at 03:28:29PM +0200, Elmar Pruesse wrote:
> Hello Knut!
> 
> I second your request! :)
> 
> We are also using NFS with generally good performance. It works
> completely fine with a compute cluster and dozens of workstations
> connected to it. But you are right, performance in a SunRay environment
> is less than optimal.
> 
> Apparently, there is no fair scheduling of file I/O requests by the
> Linux NFS client. If one user starts a large copy job, the next may wait
> seconds for his/her 'ls' to return.
> 
> Does anyone know what causes this, or even better how to fix it?

The following is what I've gathered from following LKML; if someone knows any
of this to be incorrect, please do correct me :)


The VM (virtual memory) subsystem generally interacts with the regular
filesystems and the buffer cache to ensure that large I/O jobs run smoothly,
by applying memory pressure in clever ways.
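You can watch that interaction on any Linux box (a quick generic diagnostic,
nothing NFS-specific) by polling the dirty and writeback page counters while a
large copy is in flight:

```shell
# Show how much data is dirty (modified in memory, not yet written
# out) and how much is currently under writeback.  Run this
# repeatedly during a big write; on a well-behaved local filesystem
# the Dirty figure stays bounded as memory pressure forces writeback.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

Comparing these numbers during a large write to local disk versus to an NFS
mount makes the difference in cache management fairly visible.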

Trond, who's in charge of the NFS client (and who's done a fantastic job on
it), has not (for whatever reason) changed the VM to accommodate the needs of
NFS. And the VM maintainers, it seems, do not care to understand NFS.

So the end result is that memory pressure is not applied and caches are not
managed the way they should be when the NFS client is doing I/O. Writes tend
to be spectacularly bad; reads are usually quite OK in my experience.
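If the underlying problem is unbounded dirty-page buildup, one workaround
people use (a hedged suggestion -- I have not verified it helps on SunRay
servers specifically) is to lower the kernel's dirty-memory thresholds so
writeback starts earlier and a single writer is throttled before it fills the
cache. On 2.6 kernels the knobs live in /etc/sysctl.conf:

```shell
# /etc/sysctl.conf -- start background writeback earlier and block
# heavy writers sooner.  The values are illustrative; tune to taste.
vm.dirty_background_ratio = 2    # start async writeback at 2% of RAM dirty
vm.dirty_ratio = 10              # throttle writers at 10% of RAM dirty
```

Apply with 'sysctl -p'. This doesn't fix the NFS/VM interaction, it just
caps how much damage one large write can do.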

I have no idea if it is fixed yet, as I run distro kernels and not the bleeding
edge.

> The
> only idea that comes to my mind is moving from a single mounted '/home'
> to using automounter, but I'm not so sure it will help. Currently,
> everything is moved through a single nfs3/tcp link. But would more
> mountpoints change that?

I doubt it. We use automounter and one user can easily hang a system
(temporarily) for the others by doing large NFS writes.
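For completeness, our automounter setup is roughly the following (server name
and export path are placeholders, adjust for your site). Each user gets a
separate mount point, but with NFSv3 over TCP the kernel typically still
multiplexes all mounts to the same server over a single connection, which is
presumably why the extra mount points don't help:

```shell
# /etc/auto.master -- hand /home over to the automounter
/home   /etc/auto.home  --timeout=600

# /etc/auto.home -- wildcard map: one NFS mount per user; '&' expands
# to the looked-up key (the username).  "nfsserver" is a placeholder.
*   -fstype=nfs,tcp,rw,hard,intr   nfsserver:/export/home/&
```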

-- 

 / jakob

_______________________________________________
SunRay-Users mailing list
[email protected]
http://www.filibeto.org/mailman/listinfo/sunray-users
