I have an HPC application: a compute grid of Linux systems that read a common 
set of data from read-only NFS mounts and use that data to produce results 
which are consumed by an IBM mainframe.

I have run into some problems with the Linux kernel NFS client, and I am 
wondering whether I would hit the same issue on OpenSolaris x86.  A port would 
involve significant work, so I would like to get an idea of whether I would 
have the same issues before undertaking this development effort.

The NFS workload is 100% read-only (RO mount) and consists of the following:

GETATTR 31.34%
LOOKUP 6.61%
[b] ACCESS 44.15% [/b]
READ 11.98%
READDIRPLUS 5.21%
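The breakdown above can be derived from raw per-operation client counters (on Linux, `nfsstat -c` reports them). A minimal sketch, using hypothetical counts chosen to reproduce the mix above (the small OTHER bucket covers the remaining ~0.71% of calls):

```python
# Hypothetical per-op RPC counters, e.g. sampled from `nfsstat -c`.
counts = {
    "GETATTR": 31340,
    "LOOKUP": 6610,
    "ACCESS": 44150,
    "READ": 11980,
    "READDIRPLUS": 5210,
    "OTHER": 710,       # remaining ops (FSINFO, etc.)
}

total = sum(counts.values())
# Percentage of the total call mix contributed by each operation.
mix = {op: round(100.0 * n / total, 2) for op, n in counts.items()}

for op, pct in sorted(mix.items(), key=lambda kv: -kv[1]):
    print(f"{op:<12} {pct:.2f}%")
```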

The problem is that NAS appliances are expensive to deploy and maintain, so 
each and every NFS operation has a cost associated with it.  [b] In our case, 
44% of our NFS operations are unnecessary, as we don't use any POSIX ACLs in 
our application.  By eliminating these RPC calls, we would save a considerable 
amount of money on storage. [/b]
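To make the density argument concrete, here is a back-of-envelope sketch. The ops/sec figures are illustrative assumptions, not measurements; only the 44.15% ACCESS share comes from the workload above:

```python
# If a NAS head sustains a fixed NFS ops/sec budget, removing the
# ACCESS calls lets each head serve proportionally more clients.
access_share = 0.4415      # fraction of our RPC mix that is ACCESS
ops_per_node = 10_000      # assumed ops/sec generated by one compute node
nas_capacity = 500_000     # assumed ops/sec one NAS head can sustain

nodes_before = nas_capacity / ops_per_node
nodes_after = nas_capacity / (ops_per_node * (1 - access_share))

print(f"nodes per NAS head: {nodes_before:.0f} -> {nodes_after:.0f} "
      f"({nodes_after / nodes_before:.2f}x density)")
```

Whatever the absolute numbers, eliminating a 44% slice of the call mix yields roughly a 1.79x improvement in node density per appliance.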

If we were able to disable the ACCESS RPC, we could achieve a higher density 
of compute nodes per NAS head.  The Linux 2.4 kernels had a "noacl" mount 
option that suppressed these calls, which is exactly what we need for our 
application.  However, that option was dropped in the 2.6 kernels and will 
not be reintroduced.
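For reference, this is roughly what we used on the 2.4 clients. The export path and actimeo value are hypothetical; "noacl" is the option that suppressed the ACCESS RPCs:

```
# /etc/fstab entry on a Linux 2.4 compute node (illustrative):
# ro      - read-only mount, matching our workload
# noacl   - suppress NFSv3 ACCESS RPCs (dropped in 2.6)
# actimeo - lengthen attribute caching to cut GETATTR traffic
nas1:/export/data  /data  nfs  ro,noacl,actimeo=600  0  0
```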

With that context, I have a few questions about the OpenSolaris NFS client:

1) Does the OS NFS client support a way to suppress ACCESS calls?
2) Does the OS NFS client have a memory cache similar to file-system buffers?  
Our application does a huge amount of disk I/O, and we currently rely on the 
NFS memory cache in the Linux kernel to reduce our NFS ops/sec.  For 
reference, 32GB of RAM is enough to eliminate 95% of our disk I/O.
3) Does the OS NFS client support an attribute cache, and the nocto option?
-- 
This message posted from opensolaris.org