No, this is an NFS server setting, but I'm not sure that it's tunable on
Isilon. On Linux and Solaris, it defaults to some very low value, which is
fine for sequential I/O but really slows down random I/O. On Linux,
RPCNFSDCOUNT can be tuned from the default of 8 to 512, which is fine as
long as
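For anyone wanting to try this, on a RHEL-family Linux NFS server the thread count is typically controlled via RPCNFSDCOUNT; the config file path and the value 64 below are assumptions for illustration, not recommendations from this thread:

```shell
# Check the current number of nfsd threads on a Linux NFS server
cat /proc/fs/nfsd/threads

# Raise it persistently (RHEL-family path assumed; 64 is illustrative)
echo 'RPCNFSDCOUNT=64' >> /etc/sysconfig/nfs
systemctl restart nfs-server
```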
We did some quick research and "NFS thread" controls don't apply in our
situation and can't be set. Or are you referring to the mountlimit value
for the devclass?
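For reference, the devclass mount limit can be checked and changed from a TSM administrative session; the device class name and credentials below are hypothetical placeholders:

```shell
# Show the current MOUNTLIMIT for a (hypothetical) device class
dsmadmc -id=admin -password=secret "QUERY DEVCLASS ISILON_FILE FORMAT=DETAILED"

# Raise the mount limit (value is illustrative)
dsmadmc -id=admin -password=secret "UPDATE DEVCLASS ISILON_FILE MOUNTLIMIT=8"
```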
On Mon, May 14, 2018 at 9:31 AM, Skylar Thompson wrote:
> This sounds pretty good to me. If you can, I would boost
Thanks for the ideas. Much appreciated. I also looked at some AIX stats
today and they have pointed me at a few hypervisor tuning options.
Cheers
Steve.
On Mon, 14 May 2018, 23:32 Skylar Thompson, wrote:
> This sounds pretty good to me. If you can, I would boost your NFS
This sounds pretty good to me. If you can, I would boost your NFS thread
count past the number of CPUs that you have, since a lot of NFS is just
waiting for the disks to respond. You still need a thread for that, but it
won't consume much CPU.
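On a Linux NFS server the thread count can also be bumped on the fly to test this advice before making it persistent; the value 128 is illustrative:

```shell
# Change the running nfsd thread count without a restart (Linux)
rpc.nfsd 128
# or equivalently:
echo 128 > /proc/fs/nfsd/threads

# The "th" line in the pool stats shows whether all threads
# have been busy at once (a sign the count is too low)
cat /proc/net/rpc/nfsd
```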
On Mon, May 14, 2018 at 08:27:27AM -0400, Zoltan
Do you see consistent NFS throughput, or is it bursty? We've never used
Isilon as storage for TSM, but we have had problems generally with too-low
NFS timeouts causing NFS to back off for too long. You can also see this
problem manifest itself with NFS timeout messages in the kernel log.
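A quick sketch of how to check for those symptoms and try longer timeouts; the mount point and the timeo/retrans values are assumptions, not settings from this thread (note timeo is in tenths of a second):

```shell
# Look for the classic back-off messages in the kernel log
dmesg | grep -i 'not responding'

# Inspect the timeo/retrans options on current NFS mounts
grep nfs /proc/mounts

# Remount with a longer timeout and more retries (illustrative values)
mount -o remount,timeo=600,retrans=5 /mnt/isilon
```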
On Fri,
Very interesting. This supports my idea on how I want to lay out the
new/replacement server. The old server is only 16 threads and certainly
could not handle dedup (we can't afford any appliances like DD) since it is
buckling under the current backup traffic. The new server has 72 threads as
well
AIX does support NFSv4. I'd love to get the time to try it out, but
DataDomains only support NFSv4 with the newest release and we are several
levels back.
https://www.redbooks.ibm.com/redbooks/pdfs/sg246657.pdf
When our admins found that multiple/concurrent mount points helped for