"Matt W. Benjamin" <m...@cohortfs.com> wrote on 07/29/2015 01:38:14 PM:

> From: "Matt W. Benjamin" <m...@cohortfs.com>
> To: Marc Eshel/Almaden/IBM@IBMUS
> Cc: "NFS Ganesha Developers (Nfs-ganesha-
> de...@lists.sourceforge.net)" <Nfs-ganesha-devel@lists.sourceforge.net>
> Date: 07/29/2015 01:38 PM
> Subject: Re: inode cache
> 
> Hi Marc,
> 
> Probably.  I was writing to malahal on IRC that we have code changes that
> will reduce lock contention on xprt->xp_lock a LOT, and more changes coming
> that will address latency in dispatch and reduce locking in SAL.  The first
> of those changes should hopefully still land this week.
> 
> One thing I think could be out of whack is the LRU lane selector; I can
> send a hotfix if we have a skewed object-lane distribution in LRU.
> Alternatively, there is tuning for the number of partitions and the size
> of a per-partition hash table in both the cache_inode "hash" and HashTable
> (used in a lot of other places) which could apply, if that's the bottleneck.
> 
> Do you have a trivial reproducer to experiment with?

This is a customer application so I cannot share it, but please send me a 
patch and I can report the before and after numbers.
Thanks, Marc.
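
(For reference, the partition / hash-table-size tuning mentioned above is 
presumably controlled by the CACHEINODE block of the Ganesha config. The 
parameter names below are assumptions from memory and should be checked 
against the cache_inode configuration documentation for your version; the 
values are only illustrative:

    CACHEINODE {
        # Assumed: number of partitions in the cache_inode hash.
        # More partitions reduce lock contention across lanes.
        NParts = 17;

        # Assumed: per-partition hash table size; typically a prime
        # near (expected cached entries / NParts).
        Cache_Size = 92837;
    }

Raising the partition count mainly helps when many threads hammer the cache 
concurrently, which sounds like the stat-heavy workload here.)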

> 
> Matt
> 
> ----- "Marc Eshel" <es...@us.ibm.com> wrote:
> 
> > Hi Matt,
> > I see bad performance when stat'ing millions of files; the inode cache
> > is set to 1.5 million. Are there any configuration changes that I can
> > make to the inode cache, or even code changes to some hard-coded values,
> > that would help with performance on a big number of files?
> > Thanks, Marc.
> 
> -- 
> Matt Benjamin
> CohortFS, LLC.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
> 
> http://cohortfs.com
> 
> tel.  734-761-4689 
> fax.  734-769-8938 
> cel.  734-216-5309 
> 


------------------------------------------------------------------------------
_______________________________________________
Nfs-ganesha-devel mailing list
Nfs-ganesha-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/nfs-ganesha-devel