Hello,

Is it usual to see 200’000-400’000 open files for a single ganesha process? Or 
does this indicate that something is wrong?

We have some issues with ganesha (on Spectrum Scale protocol nodes) reporting 
NFS3ERR_IO in the log. I noticed that the affected nodes have a large number of 
open files, 200’000-400’000 per daemon (and 500 threads and about 250 client 
connections). Other nodes have only 1’000-10’000 files open by ganesha and 
don’t show the issue.

If someone could explain how ganesha decides which files to keep open and which 
to close, that would help, too. As NFSv3 is stateless, the client doesn’t 
open/close a file; it’s up to the server to decide when to close it? We do have 
a few NFSv4 clients, too.
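
For what it’s worth, my current understanding (parameter names taken from the 
ganesha sample configs; I’m not sure which of them apply to the Spectrum Scale 
build, so please treat this as an assumption and correct me): ganesha caches 
open file descriptors and a background LRU reaper closes cached fds once a 
high-water mark relative to the process fd limit is crossed, tunable roughly 
like this:

    CACHEINODE {
        # Percent of the process fd limit at which the reaper starts
        # closing cached fds (values shown are what I believe are the
        # documented defaults)
        FD_HwMark_Percent = 90;
        # Percent the reaper tries to get back down to
        FD_LwMark_Percent = 50;
        # Hard ceiling, as a percent of the fd limit
        FD_Limit_Percent = 99;
        # How often the LRU/reaper thread runs, in seconds
        LRU_Run_Interval = 90;
    }

If that is roughly right, then several 100k open fds would only be expected 
when the process fd limit is set very high, or when the reaper cannot keep up.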

Are there certain access patterns that can trigger such a large number of open 
files? Maybe traversing and reading a large number of small files?

Thank you,
Heiner

I counted the open files by counting the entries in /proc/<pid of 
ganesha>/fd/. With several 100k entries I failed to run ‘ls -ls’ to list all 
the symbolic links, hence I can’t easily relate the open files to different 
exports.
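
In case it is useful to others: below is a small Python sketch I plan to try 
instead of ls, bucketing the fd symlink targets by their leading path 
components. The two-component grouping is just a guess at how our GPFS mount 
paths look; adjust as needed (needs to run as root, or as the ganesha user).

    import os, sys
    from collections import Counter

    # Count the entries in /proc/<pid>/fd and bucket the symlink targets
    # by their leading path components, so the open files can be related
    # to filesystems/exports without listing 100k+ entries with ls.
    pid = sys.argv[1]
    buckets = Counter()
    for entry in os.scandir(f"/proc/{pid}/fd"):
        try:
            target = os.readlink(entry.path)
        except OSError:
            continue  # fd was closed between scandir() and readlink()
        if not target.startswith("/"):
            # sockets, pipes, anon inodes: bucket by type only
            buckets[target.split(":")[0]] += 1
            continue
        # e.g. /gpfs/fs1/export1/... -> bucket "/gpfs/fs1"
        parts = target.split("/")
        buckets["/".join(parts[:3])] += 1

    for prefix, count in buckets.most_common():
        print(f"{count:>8}  {prefix}")

Run as e.g. ‘python3 count_fds.py <pid of ganesha>’ on an affected node.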

I posted this to the ganesha mailing list, too.
--
========================
Heinrich Billich
ETH Zürich
Informatikdienste
Tel.: +41 44 632 72 56
[email protected]
========================

