Hello Frederik,

Just an addition, maybe it's of interest to someone: the maximum number of 
open files for Ganesha is based on maxFilesToCache. It's 80% of 
maxFilesToCache, clamped to a lower limit of 2'000 and an upper limit of 1M. 
The active setting is visible in /etc/sysconfig/ganesha.
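
For illustration, the rule above in numbers (my own sketch, not code taken 
from Ganesha or Scale):

def ganesha_open_file_limit(max_files_to_cache):
    # 80% of maxFilesToCache, clamped to the 2'000 / 1M bounds mentioned above
    return min(max(int(max_files_to_cache * 0.8), 2000), 1000000)

print(ganesha_open_file_limit(4000))      # -> 3200
print(ganesha_open_file_limit(1000))      # -> 2000    (lower clamp)
print(ganesha_open_file_limit(2000000))   # -> 1000000 (upper clamp)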

Cheers,

Heiner

On 19.09.19, 16:37, "gpfsug-discuss-boun...@spectrumscale.org on behalf of 
Frederik Ferner" <gpfsug-discuss-boun...@spectrumscale.org on behalf of 
frederik.fer...@diamond.ac.uk> wrote:

    Heiner,
    
    we are seeing similar issues with CES/ganesha NFS, in our case exclusively 
    with NFSv3 clients.
    
    What is maxFilesToCache set to on your ganesha node(s)? In our case 
    ganesha was running into the limit of open file descriptors because 
    maxFilesToCache was set at a low default and for now we've increased it 
    to 1M.
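
    A small sketch to compare the current fd count against the limit (my own 
    illustration, not from any IBM tooling; it assumes the daemon is called 
    ganesha.nfsd and that the script runs as root):

    import os, subprocess

    # PID of the NFS server (assumes the process name ganesha.nfsd)
    pid = subprocess.check_output(["pidof", "ganesha.nfsd"]).split()[0].decode()

    # file descriptors currently open
    open_fds = len(os.listdir(f"/proc/{pid}/fd"))

    # soft limit on open files, parsed from /proc/<pid>/limits
    with open(f"/proc/{pid}/limits") as f:
        soft = next(int(l.split()[3]) for l in f if l.startswith("Max open files"))

    print(f"ganesha: {open_fds} fds open, soft limit {soft}")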
    
    It seemed that ganesha was never releasing files even after clients 
    unmounted the file system.
    
    We've only recently made the change, so we'll see how much that improved 
    the situation.
    
    I thought we had a reproducer, but after our recent change I can no longer 
    reproduce the increase in open files that are not released.
    
    Kind regards,
    Frederik
    
    On 19/09/2019 15:20, Billich  Heinrich Rainer (ID SD) wrote:
    > Hello,
    > 
    > Is it usual to see 200’000-400’000 open files for a single ganesha 
    > process? Or does this indicate that something is wrong?
    > 
    > We have some issues with ganesha (on Spectrum Scale protocol nodes) 
    > reporting NFS3ERR_IO in the log. I noticed that the affected nodes have a 
    > large number of open files, 200’000-400’000 open files per daemon (and 
    > 500 threads and about 250 client connections). Other nodes have only 
    > 1’000 – 10’000 files open by ganesha and don’t show the issue.
    > 
    > If someone could explain how ganesha decides which files to keep open 
    > and which to close, that would help, too. As NFSv3 is stateless, the 
    > client doesn’t open/close a file, so it’s up to the server to decide when 
    > to close it? We do have a few NFSv4 clients, too.
    > 
    > Are there certain access patterns that can trigger such a large number 
    > of open files? Maybe traversing and reading a large number of small files?
    > 
    > Thank you,
    > 
    > Heiner
    > 
    > I counted the open files by counting the entries in /proc/<pid of 
    > ganesha>/fd/. With several 100k entries, running ‘ls -ls’ to list all the 
    > symbolic links failed, hence I can’t easily relate the open files to the 
    > different exports.
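    > 
    > A sketch of how one could tally the fd targets without ls (just an 
    > illustration; grouping by the first two path components is an assumption 
    > about the export layout, e.g. /gpfs/<fileset>):
    > 
    > import os
    > from collections import Counter
    > 
    > fd_dir = "/proc/<pid of ganesha>/fd"   # fill in the actual PID
    > per_export = Counter()
    > for fd in os.listdir(fd_dir):
    >     try:
    >         target = os.readlink(os.path.join(fd_dir, fd))
    >     except OSError:                    # fd closed while we were scanning
    >         continue
    >     per_export["/".join(target.split("/")[:3])] += 1
    > 
    > for path, count in per_export.most_common(20):
    >     print(f"{count:8d}  {path}")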
    > 
    > I did post this to the ganesha mailing list, too.
    > 
    > -- 
    > =======================
    > Heinrich Billich
    > ETH Zürich
    > Informatikdienste
    > Tel.: +41 44 632 72 56
    > heinrich.bill...@id.ethz.ch
    > ========================
    > 
    
    
    
