It's not really per job. Connections to the HTTP(S) interface also use file descriptors, so you need to factor in how many clients you will have connecting and whether they use persistent connections, plus the slaves and the type of slaves, how many plugins you have installed, which version of the JDK you have and how it's tuned (storing the compiled machine code). The upper burst limit may also depend on what your job does and what type it is; e.g. archiving files will create some FDs whilst the files are being copied across (they should be closed afterwards).

I used 10k for several thousand jobs with a few hundred users. I was monitoring it for a while (several years ago) and it did stabilize; I can't recall the exact number.

That's not to say there isn't a leak somewhere, but if you up the limit and track it with 'lsof' over a period of days/weeks I think you will see that it is relatively stable; if not, you should at least see a common trend of something that is constantly increasing. (A combination of awk, sort and uniq should be able to show any upward trends; see the sketch below. If you have an executor on the same OS as the master you could also do this through Jenkins and use the plot plugin to visualize it. :-)
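
For example, a minimal sketch (it assumes the master runs as a 'jenkins' user and that lsof is installed; both are assumptions about your setup):

    # Group the jenkins user's open descriptors by file name; anything
    # that climbs steadily across repeated runs is a candidate leak.
    lsof -u jenkins 2>/dev/null | awk '{print $NF}' | sort | uniq -c | sort -rn | head -20

Running that periodically and diffing the output over days should make any upward trend obvious.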

/James





On 07/01/2015 22:05, Sean Last wrote:
Ok, I'll up the limit, but is there any metric I can use for what's reasonable versus what's worrisome in an average case / per job?

On Wednesday, January 7, 2015 5:00:55 PM UTC-5, James Nord wrote:

    The default max file handle limit in most Linux installs (1024) is
    woefully inadequate for Jenkins.

    Java itself will have many open files for libs and jars; Jenkins
    will then have them for jobs, users and slaves.
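
    For example, a sketch of raising the limit for the jenkins user via
    /etc/security/limits.conf (the user name and the value of 10000 are
    assumptions for your setup; on systemd-based systems you would set
    LimitNOFILE in the service unit instead):

        # /etc/security/limits.conf -- raise the per-process fd limit
        jenkins  soft  nofile  10000
        jenkins  hard  nofile  10000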

    I recall also that the lib used by the fd plugin doesn't count all
    file descriptors, and I think I submitted a patch. It will
    certainly only see descriptors opened after it is hooked up, which
    is after Java itself has got many handles, and that can give you a
    significant difference.

    I would up the limit and then run a periodic check on the file
    handles to verify that there are no leaks over time.
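
    For example, a hypothetical cron entry (the log path is an
    assumption) that records an hourly count you could later plot:

        # /etc/cron.d/jenkins-fd-check -- log hourly fd counts for the
        # jenkins user so trends are easy to spot
        0 * * * * root echo "$(date) $(lsof -u jenkins | wc -l)" >> /var/log/jenkins-fd.log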

    On 7 January 2015 20:21:50 GMT+00:00, Sean Last
    <[email protected]> wrote:

        Yes, restarting Jenkins completely clears the open files for
        the jenkins user, even though the Jenkins application is
        unaware of them.

        On Wednesday, January 7, 2015 3:06:39 PM UTC-5, LesMikesell
        wrote:

            On Wed, Jan 7, 2015 at 1:55 PM, Sean Last
            <[email protected]> wrote:
            > And I know I could just up the open files limit for the jenkins user,
            > but I'd really like to know why this is happening so it doesn't just
            > keep growing until it's full regardless of where I put the limit.

            Off the top of my head, a bug in the JVM you are using
            sounds likely. Have you tried different versions or checked
            its issues? And does a jenkins restart drop the number
            significantly compared to after running a long time?

            --
            Les Mikesell
            [email protected]

