Just as an additional argument for this feature, I'd like to add that on Linux 
a process is limited to 1024 file descriptors by default.  Even if I could 
allocate all of them to Logback appenders, and even if every job sent the 
"end-of-session" token on completion, it still would not be enough, because I 
have a requirement to handle more than that many *concurrent* jobs in the 
system.  In general this shouldn't be a problem, since most jobs log messages 
relatively infrequently and some may be very long-running (days).  But I cannot 
guarantee that a large number of jobs won't all log a message within some fixed 
time period (30 minutes or otherwise), which would exhaust the FDs under the 
current implementation.  In this scenario I'm not all that concerned about the 
performance overhead of starting and stopping appenders; I'm much more 
concerned about correctness and not exhausting a limited resource.
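
To make the proposal concrete, here is roughly what the configuration would 
look like.  The maxAppenders element is the knob from my patch (its exact name 
and placement may still change); the rest is an ordinary SiftingAppender setup 
and only illustrative:

    <configuration>
      <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
        <!-- proposed knob: cap the number of live sub-appenders
             (and thus the FDs they hold) -->
        <maxAppenders>100</maxAppenders>
        <discriminator>
          <key>jobId</key>
          <defaultValue>unknown</defaultValue>
        </discriminator>
        <sift>
          <appender name="FILE-${jobId}" class="ch.qos.logback.core.FileAppender">
            <file>jobs/${jobId}.log</file>
            <encoder>
              <pattern>%d %level %msg%n</pattern>
            </encoder>
          </appender>
        </sift>
      </appender>
    </configuration>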

In general I agree with your point that it is reasonable for the application to 
attempt to convey that it is finished with the logger.  But considering that 
SiftingAppender has the potential to allocate and hold a large number of file 
descriptors, I think some configuration knobs are warranted.  The default 
behavior with my changes is exactly as before, and the implementation is 
actually notably less complex, because the DIY linked list in AppenderTracker 
is replaced with a LinkedHashMap.
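
For the curious, the core of the idea is just this.  It is an illustrative 
sketch, not the actual patch; the real AppenderTracker also handles the 
timeout-based cleanup and the appender lifecycle:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Simplified LRU-style tracker built on LinkedHashMap.
    class LruTracker<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        LruTracker(int maxEntries) {
            // accessOrder=true: iteration order is least-recently-accessed first
            super(16, 0.75f, true);
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Once over the cap, evict (and, in the real code, stop) the
            // least-recently-used appender so its FD is released.
            return size() > maxEntries;
        }
    }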

-Tommy


________________________________________
From: [email protected] [[email protected]] on behalf of 
Becker, Thomas [[email protected]]
Sent: Wednesday, October 24, 2012 3:53 PM
To: logback developers list
Subject: Re: [logback-dev] How to contribute to logback?

I won't say that I couldn't restructure the code so that such an 
"end-of-session" point could be identified.  I will say that I don't think that 
is a general solution to the problem.  In your scenario you are correct that 
with my changes the 101st request will result in the oldest appender getting 
closed and a new one getting opened.  And yes, that "thrashing" will continue 
as long as we hover above the 100-request mark.  But things will work, and 
work as they should.  If performance is degraded, my options are to decide 
whether I can afford to increase this maximum (which, keep in mind, is a 
maximum I deliberately chose, since the default is unbounded) or to address it 
some other way.  My application will not go down in flames because it can't 
open a socket or some such thing that requires an FD, just because the logging 
system has decided it can consume as many FDs as it wants.  I would consider a 
temporary performance degradation preferable to failure, wouldn't you?
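
To make that thrashing concrete, here is a toy run against the 
LinkedHashMap-based tracker sketched above (LruTracker is the illustrative 
class from that sketch, not the actual patch):

    public class ThrashDemo {
        public static void main(String[] args) {
            // cap of 100 live appenders, keyed by job id
            LruTracker<String, String> tracker = new LruTracker<>(100);
            for (int i = 1; i <= 101; i++) {
                tracker.put("job-" + i, "appender-" + i);
            }
            // The 101st insert evicted the least-recently-used entry:
            System.out.println(tracker.size());               // 100
            System.out.println(tracker.containsKey("job-1")); // false
        }
    }

Each eviction at the boundary closes one file and opens another; that is the 
cost, and it is bounded, unlike the FD usage without a cap.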

Regards,
Tommy

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf 
Of ceki
Sent: Wednesday, October 24, 2012 3:35 PM
To: logback developers list
Subject: Re: [logback-dev] How to contribute to logback?

On 24.10.2012 20:46, Becker, Thomas wrote:
> Thanks, I'll look into changing the configuration to use elements.
>

> I was not aware of the FINALIZE_SESSION marker, though I don't think
> it would work for our use case.  My RFE was originally just to make
> the appender timeout configurable.  But then I thought about it more
> and decided the real problem was that there is no way to cap the
> number of sub-appenders (and the scarce resources they consume, like
> FDs) that can be spun up in response to a burst of activity.  In our
> case, we expose a job engine to clients and use SiftingAppender to
> direct each job to its own log.  When we got a flood of new job
> submissions, we ran out of FDs, which crippled the system in all
> sorts of ways that should not be affected by logging.  But now we can
> cap the number of appenders we want to allow, and clients don't need
> to know to pass a marker stating they're done with the logger.  So I
> guess I'm saying that although the marker is nice, the maxAppenders
> setting is more like a safety valve to keep Bad Things from happening.

Capping the max number of sub-appenders sounds like what *not* to do in your 
scenario.  For example, if the cap is 100 and 101 requests are received in a 
short amount of time, then you will be prematurely closing and re-opening 
sub-appenders.  Reconfiguring a sub-appender is not exactly cheap.

I reiterate my question. Can you identify an end-of-session point in your code 
after which resources can be released?
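
For reference, if such a point exists, marking it is a one-liner.  Something 
like the following, where the job structure and MDC key are illustrative, but 
the marker itself is what SiftingAppender keys on:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;
    import ch.qos.logback.classic.ClassicConstants;

    public class JobRunner {
        private static final Logger logger =
                LoggerFactory.getLogger(JobRunner.class);

        void run(String jobId) {
            MDC.put("jobId", jobId);  // discriminator key for SiftingAppender
            try {
                logger.info("job started");
                // ... do the work ...
            } finally {
                // Tells SiftingAppender this session is over, so the
                // sub-appender (and its FD) can be released promptly.
                logger.info(ClassicConstants.FINALIZE_SESSION_MARKER,
                        "job finished");
                MDC.remove("jobId");
            }
        }
    }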

--
Ceki
65% of statistics are made up on the spot 
_______________________________________________
logback-dev mailing list
[email protected]
http://mailman.qos.ch/mailman/listinfo/logback-dev
