Jim Gallacher <[EMAIL PROTECTED]> writes:

> Nicolas Lehuen wrote:
>> In that case, setting up the logging handler should be done by the user, 
>> making sure that it is set up only once per interpreter, even in the 
> context of a multi-threaded MPM. It's not a trivial thing; looks like 
>> this is a job for PythonImport.
>
> Except that you won't have a server reference to get the virtual host 
> configuration. If you are using a custom log for each virtual host, 
> won't your error messages end up in the wrong log?

I was not arguing for mod_python-managed logger creation, only that
the mod_python distribution include a module that provides the glue
between Apache logging and Python logging.
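
For concreteness, here is a minimal sketch of the sort of glue module
I mean. The class name matches the one in Jim's example below; the
level mapping is only illustrative:

    import logging
    from mod_python import apache

    class ApacheLogHandler(logging.Handler):
        """Forwards Python logging records to the Apache error log."""

        # Illustrative mapping from Python logging levels to Apache
        # log levels.
        LEVELS = {
            logging.DEBUG:    apache.APLOG_DEBUG,
            logging.INFO:     apache.APLOG_INFO,
            logging.WARNING:  apache.APLOG_WARNING,
            logging.ERROR:    apache.APLOG_ERR,
            logging.CRITICAL: apache.APLOG_CRIT,
        }

        def __init__(self, req):
            logging.Handler.__init__(self)
            self.req = req

        def emit(self, record):
            # Emit synchronously: the record is written before the
            # request ends, and req.log_error() goes to the error
            # log of the virtual host that handled the request.
            level = self.LEVELS.get(record.levelno, apache.APLOG_ERR)
            self.req.log_error(self.format(record), level)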

Having mod_python create the logger automatically would not be right,
IMHO, because it is unnecessary for code that doesn't use logging, or
that uses Apache logging directly.


> Here are some further things to consider if anyone wants to pursue it. 
> Consider the following code:
>
> import logging
> from mod_python import apache
> from proposed_mp_logging_module import ApacheLogHandler
>
> def handler(req):
>      req.content_type = 'text/plain'
>      log = logging.getLogger('request')
>      hdlr = ApacheLogHandler(req)
>      log.addHandler(hdlr)
>      log.setLevel(logging.INFO)
>      msg = 'Its all good'
>      log.info('%s' % msg)
>      req.write('check the logs for "%s"' % msg)
>      return apache.OK
>
> All fine and dandy. But isn't logger a singleton? So each time a request 
> is processed we'll be adding another handler holding a reference to the 
> request object; neither will ever be garbage collected, resulting in a 
> memory leak.

There might be a fault with my design here. To help me understand what
you're saying, can you confirm that when you say:

  "But isn't logger a singleton" 

do you mean:

  "But isn't 'log' a singleton?" (ie: 'log' refers to the instance
  variable in your example)
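
For reference, logging.getLogger('request') does return the same
Logger object on every call with that name, so if that is what you
mean, each request in your example adds another handler to it. A
quick demonstration, runnable outside Apache:

    import logging

    a = logging.getLogger('request')
    b = logging.getLogger('request')
    assert a is b                  # same Logger instance both times

    a.addHandler(logging.StreamHandler())
    a.addHandler(logging.StreamHandler())
    print(len(a.handlers))         # prints 2 -- handlers accumulate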


> Furthermore, you can't depend on the request object being valid once 
> the request processing has completed. At some point request_tp_clear (in 
> requestobject.c) will get called and request->server will be set to 
> NULL. (At least I think this is correct). My gut tells me you'll get 
> some random seg faults.

There are only two ways this can happen, I think:

1. the user saves the logger for later (maybe in a thread) and tries
   to log after the request has finished

   This would be user error and therefore acceptable IMHO.

   I should point out that this is unlikely to happen since loggers
   tend to be very local objects with small scopes.

2. the logging record doesn't get flushed before the request has
   completed so that when it is flushed the correct state is no longer
   available.

   This is also possible, but it would clearly be a bug: the logging
   record should be flushed immediately (if that is how the handler is
   implemented). A pattern that guards against both cases is sketched
   below.
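
Here is the usage pattern I have in mind, reusing the hypothetical
ApacheLogHandler and proposed_mp_logging_module names from Jim's
example. Because the handler emits synchronously, case 2 cannot
arise, and removing the handler before returning drops the reference
to req, so case 1 is hard to trip over by accident:

    import logging
    from mod_python import apache
    # hypothetical module, as in Jim's example above
    from proposed_mp_logging_module import ApacheLogHandler

    def handler(req):
        req.content_type = 'text/plain'
        log = logging.getLogger('request')
        log.setLevel(logging.INFO)
        hdlr = ApacheLogHandler(req)
        log.addHandler(hdlr)
        try:
            log.info('Its all good')
            req.write('check the logs')
        finally:
            # Remove the handler (and its reference to req) before
            # the request object is torn down.
            log.removeHandler(hdlr)
        return apache.OK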


> Also, is there not an assumption that the logger instance is in effect 
> global? So let's say you change the level in one request such as 
> log.setLevel(logging.CRITICAL). In mpm-prefork or mpm-worker models 
> changing the log level will not propagate to the other child processes. 
> I expect most users will find this confusing.

Not sure why you think that.

If mod_python included a module based on my design there would be a
clear understanding that it was specifically for request derived
logging (the most common case).

It's particularly clear from my design because it takes a request as
an argument; you couldn't create one of my log handlers globally, for
example.

Also, the natural thing to do with logging is to have one logger per
unit of code.
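
The usual idiom, for instance, is one logger per module, with any
request-bound handler attached and removed inside the request handler
itself:

    import logging

    # One logger per module/unit of code; it carries no request
    # state of its own until a handler is attached during a request.
    log = logging.getLogger(__name__)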


> As I said before I don't think this is as trivial to implement as it 
> might seem at first blush.

I think it has all of the same problems as any other multi-programming
(threads or processes) code. It's true that a lot of users don't
understand those issues, but that's not exclusive to logging.


Nic
