Nic Ferrier wrote:
> Jim Gallacher <[EMAIL PROTECTED]> writes:
>
>> My gut was right. Your current version segfaults (using mpm-prefork). It
>> might work if you register a cleanup to remove the reference to your
>> logging handler (untested):
>>
>>     req.register_cleanup(log.removeHandler, hdlr)
>>
>> The problem here is that we are depending on the user to add that line
>> to their code to avoid the segfault. I don't like the idea of including
>> code in mod_python which may result in a segfault, even if the user is
>> warned of the dangers in the docs.
> Here's a new version of my logging module which fixes these problems.
>
>     # A log handler for mod-python
>     import logging
>     from mod_python import apache
>
>     class ApacheLogHandler(logging.Handler):
>         """A handler class which sends all logging to Apache."""
>
>         def __init__(self, request):
>             """
>             Initialize the handler (does nothing)
>             """
>             logging.Handler.__init__(self)
>             self.request = request
>
>             def cleanup(data = None):
>                 self.request = None
>             request.register_cleanup(cleanup)
>
>             # Set up the thing
>             self.level_mapping = { }
>             self.level_mapping[logging.CRITICAL] = apache.APLOG_CRIT
>             self.level_mapping[logging.ERROR] = apache.APLOG_ERR
>             self.level_mapping[logging.WARNING] = apache.APLOG_WARNING
>             self.level_mapping[logging.INFO] = apache.APLOG_INFO
>             self.level_mapping[logging.debug] = apache.APLOG_DEBUG
>
>         def emit(self, record):
>             """Emit a record."""
>             msg = self.format(record)
>             if self.request != None:
>                 self.request.log_error(msg, self.level_mapping[record.levelno])
>             # We *could* log to apache here:
>             #else:
>             #    apache.log(msg)
>
>     # End
There is a typo in your code: s/logging.debug/logging.DEBUG/
Beyond that it still segfaults for me. The other problem is that you are
not removing the handler instance from the logger, so you still have a
memory leak: 100k requests would result in 100k handler instances. Oh,
and there might be a bit of a performance issue when every log record
fires emit() 100k times. ;) Or should I assume that the user will still
be required to include req.register_cleanup(log.removeHandler, hdlr) in
their code?
> There is one remaining problem that I am aware of. When you do:
>
>     logger = logging.getLogger("webapp")
>
> you are always guaranteed to get the same logger. That's bad in a
> multi-programming environment.
>
> The traditional approach to this is to create your logger with a
> unique name, e.g.:
>
>     logger = logging.getLogger("webapp" + thread_id)
>
> It may be better to use the interpreter name.
>
> But clearly we'd be relying on users to do that.
Those darn users again!
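For the record, what we'd be asking them to get right is something like this
(untested sketch, using the thread id from your example; the helper name is
made up):

    import logging
    import thread  # Python 2 standard library

    def get_webapp_logger():
        # A per-thread name keeps concurrent requests from sharing one
        # logger, but it does nothing about handlers piling up on that
        # logger across requests in the same thread.
        return logging.getLogger("webapp.%d" % thread.get_ident())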
> I could come up with a logging factory to mitigate that.
I'm not sure what you have in mind, but would the usage be the same as
for the standard logging module? One of the advantages of following the
standard is that people can write familiar-looking code and things just
work the way they expect.
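If it's something along these lines, I think it could still feel close to the
stock logging module. A very rough, untested sketch, with the names
(getRequestLogger, the "mp." prefix, the apachelog module) made up:

    import logging

    from apachelog import ApacheLogHandler  # hypothetical module name

    def getRequestLogger(req, name="webapp"):
        """Return a logger tied to this request, with an ApacheLogHandler
        attached and the cleanup registered automatically."""
        # Key the logger name on the request so that concurrent requests
        # don't end up sharing a logger.
        logger = logging.getLogger("mp.%s.%d" % (name, id(req)))
        hdlr = ApacheLogHandler(req)
        logger.addHandler(hdlr)
        # Drop the handler (and its reference to req) when the request
        # ends, instead of letting handlers pile up on a long-lived logger.
        req.register_cleanup(logger.removeHandler, hdlr)
        return logger

The user's code would then be log = getRequestLogger(req) followed by ordinary
logging calls. It's not a complete answer, since the Logger objects themselves
still live forever in logging's internal registry, but at least the handlers
and their request references wouldn't accumulate.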
Regards,
Jim