Hi Martin,

This is just another thought. mod_wsgi will typically have multiple processes and multiple threads that it uses to process requests. There is nothing stopping you from opening a log file and writing to it using normal file operations.
However, the usual issues of multi-process / multi-thread file writing apply to you. This is complicated and full of pitfalls. See this for a very brief overview:

http://stackoverflow.com/a/12239037/64004

There are many solutions to this problem, but simplicity is not typically their tagline. One very basic approach would be to push log entries onto a redis list (http://redis.io/topics/data-types#lists), then have a separate single process (managed by mod_wsgi or not) that is responsible for reading those entries and writing them to a file, or filtering them, or alerting you, or whatever else you have in mind. Redis would provide the atomicity you need. There is a rough sketch of this at the end of this message.

I'd also like to copy and paste a comment from G.D. on this list from 3 years ago. It had to do with using mod_wsgi to spawn background threads for application "work". Using this technique you could set the user and group of that process appropriately so that it has the authority to write to your log files.

    Unless it is intended to specifically affect data in memory for the
    application serving the requests, create a distinct mod_wsgi daemon
    process group consisting of a single process with a single thread.
    The single thread is just to keep memory use to a minimum, as the
    request handler thread would never actually be used; we aren't going
    to delegate any request handling to that process anyway.

    Having that, use WSGIImportScript to import a script on startup of
    that daemon process, and from that spawn the background threads which
    are going to do the work. For example:

        WSGIDaemonProcess application-process processes=5 threads=5
        WSGIDaemonProcess tasks-process processes=1 threads=1

        WSGIScriptAlias / /some/path/application.wsgi process-group=application-process application-group=%{GLOBAL}

        WSGIImportScript /some/path/tasks.py process-group=tasks-process application-group=%{GLOBAL}

    The /some/path/tasks.py script should create the background threads
    which are to do the work for the background tasks. If you want to be
    a bit more graceful about shutdown, then adapt the code from:

    http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode#Monitoring_For_Code_Changes

    That code creates a background thread and then uses a Queue object as
    a sleep mechanism between runs. An atexit callback is used to push an
    object onto the Queue to flag that the process is being shut down.

    Doing it this way, with a dedicated daemon process group just for
    background tasks, means that restarting the application process by
    touching the WSGI script file doesn't interfere with the background
    tasks process. If you need to restart the background tasks process,
    you can send it a SIGTERM, and that will not affect the main
    application process.

    Use of a separate daemon process group just for background tasks
    means you don't have to worry about multi-process issues and
    synchronisation. Obviously, though, it cannot affect what is in the
    memory of the application process. You would still have to use a
    background thread in the application process for anything where you
    want to affect what is in memory.

    In either case, restarting Apache will obviously kill off/restart
    both the application and background tasks processes.
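To make that concrete, here is a rough, untested sketch of what /some/path/tasks.py could look like, following the Queue-as-sleep / atexit pattern from the wiki page above. The 5-second interval and the do_work() helper are placeholders of my own, not anything mod_wsgi prescribes (Python 2 spelling, since that's what most deployments run today):

    # tasks.py -- imported once into the single-threaded daemon process
    # via WSGIImportScript; spawns the actual background worker thread.

    import atexit
    import threading
    import Queue  # stdlib module; named "queue" on Python 3

    _shutdown = Queue.Queue()

    def do_work():
        # Placeholder: drain queued log entries, write files, alert, etc.
        pass

    def _worker():
        while True:
            try:
                # Use the queue as an interruptible sleep: wait up to 5
                # seconds; receiving an item means the process is exiting.
                _shutdown.get(timeout=5.0)
                return
            except Queue.Empty:
                pass
            do_work()

    def _exiting():
        # atexit callback pushes an object to tell the worker to stop.
        _shutdown.put(True)

    atexit.register(_exiting)

    _thread = threading.Thread(target=_worker)
    _thread.setDaemon(True)
    _thread.start()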
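And going back to the redis idea from earlier in this message, a bare-bones sketch of both sides might look something like this. Again untested; RPUSH/BLPOP are real redis commands and are atomic, but the "log-entries" key name, the JSON encoding and the log file path are just placeholders I picked for illustration:

    import json
    import time

    import redis  # the redis-py client package

    r = redis.StrictRedis(host='localhost', port=6379, db=0)

    # Producer side: safe to call from any mod_wsgi process or thread,
    # because RPUSH is atomic on the redis server.
    def log_entry(message, level='INFO'):
        r.rpush('log-entries', json.dumps({
            'time': time.time(),
            'level': level,
            'message': message,
        }))

    # Consumer side: run this loop in ONE separate process. Since a
    # single process owns the file, the multi-writer problem goes away.
    def consume(path='/var/log/myapp/app.log'):
        log = open(path, 'a')
        while True:
            _key, raw = r.blpop('log-entries')  # blocks until an entry arrives
            entry = json.loads(raw)
            log.write('%(time)s %(level)s %(message)s\n' % entry)
            log.flush()

The consumer could just as easily filter entries or fan them out to several specialized files, which sounds like what you're after.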
J

On Mon, Mar 17, 2014 at 2:11 PM, Martin Matusiak <[email protected]> wrote:

> Hi Joonas,
>
> I've thought about doing that, but unfortunately this only gives me the
> ability to log to Apache's error log. I'd like to be able to log
> different kinds of information to a few specialized log files.
>
> Martin
>
> 2014-03-17 17:46 GMT+01:00 Joonas Lehtolahti <[email protected]>:
> > On Mon, 17 Mar 2014 13:55:28 +0200, Martin Matusiak <[email protected]> wrote:
> >
> >> Hi,
> >>
> >> I've been searching around for some information about how to do
> >> logging to a shared file in conjunction with mod_wsgi in multiprocess
> >> mode and I haven't been able to find anything concrete.
> >>
> >> In my set-up I want to run say 100 workers and have them all log to
> >> the same file. The stdlib logging module does not seem to support
> >> this. Is there anything specifically in mod_wsgi that makes it
> >> possible? If not, what's the best way to do it?
> >
> > One way is to write to environ['wsgi.errors'], which is a file-like
> > handle mapping to the Apache error log file. That's what I have been
> > using for debugging purposes. It might or might not be suitable for
> > your needs, though, especially if the logging is not error-related.
> > But that is something mod_wsgi provides.
> >
> > Cheers,
> >
> > Joonas
