A long time ago (June last year), I blogged about a whole bunch of new stuff I
had implemented to add performance monitoring into the core of mod_wsgi.

That post can be found at:

    http://blog.dscpl.com.au/2015/06/implementing-request-monitoring-within.html

What I implemented worked really well, and was much more efficient, with much
lower overheads, than trying to do the same thing as a WSGI middleware, which
is how various performance monitoring products out there implement it.

I didn't release the changes at the time as there seemed to be zero interest,
and some people had actually been quite negative about prior attempts to add
performance monitoring features to mod_wsgi. I started a new job after that,
so I have been sidetracked ever since doing more interesting stuff.

Because I don't want to leave all that work sitting to the side forever, and
at least want to start using it myself and perhaps extend it to add some
other bits I had wanted to do, I have finally merged it all into the main
development branch for mod_wsgi.

If you track the development branch of the GitHub repo and use mod_wsgi from
it, it would be much appreciated if you updated and at least let me know
whether the core functionality of mod_wsgi is still working fine.

If you are actually interested in performance monitoring and want to play with
this stuff, the blog post covers the main part of what the changes add; the
only thing missing was how to register to receive the events. That is as
simple as doing:

    import mod_wsgi

    def event_handler(name, **kwargs):
        print('EVENT', name, kwargs)

        if name == 'request_started':
            ...
        elif name == 'request_finished':
            ...
        elif name == 'request_exception':
            ...

    mod_wsgi.subscribe_events(event_handler)


One other useful feature for dealing with the separate events is that you can
also call:

    request = mod_wsgi.request_data()

This function gives you back a dictionary associated with the current request. 
You can use this like a thread local to pass any cached data between the 
callbacks for each of the event types.

You could for example save away data from the request_started event so that 
when request_finished fires you can access it again to work out final results 
or metrics to be generated.
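
As a rough sketch of that idea, using only what is described above (the
request_started and request_finished event names, mod_wsgi.request_data() and
mod_wsgi.subscribe_events()), and the standard time module purely for
illustration:

    import time

    import mod_wsgi

    def event_handler(name, **kwargs):
        if name == 'request_started':
            # Cache the start time in the per request dictionary.
            mod_wsgi.request_data()['start_time'] = time.time()

        elif name == 'request_finished':
            # Pull the cached start time back out and derive a duration
            # for the request, which could then be reported as a metric.
            start_time = mod_wsgi.request_data().get('start_time')
            if start_time is not None:
                print('REQUEST DURATION', time.time() - start_time)

    mod_wsgi.subscribe_events(event_handler)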

That function can also be called from anywhere in application code to save
away data. You could thus, for example, use decorators applied to request
handlers in your code to save away the module:class:name of the function
handling a request, and then associate any time metric data with that when
recording it, as in the sketch below.
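
A minimal sketch of such a decorator, where the decorator name and the
'handler' key are made up for illustration (only mod_wsgi.request_data()
itself comes from the above), for code running under mod_wsgi:

    import functools

    import mod_wsgi

    def annotate_handler(func):
        # Record which function is handling the current request in the
        # per request dictionary, so that event callbacks can later
        # attach any metrics they generate to that name.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            request = mod_wsgi.request_data()
            request['handler'] = '%s:%s' % (func.__module__, func.__qualname__)
            return func(*args, **kwargs)
        return wrapper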

Next I will be aiming to get back into shape some of the test code I had for
reporting metrics into Datadog and InfluxDB, and possibly revisit my
mod_wsgi-metrics package and add them there, at least as an example. Anyway,
once I work out the state of things, I will blog more about it, release this
all as mod_wsgi version 4.5, and see where things go from there.

If you are into performance monitoring, I hope you will be interested enough to 
dig into this before then and give me feedback.

Thanks.

Graham
