> On 23 Nov 2015, at 9:10 AM, Dev Mukherjee <[email protected]> wrote:
>
> On Sun, Nov 1, 2015 at 9:05 PM, Graham Dumpleton <[email protected]> wrote:
>
>> On 1 Nov 2015, at 11:58 am, Dev Mukherjee <[email protected]> wrote:
>>
>>
>> where the two WSGI endpoints point to routers provided by the two frameworks.
>
> I actually highly discourage the use of WSGIScriptAliasMatch as it can do
> unexpected things as far as its effect on the relationship between SCRIPT_NAME
> and PATH_INFO goes. There is generally very little need for it.
>
> The configuration above could be done as:
>
> WSGIScriptAlias /api/ /srv/app/wsgi/app2.wsgi
> WSGIScriptAlias / /srv/app/wsgi/app1.wsgi
>
>
> Thanks for pointing that out :-)
>
> How would I go about configuring something similar in mod_wsgi-express? Or
> just point me to documentation and I can take it from there.
mod_wsgi-express is primarily intended for running a single WSGI application in
one daemon process group.
To that end, the preferred setup when needing to host multiple WSGI applications
within the same URL namespace of one hostname is to use nginx or some other
proxy in front of mod_wsgi-express. The front end would then be configured to
route requests for that host, and the relevant subset of URLs, to the
appropriate mod_wsgi-express instance. To ensure that the original request
details get through to the WSGI application properly, mod_wsgi-express has
various options for saying which proxy headers, and which proxies, are trusted
so that the request details can be fixed up.
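As a rough sketch only (this is not something mod_wsgi-express generates for
you; the server name and the 8001/8002 ports are placeholders for two separate
mod_wsgi-express instances), an nginx front end routing a sub URL to a second
instance could look something like:

server {
    listen 80;
    server_name www.example.com;

    # Second application, on its own mod_wsgi-express instance.
    location /api/ {
        proxy_pass http://127.0.0.1:8002;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Main application.
    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The Host and X-Forwarded-For headers here are just the usual candidates for the
trusted proxy header fix ups mentioned above.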
The reason mod_wsgi-express is going down this path is that a primary reason it
was created was to be the basis for a much simpler way of running WSGI
applications inside of Docker, with a curated configuration, thereby avoiding
the general problem that Apache isn't set up correctly for Python out of the
box. As well as mod_wsgi-express, I also have Docker images I have been working
on to develop a best of breed Docker solution for hosting Python web
applications. They go well beyond the official Docker Python images as far as
the best practices that should be used, and they use techniques to ensure
everything works properly and you don't open yourself up to security issues.
That all said, there are two ways that one can still introduce additional WSGI
applications so that mod_wsgi-express hosts more than one WSGI application.
The first can be used where you have a primary WSGI application but just need
to add some small additional WSGI scripts to perform minor tasks. In this
approach, the additional WSGI applications run in the same process space as the
existing primary WSGI application. Because of that, this can only be used where
the WSGI applications will not interfere with each other. That is, you couldn't
use this approach by itself to host two Django instances.
For this you would run a command like:
mod_wsgi-express start-server --document-root htdocs --add-handler .wsgi \
    loader.py site/wsgi.py
The '--add-handler' argument allows one to specify a WSGI application which
will be passed requests for files under the document root directory that have a
specific extension. This can be used to create special dynamic handlers to
process static resource requests.
In this case we are actually going to use a handler which loads up the WSGI
script file and executes the WSGI application it contains.
The loader.py file for this is:
import sys
import imp
import hashlib

def application(environ, start_response):
    script = environ['SCRIPT_FILENAME']

    # Encode the script path so the hash also works under Python 3.
    name = '_script_%s' % hashlib.md5(script.encode('utf-8')).hexdigest()

    # Check if module exists.
    if name in sys.modules:
        module = sys.modules[name]
    else:
        # Doesn't, so may need to load it.
        try:
            imp.acquire_lock()

            # Check if module exists again now that we have the lock.
            if name not in sys.modules:
                # Load the script file as a module.
                module = imp.new_module(name)
                module.__file__ = script
                with open(script, 'r') as fp:
                    code = compile(fp.read(), script, 'exec',
                            dont_inherit=True)
                exec(code, module.__dict__)
                sys.modules[name] = module
            else:
                module = sys.modules[name]
        finally:
            imp.release_lock()

    application = getattr(module, 'application')

    return application(environ, start_response)
The URL for the second application would then by default be something like:
/subapp.wsgi
One can, if need be, do some extra work so that the .wsgi extension isn't in
the URL, but that still requires the --include-file option which is mentioned
below for the second way.
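For reference, a minimal sketch of what such an additional WSGI script might
contain, assuming it is saved in the document root as subapp.wsgi (matching the
example URL above):

def application(environ, start_response):
    # Trivial WSGI application; loader.py finds this file via
    # SCRIPT_FILENAME and calls the 'application' object it defines.
    output = b'Hello from the sub application!'
    status = '200 OK'
    response_headers = [('Content-Type', 'text/plain'),
            ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]

Any file with the .wsgi extension under the document root would be handled the
same way by loader.py.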
The second way of doing things is to provide your own Apache configuration
snippet and use a more traditional configuration to add in an extra WSGI
application. Doing it this way you can create an extra daemon process group and
delegate the extra WSGI application to run in it. Thus you could technically
run multiple Django instances.
For this you would run a command like:
mod_wsgi-express start-server --include-file extra.conf site/wsgi.py
In extra.conf you would then have:
WSGIDaemonProcess extra-app

WSGIScriptAlias /suburl /Users/graham/Projects/mod_wsgi/tests/environ.wsgi \
    process-group=extra-app application-group=%{GLOBAL}

<Directory /Users/graham/Projects/mod_wsgi/tests>
    Order allow,deny
    Allow from all
</Directory>
You are obviously then back to ensuring you set up the daemon process group
properly if the defaults for the mod_wsgi module aren't appropriate. The
mod_wsgi-express main application daemon process group has a lot of overrides
applied for timeouts and other settings to make it more robust than the default
Apache module settings.
For example, the generated mod_wsgi-express configuration for the main daemon
process group has something like:
WSGIDaemonProcess localhost:8000 \
    display-name='(wsgi:localhost:8000:502)' \
    home='/Users/graham/Projects/mod_wsgi' \
    threads=5 \
    maximum-requests=0 \
    python-path='' \
    python-eggs='/tmp/mod_wsgi-localhost:8000:502/python-eggs' \
    lang='en_AU.UTF-8' \
    locale='en_AU.UTF-8' \
    listen-backlog=100 \
    queue-timeout=45 \
    socket-timeout=60 \
    connect-timeout=15 \
    request-timeout=60 \
    inactivity-timeout=0 \
    deadlock-timeout=60 \
    graceful-timeout=15 \
    eviction-timeout=0 \
    shutdown-timeout=5 \
    send-buffer-size=0 \
    receive-buffer-size=0 \
    response-buffer-size=0 \
    server-metrics=Off
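If you wanted the extra daemon process group from extra.conf to be similarly
robust, you could, purely as an illustrative sketch, carry some of those same
options over onto its WSGIDaemonProcess line:

WSGIDaemonProcess extra-app threads=5 \
    queue-timeout=45 socket-timeout=60 connect-timeout=15 \
    request-timeout=60 graceful-timeout=15 shutdown-timeout=5

The specific values are simply copied from the generated configuration above;
what is actually appropriate depends on the extra application.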
Just note that the main Apache child process configuration is based off the
processes/threads used for the main WSGI application. If extra applications in
separate daemon process groups got a lot of traffic, you would want to use the
--max-clients option to ensure that the Apache child processes were given more
capacity for proxying requests to the now multiple daemon process groups. By
default, the number of Apache child process worker threads is something like
1.5 * (processes*threads), with a minimum floor of 10, so as not to starve
static file requests.
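To give a rough worked example of that calculation (the specific numbers here
are made up for illustration), with 3 processes and 5 threads for the main
application the default works out at about 1.5 * (3*5), so roughly 22 worker
threads, and you could lift that with something like:

mod_wsgi-express start-server --processes 3 --threads 5 \
    --max-clients 50 --include-file extra.conf site/wsgi.py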
So I hope that gives you some things to think about. We can still talk about it
off line if you want.
Graham