On 13.12.2010 21:11, Roberto De Ioris wrote:
>
>> On Fri, Dec 10, 2010 at 16:46, Roberto De Ioris <[email protected]> wrote:
>>>> Regarding the second setup, I have to choose sockets/ports for each
>>>> application and type that information twice: once in the nginx config
>>>> and once in the supervisord config (this is the process manager we
>>>> use). Maybe I should use a configuration tool like fabric.
>>>
>>> This is the main issue (doing double configuration), so my suggestion
>>> is to look at this wiki page:
>>>
>>> http://projects.unbit.it/uwsgi/wiki/CustomRouting
>>>
>>> It is for 0.9.7-dev, but you can use the development version for
>>> routing and the stable one for your apps.
>>
>> I read the wiki page as you suggested. Thanks to custom routing, I can
>> point nginx to a "main" uWSGI instance that will route each request to
>> the appropriate "application" uWSGI instance. nginx only needs to know
>> the socket of the "main" uWSGI instance. This simplifies the nginx
>> configuration. But I still have to launch one uWSGI instance for each
>> application, using a process manager like supervisord.
>>
>> My question: is it desirable and feasible to keep the idea of "dynamic
>> applications", using the UWSGI_SCRIPT parameter, but instead of running
>> the application in a sub-interpreter, run it in a new worker process?
>> Maybe we could have something along the lines of mod_wsgi's daemon
>> mode: automatically launch a worker process for each application, kill
>> the process if it is inactive for too long, and reload it if the script
>> file has changed.
>>
>> Not sure if it is a good or a bad idea...
>>
>> Nicolas Grilly
>
> Hi Nicolas, I have been thinking about this for ages :)
>
> Some weeks ago I posted an idea about a uWSGI "emperor" mode that can
> spawn uWSGI instances on demand (by passing it special UWSGI vars, in
> the same way you do for dynamic apps).
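For context, the single-socket setup Nicolas describes can be sketched as an nginx fragment. This is a hypothetical illustration (socket address and server names are invented, not taken from the thread):

```nginx
# All virtual hosts forward to one "main" routing uWSGI instance;
# that instance uses custom routing to dispatch to per-app instances.
server {
    listen 80;
    server_name app1.example.com app2.example.com;

    location / {
        include uwsgi_params;
        # the only socket nginx needs to know about
        uwsgi_pass 127.0.0.1:3030;
    }
}
```

Each application instance then listens on its own socket, known only to the routing instance, so nginx stays out of the per-application configuration.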
> This is still something to implement (but it is not as hard as it may
> sound), but now I am thinking about a new API function:
>
> uwsgi.spawn(uwsgi_options)
>
> In this way the router itself can spawn uWSGI processes:
>
> def application(e, s):
>
>     node = '127.0.0.1:3031'
>
>     fd = uwsgi.connect(node)
>
>     if fd < 0:
>         uwsgi.spawn({'socket': node, 'master': True, 'processes': 4})
>         fd = uwsgi.connect(node)
>
>     if fd >= 0:
>         for part in uwsgi.send_message(fd, 0, 0, e):
>             yield part
>
> This is similar to what Cherokee does, but I find basing the existence
> of a process only on being able to connect to it very flaky, so I would
> prefer something like this:
>
> def application(e, s):
>
>     node = '127.0.0.1:3031'
>
>     if not uwsgi.is_alive(node):
>         uwsgi.spawn({'socket': node, 'master': True, 'processes': 4})
>
>     fd = uwsgi.connect(node)
>     for part in uwsgi.send_message(fd, 0, 0, e):
>         yield part
>
> where is_alive could send a uwsgi-ping request, or something similar.
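A check along the lines of the proposed uwsgi.is_alive() can be approximated outside of uWSGI with a plain TCP probe. The sketch below is a hypothetical stand-in in pure Python (no uwsgi module); note it only checks connectability, which is exactly the flakiness Roberto wants to avoid, whereas a real check would exchange a uwsgi ping packet:

```python
import socket

def is_alive(host, port, timeout=1.0):
    """Best-effort liveness probe: try to open a TCP connection.

    A real check, as suggested in the thread, would instead send a
    uwsgi ping request and wait for the reply; merely being able to
    connect does not prove the peer speaks the uwsgi protocol.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        return s.connect_ex((host, port)) == 0
    finally:
        s.close()
```

A router could call this before deciding whether to spawn a backend, accepting that a process stuck after accept() would still look "alive".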
Great idea with uwsgi.spawn(uwsgi_options). I would also have all
workers calculate their load; if it gets close to 100%, a new process
could be spawned :]

Personally, I think supervisord is a weak solution. I gave up on it long
ago. Instead, I wrote a script (in Python) which reads a configuration
file containing the path to nginx, the uWSGI sockets directory, and the
list of projects and their locations. It provides:

- Automatic generation of the configuration for both nginx and uWSGI
  (from the command line)
- At startup, nginx runs along with the configured number of uWSGI
  servers (each with any number of workers)
- At shutdown, nginx finishes together with the uWSGI processes

The script does not monitor the uWSGI processes, but that is not needed:
I have never yet seen uWSGI itself go down. The script does not run in
the background. It simply fires up uWSGI via os.system(), checks that it
came up, and exits. This way you forget that there are any uWSGI
background processes at all :]

Starting a new project (uWSGI application) comes down to adding one line
to the configuration:

Project_name | Domain | worker-count

and executing the command: ./uwsgi-guard apply

/etc/init.d/nginx has been suitably modified by adding 'uwsgi-guard
start' at start and 'uwsgi-guard stop' at stop.

I looked at CustomRouting and I think it is cool, but one should not
depend too heavily on such a fresh feature of a dynamically developing
project like uWSGI.

This way, I have nginx on the front, each site has its own configuration
file, and each project has its own socket and its own uWSGI master
process. I develop an application in the appropriate directory and
forget that I have to configure anything. Of course, one could also
write a simple web admin on top of this, which would check whether the
processes are working properly and collect their performance statistics
(e.g. CPU load, IO load, req/s ...).

If I am using nginx anyway, and it has a great load balancer, why write
another one in uWSGI? (I assume it would be a simple web-based
application.)
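The one-line-per-project format above ('Project_name | Domain | worker-count') is easy to machine-read. A minimal parsing sketch, assuming pipe-separated fields as in the example (the actual uwsgi-guard script was not posted, so names here are illustrative):

```python
from typing import NamedTuple

class Project(NamedTuple):
    name: str
    domain: str
    workers: int

def parse_projects(text):
    """Parse 'Project_name | Domain | worker-count' lines.

    Blank lines and '#' comments are skipped; surrounding whitespace
    around each field is stripped.
    """
    projects = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        name, domain, workers = (f.strip() for f in line.split('|'))
        projects.append(Project(name, domain, int(workers)))
    return projects
```

From such a list, the script can then render both the nginx vhost files and the uWSGI command lines, which is the "single source of configuration" the thread is after.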
So the only problem was to create several configuration files (from a
template) and run a few processes. IMHO, building strange (and not
necessarily required) structures in a production environment is not
healthy. But of course I have nothing against uWSGI having the following
structure:

grandmaster process (a supervisor, with an API for creating new projects and monitoring them)
. master process (the current uWSGI)
. . worker

--
Łukasz Wróblewski
www.nri.pl - Nowoczesne Rozwiązania Internetowe
www.hostowisko.pl - Profesjonalny i tani hosting
www.katalog-polskich-firm.pl - Najlepszy darmowy katalog firm
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
