On 10.12.2010 16:46, Roberto De Ioris wrote:
>
>> Hello,
>>
>> We want to host ~20 distinct WSGI applications using nginx and uWSGI on a
>> single machine. I am looking for advice on these two candidate setups:
>>
>> 1/ Run a single uWSGI instance and use the dynamic application mode to
>> dynamically create a sub-interpreter for each application?
>>
>> 2/ Run one separate uWSGI instance for each application?
>>
>> Which setup would you recommend? What are the advantages and drawbacks of
>> each setup? Any other possible setup?
>
>
> There is surely no one-size-fits-all answer; I will try to comment
> on your statements and give a personal suggestion at the end.
>
>>
>> Regarding the first setup, I am afraid of having 20 different interpreters
>> running in the same process. For example, what about reloading an upgraded
>> application?
>
> Reloading/upgrading (if done in graceful mode) is not a real issue
> (except for the first request being very slow). What scares me about
> having too many apps in a single process is the big memory usage (I
> prefer to have many small processes rather than a few big ones, and
> remember that on a 32-bit system you have at most a couple of
> gigabytes of memory per process) and the imperfect isolation of
> Python interpreters. For me, 20 apps are too many for a single
> process; I suggest you avoid this approach.
>
>>
>> Regarding the second setup, I have to choose sockets/ports for each
>> application and type that information twice: once in nginx config and once
>> in supervisord config (this is the process manager we use). Maybe I should
>> use a configuration tool like fabric.
>>
>
> This is the main issue (the double configuration), so my suggestion
> is to look at this wiki page:
>
> http://projects.unbit.it/uwsgi/wiki/CustomRouting
>
> It is for 0.9.7-dev, but you can use the development version for
> routing and the stable one for your apps.
>
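For reference, here is a sketch of the "double configuration" in setup 2. The app name, paths and socket below are hypothetical; the point is that every new application means editing both files with the same socket:

```ini
; supervisord.conf fragment (hypothetical app "blog")
[program:uwsgi-blog]
command=/usr/bin/uwsgi --socket /var/run/uwsgi/blog.sock --module blog_app --processes 2
autorestart=true

; nginx.conf fragment for the same app -- note the SAME socket path
; has to be typed a second time:
;   location / {
;       include    uwsgi_params;
;       uwsgi_pass unix:/var/run/uwsgi/blog.sock;
;   }
```

Multiply this by 20 apps and generating both files from a single source (fabric, a small template script) starts to look very attractive.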

Today I wondered about exactly the same thing :]
In my case, I have about 30 web sites on one machine. All require uWSGI.
The machine has 16 cores, so I think it would be good for each app to
have a few workers and take advantage of those cores. But if I give each
app six workers, that turns into 180 processes. A bit much.
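Just to make the arithmetic above explicit (figures taken from this message):

```python
# 30 sites, six workers each, on a 16-core box
apps = 30
workers_per_app = 6
cores = 16

total_processes = apps * workers_per_app
print(total_processes)          # 180 processes -- "a bit much"
print(total_processes / cores)  # 11.25 workers per core on average
```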

Approach 1: I thought of running just up to 16 workers (vhost mode).
However, any change in one of the applications requires a full restart,
under heavy traffic the first request after a reload is very slow, and
if uWSGI somehow failed to come back up properly, all the sites would
be down at once.

Despite the large number of processes, option 2 is the safer one. With
it, each application's load can be monitored by watching its own
processes.


By the way: will the EmbeddedModule documentation be expanded to cover
hidden options like worker_id, mem, log, etc.?
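A sketch of what I mean, using the embedded uwsgi module's worker_id(), mem() and log() calls. The module only exists inside a uWSGI worker process, so the import is guarded for plain CPython:

```python
# Sketch: a few of the "hidden" options of the embedded uwsgi module.
try:
    import uwsgi
    # worker_id(): id of the current worker
    # mem(): (rss, vsz) memory usage of the current process
    uwsgi.log("worker %d uses %d bytes rss" % (uwsgi.worker_id(), uwsgi.mem()[0]))
    under_uwsgi = True
except ImportError:
    # Running outside uWSGI: the module is simply absent.
    under_uwsgi = False

print(under_uwsgi)
```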



-- 
Łukasz Wróblewski
www.nri.pl - Modern Internet Solutions
www.hostowisko.pl - Professional and affordable hosting
www.katalog-polskich-firm.pl - The best free business directory
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
