I'm not using any APM products. Mostly I have a cron job checking for 
non-200 responses on a simple health check endpoint, and otherwise I rely on 
AWS metrics.
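
For context, the kind of check I mean is roughly this (just a sketch; the 
URL and the alerting side are placeholders, not our actual setup):

    #!/usr/bin/env python3
    # Cron health check sketch: exit non-zero on any non-200 response so
    # cron's output / exit status can drive an alert. The URL is a
    # placeholder, not our real endpoint.
    import sys
    import urllib.error
    import urllib.request

    HEALTH_URL = "http://localhost/health/"  # placeholder endpoint

    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as exc:
        status = exc.code
    except Exception as exc:
        print("health check failed: %s" % exc, file=sys.stderr)
        sys.exit(2)

    if status != 200:
        print("health check returned %s" % status, file=sys.stderr)
        sys.exit(1)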

I've added the following settings and will see how it goes:

    WSGIRestrictEmbedded On
    WSGIApplicationGroup %{GLOBAL}
    WSGIPythonOptimize 1

I'll also try out more processes and fewer threads.
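
For the event callbacks suggestion below, I'm thinking of wiring up 
something roughly like this in the WSGI script file. It's only a sketch: the 
subscribe_events hook and the event/field names here are my reading of the 
mod_wsgi docs and may differ by version, so I'll verify them against 4.5.17 
before trusting the numbers.

    # Sketch: pull per-request timings out of mod_wsgi's event callbacks and
    # forward them to whatever metrics pipeline we end up using. The event
    # name 'request_finished' and fields like 'application_time' and
    # 'queue_time' are assumptions to check against the installed version.
    try:
        import mod_wsgi
    except ImportError:
        mod_wsgi = None  # e.g. running under the Django dev server

    def metrics_event_handler(name, **kwargs):
        if name == 'request_finished':
            # Time spent inside the WSGI app vs. time spent queued waiting
            # for a worker thread is what should expose capacity utilisation
            # per process.
            app_time = kwargs.get('application_time')
            queue_time = kwargs.get('queue_time')
            # TODO: ship app_time / queue_time to CloudWatch or StatsD here.

    if mod_wsgi is not None and hasattr(mod_wsgi, 'subscribe_events'):
        mod_wsgi.subscribe_events(metrics_event_handler)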

On Monday, February 17, 2020 at 10:31:23 PM UTC-7, Graham Dumpleton wrote:
>
> Are you using any sort of application performance monitoring (APM) 
> product, such as those from New Relic, DataDog, AppDynamics or Elastic?
>
> What specifically are the 5xx errors you are getting?
>
> As Jason mentioned, fewer threads per process and more processes is 
> generally better. Even if you don't use a full APM product, if you have a 
> way of collecting metrics, you could extract metrics data out of mod_wsgi 
> using its event callbacks. One of the things you can work out from that is 
> capacity utilisation. This will show whether you have way too many threads 
> per process, with the potential for bottlenecking on one process due to 
> uneven distribution of requests across processes, plus GIL side effects if 
> things are a bit CPU bound.
>
> Also, do you have:
>
>     WSGIRestrictEmbedded On
>
> set so that you avoid starting Python interpreters in the main Apache 
> child processes.
>
> If you only have the one Python WSGI application, you should also be 
> setting:
>
>     WSGIApplicationGroup %{GLOBAL}
>
> Graham
>
> On 18 Feb 2020, at 12:55 pm, Andrew Charles <[email protected]> wrote:
>
> Ubuntu 18.04.04
> Apache 2.4.29 (event)
> mod_wsgi 4.5.17
> Python 3.6.8
> Django 2.2.10
>
> WSGIScriptAlias / ...wsgi.py
> WSGIDaemonProcess name processes=8 threads=30 queue-timeout=45 \
>     socket-timeout=60 request-timeout=60 inactivity-timeout=0 \
>     startup-timeout=45 deadlock-timeout=60 graceful-timeout=15 \
>     eviction-timeout=0 python-path=...base/ python-home=...virtualenv/
> WSGIProcessGroup name
>
>
> AWS EC2 c5.xlarge, 4 CPUs, 8GB mem (ASG autoscaling between 4 and 10 
> instances) behind an ELB
> Averaging 100,000,000 requests per day (107 mil today)
>
> We have a few Django API endpoints that are very simple; they only hit a 
> local or separate Redis cache, with no DB hits. We relay data to Firehose 
> but use django-q to offload those tasks. Requests take around 200ms but a 
> fair number are 400-500ms. The ELB reports the average as 60ms. Each 
> instance uses between 4-5GB of memory. I've been trying to get more 
> performance out of our instances and reduce our 5XXs. I previously tried 3 
> processes and the default (15) threads. I've been researching the best 
> ways to change settings, but it seems like it's unique to every setup and 
> there are no easy rules to follow. I'm looking for suggestions, or at 
> least for someone to tell me I'm on the right track.
>

-- 
You received this message because you are subscribed to the Google Groups 
"modwsgi" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/modwsgi/3f47e0fe-eb70-4f66-804f-d7c8facaa3fc%40googlegroups.com.
