Thanks for the response, here are the details:

1. mod_wsgi version is 4.5.7.
2. It is used in embedded mode.
3. The app receives images in the request, crops them, and returns the
result; the average response time is around 3 to 5 seconds.
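Since the app currently runs in embedded mode with 3-5 second image requests, it may be worth noting what a mod_wsgi daemon mode setup would look like; the process group name, script path, and process/thread counts below are placeholders, not recommendations:

    # Hypothetical daemon mode sketch - names and numbers are illustrative.
    WSGIDaemonProcess imageapp processes=4 threads=8 display-name=%{GROUP}
    WSGIProcessGroup imageapp
    WSGIApplicationGroup %{GLOBAL}
    WSGIScriptAlias / /srv/imageapp/wsgi.py

In daemon mode the Python interpreter lives in a small fixed pool of mod_wsgi processes rather than inside every Apache child, which keeps the MPM children themselves lightweight.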

On Sat, Dec 5, 2020 at 10:22 AM Graham Dumpleton <graham.dumple...@gmail.com>
wrote:

> Also, in addition to what I already asked, what version of mod_wsgi is
> being used?
>
> Graham
>
> On 5 Dec 2020, at 4:18 pm, Graham Dumpleton <graham.dumple...@gmail.com>
> wrote:
>
> What is the mod_wsgi part of the Apache configuration?
>
> Need to know if you are using embedded mode or daemon mode and how it is
> set up.
>
> Also, what is the request throughput to the Django application, and what
> are the average and worst-case response times?
>
> Graham
>
> On 5 Dec 2020, at 3:19 pm, Zohaib Ahmed Hassan <
> zohaib.hassan78...@gmail.com> wrote:
>
> We have an EC2 instance (4 vCPU, 16 GB RAM) running an Apache server with
> the event MPM behind an AWS ELB (application load balancer). This server
> serves only images requested by our other applications; for most of them we
> use CloudFront for caching, but one app sends requests directly to the
> server. Apache memory usage reaches 70% every day and does not come down,
> so we have to restart the server every time. Earlier, with the old Apache
> 2.2, the worker MPM, and no load balancer, we did not have this issue. I
> have tried different configurations for the event MPM and Apache, but it is
> not working. Here is apache2.conf:
>
>
>     Timeout 120             # also tried 300
>     KeepAlive On
>     MaxKeepAliveRequests 100
>     KeepAliveTimeout 45     # tried values from 1 second to 300
>
>
> Here are the load balancer settings:
>
>  - HTTP and HTTPS listeners
>
>  - Idle timeout is 30 seconds
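One general note on these two timeouts: the AWS guidance for Apache backends behind an ELB (the article cited at [1] below) is to keep the backend's keep-alive timeout longer than the load balancer's idle timeout, so that the balancer rather than the backend closes idle connections. With the 30-second idle timeout above, that would look something like (value illustrative):

    KeepAlive On
    KeepAliveTimeout 60   # longer than the ELB idle timeout of 30 seconds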
>
> Event MPM:
>
>     <IfModule mpm_event_module>
>         StartServers             2
>         MinSpareThreads          50
>         MaxSpareThreads          75
>         ThreadLimit              64
>         #ServerLimit             400
>         ThreadsPerChild          25
>         MaxRequestWorkers        400
>         MaxConnectionsPerChild   10000
>     </IfModule>
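These numbers already hint at the AH03490 scoreboard errors shown later in this message: with MaxRequestWorkers 400 and ThreadsPerChild 25, the event MPM needs 16 processes, which is exactly Apache's default ServerLimit, so there are no spare process slots for children that are gracefully exiting after MaxConnectionsPerChild recycling. A quick sanity check of that arithmetic (plain illustration, not an Apache API):

```python
# Scoreboard sizing for the event MPM (illustrative arithmetic only).
max_request_workers = 400
threads_per_child = 25
default_server_limit = 16  # Apache's default when ServerLimit is not set

# Processes required to reach MaxRequestWorkers.
processes_needed = max_request_workers // threads_per_child

# Spare process slots left for gracefully-exiting children (e.g. after
# MaxConnectionsPerChild is hit). Zero spare slots means replacement
# children cannot start: "scoreboard is full, not at MaxRequestWorkers".
spare_slots = default_server_limit - processes_needed
print(processes_needed, spare_slots)  # 16 0
```

Uncommenting ServerLimit and raising it above 16 would give the MPM headroom for those exiting processes.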
>
>  1. When I change MaxRequestWorkers to 150 with MaxConnectionsPerChild 0,
> RAM usage reaches 47 percent, system health checks fail, and a new instance
> is launched by the auto scaling group. It seems the worker limit is
> reached, which already happened when this instance was running with 8 GB of
> RAM.
>  2. Our other servers, which run a simple Django site and Django REST
> framework APIs, work fine with the default MPM and Apache values configured
> on installation.
>  3. I have also tried KeepAliveTimeout values of 2, 3, and 5 seconds, but
> it did not work.
>  4. I have also followed [this AWS article][1]; it worked somewhat better,
> but memory usage is still not coming down.
>
> Here is the recent error log:
>
>     [Fri Dec 04 07:45:21.963290 2020] [mpm_event:error] [pid 5232:tid 139782245895104] AH03490: scoreboard is full, not at MaxRequestWorkers.Increase ServerLimit.
>     [Fri Dec 04 07:45:22.964362 2020] [mpm_event:error] [pid 5232:tid 139782245895104] AH03490: scoreboard is full, not at MaxRequestWorkers.Increase ServerLimit.
>     [Fri Dec 04 07:45:23.965432 2020] [mpm_event:error] [pid 5232:tid 139782245895104] AH03490: scoreboard is full, not at MaxRequestWorkers.Increase ServerLimit.
>     ... (the same AH03490 message repeats once per second through 07:45:35.977818)
>
> Top command output:
>
>      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM    TIME+ COMMAND
>     3296 www-data  20   0 3300484 469824  58268 S   0.0  2.9  0:46.46 apache2
>     2544 www-data  20   0 3359744 453868  58292 S   0.0  2.8  1:24.53 apache2
>     1708 www-data  20   0 3357172 453524  58208 S   0.0  2.8  1:02.85 apache2
>      569 www-data  20   0 3290880 444320  57644 S   0.0  2.8  0:37.53 apache2
>     3655 www-data  20   0 3346908 440596  58116 S   0.0  2.7  1:03.54 apache2
>     2369 www-data  20   0 3290136 428708  58236 S   0.0  2.7  0:35.74 apache2
>     3589 www-data  20   0 3291032 382260  58296 S   0.0  2.4  0:50.07 apache2
>     4298 www-data  20   0 3151764 372304  59160 S   0.0  2.3  0:18.95 apache2
>     4523 www-data  20   0 3140640 310656  58032 S   0.0  1.9  0:07.58 apache2
>     4623 www-data  20   0 3139988 242640  57332 S   3.0  1.5  0:03.51 apache2
>
> What is wrong in the configuration that is causing the high memory usage?
>
>
>   [1]:
> https://aws.amazon.com/premiumsupport/knowledge-center/apache-backend-elb/
>
> --
> You received this message because you are subscribed to the Google Groups
> "modwsgi" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to modwsgi+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/modwsgi/f10fbec6-d2b9-4486-a63b-e1fe80f45ddbn%40googlegroups.com
> <https://groups.google.com/d/msgid/modwsgi/f10fbec6-d2b9-4486-a63b-e1fe80f45ddbn%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
