Re: [users@httpd] Apache VirtualHost Config Tool management

2023-05-22 Thread rbowen
On Thu, 2023-04-27 at 12:53 +0200, Carlos García Gómez wrote:
> Hello,
>  
> I am looking for a tool that makes it easier for me to manage all the
> virtual hosts that I have configured.
>  
> A tool that keeps a database of all the virtual hosts and lets me
> edit their configuration at any given time.
>  
> Is there anything implemented on top of LDAP or MySQL, or some library
> that can parse the configuration and edit it through a frontend?
>  
> I know I'm looking for something complicated, but thanks for your
> comments.


There are several modules that come with Apache httpd for simplifying
vhost config without introducing any third-party tools.

What I personally use is mod_macro to define a standard vhost layout,
and then all I need in my main configuration file is:

Use VHostMacro domainnamehere.com

Documentation here:
https://httpd.apache.org/docs/2.4/mod/mod_macro.html along with an
example of how you'd use this for vhosts.
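For illustration, a macro along these lines would match that Use line. This is
only a sketch: the macro body, paths and log names here are assumptions, not
rbowen's actual layout (mod_macro must be loaded for it to work):

```
<Macro VHostMacro $domain>
  <VirtualHost *:80>
    # $domain is substituted by mod_macro at each Use
    ServerName $domain
    ServerAlias www.$domain
    DocumentRoot "/var/www/$domain/htdocs"
    ErrorLog "logs/$domain.error.log"
    CustomLog "logs/$domain.access.log" combined
  </VirtualHost>
</Macro>

# Then one line per site in the main config:
Use VHostMacro domainnamehere.com
```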

Another option is mod_vhost_alias -
https://httpd.apache.org/docs/2.4/mod/mod_vhost_alias.html - I'm not a
big fan, but some people like it.
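For comparison, mod_vhost_alias needs no per-site stanza at all; a minimal
sketch (the directory layout is an assumption):

```
# Interpolate the requested hostname into the document root.
# %0 is the whole hostname, so foo.example.com serves /var/www/vhosts/foo.example.com
UseCanonicalName Off
VirtualDocumentRoot "/var/www/vhosts/%0"
```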







Re: [users@httpd] Keepalive closing connections prematurely on high load on newer httpd versions

2023-05-22 Thread Mateusz Kempski
@Yann Ylavic:
There seems to be no difference in the configuration files except for some comments:
```
diff -Bbde /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.rocky
8a
# See the httpd.conf(5) man page for more information on this configuration,
# and httpd.service(8) on using and configuring the httpd service.
#
.
```
I will paste the complete Rocky config at the end of this message for completeness.
EnableMMAP and EnableSendfile are set to on, contrary to the defaults
noted in the comments. This is the default system configuration on
both Rocky 8 and CentOS 7. I removed httpd and all its config files and
reinstalled to make sure.
```
# Defaults if commented: EnableMMAP On, EnableSendfile Off
#
#EnableMMAP off
EnableSendfile on
```
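A quick way to take these copy-path differences out of the equation - a sketch,
not something tried in this thread - is to pin both directives off explicitly
on both systems while testing:

```
# Pin both explicitly so differing distribution defaults cannot matter
EnableMMAP Off
EnableSendfile Off
```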

Both systems return:
```
ulimit -n
1024
```
There is no custom config for httpd:
```
stat /etc/systemd/system/httpd.service.d
stat: cannot stat ‘/etc/systemd/system/httpd.service.d’: No such file
or directory
```
systemd itself does not seem to apply any limit on either system:
```
grep LimitNOFILE /etc/systemd/system.conf
#DefaultLimitNOFILE=
```

After enabling trace logging there are a lot of messages about too many open
connections and about keepalive connections being killed. Example snippet:
```
[Mon May 22 11:57:40.146451 2023] [mpm_event:debug] [pid 7710:tid 139800984155904] event.c(1808): Too many open connections (73), not accepting new conns in this process
[Mon May 22 11:57:40.146455 2023] [mpm_event:trace1] [pid 7710:tid 139800984155904] event.c(1811): Idle workers: 0
[Mon May 22 11:57:40.146457 2023] [mpm_event:trace1] [pid 7710:tid 139800984155904] event.c(1574): All workers are busy or dying, will close 2 keep-alive connections
[Mon May 22 11:57:40.146526 2023] [mpm_event:debug] [pid 7379:tid 139801403561728] event.c(492): AH00457: Accepting new connections again: 50 active conns (27 lingering/0 clogged/0 suspended), 7 idle workers
[Mon May 22 11:57:40.146686 2023] [mpm_event:debug] [pid 7511:tid 139800849938176] event.c(1808): Too many open connections (86), not accepting new conns in this process
[Mon May 22 11:57:40.146698 2023] [mpm_event:trace1] [pid 7511:tid 139800849938176] event.c(1811): Idle workers: 11
[Mon May 22 11:57:40.146701 2023] [mpm_event:trace1] [pid 7511:tid 139800849938176] event.c(1574): All workers are busy or dying, will close 34 keep-alive connections
[Mon May 22 11:57:40.146701 2023] [mpm_event:trace1] [pid 7644:tid 139801336452864] event.c(1574): All workers are busy or dying, will close 2 keep-alive connections
[Mon May 22 11:57:40.146812 2023] [mpm_event:debug] [pid 7511:tid 139800849938176] event.c(492): AH00457: Accepting new connections again: 85 active conns (52 lingering/0 clogged/0 suspended), 6 idle workers
[Mon May 22 11:57:40.146955 2023] [mpm_event:debug] [pid 7710:tid 139800984155904] event.c(492): AH00457: Accepting new connections again: 73 active conns (58 lingering/0 clogged/0 suspended), 9 idle workers
[Mon May 22 11:57:40.148250 2023] [mpm_event:debug] [pid 7162:tid 139800841545472] event.c(492): AH00457: Accepting new connections again: 48 active conns (25 lingering/0 clogged/0 suspended), 2 idle workers
[Mon May 22 11:57:40.148562 2023] [mpm_event:debug] [pid 7162:tid 139800841545472] event.c(1808): Too many open connections (48), not accepting new conns in this process
[Mon May 22 11:57:40.148573 2023] [mpm_event:trace1] [pid 7162:tid 139800841545472] event.c(1811): Idle workers: 0
[Mon May 22 11:57:40.148921 2023] [mpm_event:debug] [pid 7710:tid 139800984155904] event.c(1808): Too many open connections (70), not accepting new conns in this process
[Mon May 22 11:57:40.148930 2023] [mpm_event:trace1] [pid 7710:tid 139800984155904] event.c(1811): Idle workers: 1
[Mon May 22 11:57:40.149594 2023] [mpm_event:debug] [pid 7511:tid 139800849938176] event.c(1808): Too many open connections (69), not accepting new conns in this process
[Mon May 22 11:57:40.149603 2023] [mpm_event:trace1] [pid 7511:tid 139800849938176] event.c(1811): Idle workers: 1
[Mon May 22 11:57:40.149630 2023] [mpm_event:debug] [pid 7710:tid 139800984155904] event.c(492): AH00457: Accepting new connections again: 61 active conns (38 lingering/0 clogged/0 suspended), 2 idle workers
[Mon May 22 11:57:40.149776 2023] [mpm_event:debug] [pid 7710:tid 139800984155904] event.c(1808): Too many open connections (63), not accepting new conns in this process
[Mon May 22 11:57:40.149782 2023] [mpm_event:trace1] [pid 7710:tid 139800984155904] event.c(1811): Idle workers: 0
[Mon May 22 11:57:40.149882 2023] [mpm_event:debug] [pid 7710:tid 139800984155904] event.c(492): AH00457: Accepting new connections again: 61 active conns (38 lingering/0 clogged/0 suspended), 2 idle workers
[Mon May 22 11:57:40.149913 2023] [mpm_event:debug] [pid 7511:tid 139800849938176] event.c(492): AH00457: Accepting new connections again: 63 active conns (40 lingering/0 clogged/0 suspended), 2 idle workers
[Mon May 22 11:57:40.150210 2023] [mpm_event:debug] [pid 7511:tid 139800849938176]
```

Re: [users@httpd] Keepalive closing connections prematurely on high load on newer httpd versions

2023-05-22 Thread Deepak Goel
Hi

1. Please post the test results in full. (Sorry, but saying there is no
difference does not help.)
2. The memory used on Rocky is noticeably higher than on CentOS (about
30 MB or so). Also, the buff/cache is roughly 10 times higher on Rocky
than on CentOS.
3. Please also post the iotop results.

P.S.: I hope the top command was run identically on both Rocky and CentOS.

Deepak
"The greatness of a nation can be judged by the way its animals are treated
- Mahatma Gandhi"

+91 73500 12833
deic...@gmail.com

Facebook: https://www.facebook.com/deicool
LinkedIn: www.linkedin.com/in/deicool

"Plant a Tree, Go Green"

Make In India : http://www.makeinindia.com/home




Re: [users@httpd] Keepalive closing connections prematurely on high load on newer httpd versions

2023-05-22 Thread Yann Ylavic
Hi,

On Mon, May 22, 2023 at 12:19 PM Mateusz Kempski
 wrote:
>
> Then I added the following options
> to the default config on both servers:
> ```
> <IfModule mpm_event_module>
> ThreadsPerChild 25
> StartServers 3
> ServerLimit 120
> MinSpareThreads 75
> MaxSpareThreads 3000
> MaxRequestWorkers 3000
> MaxConnectionsPerChild 0
> </IfModule>
> ```

What is the difference between the two configurations (besides the
identical MPM parameters)? Things like EnableMMAP and EnableSendfile
also matter, for instance.

Do the two systems have the same `ulimit -n` (or LimitNOFILE in
systemd) for httpd?

Also, do you see errors in the error_log file? Maybe "LogLevel
mpm_event:trace1" could help see what happens while not being too
verbose.
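
A minimal sketch of those checks as config, with illustrative values (per-module
log levels keep the rest of the log at warn while mpm_event logs at trace1):

```
# Rule out file-serving differences between the two builds while testing
EnableMMAP Off
EnableSendfile Off

# Global level warn, mpm_event raised to trace1 so event.c decisions show up
LogLevel warn mpm_event:trace1
```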


Regards,
Yann.




Re: [users@httpd] Keepalive closing connections prematurely on high load on newer httpd versions

2023-05-22 Thread Mateusz Kempski
I tested again with settings:
```
KeepAliveTimeout 300
MaxKeepAliveRequests 0
```
but there was no difference in results beyond normal run-to-run variation.

Below is the top of the top output from both servers, idle and during the test.

Rocky 8 no load:
```
top - 10:49:23 up 4 min,  1 user,  load average: 3.27, 2.56, 1.10
Tasks: 254 total,   2 running, 252 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  15824.7 total,  10682.4 free,    283.5 used,   4858.8 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  15259.3 avail Mem
```
Rocky 8 during test:
```
top - 10:50:29 up 5 min,  1 user,  load average: 4.33, 2.80, 1.28
Tasks: 232 total,   2 running, 230 sleeping,   0 stopped,   0 zombie
%Cpu(s): 13.7 us, 16.9 sy,  0.0 ni, 63.9 id,  0.0 wa,  0.0 hi,  5.4 si,  0.1 st
MiB Mem :  15824.7 total,   9863.0 free,    529.3 used,   5432.3 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  15012.2 avail Mem
```
Centos 7 no load:
```
top - 10:52:17 up 0 min,  1 user,  load average: 0.00, 0.00, 0.00
Tasks: 201 total,   1 running, 200 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 16264300 total, 15831896 free,   297124 used,   135280 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 15720740 avail Mem
```
Centos 7 during test:
```
top - 10:53:21 up 1 min,  1 user,  load average: 0.62, 0.16, 0.05
Tasks: 218 total,   3 running, 215 sleeping,   0 stopped,   0 zombie
%Cpu(s): 17.6 us, 18.9 sy,  0.0 ni, 60.4 id,  0.0 wa,  0.0 hi,  3.1 si,  0.1 st
KiB Mem : 16264300 total, 14973128 free,   503104 used,   788068 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 15459544 avail Mem
```


Re: [users@httpd] Keepalive closing connections prematurely on high load on newer httpd versions

2023-05-22 Thread Deepak Goel
Hi

I can see that about 8000+ requests timed out on 'Rocky'. This is most likely
due to Apache being unable to handle the load. Is it possible to increase the
parameter "KeepAliveTimeout" (and the other KeepAlive parameters)?
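
For reference, a sketch of the knobs being referred to, with purely
illustrative values (the 2.4 defaults are KeepAliveTimeout 5 and
MaxKeepAliveRequests 100; 0 makes the latter unlimited):

```
KeepAlive On
KeepAliveTimeout 30
MaxKeepAliveRequests 0
```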

Is it also possible for you to post the hardware utilisation figures for the
two servers (CentOS and Rocky)?

Deepak
"The greatness of a nation can be judged by the way its animals are treated
- Mahatma Gandhi"

+91 73500 12833
deic...@gmail.com

Facebook: https://www.facebook.com/deicool
LinkedIn: www.linkedin.com/in/deicool

"Plant a Tree, Go Green"

Make In India : http://www.makeinindia.com/home



[users@httpd] Keepalive closing connections prematurely on high load on newer httpd versions

2023-05-22 Thread Mateusz Kempski
Hi all,
I have two identical VMs - 16 GB RAM, 16 vCPUs. One is a fresh CentOS 7
install, the other a fresh Rocky 8 install. I installed httpd (on CentOS 7
it's version 2.4.6 and on Rocky 8 it's 2.4.37), configured both to
point to the same static default html file, and enabled mpm_event on
CentOS (mpm_event is the default on Rocky). Then I added the following
options to the default config on both servers:
```
<IfModule mpm_event_module>
ThreadsPerChild 25
StartServers 3
ServerLimit 120
MinSpareThreads 75
MaxSpareThreads 3000
MaxRequestWorkers 3000
MaxConnectionsPerChild 0
</IfModule>
```
After this was done I performed ab tests with keepalive, using a different
CentOS 7 VM in the same local network. Can you help me understand
these results? On CentOS I am able to complete 1 million requests at
1000 concurrent connections with little to no errors; however, with
version 2.4.37 on Rocky I get a lot of failed requests due to length
errors and exceptions. The served content is static, so I am assuming
this is because keepalive connections are closed by the server. I tried
various configurations of KeepAliveTimeout, MaxKeepAliveRequests,
AsyncRequestWorkerFactor and other mpm_event options but got no
better results. The only thing that seems to improve the stability of
keepalive connections is setting threads and servers to the moon, for
example:
```
<IfModule mpm_event_module>
ThreadsPerChild 50
StartServers 120
ServerLimit 120
MinSpareThreads 6000
MaxSpareThreads 6000
MaxRequestWorkers 6000
MaxConnectionsPerChild 0
</IfModule>
```
However, it still throws more errors (~8k) than 2.4.6 did with the first set
of settings. This problem occurs only when using keepalive: there are
no errors when running ab without the -k option, although throughput is
lower. I can replicate this issue on the newest httpd built from source
(2.4.57). What is causing this difference in behavior? How can I achieve
the performance of 2.4.6 on 2.4.37 / 2.4.57 without throwing far more
resources at httpd? These errors seem to be the root cause of a problem
we have with 502 errors thrown from a downstream reverse proxy server
when httpd kills a keepalive connection prematurely and the proxy then
tries to reuse that connection.
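
As an aside on the 502 symptom: if the proxy in front happens to be httpd's
mod_proxy (an assumption - the poster does not name it), one common mitigation
sketch is to retire pooled backend connections before the backend's
KeepAliveTimeout can close them:

```
# Hypothetical front-proxy config; backend.example.com stands in for the real
# origin. ttl must be shorter than the backend's KeepAliveTimeout so a pooled
# connection is never reused after the backend may already have closed it.
ProxyPass "/" "http://backend.example.com/" ttl=4
```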

Below are the results of the ab tests:

CentOS 7 VM:
```
ab -k -t 900 -c 1000 -n 1000000 http://centos/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.1.3.3 (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Completed 500000 requests
Completed 600000 requests
Completed 700000 requests
Completed 800000 requests
Completed 900000 requests
Completed 1000000 requests
Finished 1000000 requests


Server Software:        Apache/2.4.6
Server Hostname:        10.1.3.3
Server Port:            80

Document Path:          /
Document Length:        7620 bytes

Concurrency Level:      1000
Time taken for tests:   15.285 seconds
Complete requests:      1000000
Failed requests:        67
   (Connect: 0, Receive: 0, Length: 67, Exceptions: 0)
Write errors:           0
Keep-Alive requests:    990567
Total transferred:      7919057974 bytes
HTML transferred:       7619489460 bytes
Requests per second:    65422.95 [#/sec] (mean)
Time per request:       15.285 [ms] (mean)
Time per request:       0.015 [ms] (mean, across all concurrent requests)
Transfer rate:          505945.41 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   4.7      0    1042
Processing:     3   15  16.8     13     467
Waiting:        0   14  16.2     13     433
Total:          3   15  17.7     13    1081

Percentage of the requests served within a certain time (ms)
 50% 13
 66% 15
 75% 16
 80% 17
 90% 21
 95% 23
 98% 32
 99% 44
100%   1081 (longest request)
```

Rocky 8 VM:
```
ab -k -t 900 -c 1000 -n 1000000 http://rocky/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 10.1.3.11 (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Completed 500000 requests
Completed 600000 requests
Completed 700000 requests
Completed 800000 requests
Completed 900000 requests
Completed 1000000 requests
Finished 1000000 requests


Server Software:        Apache/2.4.37
Server Hostname:        10.1.3.11
Server Port:            80

Document Path:          /
Document Length:        7620 bytes

Concurrency Level:      1000
Time taken for tests:   19.101 seconds
Complete requests:      1000000
Failed requests:        93159
   (Connect: 0, Receive: 0, Length: 85029, Exceptions: 8130)
Write errors:           0
Keep-Alive requests:    912753
Total transferred:      7248228337 bytes
HTML transferred:       6973694460 bytes
Requests per second:    52352.12 [#/sec] (mean)
Time per request:       19.101 [ms]